Research Projects

SHRUTI: Industrial-Strength Formally Certified SAT Solving

Advances in Boolean satisfiability (SAT) technology have made it possible for SAT solvers to be routinely used in the verification of large industrial problems, including problems from safety-critical domains such as the railway, avionics, and automotive industries. However, the use of SAT solvers in such domains requires some explicit form of assurance for the results, since SAT solvers can have bugs, and sometimes do.

For example, in the SAT 2007 competition, the solver SATzilla CRAFTED reported incorrect outcomes on several problems.

Two alternative methods can be used to provide assurance. First, the solver could be proven correct once and for all, but this approach has had limited success. For example, Lescuyer et al. formally designed and verified a SAT solver using the Coq theorem prover, but without any of the techniques and optimizations used in modern solvers. Smith and Westfold use a correct-by-construction approach to simultaneously derive code and correctness proofs for a family of SAT solvers, but their performance again falls short of the current state of the art. Reasoning about the optimizations makes the formal correctness proofs exceedingly hard. This was shown in the work of Maric, who verified the algorithms used in the ARGO-SAT solver at the pseudo-code level but did not verify the actual solver itself. In addition, the formal verification has to be repeated for every new SAT solver (or even a new version of a solver), or else users are locked into using the specific verified solver.

Alternatively, a proof checker can be used to validate each individual outcome of the solver independently; this requires the solver to produce a proof trace that is viewed as a certificate justifying its outcome. This approach was popularized by the Certified Unsatisfiable Track of the SAT 2007 competition and was used in the design of several SAT solvers such as tts, Booleforce, PicoSAT, and zChaff. However, the corresponding proof checkers are typically implemented by the developers of the solvers whose output they check, which can lead to problems in practice. In fact, the checkers booleforce-res, picosat-res, and tts-rpt reported both “proof errors” and “program errors” on some of the benchmarks, although it is unclear what these errors signify.
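To make the idea concrete, the following OCaml sketch shows one plausible shape for such a certificate; the type and field names are hypothetical, and actual trace formats differ from solver to solver. A trace lists the clauses of the input formula and then a sequence of resolution steps, each deriving a new clause from previously established ones.

    (* Hypothetical certificate shape; real trace formats vary by solver. *)
    type literal = int          (* positive = variable, negative = its negation *)
    type clause = literal list
    type step = {
      derived : clause;         (* the clause this step claims to derive *)
      antecedents : int list;   (* indices of previously established clauses *)
    }
    type trace = {
      original : clause list;   (* clauses of the input formula *)
      steps : step list;        (* resolution chain ending in the empty clause *)
    }

A checker validates the trace by replaying each step in turn and confirming that the final step derives the empty clause.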

Confidence can be increased if the checker is proven correct, once and for all. This is substantially simpler than proving the solver correct, because the checker is comparatively small and straightforward, and it avoids system lock-in, because the checker works for all solvers that can produce proof traces in the agreed format. This approach originates in the formal development of a proof checker for zChaff and Minisat proof traces by Weber and Amjad, and we follow it here as well. However, we depart considerably from Weber and Amjad in how we design and implement our solution. Their checker replays the derivation encoded in the proof trace inside an LCF-style theorem prover such as HOL 4 or Isabelle. Since all derivations in these provers go through a small kernel of primitive inference rules, assurance is very high. However, their checker can run only inside the supporting prover, and not as a standalone tool, and performance bottlenecks become prominent as the size of the problems increases. Our checker SHRUTI is formally designed and verified using the higher-order logic based proof assistant Coq, but we never use Coq for execution; instead, we automatically extract an OCaml program from the formal development that can be compiled and used independently of Coq. This prevents the user from being locked into a specific proof assistant, and allows us to wrap SHRUTI around an industrial-strength but untrusted solver, providing an industrial-strength certified solver that can be used as a regular component in a SAT-based verification workflow.

Our aim is not a fully formal end-to-end certification, which would in an extreme view need to include correctness proofs for file operations, the compiler, and even the hardware. Instead, we focus on the core of the checker, which is based on the resolution inference rule, and formally prove its design correct. We then rely on Coq’s program extraction mechanism and some trivial glue code as trusted components to build the entire checker. In this way we combine a high degree of assurance (much as Amjad and Weber did) with high performance. SHRUTI is significantly (up to 32 times) faster than Amjad’s checker implemented in HOL 4.
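As an illustration of the checker’s core, the following OCaml sketch replays binary resolution steps over the certificate shape given earlier; the function names are hypothetical, and the verified Coq development is of course far more careful about pivots, termination, and edge cases than this sketch.

    (* Find a pivot literal: one occurring positively in c1, negated in c2.
       A real checker would validate the pivot recorded in the trace rather
       than searching for the first candidate. *)
    let find_pivot (c1 : clause) (c2 : clause) : literal option =
      List.find_opt (fun l -> List.mem (-l) c2) c1

    (* Resolve c1 and c2 on the pivot: drop both pivot literals and take the
       union of the remaining ones, kept sorted and duplicate-free. *)
    let resolve (c1 : clause) (c2 : clause) : clause option =
      match find_pivot c1 c2 with
      | None -> None
      | Some v ->
          let keep l = l <> v && l <> (-v) in
          Some (List.sort_uniq compare (List.filter keep c1 @ List.filter keep c2))

    (* Replay one trace step: fold resolution over its antecedent clauses and
       compare the result with the clause the step claims to derive. *)
    let check_step (known : clause array) (s : step) : bool =
      match s.antecedents with
      | [] -> false
      | i :: rest ->
          let res =
            List.fold_left
              (fun acc j ->
                match acc with
                | None -> None
                | Some c -> resolve c known.(j))
              (Some known.(i)) rest
          in
          (match res with
           | Some c -> c = List.sort_uniq compare s.derived
           | None -> false)

The input formula is unsatisfiable exactly when replaying every step succeeds and the last derived clause is empty.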

Selective State Retention Design using Symbolic Simulation

Addressing both standby and active power is a major challenge in developing System-on-Chip designs for battery-powered products. Powering off sections of logic or memories loses internal register and RAM state, so designers have to weigh the benefits and costs of implementing state retention on some or all of the power-gated subsystems where state recovery has a significant real-time or energy cost, against resetting the subsystem and re-acquiring state from scratch. Library IP and EDA tools can support state retention in hardware synthesized from standard RTL, but because of the silicon area costs there is strong interest in retaining only selected state, for example the “architectural state” of a CPU, to implement sleep modes. Currently there is no known rigorous technique for checking the integrity of selective state retention, because of the complexity of checking that the correctness of the design is not compromised in any way. The complexity is exacerbated by the interaction between the retained and the non-retained state, and exhaustive simulation rapidly becomes infeasible.

In this work we carried out a study based on symbolic simulation to assist designers in designing and implementing selective retention correctly. The main finding of our study is that the programmer-visible state, or architectural state, of the CPU needs to be implemented using retention registers, whilst other micro-architectural enhancements such as pipeline registers, TLBs, and caches can be implemented using normal registers without retention. This has a profound impact on power and area savings for chip design.
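A minimal OCaml sketch of the distinction, modelling registers across a power-down/power-up cycle; this is an illustrative model only, not our symbolic simulation framework, and the names are hypothetical. A retention register keeps its value through power gating, while a normal register comes back up with an unknown value.

    (* Three-valued register contents: a known bit, or X (unknown). *)
    type value = Bit of bool | X

    type reg = { retained : bool; mutable v : value }

    (* Power-gating a subsystem: retention registers keep their state,
       normal registers lose it and read as X after power-up. *)
    let power_cycle (regs : reg list) : unit =
      List.iter (fun r -> if not r.retained then r.v <- X) regs

Checking selective retention then amounts to showing that no unknown value from a non-retained register, such as a pipeline register or a cache line, can corrupt the architectural state once the subsystem resumes; symbolic simulation lets this be checked for all initial states at once, rather than by exhaustive simulation.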

By selectively retaining the state of the programmer’s “architectural” model, and not the increasing proportion of extra state, one can incorporate energy-efficient sleep modes. To the best of our knowledge this is the first study in the area of rigorous design and implementation of selective state retention.

Symbolic Simulation Based Fault Injection

One effective fault injection approach involves instrumenting the RTL in a controlled manner to incorporate fault injection, and evaluating the behaviour of the faulty RTL whilst running some benchmark programs. This approach relies on checking the effects of faults whilst the design is executing a specific binary image, and therefore the true impact of the fault is limited by the shadow of the program image. Another limitation of this approach is the use of extra hardware for fault injection, which is not needed during the fault-free running of the design. We use the specification and verification framework of STE to carry out fault injection in an efficient way. We exploit the expressive power of STE to specify properties that capture transient faults in the circuit, and these are verified against the golden RTL – one that we have formally verified in a fault-free scenario. Injecting faults in the properties, rather than in the model, enables us to check for faults efficiently during model checking, since we have fewer symbolic variables to deal with. Properties use symbolic variables, and they encode the values that are placed at different points in the circuit during simulation. When these properties express a fault, the outcome of the model checking run is a Boolean formula – a counter-example that encapsulates the data inter-dependency amongst circuit nodes symbolically, rather than in terms of logic 0 and 1. To demonstrate the capability of our proposed methodology, we develop the necessary properties to analyse the effect of transient faults on a 32-bit multi-cycle RISC processor. Our approach applies generally to any processor, computing precise inter-dependencies among its functional units under faults and thereby giving a better understanding of the faulty behaviour in an efficient way.
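A minimal sketch of the idea, in OCaml and with hypothetical types; real STE properties are trajectory formulas over a specific circuit model, which we elide here. A transient fault is expressed by guarding a node’s driven value with a fresh symbolic fault variable, so a single model checking run covers both the faulty and the fault-free cases.

    (* Symbolic expressions over Boolean variables (a stand-in for BDDs). *)
    type expr = Var of string | Not of expr | Xor of expr * expr

    (* An STE-style assignment: drive [node] to [value] during [from, upto). *)
    type assignment = { node : string; value : expr; from : int; upto : int }

    type property = { antecedent : assignment list; consequent : assignment list }

    (* Inject a transient fault at [node] in cycle [t]: XOR the driven value
       with a fresh fault variable, so the node flips exactly when that
       variable is true. *)
    let inject_fault (p : property) (node : string) (t : int) : property =
      let flip a =
        if a.node = node && a.from <= t && t < a.upto
        then { a with value = Xor (a.value, Var "fault") }
        else a
      in
      { p with antecedent = List.map flip p.antecedent }

The counter-example returned by the model checker is then a Boolean formula over the circuit’s symbolic variables and the fault variable, exposing which data inter-dependencies the fault can actually corrupt.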

Symmetry Reduction for STE Model Checking

We have developed a structured modelling language that makes it possible to design digital circuits in a manner that preserves certain interesting properties of circuit structure, for example symmetries. The modelling language is built around an abstract data type of structured circuits, together with a type system whose rules ensure that symmetric circuits can only be constructed in symmetry-preserving ways. Preserving symmetry through this syntactic discipline reduces the problem of symmetry discovery in circuits to mere type checking, rather than graph isomorphism checks, yielding substantial improvements in time and memory over the whole model checking based verification cycle. We use the STE model checking technique, and we have developed a set of sound inference rules that enable the user to perform sound property reduction. Symmetry in circuit models is mirrored by symmetry in STE properties. Once properties have been decomposed into several smaller properties, they can be partitioned into equivalence classes; one can then verify a single representative from each equivalence class and conclude that the original STE property holds as well. This leads to very powerful reductions on several interesting and large circuit designs such as memories.
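A minimal sketch of the idea in OCaml, with hypothetical names: symmetry is guaranteed by construction, because the only way to build a replicated structure is through a constructor that instantiates a single cell, so discovering the symmetry later is a matter of inspecting the constructor rather than running a graph isomorphism check.

    (* Structured circuits: the constructor set is deliberately restricted. *)
    type circuit =
      | Cell of string                  (* a primitive cell, named *)
      | Compose of circuit * circuit    (* sequential composition *)
      | Replicate of int * circuit      (* n interchangeable copies of one cell *)

    (* The n copies inside Replicate are interchangeable by construction, so
       properties indexed by copy number fall into one equivalence class:
       verifying the property for copy 0 suffices for all n copies. *)
    let representatives (c : circuit) : (circuit * int) list =
      match c with
      | Replicate (n, body) when n > 0 -> [ (body, 0) ]   (* one rep of n *)
      | _ -> []

For a memory, for instance, properties about the n symmetric storage cells collapse to a single property about one representative cell, which is then checked by STE.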

Machine Learning

By using a combination of inductive learning methods such as neural networks and knowledge representation techniques based on first-order logic, one can get the best of both worlds. The language of first-order logic can be used to represent the knowledge, and learning techniques from neural networks can be used to refine it. This allows one to refine the original body of knowledge and express it as rules in first-order logic. In the Master’s thesis, we devised a method for transforming a subset of first-order logic programs into a spiking neural network, and then used a Hebbian-style learning algorithm, inspired by work on neuro-biological models of learning, to refine the network before performing rule extraction.
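A minimal sketch of the rule-to-network direction, in OCaml and for propositional Horn clauses rather than the full first-order, spiking setting of the thesis; the names and constants are illustrative. A clause such as c :- a, b becomes a threshold unit that fires when all its body atoms fire, and a Hebbian update then strengthens the weights of inputs that are active when the unit fires.

    (* A Horn clause: the head holds if every body atom holds. *)
    type clause = { head : string; body : string list }

    (* Encode a clause as a threshold unit: unit weights on body inputs and a
       threshold just below the body size, so the unit fires only when all
       body atoms are active. *)
    let encode (c : clause) : (string * float) list * float =
      (List.map (fun atom -> (atom, 1.0)) c.body,
       float_of_int (List.length c.body) -. 0.5)

    (* Hebbian-style update: strengthen the weights of inputs that were
       active when the unit fired ("fire together, wire together"). *)
    let hebbian ~(rate : float) (weights : (string * float) list)
        (active : string list) (fired : bool) : (string * float) list =
      if not fired then weights
      else
        List.map
          (fun (atom, w) ->
            if List.mem atom active then (atom, w +. rate) else (atom, w))
          weights

After training, rule extraction reads refined rules back off the network by finding input sets whose summed weights exceed the unit’s threshold.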