A Brief History of Formal Verification

Dr Ashish Darbari
7 min read · Jun 28, 2019

In this article I review the history of formal verification, a field with a fascinating history stretching back more than 70 years. I’ve been a user of formal methods for over two decades, and though I’ve not had a career as illustrious as some of the people whose contributions I cover in this article, I’ve certainly enjoyed using formal methods in all its glory, from interactive theorem proving to state-of-the-art model checking and equivalence checking. Along the way I have made my own small contributions.

The foundation of formal verification

As conventional simulation-based testing has increasingly struggled to cope with design complexity, strategies centred on formal verification methods have quietly evolved in parallel.

Edsger Wybe Dijkstra famously coined the phrase, “Testing shows the presence, not the absence of bugs.” In April 1970, he challenged the design community to think differently. Even though the remark was made in the context of software verification, Dijkstra’s call to action has had a much wider influence. Nor was Dijkstra alone in this line of thinking. This quiet revolution in the USA and the UK can be traced back to the 1950s. In 1954, Martin Davis produced the first computer-generated proof of a mathematical theorem, working in a decidable fragment of first-order logic called Presburger arithmetic. The actual theorem was that the sum of two even numbers is even.
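
For a flavour of what a machine-checked proof of that theorem looks like today, here is a minimal sketch in Lean 4. It is my own restatement, unrelated to Davis’s program, and it assumes a recent Lean toolchain in which the built-in omega tactic (a decision procedure for linear, Presburger-style arithmetic) is available.

```lean
-- Illustrative restatement of the theorem Davis's 1954 program proved:
-- the sum of two even numbers is even.
theorem even_add_even (m n : Nat)
    (hm : ∃ a, m = 2 * a) (hn : ∃ b, n = 2 * b) :
    ∃ c, m + n = 2 * c :=
  match hm, hn with
  | ⟨a, ha⟩, ⟨b, hb⟩ =>
    -- The witness is a + b; `omega` discharges 2 * a + 2 * b = 2 * (a + b).
    ⟨a + b, by omega⟩
```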

The notion of mathematical proof has been the cornerstone of formal methods for 70 years!

Theorem proving

From the late 1960s onward, first-order theorem provers were applied to the problem of verifying the correctness of computer programs, including programs written in languages such as Pascal, Ada, and, much later, Java. Notable among the early verification systems was the Stanford Pascal Verifier. It was developed by David Luckham at Stanford University and was based on the Stanford Resolution Prover, itself built on J.A. Robinson’s resolution principle.

In Edinburgh, Scotland in 1972, Robert S. Boyer and J. Strother Moore built one of the first successful automated theorem provers, the Boyer-Moore prover (later known as Nqthm), which could prove mathematical theorems automatically in a logic based on a dialect of Pure Lisp; it went on to become the basis for ACL2. Almost at the same time, Robin Milner built the original LCF system for proof checking at Stanford University. Descendants of LCF make up a thriving family of theorem provers, the most famous being HOL 4, built by Mike Gordon and Tom Melham; HOL Light, built by John Harrison; and Coq, built at INRIA in France drawing on original work by Gérard Huet and Thierry Coquand.

Both ACL2 and LCF-style provers such as HOL 4 and HOL Light have been used extensively for the verification of floating-point units in digital designs, most notably at AMD and Intel. John Harrison, Josef Urban, and Freek Wiedijk have written a useful article covering the history of interactive theorem proving.

Timeline of theorem proving

Another notable theorem prover that is not considered part of the LCF family is PVS. It was developed by Sam Owre, John Rushby, and Natarajan Shankar at SRI International and has been used extensively in the verification of safety-critical systems, especially in space-related work at NASA.

Theorem provers have long proved valuable and were once seen as the formal ‘tools of choice’. But they had a major shortcoming: if they couldn’t prove a theorem, they couldn’t say why. There was no way of generating a counter-example or any other explanation of why a conjecture failed to hold. This limited their use to formal experts with solid foundations in computer science and mathematics, so much so that many engineers used to quip, “You need a PhD in formal just to use formal tools.”

Model checking

Whilst theorem proving was gaining attention for proving the properties of infinite systems, it clearly had shortcomings. These began to be addressed by a number of people during the 1970s. Particularly notable in this group were Allen Emerson and Edmund Clarke at Carnegie Mellon University in the USA, and J.P. Queille and Joseph Sifakis in France. Also, in the late 1970s and early 1980s, Amir Pnueli, Susan Owicki, and Leslie Lamport began work toward capturing the properties of concurrent programs using temporal logic.

In 1981, Emerson and Clarke combined the state-space exploration approach with temporal logic in a way that provided the first automated model checking algorithm. It was fast, capable of proving properties of programs, and, more importantly, could provide counter-examples when it could not prove a property. Moreover, it happily coped with partial specifications.

Model checking finds bugs and builds exhaustive proofs through state-space search using automatic solvers
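
To make the idea concrete, here is a deliberately simple sketch in Python of explicit-state model checking for a safety property (an invariant): breadth-first search over the reachable states either proves the invariant or returns a shortest counter-example trace. It is illustrative only, with a made-up example system; real model checkers are vastly more sophisticated.

```python
# A deliberately simple sketch of explicit-state model checking for a safety
# property (an invariant). Illustrative only; the example system is made up and
# real model checkers use far more sophisticated algorithms and data structures.

from collections import deque

def check_invariant(initial_states, next_states, invariant):
    """Breadth-first search over all reachable states.

    Returns ("proved", visited_count) if the invariant holds everywhere,
    or ("counterexample", trace) with a shortest path to a violating state.
    """
    parent = {s: None for s in initial_states}   # also records visited states
    queue = deque(initial_states)
    while queue:
        state = queue.popleft()
        if not invariant(state):
            trace = []                           # walk parents back to an initial state
            while state is not None:
                trace.append(state)
                state = parent[state]
            return "counterexample", list(reversed(trace))
        for succ in next_states(state):
            if succ not in parent:               # explore each state only once
                parent[succ] = state
                queue.append(succ)
    return "proved", len(parent)

# Hypothetical example: a counter intended to stay below 6, but wrapping at 7.
def next_states(count):
    return [(count + 1) % 7]

print(check_invariant([0], next_states, lambda c: c < 6))
# -> ('counterexample', [0, 1, 2, 3, 4, 5, 6])
```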

During the mid-1980s, several papers showed how model checking could be applied to hardware verification, and the user base for formal began to grow. Soon, however, a new challenge emerged: the size of the hardware designs that could be verified with model checking was limited by explicit state-space reachability, the so-called state-explosion problem.

Although today we can verify designs as big as 1 billion gates (10 to the power of 120 million states), in the 1980s we could only verify designs with hundreds of states.

Axiomise reports finding bugs in seconds in designs with 338 million flip flops (1.1 billion gates)

Around this time, Randal Bryant, from CMU’s electrical engineering department, began exploring circuit-level simulation, mainly logic simulation at the transistor level. Specifically, he considered the idea of three-valued simulation.

Timeline of model checking

Bryant explored the use of symbolic encoding to compress simulation test vectors. The biggest challenge was finding an efficient encoding for these symbols and the Boolean formulas built over them. In response, Bryant invented ordered binary decision diagrams (OBDDs). Ken McMillan, a graduate student working with Edmund Clarke on his PhD, got to know of this work, and in 1993 symbolic model checking was born.

OBDDs provide a canonical form for Boolean formulas that is often substantially more compact than the conjunctive or disjunctive normal form. Also, very efficient algorithms have been developed for manipulating them. To quote Ed Clarke from this paper: “Because the symbolic representation captures some of the regularity in the state space determined by circuits and protocols, it is possible to verify systems with an extremely large number of states — many orders of magnitude larger than could be handled by the explicit-state algorithms.”
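
The following Python sketch shows the canonicity idea on a toy scale. It is illustrative only, not how any production BDD package is written, and it omits the memoization real implementations rely on: nodes are stored in a “unique table”, so two equivalent Boolean functions under the same variable order end up represented by the very same node.

```python
# A minimal, illustrative sketch of reduced ordered BDDs (ROBDDs).
# Nodes are hash-consed via a "unique table", so equivalent Boolean functions
# (under the same variable order) share one node: the canonicity property.

class BDD:
    def __init__(self, num_vars):
        self.num_vars = num_vars
        self.unique = {}        # (var, lo, hi) -> node id   (the unique table)
        self.node_info = {}     # node id -> (var, lo, hi); 0 and 1 are terminals
        self.next_id = 2

    def mk(self, var, lo, hi):
        """Return the node for 'if var then hi else lo', with reduction and sharing."""
        if lo == hi:                          # reduction: the test is redundant
            return lo
        key = (var, lo, hi)
        if key not in self.unique:            # hash-consing: share equal subgraphs
            self.unique[key] = self.next_id
            self.node_info[self.next_id] = key
            self.next_id += 1
        return self.unique[key]

    def var(self, i):
        return self.mk(i, 0, 1)

    def neg(self, u):
        if u in (0, 1):
            return 1 - u
        v, lo, hi = self.node_info[u]
        return self.mk(v, self.neg(lo), self.neg(hi))

    def apply(self, op, u, v):
        """Combine two BDDs with a binary Boolean operator via Shannon expansion."""
        if u in (0, 1) and v in (0, 1):
            return int(op(bool(u), bool(v)))
        u_var = self.node_info[u][0] if u > 1 else self.num_vars
        v_var = self.node_info[v][0] if v > 1 else self.num_vars
        top = min(u_var, v_var)               # expand on the earliest variable
        u_lo, u_hi = self.node_info[u][1:] if u_var == top else (u, u)
        v_lo, v_hi = self.node_info[v][1:] if v_var == top else (v, v)
        return self.mk(top, self.apply(op, u_lo, v_lo), self.apply(op, u_hi, v_hi))

# Canonicity in action: x AND y and NOT(NOT x OR NOT y) reduce to the same node.
AND = lambda a, b: a and b
OR = lambda a, b: a or b
bdd = BDD(2)
x, y = bdd.var(0), bdd.var(1)
f = bdd.apply(AND, x, y)
g = bdd.neg(bdd.apply(OR, bdd.neg(x), bdd.neg(y)))
print(f == g)   # True: checking equivalence becomes a constant-time node comparison
```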

Together with Carl J. Seger, Bryant went on to invent another very powerful model checking technique called symbolic trajectory evaluation (STE). It has been used extensively for processor verification at companies such as Intel for more than 20 years. Several generations of Pentium floating-point units have been verified using STE, and Intel continues to use it to this day.

The main benefit that STE provides over conventional model checking is that it employs a three-valued logic (0, 1, X), with a partial order over a lattice, to carry out symbolic simulation over finite time periods. Using ‘X’ to denote an unknown value in the design, and performing Boolean operations on ‘X’, provides fast, automatic data abstraction. Coupled with BDDs, this yielded a very fast algorithm for finite-state model checking. A quick primer on STE is available here. The main disadvantage of STE is that it cannot specify properties over unbounded time periods, which limits its applicability to verifying digital designs for finite, bounded-time behaviour.
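
To illustrate the data abstraction that ‘X’ buys, here is a toy sketch of three-valued gate evaluation (my own illustration, not Intel’s or any tool’s implementation): a single X input stands for both 0 and 1 at once, yet many gate outputs are still fully determined.

```python
# A tiny, illustrative sketch of the three-valued (0, 1, X) logic behind
# STE-style symbolic simulation; 'X' abstracts an unknown value.

def t_not(a):
    return {"0": "1", "1": "0", "X": "X"}[a]

def t_and(a, b):
    if a == "0" or b == "0":
        return "0"          # a known 0 determines the output regardless of X
    if a == "1" and b == "1":
        return "1"
    return "X"              # otherwise the unknown propagates

def t_or(a, b):
    return t_not(t_and(t_not(a), t_not(b)))   # De Morgan over ternary values

# One X input covers both 0 and 1 at once: the data abstraction the text mentions.
print(t_and("0", "X"))   # '0'  (output known even though one input is unknown)
print(t_or("1", "X"))    # '1'
print(t_and("1", "X"))   # 'X'  (genuinely depends on the unknown input)
```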

Other prominent developments in formal technology still in use, combining model checking and some form of theorem proving for the verification of concurrent systems, include:

However, all of these languages and the formal verification tools that support them have primarily been applied to software or to the high-level modelling of systems, and less so to hardware. Using them requires a substantial background in mathematics.

Equivalence checking

Both theorem proving and model checking continue to be used for different reasons on both hardware and software, but a third form of formal verification is also now used extensively: equivalence checking.

Equivalence checking is one of the main formal technologies in use today

This method relies on comparing two models of a design and produces an outcome that either proves they are equal or provides a counter-example showing where they disagree. Its early forms targeted combinational hardware designs, but scalable sequential equivalence checkers now exist as well. These are used widely, most notably for combinational equivalence checking of an RTL design against a netlist, but also for sequential verification of netlist synthesis. It is now common practice for hardware designers to use equivalence checkers to confirm that an unoptimized digital design is functionally the same as an optimized one, where the optimization may have applied power-saving features such as clock gating.
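
As a toy illustration of the principle, the Python sketch below brute-forces a “miter” (the XOR of two models’ outputs) over all inputs of a small, made-up mux example, reporting either equivalence or a counter-example input. Production equivalence checkers work on RTL and netlists and use SAT and BDD engines rather than enumeration, so this is only a sketch of the idea.

```python
# An illustrative, brute-force sketch of combinational equivalence checking.
# Production tools use SAT/BDD-based engines; this simply enumerates inputs on a
# small "miter" (XOR of the two models' outputs) to find any disagreement.

from itertools import product

def check_equivalence(ref, impl, num_inputs):
    """Prove ref == impl on all inputs, or return a counter-example assignment."""
    for inputs in product([0, 1], repeat=num_inputs):
        if ref(*inputs) ^ impl(*inputs):          # miter output 1 => mismatch
            return "counterexample", inputs
    return "equivalent", None

# Hypothetical example: a 2:1 mux written two ways.
def ref_mux(sel, a, b):
    return a if sel else b

def buggy_mux(sel, a, b):
    return (sel & a) | (sel & b)                  # bug: second term should be gated by NOT sel

print(check_equivalence(ref_mux, buggy_mux, 3))
# -> ('counterexample', (0, 0, 1))  sel=0, a=0, b=1: ref gives 1, the buggy version gives 0
```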

This quick tour should give you a good sense of how formal verification techniques have steadily evolved, rising to each challenge before them. It is an established approach now coming into its own because of how it helps us confront increasing complexity. Formal verification also has, to take us back to the beginning, an elegance that is perhaps not as fully recognized as the elegance of the designs it enables.

Do you use formal verification? Share your thoughts.

We love enabling people to use formal verification through a combination of training, consulting, and services.


Dr Ashish Darbari

Ashish is the CEO of Axiomise — a formal verification training, consulting, and services company. He has been working in formal verification for over 20 years.