Reflections on verification

Is the glass half empty or half full?

It is the week before DVCon, the industry’s premier design verification conference. I will be delivering a tutorial on formal verification challenges for datapath and control on Thursday 28 Feb, and presenting a research paper on applying formal in a smart manner to achieve scalable results on Wednesday 27 Feb, so it’s natural that verification is on my mind.

I’ve been reflecting a lot this week on the challenges and opportunities for validation and verification, and wondering whether the glass is half empty or half full.


In an industry where innovation in architecture, new designs and new technologies plays a vital role, test and verification are still playing catch-up. The consequence is an evolving mix of verification technologies, ranging from directed testing and constrained random simulation to formal verification, emulation and FPGA-based testing. That the landscape is evolving is good; that the best combination of technologies is not always deployed is causing a lot of growing pains for the industry.

Verification: A Necessary Evil

The only time verification makes the news is when there is a lack of it and a catastrophe hits us. Whether it is the infamous Intel FDIV bug, the Ariane 5 disaster, or the numerous security flaws that made headlines all of last year, verification gets into the news for all the wrong reasons.

Ariane 5 launch and explosion

Verification is neither taught at schools and universities nor discussed in the same way as architecture and design. If you were to sample the top 20 UK universities, you would not find many computer science or EEE departments with courses on test and verification. This becomes all too obvious when companies who need this talent go hiring and cannot find much homegrown talent educated in verification. I’m not sure what the trends are in other countries, but in the UK there is certainly a major shortfall of fresh talent coming out of the universities.

The necessity of validation and verification in general is not very well understood outside the remit of domain experts.

I recently had first-hand experience of this while exchanging emails with numerous UK university lecturers and professors, and my recent interactions as a visiting professor at the University of Southampton did not suggest anything different.

Chip design, or for that matter software design, takes the lion’s share of the glory; verification is largely left out of the limelight and has never been considered an attractive career option. If you have ever worked in a semiconductor company, you will know that architecture and design mean everything, and that architects and designers are treated as celebrities.

In fact, you don’t even have to be inside one of these companies; just look around. The buzz is all about new machine learning architectures or the euphoria around RISC-V, about how these designs and architectures will play a vital role and how machine intelligence will take over mankind.

Talk about RISC-V and you will hear about an elegant open-source architecture, ease of customization, power efficiency and performance. You will not, however, find test and verification discussed in the same way. Of course, we have great test and verification conferences, as well as RISC-V-specific events where verification does get discussed, but that discussion largely stays within those conferences and is limited to the experts in the area.

Verification: An Expensive Affair

The cost of testing and verification is significant: by some estimates, verification accounts for about 70% of the overall cost of a chip design.

While it is true that there are fixed costs, such as CAD tools, compute farms and other operational overheads, the most significant cost of all is the human one: labor, time, test bench development, running tests, and debugging and fixing bugs.

Debug dominates the cost of verification itself, with some estimates putting it at 50% or more of the overall verification cost. There is a strong bias in these estimates, because the predominant verification choice is simulation; that is also why I believe these costs are unreasonably high and in need of optimization.

Stimulus: The Challenge with Dynamic Simulation

While dynamic simulation has been the main workhorse of verification, it is no secret that it is inherently incomplete; as designs become more complex, it simply cannot find all the bugs, even with trillions or quadrillions of simulation cycles.

The reason is that simulation-based testing relies on finding good input stimulus, which must be applied explicitly, and for complex concurrent systems devising that stimulus is far too hard. Constrained random simulation, the best-known dynamic simulation technology, addresses this by randomizing the stimulus. Randomization is great, but it is not adequate.

Bringing up constrained random test bench environments is not cheap: there is a steep learning curve, and although code reuse and object-oriented programming certainly offer benefits, those benefits are outweighed by the challenge of incomplete stimulus generation. Please don’t tell me that code and functional coverage will solve this problem entirely; they won’t. Coverage only measures stimulus quality and indicates where the gaps may be, and the quality of coverage depends on the coverage specification.
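To make the stimulus problem concrete, here is a deliberately tiny Python sketch (my own toy model, not a real UVM/SystemVerilog test bench; the buggy value is hypothetical). A bug that triggers on exactly one 32-bit input is, for all practical purposes, invisible to purely random stimulus, and a coverage model that never names that value will happily report the gap as closed.

```python
import random

BUGGY_VALUE = 0xDEAD_BEEF  # hypothetical corner case: the design misbehaves only on this input

def dut(data: int) -> int:
    """Toy 'design under test': passes data through, except for one buggy corner case."""
    if data == BUGGY_VALUE:
        return 0            # the bug: this value is silently dropped
    return data

def constrained_random_test(num_cycles: int, seed: int = 0) -> bool:
    """Drive random 32-bit stimulus and check the output; return True if the bug was found."""
    rng = random.Random(seed)
    for _ in range(num_cycles):
        stimulus = rng.getrandbits(32)   # 1-in-2^32 chance per cycle of hitting the bug
        if dut(stimulus) != stimulus:
            return True
    return False

# Even a million random cycles almost certainly miss the single buggy value,
# yet a naive coverage model ("saw small, medium and large values") would read 100%.
print(constrained_random_test(1_000_000))   # almost always prints False
```

Constrained random environments narrow the search with constraints, but the principle stands: you only find what your stimulus happens to reach.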

What goes around comes around?

Even when system designers use assembly code and specific driver-based testing, the coverage that can be obtained is a function of the input programs, and that is only if coverage is collected for that testing at all. In my experience, most verification engineers collect coverage for constrained random simulation but far less for directed, driver-, assembly- or C++-based testing. Verification is therefore limited by the quality of the input programs being tested.

With the advent of portable stimulus, requirements-based testing is gaining ground, but it is not yet clear what its impact will be over the next few years.

Emulator-based testing, which relies on booting the full OS stack and application software together with the hardware design, has the same inherent challenge as simulation, although it is orders of magnitude faster.

Formal Methods: Predictable, Scalable, Exhaustive

Using formal methods to design and verify systems is a much more compelling proposition, and the good news is that, when applied properly, formal verification can be the best friend of both designers and verification engineers.

Formal verification is a great verification technology because you don’t need to apply explicit stimulus: the tools do this for you automatically. At the same time, the formal proof engines can build an exhaustive proof of bug absence through systematic state-space search, something that is not possible with any simulation-based verification.

Constrained-random simulation relies on randomness while formal verification does systematic search
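To illustrate what systematic state-space search means, here is a minimal Python sketch (a toy explicit-state model of my own, not how commercial formal tools are implemented). It enumerates every reachable state of a small FIFO-occupancy model and checks a safety property in each one, so the property is either refuted with a counterexample or proven for the whole model.

```python
from collections import deque

DEPTH = 4  # toy FIFO depth (assumption for illustration)

def next_states(occupancy: int):
    """All successor states of a toy FIFO-occupancy model: idle, push or pop."""
    yield occupancy                      # idle
    if occupancy < DEPTH:
        yield occupancy + 1              # push accepted only when not full
    if occupancy > 0:
        yield occupancy - 1              # pop allowed only when not empty

def safety_property(occupancy: int) -> bool:
    """The property to prove: the FIFO never overflows or underflows."""
    return 0 <= occupancy <= DEPTH

def check(initial: int = 0) -> str:
    """Breadth-first search over every reachable state: find a violation or prove its absence."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not safety_property(state):
            return f"counterexample: reachable state {state} violates the property"
        for succ in next_states(state):
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
    return f"property holds in all {len(seen)} reachable states (exhaustive for this model)"

print(check())   # no stimulus was written; the search itself covers every reachable behaviour
```

Production formal tools achieve the same effect with far more sophisticated engines (BDDs, SAT solvers, induction and abstraction), which is how they cope with state spaces that no explicit enumeration could ever visit.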

Formal verification is also great at finding corner-case bugs, much more quickly than any other form of verification when supported by a good methodology. The phrase “formal verification” is often used synonymously with property checking, sometimes referred to as static checking, though the field of formal is not limited to that and has other important and interesting flavors, such as equivalence checking and theorem proving.
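Equivalence checking works in the same spirit: prove that two descriptions of the same function agree on every input, rather than sampling a few. The Python sketch below (again a toy of my own; real equivalence checkers use SAT and BDD engines rather than brute-force enumeration) proves that a behavioral 8-bit adder and a bit-level ripple-carry description compute identical results for all 65,536 input pairs.

```python
def spec_add(a: int, b: int) -> int:
    """Specification: 8-bit addition with wrap-around."""
    return (a + b) & 0xFF

def impl_add(a: int, b: int) -> int:
    """Implementation: the same function described as a bit-level ripple-carry adder."""
    result, carry = 0, 0
    for i in range(8):
        abit, bbit = (a >> i) & 1, (b >> i) & 1
        result |= (abit ^ bbit ^ carry) << i
        carry = (abit & bbit) | (carry & (abit ^ bbit))
    return result

def equivalent() -> bool:
    """Exhaustively compare the two descriptions over every 8-bit input pair."""
    return all(spec_add(a, b) == impl_add(a, b)
               for a in range(256) for b in range(256))

print(equivalent())   # True: equality is proven for every input, not sampled
```

This is the flavor of check used to sign off optimized implementations against a golden reference: equality holds for all inputs, not just the ones a test happened to drive.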

Adoption of Property Checking

Although the trend towards using assertions and deploying formal property checking is increasing, as noted by Harry Foster recently, in my view it is not yet sufficient to deliver what is needed for rigorous functional verification.

We need more designers using formal early in the design cycle and more end-to-end functional verification carried out independently by verification engineers. Although I don’t expect the entire industry to start using theorem proving, I certainly expect more users to take up formal property checking and equivalence checking; both techniques are scalable, predictable and exhaustive when supported by good methodologies.
Wilson Research Group Functional Verification Study 2018, Taken from Mentor — A Siemens Business

In a recent announcement, Axiomise reported finding random reordering bugs in a billion-gate design with formal verification in about 100 seconds of proof time. This design had 338 million state bits, which corresponds to more than 10 to the power 100 million possible states! This particular configuration was so deep sequentially that it would have taken 10.8 million cycles to read the first input data out.

Simulating such a design for all possible input data combinations is simply not possible with techniques rooted in dynamic simulation. The best you can hope for is to get lucky with an input stimulus that hits one of the buggy cases.
Formal can find bugs in designs with 1.1 billion gates in seconds!
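As a quick sanity check on those numbers (my own back-of-the-envelope arithmetic, using the state-bit count quoted above and an assumed, very optimistic simulation throughput), the gap between the size of the state space and what any simulator could ever visit is staggering:

```python
import math

STATE_BITS = 338_000_000          # state bits quoted for the design above
SIM_CYCLES_PER_SEC = 1_000_000    # assumed, optimistic simulation throughput

# The number of possible states is 2^STATE_BITS; report it as a power of ten.
log10_states = STATE_BITS * math.log10(2)
print(f"possible states ~ 10^{log10_states:,.0f}")   # roughly 10^100 million

# Visiting a million states per second for the age of the universe (~4.4e17 seconds)
# covers only about 10^23.6 states: a vanishing fraction of the space.
log10_covered = math.log10(SIM_CYCLES_PER_SEC * 4.4e17)
print(f"states visitable in the age of the universe ~ 10^{log10_covered:.1f}")
```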

When Formal Goes Agile

Recently, I talked about an agile way of applying formal methods for design verification using a methodology called ADEPT FV. The key idea in that article is to start using formal methods in the first hour of coding and to continue using them until tape-out, with different levels of effort and return at each stage.

When formal methods are used systematically and assisted by good methodologies, bugs are found early in the design cycle, debug traces are much shorter, and the ability to establish bug absence conclusively is greatly enhanced. This provides the much-needed assurance required for automotive, secure, embedded and cyber-physical systems.

Concluding Thoughts

So, how can we make verification both efficient and cost-effective? One simple answer: use the correct combination of verification technologies, and use the right technology at the right time for the right reasons. Easier said than done.

Formal methods provide a real opportunity to make verification cool, cost-effective and efficient. While the adoption of formal methods in industry is certainly increasing, formal is still not the first choice for end-to-end functional verification, largely because good knowledge and methodologies are lacking. However, things are changing in a positive direction, albeit slowly.

At Axiomise, we are making a difference by enabling engineers and managers to understand the real potential of formal methods.

I believe the glass is half full.

In the coming weeks and months, we will talk more about applications of formal verification to digital design, starting with a brief history of formal.

Do you use formal verification? Do you want to learn how formal can change your life for the better? Contact me at ashish.darbari@axiomise.com, and if you’re in Silicon Valley next week, I would be happy to meet you in person.