
Managing code coverage and functional coverage is a complex task, especially for large SoC designs, and it becomes particularly daunting when you have to manage this kind of data for multiple SoC designs in flight at the same time. This is further complicated by the fact that such SoCs often leverage highly configurable silicon IP that can take on different forms from one SoC project to the next.

One problem I’ve had to deal with when managing functional coverage and code coverage results is how to respond when, for example, you have met your target number for code coverage but have not yet achieved adequate functional coverage, or vice versa. …



In short: Because the tests are generated randomly.

When you have a test that can generate 1 out of 1000 different types of stimulus at any given moment, it is not safe to assume that permutation #379, which you consider necessary to confirm some vital functionality of the chip design, was in fact generated. Assuming all 1000 permutations of stimulus are equally likely, permutation #379 has only a 0.1% chance of being generated on any single attempt.
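To make those odds concrete, here is a short Python sketch (the 1-in-1000 figure comes from the example above; the test counts are arbitrary) computing the chance that a uniformly random generator produces a specific permutation at least once across n tests:

```python
# Probability that one specific 1-in-1000 stimulus permutation
# (e.g. permutation #379) is generated at least once in n tests,
# assuming each test draws one permutation uniformly at random.
def hit_probability(n_tests, n_permutations=1000):
    miss_per_test = 1 - 1 / n_permutations  # chance a single test misses it
    return 1 - miss_per_test ** n_tests     # chance at least one test hits it

for n in (1, 100, 1000, 5000):
    print(f"{n:5d} tests -> {hit_probability(n):.1%} chance of hitting #379")
```

Even after 1000 random tests, the chance of having hit that one vital permutation is only about 63%, which is exactly why coverage has to be measured rather than assumed.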

The diagram depicts this circumstance: the state space of the design (the big circle), random tests (the grey clouds), and particular locations in the state space that are considered high-priority testing points (the small dotted-line circles). …



TL;DR:

Over time chips became too complex to profitably design and manufacture.[¹]

By getting computers to do some “guided guessing” (i.e., constrained random generation of tests), chip design engineers were able to get luckier than before at finding critical bugs before a chip went into high-volume manufacturing, thus sparing most chip design projects from otherwise certain failure.
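To give a flavor of what “guided guessing” means in practice, here is a toy Python sketch of constrained random generation. Real flows use SystemVerilog/UVM constraint solvers; the transaction fields and the two constraints below are invented purely for illustration:

```python
import random

# Toy constrained-random transaction generator, using simple
# rejection sampling: draw random values, keep only legal ones.
# Field names and constraints here are hypothetical examples.
def random_bus_transaction(rng):
    while True:
        txn = {
            "addr": rng.randrange(0, 2**16),
            "burst_len": rng.choice([1, 2, 4, 8, 16]),
            "write": rng.random() < 0.5,
        }
        # Constraint 1: a burst must not cross a 4 KB boundary.
        # Constraint 2: long bursts (>8 beats) are only legal for writes.
        if (txn["addr"] % 4096 + txn["burst_len"] * 4 <= 4096
                and (txn["write"] or txn["burst_len"] <= 8)):
            return txn

rng = random.Random(42)          # seeded for reproducible test runs
txns = [random_bus_transaction(rng) for _ in range(5)]
```

The key idea is that the randomness explores the state space broadly, while the constraints keep every generated test legal, so compute time is not wasted on stimulus the design could never see.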

Longer Form

Over time chips became too complex to profitably design and manufacture.[¹]

The diagram below illustrates this phenomenon:

Chip Design Complexity vs Chip Designer Productivity[¹]

What you should glean from this diagram is that between 1980 and 2010 the number of logic gates per chip grew by 58%/yr, while the number of transistors a design engineer could successfully utilize per month grew by only 21%/yr. …
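A quick back-of-the-envelope calculation shows how brutally those two rates diverge when compounded over the 30-year span quoted above (the rates are from the diagram; everything else is just arithmetic):

```python
# Compound the two growth rates quoted above over 1980-2010.
years = 30
complexity_growth = 1.58 ** years    # gates per chip, at 58%/yr
productivity_growth = 1.21 ** years  # gates a designer can use, at 21%/yr
gap = complexity_growth / productivity_growth

print(f"complexity grew ~{complexity_growth:.2e}x, "
      f"productivity ~{productivity_growth:.2e}x, "
      f"gap ~{gap:,.0f}x")
```

The gap works out to three orders of magnitude, which is the productivity crisis that constrained random verification (among other techniques) was invented to close.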



Verification is the process of taking an implementation of a chip at some level of abstraction and confirming that the implementation meets some specification or reference design.[¹]

The purpose of verification is to identify and correct design defects in the chip before it goes into manufacturing. There’s a verification step for each step in the chip design process, as shown in the diagram below.

Chip Design Process[²]




I’m going to write a series of blog posts about my chip design verification experience, and I hope you’ll find them interesting.

If you have any feedback about what I write, feel free to comment.

This is what I’m going to write about in a few blogs:

Michael Green
