February 23, 2011 -- No matter the industry, the introduction of automation technology always tends to produce angst. Think of the story of American folk hero John Henry, who raced against a steam-powered hammer and won, only to drop dead with the hammer in his hands. Compared to hammering rail spikes, designing and verifying new computer chips presents engineers with a subtler challenge. It's not a fear of being replaced by new tools that keeps these engineers up at night. Rather, the gnashing of teeth is over how best to apply new automation techniques without losing the benefits of older, more manual processes. In the design and verification space, this tension has been worked out many times — for example, as designers moved from gate-level design descriptions to more abstract ones. Today, specifically in the verification space, this transition is occurring as new automation is introduced into the stimulus-generation process.
Over a decade ago, advances in design and verification complexity led to the introduction of constrained random generation, which increased automation of stimulus generation. Two key problems, in particular, paved the way to acceptance of more automation. First, verifying a large complex design required much more verification stimuli than could reasonably be created manually, in the form of directed tests, during a typical verification cycle. Second, even very experienced verification engineers were unable to envision some of the more complex verification scenarios — cases that were allowed by the design specification, but not obvious. Constrained random stimulus generation proved to be a useful antidote. It increased by an order of magnitude or more the amount of verification stimuli that an engineer could create in a given time period. In addition, because the technique produces stimulus across the full stimulus space, it often spits out valid but non-obvious cases, which often leads to bugs being discovered during simulation rather than in the lab or the field. This is good news, since bugs always get cheaper to fix the further upstream in the design cycle they are caught.
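The idea behind constrained-random generation can be sketched in a few lines. In practice this is done with constraint solvers in languages like SystemVerilog or e; the following is only a minimal Python analogy, and the packet fields, ranges, and the "control packets are short" constraint are illustrative assumptions, not part of any real protocol.

```python
import random

def random_packet():
    """Draw one stimulus item that satisfies simple, hypothetical protocol constraints."""
    pkt = {
        "length": random.randint(1, 1500),           # legal length range (assumed)
        "kind": random.choice(["data", "ctrl"]),     # packet type (assumed)
    }
    # Constraint: control packets must be short (illustrative rule).
    if pkt["kind"] == "ctrl":
        pkt["length"] = random.randint(1, 64)
    return pkt

# One engineer-written generator yields an unbounded stream of legal stimuli,
# where each directed test would have produced exactly one case.
stimuli = [random_packet() for _ in range(1000)]
```

Every item drawn is legal by construction, which is the key contrast with hand-written directed tests: the engineer describes the space of legal stimuli once, and the generator explores it.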
Even as it mitigated challenging verification problems, the constrained random stimulus approach introduced a few new issues. For starters, it removed much of the verification engineer's control over exactly what stimuli were produced. This is a positive outcome in many regards, because the new tools produced important, but non-obvious, stimuli that engineers were unlikely to think of. However, the change necessitated a different way of tracking test completion: functional coverage. Functional coverage gave the verification engineer a means to efficiently describe the collection of important stimulus cases, and to leverage automation in tracking which of those cases had been applied to the design. The benefit of this automation is that, together, constrained-random stimulus generation and functional coverage now allow a verification engineer to create an order of magnitude more verification stimuli over the course of a given verification cycle.
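Functional coverage amounts to binning the stimulus space and counting hits. Real flows use SystemVerilog covergroups; this toy Python sketch, with assumed packet-length bins, shows the bookkeeping:

```python
import random

# Hypothetical coverage bins for a packet-length field (boundaries assumed).
bins = {
    "tiny":   range(1, 65),
    "medium": range(65, 1024),
    "large":  range(1024, 1501),
}
hits = {name: 0 for name in bins}

def sample(length):
    """Record which coverage bin a generated stimulus falls into."""
    for name, rng in bins.items():
        if length in rng:
            hits[name] += 1

# Run the random generator and let the coverage model track completeness.
for _ in range(10_000):
    sample(random.randint(1, 1500))

# Coverage = fraction of bins hit at least once.
coverage = sum(1 for n in hits if hits[n] > 0) / len(bins)
```

The engineer's effort moves from writing individual tests to defining the bins, and the tool answers the question "what has actually been exercised?" automatically.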
The rolling snowball of complexity
When it comes to the semiconductor industry, complexity is rather like a snowball rolling downhill, especially since the manufacturing wizards continue to forestall the expiration date of Moore's Law. This means it will be increasingly difficult to achieve coverage closure, which in turn will lead to even more demand for automation in the form of intelligent testbench automation. Questa inFact from Mentor Graphics is one such intelligent testbench automation tool. Cadence and Synopsys each have their own versions of this type of technology and several start-ups are bringing their own solutions to the market, as well. The amount of activity in this space points to the increasingly ubiquitous challenge of reaching functional coverage closure with pure-random generation of stimuli.
Algorithms in Questa inFact comprehend the structure of the stimulus model and are aware of the stimulus functional coverage goals. These algorithms prioritize stimuli that target the goals described by the coverage model while eliminating redundant stimuli. Once a given coverage goal is satisfied, stimulus generation reverts to randomness, ensuring the continued appearance of "surprising" stimuli outside of the functional coverage model. The net result is another order of magnitude gain, this time in the rate of functional coverage closure. In addition, the elimination of redundant stimulus helps to make the march toward coverage closure more predictable.
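The general idea of coverage-directed generation described above can be caricatured in Python. To be clear, this is not the actual algorithm inside Questa inFact or any other tool; it is a toy sketch of the behavior the paragraph describes: bias generation toward open coverage goals, then revert to pure random once a goal is closed. Bin names and ranges are assumptions.

```python
import random

# Hypothetical coverage goals over a packet-length field.
bins = {"tiny": (1, 64), "medium": (65, 1023), "large": (1024, 1500)}
covered = set()

def next_stimulus():
    """Target an open coverage goal if any remain; otherwise go pure random."""
    uncovered = [b for b in bins if b not in covered]
    if uncovered:
        lo, hi = bins[random.choice(uncovered)]   # prioritize an open goal
    else:
        lo, hi = 1, 1500                          # closure reached: back to random
    return random.randint(lo, hi)

def observe(length):
    """Mark any goal satisfied by the generated stimulus."""
    for name, (lo, hi) in bins.items():
        if lo <= length <= hi:
            covered.add(name)

for _ in range(10):
    observe(next_stimulus())
```

Because each draw lands in a still-open bin until closure, the three goals here close in at most three draws, whereas pure-random generation would waste most draws on the already-covered wide "medium" bin. That redundancy elimination is the source of the predictability gain.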
It's not enough to improve speed and efficiency in reaching a given goal. Another challenge is how to simultaneously find ways to increase the quality of results. The metric to watch is functional coverage, which gauges completeness in a coverage-driven verification flow.
With pure-random stimulus, it is quite typical for 95% to 99% coverage to be acceptable for considering verification complete. The problem is that for designs with hundreds of millions of gates or more, and the associated sprawling functional coverage model, even if the 95% to 99% target is achieved, the number of uncovered cases is quite large. Furthermore, the cases most likely to remain uncovered are those likely to expose a design bug. To see why, consider the following: A coverage model describes typical and corner cases that the verification engineer believes to be important. In many designs, the corner cases are statistically more likely to uncover a design bug. For example, the coverage model for packet length may describe several very small packet sizes, several very large packet sizes, and several medium-size packet-size ranges. From a design perspective, very large packets and very small packets are more likely to trigger a corner-case bug. Unfortunately, from a random-generation perspective, these cases are the least likely to be produced.
Figure 1. Corner-case stimulus generation probabilities.
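The skew depicted in Figure 1 is simple arithmetic: under uniform random generation, the probability of hitting a bin is proportional to its width. With assumed numbers (a 1..1500 length range and eight-value corner bins at each end):

```python
# Probability of hitting narrow corner bins under uniform random generation.
total_lengths = 1500   # legal lengths 1..1500 (assumed)
corner_small  = 8      # lengths 1..8        (assumed corner bin)
corner_large  = 8      # lengths 1493..1500  (assumed corner bin)

p_corner = (corner_small + corner_large) / total_lengths
# Roughly 1% of random stimuli land in either corner bin, so a cross of a
# corner length with some other rare condition can take many thousands of
# draws to hit by chance.
```

The bins most likely to expose bugs are exactly the ones random generation visits least often.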
It is often the case that expectations are calibrated, consciously or not, to the capabilities and limitations of the available technology. I often find this to be the case with functional coverage models. It's not uncommon for a verification engineer to state that the size of a given cross-coverage has deliberately been limited because it couldn't reasonably be covered with purely random stimuli. Improving the granularity of the functional coverage model is one of the most useful outcomes of applying intelligent testbench automation. Since intelligent testbench automation can achieve coverage closure 10 to 100 times faster than previously possible, the granularity of the coverage model can be increased by a factor of 2 to 5 while still improving time to coverage closure.
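The reason cross-coverage size gets deliberately limited is that it grows multiplicatively with bin granularity. A back-of-the-envelope sketch, with assumed bin counts:

```python
# Cross-coverage size is the product of the bin counts on each axis.
length_bins = 4                 # assumed bins on a packet-length coverpoint
kind_bins   = 3                 # assumed bins on a packet-kind coverpoint
base_cross  = length_bins * kind_bins          # 12 cross bins

# Doubling the granularity of both axes quadruples the cross:
finer_cross = (2 * length_bins) * (2 * kind_bins)   # 48 cross bins
```

A 10x-100x faster path to closure is what makes absorbing that multiplicative growth practical.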
Nearly every verification engineer I talk to can point to some aspect of the current design that is important to verify but simply doesn't fit into the available schedule. Intelligent testbench automation facilitates fitting more verification into the available schedule in two ways. First, the raw boost in efficiency provided by automated functional-coverage closure means that simulation cycles will be available to tackle an expanded scope of verification. Second, when purely random stimuli are used, a huge amount of human effort is typically spent analyzing coverage results, identifying coverage holes, and determining whether the coverage holes can be addressed by running more random simulation or whether a directed test should be created instead. Automated functional-coverage closure eliminates this manual effort, freeing up verification engineers to focus on their primary task: developing new verification scenarios and analyzing failures.
One example of extending the scope of verification is explicitly targeting sequential coverage. A common concern is that efficiently targeting functional-coverage closure will produce fewer stimulus sequences than the inherent redundancy of pure-random generation does. In fact, sequential coverage is often omitted from functional coverage models because, while pure-random generation does produce sequences of stimuli, achieving comprehensive coverage of those sequences is much more difficult than achieving coverage of a non-sequential coverage model of the same size. With intelligent testbench automation, interesting sequential coverage can be targeted efficiently and explicitly.
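To see why sequential coverage is harder to close by chance, consider a toy sequential-coverage model: cover every ordered pair of back-to-back packet kinds (a 2-deep sequence cross; the kind names are assumptions). Pure-random generation is a coupon-collector process over the pairs:

```python
import random

# Hypothetical sequential-coverage goal: every ordered pair of consecutive kinds.
kinds = ["data", "ctrl", "retry"]
goal = {(a, b) for a in kinds for b in kinds}   # 9 sequence bins
seen = set()

prev = None
draws = 0
while seen != goal:
    cur = random.choice(kinds)                  # pure-random next stimulus
    if prev is not None:
        seen.add((prev, cur))                   # record the observed transition
    prev = cur
    draws += 1
```

Three kinds already yield nine sequence bins; deeper sequences or more kinds grow the goal set exponentially, which is why random closure of sequential coverage lags so far behind non-sequential coverage of the same nominal size, and why explicit targeting pays off.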
Finally, if there is still time left in the verification schedule after the granularity of the coverage model has been improved and the scope of verification has been extended, running simulation with fully random stimulus is a reasonable option to bug-prospect for cases far outside the coverage model. At their core, intelligent testbench-automation tools rely on random generation. Consequently, it is typically simple to enable a pure-random mode. Beyond that, the benefits of random generation are already retained throughout the process of functional-coverage closure: once a given individual functional-coverage goal is met (i.e., a coverpoint is covered), subsequent stimulus generation for that data field is done randomly.
Man and machine working together
Applying automation to a process must always be done with an eye toward whether the existing process-completeness metrics accurately capture the desired result. Automated functional-coverage closure using intelligent testbench automation is no exception. For best results, functional-coverage metrics must be re-evaluated to determine whether they accurately express the desired scope of verification, or simply represent what was feasible to achieve using purely random stimuli. Completion metrics must also be reevaluated in the face of automation that enables efficient achievement of 100% functional coverage.
To go back to the John Henry story, a better metric might have considered how to make the most of man and machine working together, perhaps by improving both the speed and accuracy of those driven rail spikes. Sure, America would have been deprived of a folk hero. But John Henry's life would have been spared and I'm guessing those railroad tracks would have been laid even faster.
A similar complementary pairing of skilled engineers and intelligent testbench automation can boost the comprehensiveness of verification, raise the efficiency of verification engineers and make the process of coverage closure predictable.
By Matthew Ballance.
Matthew Ballance is a Verification Technologist at Mentor Graphics, specializing in the inFact Intelligent Testbench Automation tool. He has 12 years of experience in the EDA industry, and has previously worked in the areas of hardware/software co-verification and transaction-level modeling.
Go to the Mentor Graphics Corp. website to learn more.