August 5, 2008 -- There has been considerable talk recently about the desire to have a unified coverage metric that could combine both formal verification and dynamic verification. In addition, Accellera is pursuing an effort to create a standards API that will allow coverage data to be created and shared between tools. To create a standard metric requires an understanding of the attributes of coverage within the different verification domains.
Three types of coverage
Functional verification involves stimulating the design, propagating the effects to an observable point, and checking the response for the correct behavior. This holds regardless of the form of functional verification undertaken and applies equally to simulation methods and formal verification. The completeness of this process depends on three things: how completely the design is stimulated (i.e., all possible behaviors have been exercised), whether the effects (both good and bad) of exercising those behaviors are propagated to an observable point, and how complete the checking mechanism is (i.e., all relevant design behaviors have been checked against the specification). Three different coverage terms are used to capture the "completeness" of these three components:
- Stimulus coverage: This measures the completeness of the stimulation the design has been put through, and is a direct measure of the controllability of the design. This is the most common interpretation of the term coverage in the world of functional verification. More specifically, the term "coverage" is widely used to denote stimulus coverage in a simulation-based framework, and a number of metrics have been proposed, such as code and functional coverage.
- Observed coverage: This is the classic production test situation, where the objective is to ensure that all fabricated parts behave identically to the reference. To do this, every point in the design must be both controllable and observable. Any differences on the observed outputs are indicative of a failed part. There have been limited developments in this area for functional verification.
- Response checking coverage: This provides a measure of completeness of the response checking mechanism that is part of the verification framework. An example of this would be a completeness measure of a set of properties.
The ultimate goal for a metric is to provide a single measure of completeness of verification overall, taking into account design stimulation, propagation and response checking. The term "functional qualification" has been coined to represent this combined notion.
Simulation-based verification method
Simulation-based verification combines directed and constrained random techniques. Directed testing targets specific behavior, ensuring that "known behavior" is tested at a minimum level. Constrained random testing complements this by flooding the design with large numbers of pseudo-random vectors in order to exercise more abstract elements of the design, such as corner-case behavior, maximum data-rate handling, and unforeseen conditions that more accurately depict real-world data flows.
Simulation-based verification is a sampling process of all of the possible input sequences that could be applied to the design. Several coverage metrics are used to measure the stimuli completeness including:
- Code-coverage metrics (line, conditional, expression, block).
- Finite State Machine metrics (state coverage, transition coverage).
- Transaction-based metrics.
- Functional-coverage points based metrics.
- Assertion-coverage metrics.
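To make the functional-coverage-point idea above concrete, here is a minimal sketch in Python of a bin-based coverage point, loosely modeled on the covergroup concept from simulation testbenches. All names and the packet-length example are illustrative, not any tool's actual API.

```python
# Minimal sketch of a functional-coverage-point tracker. Each bin is a
# predicate over sampled values; coverage is the fraction of bins hit.
class CoverPoint:
    def __init__(self, name, bins):
        self.name = name
        self.bins = bins                          # label -> predicate
        self.hits = {label: 0 for label in bins}  # label -> hit count

    def sample(self, value):
        # Record every bin whose predicate matches the sampled value.
        for label, predicate in self.bins.items():
            if predicate(value):
                self.hits[label] += 1

    def coverage(self):
        # Fraction of bins hit at least once.
        covered = sum(1 for n in self.hits.values() if n > 0)
        return covered / len(self.bins)

# Hypothetical example: cover small, medium, and maximum packet lengths.
pkt_len = CoverPoint("pkt_len", {
    "small":  lambda v: v <= 64,
    "medium": lambda v: 64 < v < 1500,
    "max":    lambda v: v == 1500,
})
for length in (32, 512, 1024):
    pkt_len.sample(length)
print(pkt_len.coverage())  # 2 of 3 bins hit
```

Note that a score like this only reflects which stimuli were applied; it says nothing about whether the responses were checked, which is exactly the gap discussed next.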
Because response-checking coverage has not been adopted in the simulation world, stimulus coverage is often perceived, erroneously, as a synonym for overall verification coverage. This results in an over-optimistic evaluation of the completeness of the verification process. Put another way, the coverage metrics in use today assume that full result checking is taking place and that all errors are successfully propagated to an output. Both of these assumptions are false, meaning there are serious holes in the simulation metrics being used today.
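The gap between stimulus coverage and actual verification can be shown with a deliberately contrived sketch: a design-under-test with a seeded bug is fully exercised, so the (crude) coverage measure reports success, yet the bug escapes because no checker compares the response against a reference. Everything here is invented for illustration.

```python
# Illustrative only: full stimulus coverage does not imply the design
# was verified, because nothing below checks the responses.

def dut_add(a, b):
    # Seeded bug: wrong result for one corner case.
    if a == 7 and b == 1:
        return 9
    return a + b

executed = set()

def traced_dut_add(a, b):
    executed.add("dut_add")   # crude stand-in for a coverage monitor
    return dut_add(a, b)

# Stimulus exercises the buggy path and a normal path...
results = [traced_dut_add(7, 1), traced_dut_add(2, 2)]

# ...so the coverage report looks complete,
assert "dut_add" in executed

# but with no checker comparing results against a reference (a + b),
# the wrong value sails through unnoticed.
print(results)
```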
Formal verification method
In formal verification, the correctness of a design is verified with respect to a desired behavior by checking whether the mathematical state of the design satisfies the specification of this behavior, expressed in terms of a temporal logic formula (specified by a set of properties) or a finite automaton.
The verification completeness of a given design is then dependent on:
- the completeness of the property set against which the design is formally verified,
- the accuracy of the constraints (the set of legal stimuli under which the design is analyzed), and
- whether the formal analysis was complete or partial. A partial analysis means the computation could not finish due to resource constraints such as time or memory.
Formal analysis attempts to verify the conformance of the design to the specification under all legal stimuli. When this analysis completes, the stimulus coverage for the design is 100%, i.e. the design has been sensitized to all possible legal input scenarios. Often, however, the computation hits resource limits, in which case the stimulus coverage is not 100%. Incomplete formal analysis may be measured in terms of the number of cycles of analysis the design has gone through, may run under a restricted set of inputs (e.g., a subset of inputs tied to a constant), or may start from an arbitrary state. Each of these precludes the guarantee that the property has been analyzed for all states that the design could get into.
The most accurate way to measure the stimuli coverage for partial formal analysis would be to determine the fraction of all reachable states that has been covered, but this itself is a hard problem.
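On a toy scale, the "fraction of reachable states covered" idea can be computed directly by enumerating the state graph. The sketch below does this with a breadth-first search over an explicitly listed next-state relation; the FSM and the set of analyzed states are made up, and real designs make this enumeration intractable, which is exactly why the problem is hard.

```python
# Toy illustration of "fraction of reachable states covered" for a
# partial formal analysis, via BFS over an explicit next-state relation.
from collections import deque

def reachable(transitions, start):
    """Return the set of states reachable from `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

# Hypothetical FSM; "DBG" is unreachable from the reset state "IDLE".
transitions = {
    "IDLE": ["RUN"],
    "RUN":  ["RUN", "DONE"],
    "DONE": ["IDLE"],
    "DBG":  ["IDLE"],
}
analyzed = {"IDLE", "RUN"}          # states a partial analysis visited
r = reachable(transitions, "IDLE")  # {'IDLE', 'RUN', 'DONE'}
print(len(analyzed & r) / len(r))   # 2 of 3 reachable states covered
```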
Measuring the exhaustiveness of a specification in formal verification ("do more properties need to be written?") has a similar flavor to measuring the exhaustiveness of the input sequences in simulation-based verification ("do more input sequences need to be created?"). Coverage metrics in use today tend to follow these natural ways of looking at the completeness problem, and are, therefore, likely to have a different flavor.
Combining coverage metrics from multiple verification techniques
An effective means of measuring overall verification coverage that combines both formal and simulation metrics remains elusive. As discussed, the traditional metrics used in simulation-based approaches are not quite applicable to formal. Similarly, the standard "partial" results from formal have no counterpart in the simulation world. Even though the metrics cannot be combined directly, it is clear that the two technologies can leverage each other in a number of different ways:
In the top-down view, simulation and formal can be leveraged in a systematic way as dictated by a verification plan. Effective verification planning adds predictability to the verification flow by specifying exactly what needs to be tested. Planning also provides the necessary structure to identify key areas where complementary verification technologies can be applied effectively. By making sure that the verification tasks are tied to the specification and by assigning an appropriate verification method (simulation or formal analysis) to each such task, one can significantly reduce, if not eliminate, redundant verification effort.
In the bottom-up view, formal analysis and simulation methods can cooperate to improve overall coverage. For instance, formal analysis can establish that certain coverage targets are not reachable for any legal input sequence, eliminating unnecessary simulation effort targeting those areas. In addition, formal analysis can automatically create specific scenarios which, when simulated, will fulfill specific coverage targets. Another example where formal analysis can reduce the simulation effort is where a design block is fully verified for a specific mode (perhaps the most complex one); the simulation effort can then be targeted at coverage in the untested modes. In the other direction, simulation can provide sequentially deep "seed states" (a set of states that serve as the starting point for formal analysis); the idea is that it would be practically difficult, if not infeasible, to reach these seed states using formal analysis alone.
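One of the bottom-up interactions above can be sketched in a few lines: a (hypothetical) formal run marks some coverage targets as unreachable, so they are dropped from the denominator and simulation stops chasing them. The target names and results are invented for illustration.

```python
# Sketch: formal reachability results pruning a simulation coverage
# model. Targets proven unreachable are excluded from the score rather
# than counted as misses.

sim_hits = {                 # target -> hit at least once in simulation?
    "fifo_full":  True,
    "fifo_empty": True,
    "parity_err": False,
    "dbg_mode":   False,
}

# Hypothetical formal result: dbg_mode can never occur under the
# legal-input constraints.
formally_unreachable = {"dbg_mode"}

viable = {t: hit for t, hit in sim_hits.items()
          if t not in formally_unreachable}
score = sum(viable.values()) / len(viable)
print(score)  # 2 of 3 viable targets hit
```

Without the pruning step, the same data would score 2 of 4 and the testbench would keep targeting a condition that can never occur.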
The notion of combining coverage from different verification methods by simply combining some coverage metric numbers is too simplistic, and there is no known practical and technically sound way to achieve this today. Different verification methods can be leveraged in a top-down way, by appropriately placing them in the verification plan, and in a bottom-up way, by using the methods to provide results for each other. When creating the APIs for coverage data, it is important to bear in mind the differences between these two classes of tools and to ensure that mechanisms exist to convey all of the available information across the interface.
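As a closing sketch of what "conveying all of the available information" might mean, the fragment below merges per-target coverage records while keeping each method's result distinct, rather than collapsing everything into one number. The record fields and status values are purely illustrative and are not the Accellera API.

```python
# Sketch of a coverage-exchange merge that preserves provenance: each
# target keeps a separate status per verification method instead of a
# single blended percentage. Field names are hypothetical.

def merge(records):
    """Combine per-target results, keyed by target then by method."""
    merged = {}
    for rec in records:
        entry = merged.setdefault(rec["target"], {})
        entry[rec["method"]] = rec["status"]
    return merged

db = merge([
    {"target": "overflow_chk",  "method": "formal", "status": "proven"},
    {"target": "overflow_chk",  "method": "sim",    "status": "hit"},
    {"target": "underflow_chk", "method": "formal", "status": "partial"},
])
print(db["overflow_chk"])   # both the formal proof and the sim hit survive
```

A consumer of such a database can still decide how to weigh a formal "proven" against a simulation "hit", but nothing is lost at the interface.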
By Brian Bailey and Rajeev Ranjan.
Brian Bailey is founder and owner of Brian Bailey Consulting and Rajeev Ranjan is with Jasper Design Automation.