August 5, 2008 -- Shrinking silicon geometries enable larger SoC-type designs in terms of raw gate count, and many of today's applications take advantage of this trend. An important point that is often missed is the accompanying growth in verification complexity. Indeed, the verification effort for a design that is twice as big more than doubles: the verification team has to deal with a larger state-space, and the application, which is what the verification environment attempts to mimic, gets much "bigger."
Simply building faster tools, such as simulators, will not solve this problem. Rather, it requires capabilities and associated methodologies that make it easier to set up complex verification environments: environments that, in the end, ensure that the application on the chip works as expected. Fortunately, SystemVerilog provides a compelling advantage in addressing the complexity challenge, not simply as a new language for describing complex structures, but as a platform for enabling advanced methodologies and automation.
Each of the three key aspects of SystemVerilog has a significant role. The synthesizable design constructs that have been added to SystemVerilog make it possible for designers to code at a higher level of abstraction, often mapping more accurately to the function they are designing and the way they think about it. The new assertions capability lets users very concisely describe a behavior that needs to be checked. But it is the verification aspect that provides the biggest bang for the buck, as evidenced by its rapid adoption.
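To illustrate how concisely an assertion can capture a behavior to be checked, here is a minimal sketch; the signal names and timing window are invented for illustration, not drawn from any particular design:

```systemverilog
// Hypothetical handshake rule: every request must be granted
// within 1 to 4 clock cycles, unless reset is asserted.
module handshake_checker (input logic clk, rst_n, req, gnt);
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;
  endproperty
  assert property (p_req_gets_gnt)
    else $error("req was not granted within 4 cycles");
endmodule
```

The same check written as procedural Verilog would need explicit counters and state tracking across many more lines.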
The verification component of SystemVerilog brings high-level programming capability to design and verification teams. In the past, many teams utilized a C/C++ testbench — native or SystemC-based — to drive a more efficient, realistic test of the design. SystemVerilog brings structure to this process by providing a standard object-oriented language with which to do the same. Tools can now be developed to support a more standard, structured process in a way that is not intimidating to the engineers who previously coded in Verilog or VHDL and are not familiar with a language such as C++.
The SystemVerilog testbench (SVTB) language still resembles Verilog code for the most part. In addition, it includes built-in support for functionality that is commonly needed during verification, such as constrained randomization and coverage monitoring. The relatively simple notion of constrained randomization allows engineers to develop sophisticated test scenarios with very few lines of code. It is also a natural progression for the object-oriented model to spur standard class libraries and related OVM (Open Verification Methodology) and VMM (Verification Methodology Manual) methodologies, both of which enable engineers to create modular, reusable verification environments in which components communicate with each other via standard transaction-level modeling interfaces. It also enables intra- and inter-company reuse through a common methodology and classes for virtual sequences and block-to-system reuse. This reuse can be extended to off-the-shelf verification IP components that can be used to verify specific functionality such as bus protocols like USB.
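To make the constrained-randomization point concrete, the sketch below (with invented transaction fields and constraint values) describes a whole family of legal bus transactions in a few lines; the simulator's constraint solver then generates the individual stimuli:

```systemverilog
// A hypothetical bus transaction: randomizing it yields legal
// stimulus without enumerating each test case by hand.
class bus_txn;
  rand bit [31:0] addr;
  rand bit [7:0]  data;
  rand enum {READ, WRITE} kind;

  // Keep addresses in a legal window, and bias toward writes.
  constraint c_addr { addr inside {[32'h1000:32'h1FFF]}; }
  constraint c_kind { kind dist { READ := 1, WRITE := 3 }; }
endclass

module tb;
  initial begin
    bus_txn t = new();
    repeat (10) begin
      if (!t.randomize()) $error("randomization failed");
      $display("kind=%s addr=%h data=%h",
               t.kind.name(), t.addr, t.data);
    end
  end
endmodule
```

Adding a new constraint block, or overriding one in a derived class, redirects the entire stimulus stream without touching the test scenario code.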
Clearly, the object-oriented nature and constrained randomization capabilities of SystemVerilog provide a significant leap forward in verification technology. However, all this cool stuff also brings new challenges for the tools that support the verification environment, especially those used for debug and analysis. Two of the main challenges are the debug of dynamic behavior and static comprehension of the source code.
The first thing that comes to mind when engineers examine or debug the dynamic behavior of their designs is waveforms. Some debug tools have taken behavior analysis to a significantly advanced level by letting engineers examine dynamic activity within the context of the source code itself and to trace a specific behavior back in time with the push of a button. This analysis relies on the well-understood notion of easily recording (dumping) value-change data from simulation. The data is usually recorded in a highly-optimized, dedicated database, such as SpringSoft's Fast Signal Database (FSDB). Once the simulation data has been recorded, tools accessing this database can provide specialized views and engines that automate and make more efficient the process of evaluating and debugging design behaviors. When debug tools also have access to the design source code, they can put two-and-two together to automatically trace to the root cause of problem behaviors. This state-of-the-art in design debug and analysis is well accepted today and continually evolving.
Unfortunately, this process is not applicable to testbenches. To start, there is really no concept of waveforms or value-changes in programmatic testbench code. Instead, SystemVerilog testbenches have classes that can be created at any point in time, with functions that are called to perform a particular task (such as driving a random transaction into the design). Most of these functions execute in zero time. Hence the notions of value-changes, and of representing those changes as traditional waveforms, do not apply, at least not directly.
The SystemVerilog verification component is, for all intents and purposes, a software language. Designers and verification engineers alike rely on debug tools to understand how the design and verification environment is set up. Traditional hardware description languages (HDLs) are highly structured, and as such can be easily represented hierarchically in schematics or state diagrams. Not only are these contextually appropriate for the task at hand, but they present information in a way that makes it possible for engineers to comprehend it more easily. By contrast, software programs such as SystemVerilog testbenches and C++ have classes that are created, instantiated, and extended everywhere. For engineers, especially those who come from the hardware domain, it is no easy feat to make sense of it all. The burden now falls to debug tools, which are tasked with inferring data and creating static views that are both useful and intuitive.
How this is addressed today and drawbacks
The obvious next question is how these verification challenges are being addressed today. Studies show that SystemVerilog is becoming a widely adopted element of verification (testbench) methodologies. Today, there are two primary strategies employed to help engineers comprehend, analyze, and debug SystemVerilog testbench environments.
One approach utilizes the built-in support in the language for logging information. The constructs employed are $display and other printf-style system tasks, whether used directly or through a pre-packaged class library such as OVM or VMM. These allow engineers to log information to text files. The whole idea is to record some history into these log files, which can be analyzed after simulation to get a sense of what the testbench was doing through time. Remember, the design data can be recorded into the debug database for visualization and analysis in a debug tool. For the testbench data, however, engineers must revert to the low-level text file logs, and then manually (and painfully) correlate them to what the design is doing on the time axis. The result is a disparate flow that relies on low-tech, text-based recording of testbench activity, as illustrated in Figure 1.
Figure 1: A flow based on text-based logging of testbench activity.
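The logging approach described above typically looks like the sketch below; the module and message contents are hypothetical, and the commented-out line shows how the same message would pass through a class library's report server instead (the `ovm_info macro shown is from the OVM library named in the text):

```systemverilog
// Text-based logging as commonly practiced: raw $display calls
// that end up interleaved in a simulation log file.
module tb;
  logic [31:0] addr;
  initial begin
    addr = 32'h1200;
    $display("[%0t] DRV: sending addr=%h", $time, addr);
    // With a class library such as OVM, the same message would be
    // routed through its report server, e.g.:
    //   `ovm_info("DRV", $sformatf("sending addr=%h", addr), OVM_MEDIUM)
  end
endmodule
```

Either way, the output is plain text, which is exactly why correlating it back to waveform time is a manual exercise.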
Another strategy often employed by engineers is to use the simulator's interactive capability in a GDB-like fashion. As with GDB for C/C++, engineers can set breakpoints as well as inspect variable values and stack traces at a particular time. There are several drawbacks with this approach, however. First, the engineer has to know when and where to set breakpoints, so that the simulator stops at the simulation time and/or condition targeted for further probing. Often, this involves guesswork and requires several iterations to get to the exact point. Moreover, to get to the breakpoint, the simulator run could take hours, or even days, because it has to simulate the whole environment up to that point. It is often not practical to consume valuable engineering resources waiting for the simulator to reach the desired breakpoint.
A better way forward
Now, let's discuss how strategies employed in the software domain can help engineers meet head on the challenges of testbench verification and debug. It is clear that simply extending the traditional hardware debug techniques to testbench debug is not sufficient or even feasible. Gaining insight into what is going on in the testbench during simulation requires a new approach that builds upon the logging and interactive concepts previously discussed. The key is to make the logging process much more sophisticated and automated so that most of the debug and analysis of testbench activity can be done at that level. The goal is to utilize an advanced logging mechanism to pinpoint the location of a problem. If the problem is identified to be on the testbench side and more details are needed, engineers would then go into a tightly integrated interactive mode.
Logging… done properly
Logging has been widely used in systems and software. For example, operating systems log information all the time for later analysis and debug if needed. Similarly, most software systems log information. So it is no surprise that logging is a key pillar in SystemVerilog testbench debug and analysis.
The dominant SystemVerilog methodologies in use today provide basic libraries that enable users to log information from their testbenches. The problem, however, has been in visualizing that information, whether it is instrumented using raw SVTB system tasks such as $display or through specialized base classes. All logging done through these mechanisms typically ends up in text files.
To make debug of the design and testbench together a practical, efficient process, the logging mechanism must be flexible in terms of usage and the resulting output automatically captured in the same debug database as the design results (such as the de-facto standard FSDB format). This is fundamental to enabling advanced visualization, debug and analysis functionality. The proposed flow and usage are shown in Figure 2.
Figure 2: A flow based on logging user-instrumented information into an FSDB database accessed by SpringSoft's Verdi™ Automated Debug and Siloti™ Visibility Automation systems.
The task to log information, for example, $fsdbLog, needs to be highly flexible, allowing engineers to insert it anywhere in their code, including existing base class libraries that are intended for logging. The logging task must not only capture messages, but also severities, variable states, etc. as properties or attributes of the message. In addition, the call-stack must be automatically captured to leverage in further debug automation. The upside of this approach is that since all the data goes into the same debug database as the one used for HDL recording, visualization support can be added to the debug system to analyze this logged information alongside other data, such as HDL value-change and assertion states. The net result is a unified system that lets engineers observe what is going on in the entire environment. As shown in Figure 3, the data is visualized in standard waveforms as well as via specialized applications such as a time-synchronized table view which, like a spreadsheet, can be filtered, configured, etc.
Figure 3: Logged data can be visualized in waveforms as well as spreadsheet-like table view.
Special-purpose features can be added to these views to help engineers easily identify messages of interest among the logged data. For example, advanced filtering and highlighting can be used to filter or colorize specific messages based on some condition (e.g., highlight in red any messages that have "ERROR" as their label and "address=5"). Logged message viewing applications could also enable engineers to quickly search and locate messages that match user-specified search criteria.
The automatic capture of the call-stack during logging provides unique opportunities for further automating debug. For example, a logged message can be synchronized with the source code using drag-and-drop from the waveform to the source code, which could then jump to where the message originated. In addition to the obvious comprehension advantages of this capability, it can also be used to quickly set breakpoints at the right place to drive interactive simulation from the debugger.
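The $fsdbLog task named above is tool-specific, so the sketch below is a hedged illustration only: the call is shown commented out, its argument order and attribute mechanism are assumptions rather than the documented API, and the scoreboard class is invented. The point is where such a call would sit in testbench code and what it would carry (severity, label, message, attributes) into the same FSDB database as the HDL value-changes:

```systemverilog
// Sketch of database-backed logging from a checker. The logging
// call's signature is illustrative, not the documented API.
class scoreboard;
  int unsigned mismatches;

  task check(bit [31:0] expected, bit [31:0] actual);
    if (expected !== actual) begin
      mismatches++;
      // Hypothetical call: severity and label, a formatted message,
      // and an attribute/value pair, all recorded with the call stack:
      //   $fsdbLog("ERROR", "SCB",
      //            $sformatf("exp=%h got=%h", expected, actual),
      //            "mismatches", mismatches);
    end
  endtask
endclass
```

Because the record carries structured attributes rather than free text, the table and waveform views described earlier can filter and colorize on them directly.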
… to interactive
Despite the drawbacks discussed earlier, interactive simulation is often the only mechanism available for delving into the details of testbench code. While logging can provide a coarse high-level view of testbench activity, interactive simulation of testbenches can provide the GDB-like data that is required to understand their behavior, such as the values of variables at a specified point in time and detailed thread information. Most simulators, when invoked in interactive mode, typically have access to all this information, albeit in a more primitive manner.
By bridging the ability to log messages with a unified design-testbench debug system, engineers can effectively use logging at the outset to determine the testbench code (location and time) that needs to be analyzed in more detail. With such a flow (shown in Figure 4), a logged message can be dragged-and-dropped into the source code view so engineers can set a breakpoint, and then invoke interactive simulation in the background with the source-code view of the debugger serving as the master cockpit. In this way, engineers can drive the simulator to a specific time or breakpoint, so that values, call stacks, and thread information can be inspected (automatically or user-driven). This mode of operation is very similar to the GDB use model deployed by C/C++ programmers.
Figure 4: The use of a unified and full-featured debug system to drive interactive testbench simulation can allow for more user-friendly set-up and visualization and analysis of results.
There are several compelling advantages to using the debugger to drive the simulator and display its results. Engineers can use the same environment to debug and analyze the behavior of the design as well as the testbench message logs. Additionally, debuggers provide a more user-friendly and familiar environment in which to drive, view, and analyze the testbench itself. For example, as shown in Figure 4, having variable watch and stack views alongside the source code can greatly enhance the user experience when debugging testbench code.
… and comprehension
As discussed, by leveraging the testbench capabilities of SystemVerilog, engineers can create more sophisticated scenarios to test designs, while at the same time increasing coverage. But, on the flip side, the task of understanding the structure and function of such complex testbenches can be daunting.
Debuggers have always excelled at providing a platform for comprehending HDL source code. Commonly-used features, such as design browsing with an instance-based hierarchical representation and tracing of loads and drivers, are built upon a knowledge database that is automatically extracted from the source code. While some of this same functionality can be extended to testbench code, the more exciting opportunity lies in building on this knowledge-driven foundation to take testbench comprehension even further. Again, many of the ideas proposed here take advantage of practices that have already proven successful in the software domain. For example, we've discussed the drag-and-drop of messages captured during simulation to the source code and the automatic identification of the code from where the message originated. These help to close the loop between the source code and simulation.
Design code is typically built hierarchically, with lower-level modules instantiated at higher levels and some modules instantiated multiple times. Conceptually, this can be represented in a tree-like fashion from the top-level module all the way down to the lower-level modules. Testbench code, however, like C++ and other object-oriented languages, is primarily made up of declarations of classes, functions, and variables. During testbench debug and analysis, engineers want a quick way to navigate to a class, function, variable, or the newer SystemVerilog constraint and coverage code. Debug and analysis tools must be able to import this type of code and display a meaningful representation that takes into account the declaration-centric nature of testbench code (see Figure 5). This hierarchical representation must also be linked to the actual source code so that when a class, function, or other entity is selected, the corresponding source code is also displayed.
Figure 5: An instance-based hierarchy representation and UML-like class inheritance and relationship view are critical to SystemVerilog testbench code comprehension.
Given the object-oriented nature of SVTB code, engineers can easily reuse existing code and create reusable code themselves. Classes are often derived from existing base or parent classes. This inheritance allows them to retain all the capabilities of the parent while at the same time allowing for variables or functions to be replaced with new ones, or entirely new ones to be added. While declaration-based views can be enhanced to show some class hierarchy, most classes have complex relationships with other classes, particularly as engineers understandably take advantage of SVTB object-oriented-ness (reusability) in its purest sense. To represent this "organic" nature of classes, the concept of UML class diagrams can be borrowed from the software world. Figure 5 shows a tree-like structure to illustrate the hierarchical and inheritance relationship for a user-selected class and its members.
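A short sketch (with invented class names) shows why these relationships quickly become "organic": a derived class silently inherits fields and constraints from its parent while overriding and extending behavior, and a tool must recover exactly this structure to draw a UML-like inheritance view:

```systemverilog
// Base transaction with a virtual print hook.
class base_txn;
  rand bit [31:0] addr;
  constraint c_addr { addr inside {[32'h1000:32'h1FFF]}; }
  virtual function void print();
    $display("base_txn: addr=%h", addr);
  endfunction
endclass

// Derived class: inherits addr and its constraint, adds a field,
// and overrides print(). None of this is visible at the point
// where a burst_txn object is actually used.
class burst_txn extends base_txn;
  rand bit [3:0] len;
  constraint c_len { len inside {[1:8]}; }
  virtual function void print();
    super.print();
    $display("burst_txn: len=%0d", len);
  endfunction
endclass
```

Multiply this pattern across dozens of files and several layers of a class library, and the value of an automatically extracted inheritance diagram becomes clear.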
Chips are getting bigger, now exceeding 100 million gates and approaching one billion transistors. This creates an astronomical challenge for the engineers trying to comprehend the complex structure and behavior of these designs and the surrounding verification environments used to verify them. Not surprisingly, testbench creation is becoming a vital part of the hardware verification flow and as complex as the chip designs themselves. As a result, engineers are turning to the SystemVerilog language to address the advanced requirements of designing and verifying designs of this scale.
The SVTB component provides a higher-level software-like environment targeted specifically at verification, enabling engineers to increase testbench coverage within the same language and infrastructure. And, while its object-oriented nature provides powerful capabilities, SVTB debug requires software-like tools in order to comprehend the complex class inheritance relationships that users will ultimately develop to take full advantage of the language. This convergence of larger, more complex designs and SystemVerilog-driven verification methodologies not only requires more EDA tool performance and capacity to scale with design size, but advanced levels of automation to deal with the abstract and dynamic nature of testbench verification and debug.
Fortunately, the sophistication of existing HDL debug and analysis platforms provides the bridge for integrating new innovations that address the unique requirements of comprehending complex testbench behaviors. Paramount in this scenario is the notion of message logging for testbench activity during simulation, coupled with flexible mechanisms for recording into specialized databases. This process is fundamental to enabling advanced visualization and analysis techniques, on-demand calculation of design values, and seamless transition to interactive simulation for more detailed GDB-like analysis of testbench code.
By Bindesh Patel and Amanda Hsiao.
Bindesh Patel is Technical Marketing Manager at SpringSoft USA where he is responsible for defining future verification products. Amanda Hsiao is Technical Manager at SpringSoft USA where she is responsible for product management and technical direction of the Verdi Debug Automation System.
Go to the SpringSoft, Inc. website to learn more.