SOCcentral Feature Articles on ESL
ESL Synthesis Solution Improves Productivity for DSP Designs Realized in ASICs and FPGA Devices
The use of digital signal processing (DSP) in electronic products is increasing at a phenomenal rate. FPGAs, with their multi-million equivalent gate counts and DSP-centric features, can offer dramatic performance increases over standard DSP chips, and they are an attractive alternative for small- and medium-volume production. FPGAs also make very powerful prototyping and verification vehicles for real-time emulation of DSP algorithms. However, creating portable algorithmic IP that targets both FPGAs and ASICs brings its own challenges and requirements. This article illustrates how an ESL synthesis methodology can significantly reduce the time and effort needed to target either technology.
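The untimed, algorithmic C++ that ESL synthesis flows typically consume can be sketched as below. This 4-tap FIR filter is purely illustrative (the function name, coefficients, and coding style are assumptions, not any vendor's required input format), but its multiply-accumulate loop is exactly the kind of operation that maps onto FPGA DSP blocks or ASIC datapath logic:

```cpp
#include <array>
#include <cstdint>

// Untimed, algorithmic description of a 4-tap FIR filter -- the style of
// C++ an ESL synthesis tool might accept and map to either FPGA or ASIC
// hardware. Coefficients and names are illustrative only.
constexpr int TAPS = 4;
constexpr std::array<int32_t, TAPS> kCoeff = {1, 3, 3, 1};

int32_t fir_step(std::array<int32_t, TAPS>& delay_line, int32_t sample) {
    // Shift the delay line and insert the new sample.
    for (int i = TAPS - 1; i > 0; --i)
        delay_line[i] = delay_line[i - 1];
    delay_line[0] = sample;

    // Multiply-accumulate across the taps -- the DSP-centric operation
    // that maps naturally onto dedicated multiplier blocks.
    int32_t acc = 0;
    for (int i = 0; i < TAPS; ++i)
        acc += kCoeff[i] * delay_line[i];
    return acc;
}
```

Feeding an impulse (a single 1 followed by zeros) through `fir_step` returns the coefficients in order, which is a quick functional check before synthesis.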
Read the entire article from Synplicity, Inc. on SOCcentral.
Complexity and Software Drive ESL Solutions
As system-on-chip (SoC) designs grow ever more complex, developers are turning to electronic-system-level (ESL) solutions. ESL provides tools and methodologies that let designers describe and analyze chips at a level of abstraction where behavior can be specified functionally, without resorting to the details of the register-transfer-level (RTL) hardware implementation. This article explores the critical success factors of ESL tools with respect to the objectives of design-cycle and risk reduction for highly complex hardware/software SoCs.
Read the entire article from Synopsys, Inc. on SOCcentral.
Rapid SoC Hardware/Software Co-Development Using Transaction Level Modeling
Software processing and storage requirements are now leading drivers of SoC architecture - and of the hardware costs associated with deploying additional processing resources. For instance, where the leading-edge 250nm SoC deployed a single microprocessor and one or two digital signal processors (DSPs), the leading-edge 90nm SoC deploys two or three of each, together with considerably more memory and more complex communication protocols. Consequently, according to International Business Strategies (IBS), the architectural development effort at 90nm is more than 19x that at 250nm, with the design cost running into millions of dollars and exceeding that of the very considerable 90nm physical design effort. Finally, functional verification constitutes the single largest effort and expense in 90nm SoC hardware design - approximately 40% - and still first-time success is all too often elusive.
This growth in effort threatens to adversely affect both the economics and the timely delivery of advanced SoC design. The design methodologies developed for earlier SoC technology are inadequate to the task of designing a multiprocessor SoC. Transaction level modeling (TLM) methodology has been devised to solve these problems. To understand how, we must first examine the major SoC design tasks to be performed before hardware implementation.
Read the entire article from CoWare, Inc. on SOCcentral.
Communication Transactions Come First
Transaction level modeling itself consists of several abstraction layers. At the top is the algorithmic level, where a timeless model is used to design the algorithm and represent the overall operation of the system. The next layer down is a model in which communication aspects start to materialize. In the Programmer's View (PV), communication is implicit, modeled with a blocking interaction scheme in which one functional block waits for another to complete. This allows for an early distribution of compute tasks and a partial ordering of their responses, giving architects an initial view of the system and its cooperative progression. Programmer's View with Timing (PVT) takes the block interconnection one step further with a truly concurrent, non-blocking communication model that includes high-level estimates of the time each component takes to finish its designated task. This refinement of the communication infrastructure continues with Instruction-Accurate (IA) and Cycle-Callable (CC) models, and then Bus-Functional Models (BFMs) of the software that communicate with the hardware RTL models at the Cycle-Accurate (CA) level.
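The contrast between the blocking PV style and the timed PVT style can be sketched in plain C++; a real flow would use SystemC TLM interfaces, and the class and method names below are illustrative assumptions, not the standard's API:

```cpp
#include <cstdint>

// A minimal transaction: an address and a data word.
struct Payload { uint64_t addr; uint32_t data; };

// Programmer's View (PV): blocking transport. The caller is suspended
// until the target completes -- communication is implicit and untimed.
struct PvTarget {
    uint32_t mem[16] = {};
    void b_transport(Payload& p) { mem[p.addr % 16] = p.data; }
};

// Programmer's View with Timing (PVT): non-blocking style. The call
// returns immediately, annotated with a high-level latency estimate so
// the initiator can proceed while accounting for the target's delay.
struct PvtTarget {
    uint32_t mem[16] = {};
    uint64_t nb_transport(Payload& p) {
        mem[p.addr % 16] = p.data;
        return 10;  // estimated completion time, in arbitrary time units
    }
};
```

Both targets perform the same write; only the communication contract differs, which is what lets the model be refined layer by layer without rewriting the functionality.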
This model refinement from the abstract to the concrete implementation is what makes TLM such a viable means for trade-off analysis. It permits fast simulation, since a good deal of the communication requirements and the system performance can be tested and analyzed without needless implementation detail. This speeds up verification tremendously - one to two orders of magnitude over BFM and instruction-set simulator (ISS) verification models. In addition, transaction level modeling forms a foundation through which architects and implementers, as well as software and hardware engineers, can collaborate on system development.
Read the entire article from Novas Software, Inc. on SOCcentral.
Are You Building Your ESL Design Flow on Sand?
To date, behavioral synthesis solutions based on sequential programming languages, e.g., C/C++/SystemC, have been the only high-level options above RTL. While these approaches raise the level of abstraction of design, they have significant limitations, including poor quality of synthesis results outside the narrow application spaces they can efficiently address. Consequently, except for niche applications, C/C++ and SystemC have primarily been used for algorithm modeling, performance assessment, and verification. It is the rare chip development team that does not write RTL to produce real silicon.
You have to ask yourself: when was the last time you saw a benchmark from someone doing behavioral synthesis that did not involve math algorithms, such as imaging and filters? Well, you probably haven't seen one -- the reason lies in how these solutions raise the level of abstraction, and in the challenge of synthesizing these higher-level constructs.
Read the entire article from Bluespec, Inc. on SOCcentral.
The Real Challenge of System-Level Design
If you ask designers of embedded systems or systems-on-chip to describe the tool of their dreams, they would certainly describe one that could take any abstract system-level model (mathematical model, algorithm, state chart, class diagram, schematic, etc.) and convert it directly into implementation-ready hardware and software descriptions.
Although such a dream may not come true any time soon, it points to the main problem these designers face today: the gap from specification to implementation. How do you get from an idealized view of the system - the models listed above, very often non-executable and thus non-verifiable - to descriptions of hardware and software implementations with all their real-world constraints (programming abstraction, runtime environments, timing, performance, cost, power, area, physics, etc.)?
Read the entire article from CoFluent Design on SOCcentral.
Creating Power-Efficient Application Engines for SoC Designs
Increasingly, highly integrated consumer products - cellular phones incorporating a still camera and video playback, or HDTV-quality DVD players - must execute complex algorithms and process voluminous data content. The very high performance requirements of these devices can be met by deploying multiple microprocessors and digital signal processors in system-on-chip (SoC) designs. The problem is that this multiprocessor approach can exceed the power and cost constraints of the application.
However, the SoC's performance, power, and cost targets can be achieved with an application engine: a custom hardware/software system, typically a combination of a processor and dedicated hardware accelerators, optimized to execute a specific algorithm or suite of algorithms.
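The hardware/software split inside such an engine can be sketched as below: the control-heavy sequencing stays in processor software, while the compute-intensive inner kernel is delegated to a dedicated accelerator. Here `accelerator_saxpy` merely stands in for a hardware block, and all names and values are illustrative assumptions; the partitioning, not the math, is the point:

```cpp
#include <cstdint>
#include <vector>

// Hot kernel: the data-parallel loop that a dedicated hardware
// accelerator would implement (here modeled as a software function).
void accelerator_saxpy(int32_t a, const std::vector<int32_t>& x,
                       std::vector<int32_t>& y) {
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] += a * x[i];
}

// Processor-side software: configuration, sequencing, and data
// marshalling around the accelerator call.
std::vector<int32_t> run_engine(const std::vector<int32_t>& input) {
    std::vector<int32_t> out(input.size(), 1);  // illustrative bias term
    accelerator_saxpy(2, input, out);           // offload the hot loop
    return out;
}
```

Keeping the kernel behind a narrow call boundary is what lets the same algorithm be profiled in software first and then moved into dedicated hardware without restructuring the surrounding code.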
Read the entire article from Synfora, Inc. on SOCcentral.