The relentless progression of silicon technology enables ever-more sophisticated consumer electronics at prices that drop quickly with adoption. The pressure for smaller, faster, more functional products at lower prices runs up against the inherent complexities of geometries below 100 nanometers, and DFM becomes one more costly but essential step in the SOC design process. This is all part of the complex equation that determines SOC design costs--and ultimately, profits. The more advanced the technology, the higher the risk and cost for electronics companies, but also the greater the potential for the fastest, coolest, lowest-cost products.
DFM and advanced process technologies are just two of the necessary but expensive elements of SOC design. While both are likely worth the investment, a significant gain can be realized from a new technology that dramatically improves designer productivity and actually lowers design costs: application engine synthesis, or AES. By deploying AES in conjunction with advanced silicon processes and DFM, SOC designers can both reduce design cost and bring the best product to market. At this time, only Synfora provides a software tool for automating application engine design, but other companies can be expected to develop tools for automating all or part of the process.
The complexity and cost risks of SOC design
The cost of designing and producing SOCs can be estimated, but always with a high degree of uncertainty. These chips are highly complex and are developed by multiple teams in multiple locations. Managing the massive design and verification tasks is incredibly difficult, and schedule changes often cause last-minute delays or failure to meet the original targets.
The financial rewards of creating a new SOC may also be estimated, typically in terms of projected shipment volumes and expected profit. But those estimates are also uncertain, as competition, mis-prioritized features, failure to meet area targets, and a dozen other factors could affect the results achieved.
In the end, the consumer drives design decisions on cost, performance, features, and ultimately, the schedules for time-to-market, time-to-volume, and time-to-profit. Given this reality, the most crucial issues for companies creating SOCs are significantly increasing designer productivity and reducing risks and costs while still achieving their business goals.
Cost breakdown for a typical SOC
In this example, we have itemized SOC design costs as fixed and variable. The fixed costs, which can be reduced only slightly, if at all, average $3 million: $1.5 million for SOC fabric items; $500,000 for physical design and DFM; and $1 million for mask and prototype costs.
The variable costs are estimated at $12.5 million: $10 million for design and verification and $2.5 million for tools and machines. The total design cost thus comes to about $15.5 million.
Clearly, variable costs dominate, and the majority of those costs are for manual design and verification. This huge cost has been a major catalyst for the trend of outsourcing design. However, the biggest cost savings in developing SOCs comes not from outsourcing, but from automating manual design.
Cost reduction through automated design
The use of SOCs has become common because these devices offer the best trade-off between design time, risk, and cost on one side and silicon cost, performance, and power on the other. In addition, many SOCs are based on a re-usable platform with unique application engines that perform a required function.
The figure below shows a typical architecture for a platform SOC comprising multiple application engines that perform the functions required for the final product.
In this case, the application engines encode and encrypt a video and audio stream that can be used in a security camera application. The blue components may be re-used without affecting the functionality of the SOC. The green components are the application engines designed for a specific purpose, defining the functionality of the SOC.
The design process comprises three steps and is illustrated below.
The first step in the process is to understand the reference algorithm, which specifies the chip's functionality without considering its implementation. This can often be downloaded from the standards body and is widely available. This abstract model cannot be efficiently implemented in silicon.
In Step 2, the engineer designs an implementation algorithm. The reference algorithms give designers wide latitude in how to implement the standard, and this is where key product differentiation is built in. The implementation algorithm is still abstract in terms of parallelism and clock cycles, but it is refined for silicon implementation. It can serve as the executable specification that hardware engineers use to create the RTL.
The implementation algorithm is the truly differentiated IP that gives a company advantage over its competitors in the standards-oriented market for consumer electronics.
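To make the distinction between Steps 1 and 2 concrete, here is a hypothetical sketch (not drawn from any standard or from Synfora's tool) of the same operation written first in reference style and then refined toward silicon:

```c
#include <stdint.h>

/* Reference style (Step 1): clarity first -- floating point, no concern
 * for how the operation maps to hardware. */
static double scale_ref(double sample, double gain)
{
    return sample * gain;
}

/* Implementation style (Step 2): the gain is refined to Q8 fixed point
 * (gain_q8 = gain * 256), so the operation becomes one integer multiply
 * and one shift -- a form that maps directly to a small hardware
 * multiplier. */
static int32_t scale_impl(int32_t sample, int32_t gain_q8)
{
    return (int32_t)(((int64_t)sample * gain_q8) >> 8);
}
```

For example, a gain of 0.5 becomes gain_q8 = 128, so scale_impl(100, 128) yields the same 50 that scale_ref(100.0, 0.5) does, but with integer arithmetic a synthesis tool can turn into hardware.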
Today, the third step of the process is manual hardware design. This is the most time-consuming part of the design process, yet it provides no competitive advantage. Typically, this can take six months and a team of more than 30 people.
With AES, this step is automated. Instead of re-implementing in RTL, the RTL is synthesized directly from the implementation algorithm. The synthesis tool can explore multiple design alternatives and allow the engineer to select the one that is best suited to the application. Best of all, this step can be completed in just a few hours.
Steps 1 and 2 are manual and remain unchanged when using AES. Step 3, building RTL from the implementation algorithm, can take four to six engineers six to nine months per product--roughly four-and-one-half engineer-years at the high end. If an engineering team is developing two or three SOCs, the resource requirements multiply accordingly. With AES, the process is automated and reduced to mere months, and derivatives are easy to create: simply re-run the synthesis with different constraints.
The methods and advantages of AES
In today's SOC development process, designers handcraft all or parts of application engines. That's because a standard processor cannot meet the desired performance for a product such as a video recorder.
For multimedia, video, audio, and wireless applications, the typical off-the-shelf or custom processor cannot provide the target performance at the desired cost. For example, consider a real-time, 30-frames-per-second MPEG2 encoding/decoding chip used to deliver HDTV-quality images for a TiVo-like product. Achieving that performance is a challenge even on a Pentium-class processor running at 2GHz, and such a processor would be far too expensive for a product selling for less than about $400. Most chips, like this MPEG2 example, have small parts of the program that consume a huge share of execution time; for MPEG2, significant time is spent on motion estimation and the discrete cosine transform (DCT).
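A minimal sketch of the kind of hot loop that dominates encode time is the sum of absolute differences (SAD) at the heart of motion estimation (the function name and 8x8 block size here are illustrative, not taken from any particular codec implementation):

```c
#include <stdint.h>
#include <stdlib.h>

/* SAD over one 8x8 block: fixed loop bounds, regular memory access, and
 * 64 independent absolute-difference/accumulate operations -- exactly
 * the profile that makes a loop a good candidate for a dedicated
 * hardware accelerator rather than a general-purpose CPU. */
static uint32_t sad_8x8(const uint8_t *cur, const uint8_t *ref, int stride)
{
    uint32_t sad = 0;
    for (int y = 0; y < 8; y++)
        for (int x = 0; x < 8; x++)
            sad += (uint32_t)abs(cur[y * stride + x] - ref[y * stride + x]);
    return sad;
}
```

In a full encoder, a loop like this runs for every candidate motion vector of every macroblock of every frame, which is why software alone struggles to sustain real-time rates.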
To achieve extremely high performance at low cost, designers typically develop dedicated hardware accelerators for these portions of the program, while the remainder of the program executes on a CPU (see figure below).
Complex SOCs comprising multiple handcrafted application engines represent a huge expense and long development process. AES is a new and highly effective technology that accelerates the design process and cuts design costs.
AES is ideally suited for algorithm-based designs for audio, video, imaging, security, wireless, and other chips that use application engines for specific functionality. Examples include MPEG2, H.264, MP3, and imaging or graphics pipelines. AES is not suited for designing general-purpose CPUs such as a Pentium or PowerPC; system CPUs like ARM; or off-the-shelf IP such as memories or USB controllers.
With AES, a designer starts directly from a C-based algorithm for the application (for example, a video encoder), and the tool automatically designs a complete application engine containing a CPU (if needed) and one or more hardware accelerators. AES designs the hardware to meet the required performance, cost, power, and cycle-time characteristics, and it automatically partitions the application into hardware and software for optimum performance.
While it is easy to generate RTL from C, it is difficult to generate RTL that is both competitive with manual design and able to go through timing closure and place-and-route in a single pass. With AES, this can be achieved by using a pre-verified architecture template that ensures that the RTL produced complies with best design practices that will result in first-pass timing and physical closure.
AES reduces verification time through automated block verification
It is estimated that verification requires up to seventy percent of design time. With AES, there is automatic generation of the RTL testbench and test vectors from the C testbench. As there are a few properties of RTL that can't be tested at the C implementation algorithm level (for example, behavior when the RTL stalls due to unavailability of data), AES provides for adding more test cases to the testbench for what is called perturbation testing.
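The starting point for this flow is an ordinary vector-driven C testbench; the sketch below (the kernel and test vectors are invented for illustration) shows the pattern, where each input/expected pair becomes a stimulus/check pair when translated to the RTL level:

```c
/* Illustrative kernel: clamp a sample to the 0..255 pixel range,
 * a common step in video pipelines. */
static int clip_pixel(int v)
{
    return v < 0 ? 0 : (v > 255 ? 255 : v);
}

/* Vector-driven C testbench of the kind that can be translated into an
 * RTL testbench: apply each input, compare against the expected output.
 * Returns the number of vectors that passed. */
static int run_testbench(void)
{
    const int in[]       = { -5, 0, 128, 255, 300 };
    const int expected[] = {  0, 0, 128, 255, 255 };
    int passed = 0;
    for (int i = 0; i < 5; i++)
        if (clip_pixel(in[i]) == expected[i])
            passed++;
    return passed;
}
```

Perturbation cases (for example, stalling the input stream) would then be layered on top of these functional vectors at the RTL level, since they have no counterpart in the C model.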
AES reduces design time for a single application engine significantly. In addition, automated AES enables meaningful design space exploration, and AES provides a consistent architecture schema that enables re-use for higher quality.
That's because the AES design methodology is based on architecture templates that are carefully designed for high quality, single-pass timing closure and place-and-route.
AES enables the ESL design methodology
For SOCs encompassing a CPU, application engines, and more, AES is a significant enabler of the electronic system-level (ESL) design methodology, and it can co-exist with system-level simulation and verification. In addition to RTL, AES can provide bit-accurate and cycle-accurate models for fast system-level simulation, and it can generate SystemC interfaces so that these models can be plugged into a full-system SystemC model for simulation and verification. This provides the best of both worlds: designers who are more familiar with C can use C for design, and verification engineers can use SystemC for system-level verification.
AES mitigates risk and increases productivity
AES reduces schedule delays because it can generate verified RTL in just a few days. AES reduces design risks because a designer can quickly learn of all the area, performance, and power options available, and then use AES to quickly design blocks for the specified mix of performance, area, and power. Automated block design also enables smaller design teams to handle larger designs, eliminating the risks and extra time involved with partitioning design work to engineers in multiple locations.
With AES, designers can focus on design, not implementation
Writing lots of RTL code can be drudgery. AES automates this work, enabling designers to focus on the crucial, high-value elements of SOC development: the right implementation algorithm, system architecture, memory structures, data storage, etc. AES also simplifies physical design with architecture templates designed for fast timing and physical closure. The designer selects the micro-architecture that fits the target performance, which eliminates the need for back-end tuning. The AES architecture templates also support DFM because they can be restricted to eliminate low-yield structures.
Today, semiconductor and consumer electronics companies face many challenges in designing SOCs that are faster, smaller, and of lower cost. Application engine synthesis is a new technology that can help significantly reduce design time and costs while increasing designer productivity, all to accelerate time to market.
By Vinod Kathail, Ph.D.
Vinod Kathail, Ph.D., is co-founder, CTO, and vice-president of engineering at Synfora, Inc. He came to Synfora from Hewlett-Packard Laboratories, where he was most recently the R&D Program Manager and a principal scientist in the Compiler and Architecture Research (CAR) Program, responsible for its PICO project. Kathail received his Doctor of Science degree in Electrical Engineering and Computer Science from MIT, his M.Tech. degree from the Indian Institute of Technology (IIT), and his B.Tech. degree from Maulana Azad College of Technology.
Go to the Synfora, Inc. website to learn more.