November 6, 2008 -- As process geometries shrink to 45nm and below, every phase of the design cycle is affected. This is most evident at the physical implementation stage, where floorplanning, placement, routing and optimization must be fine-tuned to accommodate incredibly small feature sizes and the process variations inherent in the fabrication process. Today’s EDA tools must be able to physically implement a complex design and still meet tight timing, power, signal integrity and manufacturing requirements. These elements are often at odds with each other, and unless handled efficiently by design engineers and a suite of EDA tools, can ultimately affect quality of results (QoR) and time-to-market.
Every advancement in deep-submicron process technology brings a host of new challenges, each built on the requirements of previous generations, and the move to 45nm is no exception. Multi-corner, multi-mode (MCMM) designs, once an occasional concern, are becoming commonplace at 45nm and below, particularly in portable consumer electronics such as cell phones and PDAs. Each company that markets these products boasts a growing list of independent features, all contained in one device, and each feature requires the device to operate in a different mode for voice, graphics, Internet connections and streaming video, all in a small portable system. MCMM designs present tough manufacturing challenges at 45nm and below, with process variations, or corners, that complicate fabrication and cause costly iterations. Each tool in the physical implementation suite (floorplanning, placement, routing and optimization) must be attuned to the complexities of MCMM design, or a convergence of performance and manufacturing problems will cause inordinate delays.
"Careful floorplanning is essential for today’s complex, hierarchical designs," says Yukti Rao, Senior Product Manager at Magma Design Automation. "With simpler designs, you can floorplan by focusing solely on place and route. For larger designs, it’s a more complex problem, because you need to figure out the boundaries of the blocks in the floorplan, and that takes hierarchical floorplanning. For a large design, even if you have determined what the blocks are, the problem remains how to place them to satisfy timing, power, area, routability and corners in the manufacturing process. In our Hydra floorplanner, we provide auto-interactive floorplanning. The floorplanner arrives at a solution automatically, but the user guides the tool to handle areas that are unique to the design. Users can instantiate relative floorplan constraints, which let them direct the tool with constraints that define where certain pieces of the design must go. The rest of the floorplanning is then done automatically. This is particularly important in design reuse, where you have already proven that certain portions of the floorplan work and you simply want to build the next generation on that foundation."
According to Magma, Hydra allows the designer to maintain familiarity with the floorplan as the design evolves over time. This repeatability, and the focus on localized changes to the floorplan, enables the designer to monitor physical changes such as die area or routing congestion caused by changes in the architecture, additional functionality in the RTL, or changes in timing or power constraints.
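Relative floorplan constraints of the sort Rao describes can be illustrated with a small sketch. This is a hypothetical toy, not Hydra's API: the `Block` class and the ordering-constraint list are invented names, and a real floorplanner optimizes area, timing and routability simultaneously rather than simply packing blocks left to right.

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    width: int
    height: int
    x: int = 0
    y: int = 0

def place_with_constraints(blocks, constraints):
    """Place blocks in a row, honoring (a, b) constraints meaning
    'a must be placed before (left of) b'.

    Toy illustration only: the automatic part fills in positions, while
    the user-supplied constraints restrict the solution space."""
    order = sorted(blocks, key=lambda b: b.name)
    names = [b.name for b in order]
    # Repeatedly swap blocks until every constraint is satisfied.
    for _ in range(len(order)):
        for a, b in constraints:
            ia, ib = names.index(a), names.index(b)
            if ia > ib:
                order[ia], order[ib] = order[ib], order[ia]
                names[ia], names[ib] = names[ib], names[ia]
    x = 0
    for blk in order:
        blk.x, blk.y = x, 0
        x += blk.width
    return order

blocks = [Block("cpu", 40, 40), Block("mem", 30, 40), Block("io", 10, 40)]
placed = place_with_constraints(blocks, [("io", "cpu"), ("cpu", "mem")])
print([b.name for b in placed])  # ['io', 'cpu', 'mem']
```

In design reuse, the constraint list is the part of a proven floorplan that carries over to the next generation, while the automatic placement absorbs whatever changed.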
A common denominator
Most of the major EDA vendors provide a universal background technology that all of the tools in the implementation flow share. This enables concurrent design and optimization at different stages in the flow. After all, if each tool worked independently, each would optimize only for its own objective, such as timing, power, area or manufacturability. It’s an understatement to say that this would cause massive convergence problems at the end of the implementation flow.
"We have parallelized the timing and optimizing engine inside our Olympus-SoC place-and-route system," says Sudhakar Jilla, Director of Marketing, Place & Route Group at Mentor Graphics. "This allows us to handle today’s MCMM designs with a place-and-route engine that takes timing into account, as well as the multi-voltage flows that are part of low-power design. We address the challenges of MCMM designs with a combination of key technologies collectively referred to as task-oriented parallelism. This is a fine-grained technique that allows parallelization of the most compute-intensive analysis and optimization tasks within the place-and-route timing kernel."
According to Mentor Graphics, a compact data structure with an unlimited number of virtual timing graphs makes its Olympus-SoC system efficient for complex MCMM analysis. To fully utilize advanced multi-core processors, the system employs dataflow analysis that allows parasitic extraction, delay, MCMM signal integrity, timing, and power analysis tasks to be done in parallel on many CPUs without the locking and synchronization overhead inherent in traditional architectures.
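The property that makes this kind of parallelism possible is that each mode/corner scenario can be analyzed independently, so no locking is needed and results only have to be collected at the end. The sketch below illustrates the idea with a Python thread pool; the scenario names, the toy slack model and its numbers are invented for illustration, and a production system of the kind Mentor describes distributes far heavier extraction, delay and timing tasks across many CPUs.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def analyze_scenario(scenario):
    """Hypothetical per-scenario analysis: in a real flow this would run
    extraction, delay calculation and timing analysis for one mode/corner.
    Here a toy model returns an invented worst slack in nanoseconds."""
    mode, corner = scenario
    base = {"func": 0.30, "test": 0.50}[mode]     # invented numbers
    derate = {"ss": 0.20, "tt": 0.05, "ff": 0.0}[corner]
    return (mode, corner, round(base - derate, 2))

def run_mcmm(modes, corners):
    scenarios = list(product(modes, corners))
    # Scenarios are independent, so they run in parallel with no shared
    # state and no locks; only the final reduction touches all results.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(analyze_scenario, scenarios))
    # Report the scenario with the worst (smallest) slack.
    return min(results, key=lambda r: r[2])

worst = run_mcmm(["func", "test"], ["ss", "tt", "ff"])
print(worst)  # ('func', 'ss', 0.1)
```

The reduction at the end is the only synchronization point, which is why scenario count can grow from 24 to 48 without serializing the analysis.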
Figure 1. This chart shows that one analysis run for a typical 15 million gate, 45-nm design increases from 100 to 200 hours as it starts with 24 mode/corner scenarios, then adds two voltage modes, and finally increases to 48 scenarios. With some chips reaching 100 to 150 million gates and even more scenarios, physical analysis is becoming a critical bottleneck in achieving design closure and getting new designs to tapeout.
The underlying database
While Mentor Graphics uses a parallelizing technology to bind its timing and place and route technologies, Magma Design Automation relies on a single database approach, which they claim is unique in the EDA industry. According to Magma, the database enables continuous power, timing and area tradeoffs throughout the RTL-to-GDSII flow with a unified data model architecture and embedded analysis engines.
"In order to handle the requirements of MCMM design, it’s necessary to have a common thread that binds all the elements of physical implementation," says Jonathan Smith, Product Director at Magma Design Automation. "The unified data model is a single binary that contains all the design data. I’d contrast that with an open-access database, where you have to access the data from many points in the flow. The unified data model is constantly updated as you implement and optimize the design. You get more accurate timing information, for instance, as you progress through the design. Our biggest customers cite our short turnaround times as the reason they like the unified data model approach. They say their time to tapeout is short compared with the multiple-database approach."
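The contrast Smith draws can be pictured with a toy example: in a unified data model, every implementation step annotates the same in-memory design object, so later steps always see the latest data without exporting and re-importing between tools. The class and field names below are invented for illustration and are not Magma's data model.

```python
class DesignModel:
    """Hypothetical unified design model: one object shared by all steps."""
    def __init__(self, name):
        self.name = name
        self.cells = {}         # cell -> (x, y) placement
        self.est_delay_ps = {}  # net -> current delay estimate

def floorplan(model):
    # Early steps write coarse data into the shared model.
    model.cells["u1"] = (0, 0)
    model.est_delay_ps["n1"] = 120  # pre-route estimate

def route(model):
    # Later steps refine the same entries in place; no database
    # export/import between tools, so timing data is always current.
    model.est_delay_ps["n1"] = 95

model = DesignModel("soc_top")
floorplan(model)
route(model)
print(model.est_delay_ps["n1"])  # 95
```

The multiple-database alternative would serialize the model after each step and re-read it in the next tool, which is where the turnaround-time difference Smith cites comes from.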
Clearly, each of the major EDA vendors claims a unique technological advantage that speeds turnaround time and enhances QoR. Synopsys says that at 45nm and below, QoR and design closure are best achieved by having a routing system that takes timing, power, area and manufacturability into account simultaneously, with the pertinent data for each requirement updated throughout the design flow.
"All of these design concerns are important and they can be at odds," says Saleem Haider, Senior Director of Marketing for Physical Design and DFM at Synopsys. "Trade-offs are sometimes necessary, which is where optimization comes into play. As the design is refined, that information is fed to our router, Zroute. By simultaneously considering the impact of manufacturing rules, as well as timing and other design goals, Zroute delivers high QoR and improved manufacturability."
Figure 2. According to Synopsys, the Zroute technology in its IC Compiler focuses on lithography hot-spot avoidance for improved DFM. Zroute's architecture enables "lithography-friendly" routing to avoid manufacturing problems.
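The simultaneous trade-off Haider describes can be caricatured as a single combined cost function: instead of closing timing first and patching manufacturability later, each candidate route is scored on all objectives at once. The weights, field names and penalty values below are invented for illustration, not Zroute internals.

```python
def route_cost(route, w_timing=1.0, w_cong=0.5, w_dfm=0.8):
    """Score a candidate route on timing, congestion and DFM together.
    Invented weights: each lithography hot-spot costs 25 penalty units."""
    return (w_timing * route["delay_ps"]
            + w_cong * route["congestion"]
            + w_dfm * route["litho_hotspots"] * 25)

candidates = [
    {"name": "short",  "delay_ps": 80, "congestion": 40, "litho_hotspots": 2},
    {"name": "detour", "delay_ps": 95, "congestion": 10, "litho_hotspots": 0},
]
best = min(candidates, key=route_cost)
print(best["name"])  # detour: slightly slower but lithography-friendly
```

Because all objectives share one score, the router can prefer a slightly longer, cleaner route over a fast one riddled with hot-spots, which is the essence of lithography-friendly routing.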
A convergence of technologies
No matter which EDA company or approach an engineer chooses, it’s clear that multiple tools working in sync are the only way to bring convergence to a complex 45nm design. With each new generation of process technology, another design concern (timing, power, routability or manufacturability) is added to the mix. Engineers are therefore faced with the challenge of ensuring QoR when choosing and using multiple implementation tools, while keeping in mind the task at hand, namely getting a feature-laden, user-friendly system to market. Shrinking process technologies are responsible for amazing advances in electronics products across a broad range of applications. They also might be responsible for receding hairlines, graying temples and yet another night of cold pizza eaten by the blue glow of the late-night workstation.
By Mike Donlin, Senior Editor, SOCcentral.com