A Third Way in FPGA Development  
Contributor: Mentor Graphics Corp.

June 1, 2011 -- At first, the claim that FPGA tools and methodologies can impose standardization on everything from design creation to sign-off seems sound. Given the competition among vendors and the relative maturity of the electronic design automation (EDA) industry, it's reasonable to guess that FPGA development processes will evolve toward standard and predictable flows. But talk to anyone getting their hands dirty in FPGA design and verification work and you'll soon hear that, at least historically, standardization is easier said than done.

The reason mostly comes down to the rapid accumulation of design and project complexity, the reality that plagues FPGA designs. As just one data point: more than half of all FPGA designs now contain at least one embedded processor, according to a 2010 survey commissioned by Mentor Graphics (go.mentor.com/srx4). In such an environment, an update to one point tool will invariably break the flow with another, or a switch to a different FPGA will force a reworking of the entire process.

Rising complexity, of course, isn't going away. What can help, though, is a broader perspective on design, one that encompasses everything from the earliest design creation steps to the final FPGA pin assignment planning for the PCB. Few would argue about the core tenets of any big-picture solution: selecting the right tools and then building a predictable flow that facilitates design creation, IP reuse, advanced verification, and direct links between synthesis and PCB. But how specifically do you go about meeting these goals?

Developing your flow

The first impulse for most FPGA design teams is to choose either the proprietary bundled software from the FPGA vendor or an aggregation of tools from multiple vendors selected by various end-users. The problem is that the bundled software cannot be used when re-targeting the design to a different FPGA vendor. Such a switch is occasionally necessary when the original device doesn't work out for one reason or another, or when the next-generation product demands a faster, cheaper, or more power-efficient device from another provider. On the other hand, a multi-vendor collection of point solutions inevitably will have compatibility problems. So, what is the best approach?

If the goal is to facilitate what's required among product architects, FPGA designers, verification engineers, and board designers throughout the development life cycle, then tools up and down the flow need to communicate in an integrated fashion. Specific examples include:
  • Electronic system level (ESL), RTL design entry, and IP reuse solutions should have built-in compatibility with downstream synthesis to ensure a streamlined flow and an optimal netlist.
  • Verification requires advanced methodologies and needs to be integrated not just at the RTL level, but also at the synthesized gate level. This ensures that "what was designed is what was implemented."
  • The FPGA implementation process, in turn, must have a feedback loop to and from the PCB realm, providing a means to quickly attain optimal system performance.

Some FPGA houses may attempt to meld these steps into a truly standardized, device-independent flow by relying on (presumably) well-written scripts and the efforts of a dedicated CAD team. But this approach is time- and resource-consuming and often inefficient.

Another option is to select a suite of tools that offer built-in product integration across the entire design process, from concept to PCB. Figure 1 illustrates this type of flow, which is best achieved by selecting a comprehensive end-to-end solution. Such an approach comes with a host of advantages: tool compatibility, technical support for the entire flow, and a partner that can suggest advanced methodologies.

Figure 1. A fully integrated, vendor-independent FPGA flow.


RTL development and IP reuse

Companies are constantly seeking easier ways to create logic for today's FPGA designs, which can now approach the complexity of ASICs. Reuse of third-party and internal IP is a given for most companies, although it often proceeds haphazardly. Developers can benefit from a systematic approach to recycling large IP modules, small design blocks, and commonly used routines or functions. While a truly standardized, corporate-wide methodology should be management-driven within the enterprise, EDA partners also can help.

Technology solutions are available to measure the completeness and quality of code, to generate graphical visualizations that accelerate design understanding, and to provide a means of creating an IP repository for company-wide reuse. Equally important, designers can use the same cockpit to integrate IP within their larger designs and connect to simulation and synthesis environments. The result is a streamlined and standardized method for RTL design, reuse, synthesis, and verification.

ESL for FPGA

Still, methodical RTL reuse may not deliver all of the productivity gains needed for some designs and project schedules. Some designers are moving toward higher levels of abstraction, at least for some portions of their designs, to achieve more efficient design creation. While claims of productivity gains from ESL design have sometimes been exaggerated in the past, current ESL solutions are credible and reliable, with industry-proven results.

High-level synthesis, for example, has built a track record in both the ASIC and FPGA realms. Designers can now take an algorithmic description written in an industry-standard language such as C++ and generate RTL code constrained by performance, area, latency, and/or throughput requirements. However, system-level designers also must consider how well integrated their ESL solutions are with the rest of the implementation flow. RTL code can be generated with specific constraints, but how consistently can downstream RTL synthesis implement an optimal netlist? And how easily can one analyze the design and results, from the original algorithm to the timing reports? This is where system-wide thinking really begins to pay off. A vendor that offers a comprehensive algorithmic-to-RTL synthesis solution has made a commitment to provide synergistic, thoroughly integrated tools that can meet flow requirements from start to finish.
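To make the idea concrete, the sketch below shows the kind of untimed C++ an HLS flow could start from. The FIR filter, its tap count, and the function name are purely illustrative (they are not drawn from this article or from any particular tool), and in practice the latency, area, and throughput targets are supplied to the HLS tool as separate constraints or directives rather than written into the source.

// Illustrative untimed C++ description of a 16-tap FIR filter, the kind of
// algorithmic source an HLS tool could accept. Performance, area, latency,
// and throughput targets are not expressed here; they would be supplied to
// the tool as constraints (for example, a request to pipeline or fully
// unroll the loops).
#include <cstdint>

constexpr int TAPS = 16;

// One sample in, one filtered sample out. After synthesis, the coefficient
// and delay-line arrays would typically map to registers or block RAM.
int32_t fir_filter(int16_t sample, const int16_t coeff[TAPS])
{
    static int16_t delay_line[TAPS] = {0};

    // Shift the delay line; an HLS tool may unroll this loop entirely.
    for (int i = TAPS - 1; i > 0; --i) {
        delay_line[i] = delay_line[i - 1];
    }
    delay_line[0] = sample;

    // Multiply-accumulate; the throughput constraint given to the tool
    // determines how many hardware multipliers are instantiated.
    int32_t acc = 0;
    for (int i = 0; i < TAPS; ++i) {
        acc += static_cast<int32_t>(delay_line[i]) * coeff[i];
    }
    return acc;
}

Because the algorithm itself is untimed, the same source can be re-constrained for a faster or cheaper device without being rewritten, which is exactly where tight integration with downstream synthesis pays off.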

Integrated verification

Verification techniques for FPGAs have traditionally lagged those for ASICs, because FPGAs have historically been less complex and because designs can be put into hardware almost immediately and debugged at virtually no cost. Now, though, the complexity of many FPGAs approaches that of ASICs, and lab testing is not terribly efficient at capturing obscure, hard-to-find bugs. For this reason, FPGA designers are turning to advanced methodologies that include assertions, code coverage, and automated testbench generation. These solutions address the corner cases that are so difficult to reproduce by any other means.
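As a rough illustration of why randomized, self-checking verification finds what directed lab tests miss, the sketch below (written in C++ purely for illustration; the saturating adder, the function names, and the seed are hypothetical, not from this article) drives a stand-in for the design under test with pseudo-random stimulus and checks every result against a golden reference model. This captures the spirit of the assertion-based, self-checking testbench approach, even though a real flow would run against an RTL simulation and collect coverage in the simulator.

// Illustrative self-checking, randomized test. Hypothetical names; a
// saturating 16-bit adder is used only to show the idea. In a real flow the
// DUT call would drive an RTL simulation, and assertions and coverage would
// be handled by the simulator.
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <limits>
#include <random>

// Golden reference model: 16-bit saturating addition.
int16_t sat_add_ref(int16_t a, int16_t b)
{
    const int32_t sum = static_cast<int32_t>(a) + static_cast<int32_t>(b);
    if (sum > std::numeric_limits<int16_t>::max()) return std::numeric_limits<int16_t>::max();
    if (sum < std::numeric_limits<int16_t>::min()) return std::numeric_limits<int16_t>::min();
    return static_cast<int16_t>(sum);
}

// Stand-in for the design under test; in practice this would be the RTL.
int16_t sat_add_dut(int16_t a, int16_t b)
{
    return sat_add_ref(a, b);  // placeholder so the sketch is runnable
}

int main()
{
    std::mt19937 rng(1);  // fixed seed keeps any failure reproducible
    std::uniform_int_distribution<int32_t> dist(std::numeric_limits<int16_t>::min(),
                                                std::numeric_limits<int16_t>::max());

    for (int i = 0; i < 100000; ++i) {
        const int16_t a = static_cast<int16_t>(dist(rng));
        const int16_t b = static_cast<int16_t>(dist(rng));
        // Assertion-style check: the DUT must always match the reference,
        // including at the saturation boundaries that directed tests miss.
        assert(sat_add_dut(a, b) == sat_add_ref(a, b));
    }
    std::printf("100000 randomized checks passed\n");
    return 0;
}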

After RTL is verified, however, the development team needs to confirm that the original design intent was implemented as expected in the final hardware. Formal equivalence checking is a popular method in which the RTL design is mathematically proven to be functionally identical to the gate-level design. This eliminates the need to run excessively long gate-level simulations to accomplish essentially the same result. But communication and compatibility between RTL synthesis and the formal equivalence checker are non-trivial.

For example, synthesis may re-encode state machines or merge registers to improve the gate-level netlist, while of course preserving the original functionality. If the formal checker is not aware of these optimizations (that is, unable to recognize them), it may flag false mismatches, forcing the user to interpret the results manually. Integration between point tools helps automate the process and ensures accurate reporting of mismatches.
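To convey the underlying idea only (real equivalence checkers prove equivalence mathematically on complete RTL and gate-level netlists rather than by brute force, and the popcount example here is hypothetical), the sketch below exhaustively compares an "original" description of a tiny combinational function against a restructured "optimized" version, flagging any input where the two diverge.

// Conceptual sketch of equivalence checking on a tiny combinational block:
// compare an "original" description against a restructured "optimized" one
// over every possible input. Formal tools prove this mathematically on full
// netlists; exhaustive simulation is used here only to convey the
// "what was designed is what was implemented" idea.
#include <cstdint>
#include <cstdio>

// Original description: 8-bit population count written as a simple loop.
uint8_t popcount_original(uint8_t x)
{
    uint8_t count = 0;
    for (int i = 0; i < 8; ++i) {
        count += (x >> i) & 1u;
    }
    return count;
}

// Restructured version, standing in for a synthesis-optimized netlist.
uint8_t popcount_optimized(uint8_t x)
{
    x = (x & 0x55u) + ((x >> 1) & 0x55u);  // sum adjacent bit pairs
    x = (x & 0x33u) + ((x >> 2) & 0x33u);  // sum pairs of pairs
    x = (x & 0x0Fu) + ((x >> 4) & 0x0Fu);  // sum the two nibbles
    return x;
}

int main()
{
    // Exhaust the 8-bit input space; any mismatch disproves equivalence.
    for (int i = 0; i < 256; ++i) {
        const uint8_t in = static_cast<uint8_t>(i);
        if (popcount_original(in) != popcount_optimized(in)) {
            std::printf("Mismatch at input 0x%02X\n", static_cast<unsigned>(in));
            return 1;
        }
    }
    std::printf("All 256 inputs agree: the two versions are functionally equivalent.\n");
    return 0;
}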

From synthesis to PCB

From the standpoint of the end-to-end flow, synthesis creates the bridge to real hardware. By integrating synthesis with system-level design, IP reuse, and verification, a development team begins to unify and standardize its concept-to-board process. But even with all the integration hooks, synthesis still must be "leading-edge" in its approach to analysis, optimization, and multi-vendor support.

With design size and complexity continuing to increase, designers need to be able to analyze design issues, constraint issues, and quality-of-results (QoR) bottlenecks. Optimization technologies such as physical synthesis must be available to meet aggressive performance goals, and not just for devices supplied by a few select FPGA vendors. To retain a truly target-independent flow, QoR technologies have to be available for all major FPGA devices.

All of the concepts mentioned up to now pertain expressly to FPGAs, but a concept-to-board flow is not complete without a discussion of PCB integration. Addressing FPGA issues in isolation from board design can lead to problems at the back end of the design cycle. Because the FPGA has to interact with many other components on the PCB, its board placement and other logistics can make or break a product roll-out.

Consider I/O assignments, for example. FPGA and PCB designers often don't agree on optimal pin assignments. From the FPGA designer's perspective, convoluted I/O assignments can hamstring device QoR goals or FPGA internal routing. Conversely, the PCB designer must accommodate poorly-planned I/O assignments by using longer traces, additional board layers, and more vias. These compromises can add up to longer routing times, signal-integrity issues and, potentially, a severe degradation in system performance.

A proper I/O planning solution can enable a bi-directional FPGA/PCB co-design process to find a well-balanced pin assignment scheme. The enabling technology provides a feedback loop between FPGA synthesis and the PCB realm so that requirements are met with the fewest possible iterations.

When the balance between FPGA and PCB pin constraints is hard to find, pin-aware FPGA physical synthesis may be the answer. This type of synthesis takes the physical characteristics of the device and the pin assignments into account and has a better chance of achieving design closure for a heavily pin-constrained design. At an early stage, logic blocks are optimized not only in terms of their estimated routing resources and estimated placement on the device, but also in terms of the signals associated with device pins. Physical synthesis performs a series of physical optimizations such as retiming, register replication, and resynthesis to improve the timing of the netlist, all while taking clock and I/O constraints into account, as shown in Figure 2 below. The "pin-aware" netlist lightens the load for place-and-route, allowing for shorter place-and-route run-times and faster design closure.

Figure 2. Pin-aware physical synthesis improves overall FPGA/PCB system performance by taking device characteristics and pin constraints into account.


In summary

The bottom line is that there is no escaping design complexity. Product development teams will forever be deciding to switch targets, even at the midway point of a development cycle, in search of faster or cheaper silicon. And enterprises that try to manage multiple flows from multiple vendors will forever be overwhelmed by the diverse methodologies.

There is, however, an alternative to the binary view of the design flow. That is, designers aren't limited to proprietary tools from FPGA vendors (on the one hand) or a hodgepodge of commercial products from various tool vendors (on the other). There are solutions, aimed at CAD managers and project leads and available today, that offer a unified tool environment to span and streamline the flow from concept to PCB. Just remembering that such solutions exist is the starting point to restoring sanity to your design flow.

By Ehab Mohsen.

Ehab Mohsen is a technical marketing engineer for FPGA synthesis at Mentor Graphics, based in Fremont, Calif. Contact him at ehab_mohsen@mentor.com.

Go to the Mentor Graphics Corp. website to learn more.
