September 1, 2009 -- With the ever-growing complexity of IP and accompanying design tools, it’s imperative that SOC designers carefully weigh their options when selecting and integrating IP. For example, as the number of IP cores in embedded SOC designs grows, the bus structures that connect them become more complex and time-consuming to design. At the same time, multiple processors compete for shared memory resources, creating memory access bottlenecks. These are just some of the challenges facing designers today when selecting and integrating IP.
It should be no surprise that the biggest contributors to SOC design cost continue to be architectural development (defining the solution) and verification (knowing it is right). For many designs, software extends the application markets an individual SOC can serve. Many designers address this need with a platform strategy of compatible SOCs that extends the life of the design by allowing subsystems and software to be reused in future SOC designs. The software design effort exceeds that of an individual hardware platform, so reuse of proven hardware/software subsystems across several SOCs improves the cost-effectiveness of the platform. It also speeds time-to-market for subsequent SOC designs, while encouraging a growing ecosystem of application solutions for the platform. Many SOC designs have progressed to the point where more than 100 IP blocks are included in the SOC. In addition to reuse of simple IP cores, subsystem reuse is also a key requirement. Updated market requirements may force late changes, with IP added or deleted while schedules stay tight. As such, designers need to be able to integrate IP from any source.
As designers implement coming generations of SOCs, reusing hardware and software across many SOCs in one family of platforms, the following requirements must be addressed:
- Universal connectivity to IP cores employing a multitude of core protocols and topologies, allowing integration of IP from internal and external sources, even late in the design. IP cores may be single cores or subsystems of IP cores.
- On-chip network synthesis to further automate design by treating the on-chip network as one IP block, while supporting hierarchical subsystems (increasing IP reuse). The number of independent power and frequency domains is increasing to meet battery life and/or ‘green’ power regulations, and performance, power and area results must remain competitive.
- Electronic System Level (ESL) design abstraction allowing architecture definition and performance analysis.
- Flexibility in the architecture encompassing Quality of Service (QoS) to match bandwidth and latency requirements of the IP cores with the available bandwidth from the memory subsystem.
- Advanced services such as error management, firewalls and debug ports.
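The QoS requirement above amounts to a budget check: the aggregate bandwidth the IP cores demand must fit within the effective bandwidth the memory subsystem can deliver. A minimal sketch of that check follows; the core names, rates and the 70% efficiency figure are hypothetical values chosen for illustration, not drawn from any particular SOC.

```python
# Hypothetical bandwidth budget check: do the IP cores' aggregate
# demands fit within the memory subsystem's effective bandwidth?
# All core names, rates, and the efficiency figure are illustrative.

def fits_memory_budget(core_demands_mbps, ddr_peak_mbps, efficiency):
    """Return (fits, headroom_mbps) for a set of per-core demands."""
    effective = ddr_peak_mbps * efficiency   # usable fraction of peak
    total = sum(core_demands_mbps.values())  # aggregate core demand
    return total <= effective, effective - total

demands = {"cpu": 800, "gpu": 1200, "video_decode": 600, "display": 400}
fits, headroom = fits_memory_budget(demands, ddr_peak_mbps=6400, efficiency=0.70)
print(fits, headroom)  # True 1480.0
```

In practice an architect would run this kind of check per traffic scenario, not just for the steady-state sum, since latency-sensitive cores care about worst-case contention rather than averages.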
Designers have access to a broad variety of interconnect solutions today from EDA and IP companies, as well as internally developed solutions. Most of these approaches leverage a GUI that provides abstraction in defining the connectivity among the IP cores. Specifying the interface protocol, bandwidth and latency requirements for each core defines the required bridging and buffering, allowing selection of the width and type of network fabric. By treating the on-chip network as one IP core that includes bridges, power/frequency domain crossings and FIFOs that buffer against latency, the synthesis is streamlined. The domain-crossing bridges are a natural partition between subsystems, allowing parallel development by different design teams and efficient integration of proven subsystems.
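The connectivity-driven flow described above can be pictured as a declarative specification from which the tool infers the required protocol bridges. The sketch below shows the idea in miniature; the spec format, core names and protocol labels are illustrative assumptions, not any vendor's actual input format.

```python
# Hypothetical connectivity specification: each core declares its socket
# protocol, and a bridge is inferred wherever a connection crosses
# protocols. Format, names, and protocols are illustrative only.

cores = {
    "cpu":      {"protocol": "AHB", "role": "initiator"},
    "dma":      {"protocol": "OCP", "role": "initiator"},
    "dram_ctl": {"protocol": "AXI", "role": "target"},
    "uart":     {"protocol": "APB", "role": "target"},
}

connections = [("cpu", "dram_ctl"), ("cpu", "uart"), ("dma", "dram_ctl")]

def required_bridges(cores, connections):
    """List the protocol bridges the network must synthesize."""
    bridges = set()
    for src, dst in connections:
        a, b = cores[src]["protocol"], cores[dst]["protocol"]
        if a != b:
            bridges.add((a, b))
    return sorted(bridges)

print(required_bridges(cores, connections))
# [('AHB', 'APB'), ('AHB', 'AXI'), ('OCP', 'AXI')]
```

A real flow would attach bandwidth and latency attributes to each connection as well, which is what drives the buffering and fabric-width choices the article mentions.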
By using one common GUI that supports either RTL or SystemC, designers gain common ground between the SOC architects defining the SOC and the RTL designers implementing it. This approach enables quick architecture definition and performance analysis, while also supporting the late addition or deletion of IP cores. With today’s SOC designs using 100 or more IP cores, the abstraction offered by SystemC allows a quick architectural model that can be used for performance analysis, highlighting the critical areas where design effort best improves performance, power and area.
Many early interconnect solutions support today’s commonly available AHB and APB IP cores, but don’t offer the efficiency of non-blocking operation. A common workaround, using multiple layers as sub-networks, can increase the number of wires and gates in the overall SOC. Similarly, multi-ported memory controllers offer better latency for individual IP cores, but often require buffering at the IP core to meet bandwidth requirements while significantly increasing wiring congestion as the number of memory system ports grows. With higher-frequency IP cores and faster memory subsystems, SOC implementations are moving from today’s interconnect frequencies of 200MHz or less to 266MHz or faster, with non-blocking implementations that use fewer wires and gates, minimizing area while simplifying timing closure.
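The efficiency gap between blocking and non-blocking operation can be illustrated with a toy cycle-count model: on a blocking bus each transaction occupies the interconnect end to end, while a non-blocking network lets requests overlap in flight. The cycle figures below are arbitrary assumptions chosen only to show the shape of the difference.

```python
# Toy latency model (illustrative numbers only) contrasting a blocking
# bus, where each initiator waits for the previous transfer to finish,
# with a non-blocking network where outstanding requests pipeline.

def blocking_cycles(requests, service_cycles):
    # Each transaction holds the bus end-to-end; time serializes.
    return len(requests) * service_cycles

def nonblocking_cycles(requests, service_cycles, issue_interval):
    # After the first request completes, one retires per issue interval.
    return service_cycles + (len(requests) - 1) * issue_interval

reqs = list(range(8))
print(blocking_cycles(reqs, service_cycles=20))                       # 160
print(nonblocking_cycles(reqs, service_cycles=20, issue_interval=4))  # 48
```

The model ignores arbitration and bank conflicts, but it captures why a non-blocking fabric can serve the same traffic with narrower datapaths and fewer duplicated layers.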
When the on-chip network supports non-blocking operation, QoS is a natural next step to support flow control from the IP cores that use memory services to the memory subsystem. The service requirements at the edge of the on-chip network can be efficiently mapped against the bandwidth supported by the memory subsystem. Once each IP core's requirements for memory bandwidth, latency and QoS priority are set, adding a memory scheduler that supports run-time QoS raises effective memory efficiency from the 50% to 60% typical of a multi-port controller to 70% to 85%. Buffering is centralized in one place in the memory subsystem rather than in distributed FIFOs close to demanding IP cores. These designs can satisfy the most demanding high-contention traffic cases without incurring "overdesign" penalties in which "worst case" analysis dominates the design.
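One way to picture such a run-time QoS scheduler is as a priority queue over pending memory requests. The sketch below assumes a simple strict-priority policy with hypothetical core names, and omits the bandwidth-reservation and starvation-avoidance logic a production scheduler would need.

```python
# Minimal strict-priority QoS scheduler sketch. Core names and priority
# assignments are hypothetical; lower priority value = more urgent.

import heapq

class QosScheduler:
    def __init__(self):
        self._queue = []  # min-heap of (priority, arrival_seq, core)
        self._seq = 0     # arrival order breaks ties within a priority

    def request(self, core, priority):
        heapq.heappush(self._queue, (priority, self._seq, core))
        self._seq += 1

    def grant(self):
        """Grant the next memory slot to the most urgent requester."""
        if not self._queue:
            return None
        return heapq.heappop(self._queue)[2]

sched = QosScheduler()
sched.request("cpu", priority=1)      # latency-sensitive
sched.request("display", priority=0)  # hard real-time
sched.request("gpu", priority=2)      # bandwidth-hungry, latency-tolerant
order = [sched.grant() for _ in range(3)]
print(order)  # ['display', 'cpu', 'gpu']
```

Because the scheduler sees all pending traffic at one point, buffering can be centralized there, which is the efficiency argument the paragraph above makes.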
Understanding these cases and optimizing the system early in the design cycle is another benefit for SOC architects: early SystemC use points to the high-impact areas of the design, allowing design cost to be focused where the application benefits are greatest.
With more open platforms supporting financial transactions as well as HD video, firewalls provide secure software operation and digital rights management, while error reporting and debug capabilities streamline development and deployment over the lifecycle of the SOC platform.
Designers' need for on-chip network IP that meets the performance of SOCs designed for production in coming years is not just to connect the increasing number of IP blocks, but to do so with data-flow services, such as QoS and firewalls, that support future extensions to the family of compatible platform SOCs. And at the end of the day, every design opportunity starts and ends with analysis of performance, area and power to ensure competitiveness.
By Jack Browne.
Jack Browne is Senior Vice President of Sales and Marketing for Sonics, Inc. Prior to joining Sonics, Jack served in several executive roles at MIPS Technologies, including Executive Vice President of Worldwide Sales and Executive Vice President of Marketing. Earlier in his career, he was the head of Motorola's 68000 processor marketing team. An acknowledged industry spokesman, he has written more than 100 papers for industry publications and presented at more than 100 industry conferences.
Go to the Sonics, Inc. website to learn more.