October 17, 2011 -- In recent years, our industry has seen only incremental improvements in memory performance compared to significant improvements in processing performance. The increase in processing performance (via architectural innovations such as instruction-level parallelism, pipelining, the issuing of multiple instructions per cycle, and, lately, the advent of multicore architectures) has created an ever-increasing need for higher-performance memories. Despite huge increases in on-chip memory capacity (reducing the need for slow off-chip memory accesses), SOC architects and designers are struggling to meet the performance requirements of today's data-hungry applications. In particular, even embedded memories have become a bottleneck and require higher performance.
Increasing embedded memory performance, however, is no simple task. Historically, circuit techniques and advances in lithography have been used as the sole "hammer" to increase embedded memory performance. These traditional approaches require expensive area trade-offs and are difficult and costly to implement due to protracted development times and an expensive silicon-validation process. The industry has long been aware that the continued use of these traditional approaches will not solve the memory-performance bottlenecks occurring in today's products. But, until now, there were no viable alternatives. This is where Memoir Systems comes into play.
Memoir has pioneered a new approach, Algorithmic Memory technology, that provides a new "chisel" for increasing memory performance. We use logic synthesis instead of circuit techniques, which enables us to solve the memory-performance bottleneck at a higher level of abstraction. How do we do this?
It should be noted that boosting memory clock speeds is no longer a practical option. The only effective way to increase performance is to build multi-ported memories: memories that can support multiple memory operations per cycle. So, we combine existing embedded memories built using traditional circuit techniques with our algorithms (synthesized in hardware) to build a multi-ported memory. Our algorithms include a variety of techniques, such as caching, virtualization, pipelining, and data encoding, that are woven together to operate seamlessly in order to achieve up to 10X more memory operations per second (MOPS).
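To make the data-encoding idea concrete, here is a minimal behavioral sketch of one well-known technique of this kind: serving two reads per cycle from banks that each allow only one read per cycle, by adding an XOR parity bank. When both reads hit the same bank, one value is reconstructed by XORing the parity bank with the other data banks. This is an illustrative simulation under assumed names (`XorTwoReadMemory`, `read2`), not Memoir's actual implementation.

```python
class XorTwoReadMemory:
    """Two reads per 'cycle' from single-read-per-cycle banks, using an
    extra XOR parity bank (one data-encoding technique of the kind the
    article alludes to; illustrative only)."""

    def __init__(self, num_banks, rows_per_bank):
        self.num_banks = num_banks
        self.banks = [[0] * rows_per_bank for _ in range(num_banks)]
        # parity[r] always equals the XOR of row r across all data banks
        self.parity = [0] * rows_per_bank

    def _locate(self, addr):
        return addr % self.num_banks, addr // self.num_banks

    def write(self, addr, value):
        bank, row = self._locate(addr)
        # keep the parity bank consistent: cancel old value, fold in new one
        self.parity[row] ^= self.banks[bank][row] ^ value
        self.banks[bank][row] = value

    def read2(self, addr_a, addr_b):
        """Serve two reads in one cycle without reading any bank twice."""
        (bank_a, row_a) = self._locate(addr_a)
        (bank_b, row_b) = self._locate(addr_b)
        val_a = self.banks[bank_a][row_a]      # first read goes to its bank
        if bank_a != bank_b:
            val_b = self.banks[bank_b][row_b]  # no conflict: bank is free
        else:
            # bank conflict: rebuild the second value from the parity bank
            # and every *other* data bank, each read only once this cycle
            val_b = self.parity[row_b]
            for b in range(self.num_banks):
                if b != bank_b:
                    val_b ^= self.banks[b][row_b]
        return val_a, val_b
```

The cost is one extra bank plus read-modify-write logic on the write path, which is the kind of modest logic overhead traded for an additional read port.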
Since Algorithmic Memory technology builds on existing embedded memories and is implemented in logic, it can significantly shorten the development time of new memories — traditionally ranging from 6 to 12 months — to a matter of days. This represents a 100X shorter development time. In addition, it allows memories to be architected and analyzed in real-time (within seconds) with the press of a button compared to the traditional analysis of a custom memory (for area, power and other characteristics), which typically takes weeks. This represents a 1000X reduction in memory architecture analysis time.
Memoir's technology can also be leveraged for significant reductions in area and power. For example, Memoir can synthesize a new higher-performance memory from a lower-performance, high-density memory (which typically has lower area and power). The new memory synthesized in this manner achieves the same performance as a high-performance memory built using circuits alone but can have lower total area and power. In addition, our Algorithmic Memory technology delivers configuration versatility. It is capable of synthesizing unique memory configurations with any combination of read/write interfaces, using only single- and dual-port memory types along with standard cell libraries.
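As a simple illustration of composing port configurations from standard memory types, the sketch below builds a 2-read/1-write memory from two 1-read/1-write copies by replication: every write updates both copies, so each read port has a dedicated copy to serve it. This is a textbook composition under assumed names (`ReplicatedTwoReadMemory`), not Memoir's actual netlist, and it trades area (two copies) for the extra read port rather than using the coded approaches mentioned above.

```python
class ReplicatedTwoReadMemory:
    """A 2-read/1-write memory assembled from two 1-read/1-write memories
    by replication (illustrative composition only)."""

    def __init__(self, size):
        self.copy_a = [0] * size
        self.copy_b = [0] * size

    def write(self, addr, value):
        # one write per cycle, mirrored into both physical copies
        self.copy_a[addr] = value
        self.copy_b[addr] = value

    def read2(self, addr_a, addr_b):
        # two simultaneous reads, each served by its own physical copy
        return self.copy_a[addr_a], self.copy_b[addr_b]
```

In practice a synthesis tool can choose among such compositions (replication, banking, coding) to hit a requested port configuration at the best area and power point.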
The benefits of Memoir's new technology are varied and wide-reaching. It delivers radically increased performance and/or reduced area and power with only a minor overhead of logic added to the existing embedded memory. We have also made the technology easy to adopt into a current SOC design environment through ease of interfacing, ready integration, and rapid implementation.
First, Memoir's Algorithmic Memories present standard SRAM and/or eDRAM memory interfaces and can be used as drop-in replacements for existing embedded memories. Second, Algorithmic Memories can be readily integrated into an existing EDA or standard SOC (ASIC, ASSP, GPP, FPGA) design flow because they are built from standard embedded memory IP types and standard cells. Third, Algorithmic Memory technology is implemented as RTL-level IP, not as physical IP delivered as GDSII. This makes our technology independent of process, node, and foundry, allowing us to rapidly implement and provide an Algorithmic Memory on older, as well as advanced, process nodes.
A significant aspect of Memoir's Algorithmic Memory technology is that it addresses the memory problem at the system level. For the first time, SOC designers have a technology platform to define memories that can deliver up to 10X more MOPS, generate these memories in days rather than months, and do "what-if" analysis in seconds rather than weeks. We believe that Memoir's Algorithmic Memory technology empowers SOC architects, makes memory performance a configurable entity, and ushers in a completely new way to address memory performance challenges. That's why we refer to these technology innovations as Memory 2.0.
By Adam Kablanian
Adam Kablanian is Chief Executive Officer of Memoir Systems, Inc. He was a co-founder of Virage Logic and served as President, CEO, and then Chairman until March 2008. Later, Adam was co-founder and CEO of iCON Communications, a WiMAX broadband ISP in Armenia, which he successfully sold in 2009. Adam has also served on the boards of numerous EDA (Sequence Design) and application software (IconApps) companies, and is currently a board member of Ambature LLC, which develops technologies to significantly improve the efficiency of electrical energy consumption, distribution, and usage.
Go to the Memoir Systems, Inc. website to learn more.