Charlie Cheng, President and CEO, Kilopass Technology, Inc.
The statement that memory is a formative technology for electronics systems is non-controversial. Going one step further and stating that memory is increasingly important to electronics systems does not raise eyebrows either. Indeed, memory is omnipresent in electronics systems, from wearables to network routers, and often plays a key role in overall system performance. Demand for memory will continue to expand rapidly in the coming years, driven by the surging number of connected devices and the Internet backbone required to support them. The appetite for memories with increasingly exacting specifications seems insatiable, as the growing revenues and profits of semiconductor memory makers suggest.
Moore’s Law has driven most of the evolution of memory over approximately the last 30 years. Improvements in memory performance, power and area have largely been the direct result of process evolution: a combination of shrinking device dimensions and the introduction of new materials. Foundries, predominantly located in Asia, were the source of most of these improvements. On the design side, things have been remarkably stable compared to the constant revolution on the fabrication side. The most common architecture for an SRAM bit cell is still the six-transistor (6T) cell introduced decades ago.
Despite these continuous improvements on the fabrication side, memory has become a bottleneck in many systems. Shrinking process geometries and introducing new materials are no longer sufficient to meet the power, performance and area (PPA) demands of System-on-Chip (SoC) devices.
Where will the improvements come from?
Next-Generation Memories
In the memory space, plenty of research has gone into “next-generation” memories intended to replace DRAM and flash. The list of candidates includes resistive RAM (ReRAM), phase-change RAM (PCRAM), ferroelectric RAM (FeRAM), and spin-transfer torque magnetic RAM (STT-RAM). These memories are showing promising results in terms of power consumption and reliability.
The requirements of increasingly complex computing systems are driving the need for advanced high-density memories. Memory solutions that dissipate little power while sustaining high data rates, and that offer high scalability and high endurance, are enticing candidates to replace the traditional solutions. Accordingly, the growth rate of the next-generation memory market is high.
However, the enthusiasm generated by these new technologies must be tempered by their high risk. Take PCRAM, for example. Not long ago it appeared to be the leading contender to substitute for DRAM and NAND, only to be eclipsed by STT-RAM and ReRAM over the last couple of years. STT-RAM and ReRAM are starting to find limited commercial applications but are still constrained by their small capacity.
Pundits are forecasting that these new memories will improve scalability and reach price parity with DRAM and NAND in the next five years. The emerging memories would then start taking over enterprise storage by 2020, improving storage performance in data centers.
The emerging-memory option is clearly revolutionary because it implies a transition to new technologies and materials. But the history of semiconductors has consistently shown that the odds are poor when betting against the ability of engineers to reach the next level of performance by leveraging standard silicon process technology.
Predictions of the demise of Moore’s Law and of the need to replace traditional DRAM and NAND devices with new technologies have been proven wrong several times. Hence, evolutionary paths that leverage the huge investments in silicon design and manufacturing technologies should be explored.
Leveraging Existing Silicon Process Technology
On the traditional silicon side, the lack of invention in memory cell design is striking. Innovation inside the memory bit cell itself has been limited. As stated above, over the last 20 to 30 years the main breakthroughs in memory devices have come on the process and materials side.
The process gains that drive memory capacity and performance improvements have largely come out of the Asia-Pacific region. Data published by IC Insights, based on 300mm wafer capacity in December 2014, indicates that 74 percent of fab capacity is located in Taiwan, Korea, China and Japan combined (see figure 1).

Figure 1 caption: Taiwan, Korea, China and Japan together dominate the list of the top 10 IC fab capacity leaders, according to IC Insights.
This is in striking contrast to North America, which is home to only 15 percent of worldwide fab capacity. It is fair to say that recent silicon memory technology advances are the result of the manufacturing engineering prowess of the Asian foundries, which have built a unique ecosystem, leadership position and engineering talent pool.
The scalability of DRAM and NAND is becoming increasingly challenging, compounded by cost concerns as the cost per transistor takes longer and longer to drop below that of the previous generation. As a result, 28nm is expected to be a long-running node, with the cost crossover point extending significantly beyond the two-year cadence that had been the norm for multiple generations.
In other words, the complexities and difficulties of manufacturing at the new process node create a cost structure that provides little incentive to manufacture at smaller geometries.
While Asia leads in manufacturing innovation and investment, North America dominates R&D spending among semiconductor companies. Referring once again to IC Insights data, the largest concentration of semiconductor R&D investment comes from the USA. Intel and Qualcomm are the leaders and represent approximately 50 percent of the Top 10’s R&D spending (see figure 2). Also in the Top 10 are Broadcom, Micron and NVIDIA. For four of these five companies, R&D expenses represent more than 20 percent of total sales.
Yet these North American R&D investments have not resulted in improvements to the fundamental design of the SRAM bit cell.

Figure 2 caption: According to IC Insights, North America dominates R&D spending for semiconductor companies.
In an era dominated by mobile applications and their tight power budgets, transistor leakage has become a significant issue, and counteracting it has added to the complexity of the SRAM bit cell. Power consumption is a difficult problem with no cost-effective solution in sight. Reductions in the minimum supply voltage appear to have run their course, given the limitations created by several non-idealities (see figure 3). The design margin of the traditional SRAM is quickly being reduced to zero, rendering further voltage scaling impossible.

Figure 3 caption: The impact of non-idealities on SRAM design margins, quantified in terms of minimum Vdd, is large across different CMOS technology generations.
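As a rough illustration of why leakage undermines further voltage scaling, consider a first-order textbook model of SRAM static power; this is a generic subthreshold expression, not data for any particular foundry process.

```latex
% First-order (textbook) model of SRAM array static power.
% V_T = kT/q is the thermal voltage; n is the subthreshold slope factor.
\[
  P_{\text{static}} \approx N_{\text{cells}} \cdot V_{DD} \cdot I_{\text{leak}},
  \qquad
  I_{\text{leak}} \propto e^{-V_{th}/(n V_T)}
\]
% Meeting a speed target at a lower V_DD generally requires a lower V_th,
% and the exponential dependence above means per-transistor leakage rises
% sharply; this is one reason supply-voltage scaling of the 6T cell has
% largely stalled.
```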
The Opportunity is Large
With the 28nm node poised to have a long lifecycle, it could represent a tipping point for SRAM development. Moving to FinFET implies higher mask and wafer costs, plus the drawback of a less mature and therefore lower-yielding process. With SRAM representing 30 to 50 percent of the area of an SoC, a new bit cell that reduces area and leakage current while maintaining performance could keep the 28nm node viable for another generation of SoC designs.
For example, consider the prospects for a new 28nm SRAM technology that delivers array density similar to 16/14nm FinFET. Assume a multiprocessor SoC with large L2 and L3 caches that cover 30 to 40 percent of the die area. The overall die would be smaller on the 16/14nm FinFET node, but only marginally so.
However, this would be offset by the higher die cost and the much higher tapeout cost of the FinFET process. Throw in the added benefit of reusing 28nm IP for most standard interfaces, and the right 28nm SRAM can deliver the same performance as 16/14nm FinFET at half the cost.
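This cost argument can be made concrete with a back-of-envelope calculation. The sketch below is purely illustrative: the die size, scaling factors and relative cost per square millimeter are assumptions chosen only to show how the trade-off works, not figures from this article or from any foundry.

```python
# Back-of-envelope comparison: stay at 28nm with a denser SRAM bit cell,
# or port the whole SoC to 16/14nm FinFET. Every number below is an
# illustrative assumption, not published foundry or article data.

def die_area(logic_mm2, sram_mm2, logic_shrink, sram_shrink):
    """Scale the logic and SRAM portions of a die independently."""
    return logic_mm2 * logic_shrink + sram_mm2 * sram_shrink

# Hypothetical 28nm baseline: 100 mm^2 die, 40% of it L2/L3 SRAM.
base_die_mm2 = 100.0
sram_fraction = 0.40
logic_mm2 = base_die_mm2 * (1 - sram_fraction)   # 60 mm^2 of logic
sram_mm2 = base_die_mm2 * sram_fraction          # 40 mm^2 of SRAM

# Option A: 28nm with a denser SRAM cell (logic unchanged, SRAM assumed
# to reach roughly FinFET-class density, here taken as 0.5x).
area_28nm = die_area(logic_mm2, sram_mm2, logic_shrink=1.0, sram_shrink=0.5)

# Option B: full 16/14nm FinFET port (assume ~0.6x logic, 0.5x SRAM scaling).
area_finfet = die_area(logic_mm2, sram_mm2, logic_shrink=0.6, sram_shrink=0.5)

# Assumed relative cost per mm^2 of finished silicon (FinFET wafers pricier).
cost_per_mm2 = {"28nm": 1.0, "16/14nm FinFET": 1.8}

print(f"28nm + new SRAM cell: {area_28nm:.0f} mm^2, "
      f"relative die cost {area_28nm * cost_per_mm2['28nm']:.0f}")
print(f"16/14nm FinFET port:  {area_finfet:.0f} mm^2, "
      f"relative die cost {area_finfet * cost_per_mm2['16/14nm FinFET']:.0f}")
```

With these placeholder numbers the FinFET die comes out smaller but costs more once the higher price per square millimeter is applied, and the comparison tilts further toward 28nm when mask-set and tapeout costs and the reuse of existing 28nm interface IP are factored in.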
This means there is an opportunity for a company to bring to market a disruptive new memory bit cell that extends the life of the 28nm process while delivering low power dissipation and high performance. It is time for the design community to complement the “treadmill innovation,” as Clayton Christensen calls it, of extending Moore’s Law. Maybe Silicon Valley will live up to its name and focus the best minds in the industry on delivering this disruptive innovation.
The next few years will be an exciting time to be in the memory industry. A strong partnership between foundries and design teams should yield new ways to discover disruptive memory bit cells and other fundamental devices, enabling new applications at lower cost and lower risk.