Paper Submitted by Dr. Carlos Macian, Senior Director AI Strategy & Products, eSilicon Corporation

Have you had a look lately at an autonomous driving SoC? Have you noticed, besides the cool machine learning stuff, the grocery list of third-party IP that goes into it? It is a long, long list, composed of pieces big and small and quite diverse: interface and peripheral PHYs and controllers, memories, on-chip interconnects, PVT monitors and PLLs, for example. In some highly publicized cases, the list includes eSilicon memories, by the way. For something developed under such secrecy, and that aims to be so differentiating, it is counterintuitive that half of it should actually be composed of other people’s mainstream IP.

Third-party standard IP, by definition, is non-differentiating and so does not add any value, right?
Yet, it pretty much drives the cost of an SoC these days:
• It represents between 40 and 50 percent of the overall area and power, and it drives unit price on two fronts. First, silicon cost is directly proportional to area and tightly related to yield, which is itself dependent on area (see the sketch after this list). Second, key IP brings associated royalties that ultimately increase the unit price.

• It is also a major determinant of NRE cost, where it represents anywhere from 30 to 50 percent. In fact, it is the single most expensive portion of the NRE, barring only the mask sets themselves at 7nm and beyond. And it doesn’t stop there: a large part of the engineering workload is associated with IP integration, IP test and debug, and IP vendor management.
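To illustrate why area matters twice, here is a minimal sketch using the classic Poisson yield model; all of the numbers (wafer cost, defect density and so on) are hypothetical placeholders, not eSilicon or foundry data.

    # Illustrative only: a simple Poisson yield model showing how die area
    # drives both gross die count and die yield, and therefore unit cost.
    # All numbers are hypothetical placeholders, not eSilicon or foundry data.
    import math

    def cost_per_good_die(area_mm2, wafer_cost=10000.0, wafer_diameter_mm=300.0,
                          defect_density_per_mm2=0.001):
        wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
        gross_dies = wafer_area / area_mm2                         # ignores edge loss
        die_yield = math.exp(-area_mm2 * defect_density_per_mm2)   # Poisson yield model
        return wafer_cost / (gross_dies * die_yield)

    # Trimming 40 mm^2 of IP area compounds: more dies per wafer *and* higher yield.
    print(f"500 mm^2 die: ${cost_per_good_die(500):.0f}")
    print(f"460 mm^2 die: ${cost_per_good_die(460):.0f}")

Trimming IP area pays twice, through more dies per wafer and higher yield on each of them, which is why the 40 to 50 percent share quoted above flows so directly into unit price.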

Why Third-Party IP?
So, if it is so expensive and cumbersome, why do people rely so heavily on third-party IP? Quite simply, because SoCs are too complex to build every non-vital piece yourself. This IP eliminates the development, verification and, to some extent, the integration work associated with the support and infrastructure functions that surround the “secret sauce” in your chip.

Standards play a key role in this, for they specify both the functionality and the interfaces that must be supported, thereby facilitating interoperability and interconnectivity. Interoperability is key not only to ensure that equipment from different vendors will work correctly together, but also to allow an OEM to replace suppliers with minimum disruption. Common interfaces are the other side of the same coin, ensuring that different pieces of equipment not only perform the same function, but are also accessed in the same way.

However, necessary as they are, today’s SoC design standards are not sufficient. What if different pieces of IP assume different metal stacks? Or support different operating ranges? Or provide deliverables in different formats or for different tools, e.g., GDSII vs. OASIS or IBIS vs. SPICE? Those aspects, deeply tied to the physical implementation of the SoC, may seem mundane but have a huge impact on the design effort as well as on the design schedule. This is why having a comprehensive, harmonized, mutually compatible set of IP is fundamental. Many call this a technology platform. Typically, platforms target a certain node and a certain market niche, such as eSilicon’s neuASIC™ 7nm platform for machine learning ASIC design or its highly configurable 7nm networking IP platform. But even this level of flexibility is being pushed further to achieve higher efficiency in the design process and easier IP integration. Enter eSilicon’s ASIC Chassis.

eSilicon ASIC Chassis
Ultimately, most AI and networking ASICs share much of their IP: a similar set of high-speed interfaces (e.g., PCIe, Ethernet, HBM2), on-chip interconnects (AMBA comes to mind), processing elements (RISC-V or ARM cores), peripherals (USB, MIPI, I2C, JTAG) and infrastructure IP (PVT monitors, PLLs, etc.). They are all fundamental to a well-functioning chip but, as we said before, not part of the product’s “secret sauce” that will provide differentiation in the marketplace. So, most customers would rather outsource the integration and verification of all of these components and focus on their own proprietary core logic.

eSilicon has developed the ASIC Chassis to serve this need:
• Take an IP platform that is well integrated functionally, logically and physically
• Parameterize it so you can select your own subset of IP (see the sketch below)
• Include a homogeneous DFT architecture along with IP test, configuration and monitoring
• Pre-verify the full ASIC Chassis: every element standalone, but also as part of the resulting I/O ring
• Present a set of standardized interfaces to the core logic to facilitate integration

This template-like approach takes ease of integration to the next level and is what most customers are truly after.
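As a purely conceptual illustration of the parameterization step, the sketch below shows the kind of choices a chassis configuration might capture. The class, fields and checks are hypothetical, invented for this article; they do not represent eSilicon’s actual deliverables, tools or interfaces.

    # Hypothetical illustration of "parameterize it to select your own subset."
    # None of these names correspond to actual eSilicon deliverables or tools;
    # they simply show the kind of choices a chassis configuration captures.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ChassisConfig:
        pcie_lanes: int = 16
        ethernet_ports: int = 4                      # e.g., 4x 100G
        hbm2_stacks: int = 2
        cpu_core: str = "RISC-V"                     # management/control processor
        peripherals: List[str] = field(default_factory=lambda: ["I2C", "JTAG", "USB"])
        pvt_monitors: int = 8

        def validate(self) -> None:
            # Toy consistency check; a real chassis flow would verify far more,
            # e.g., I/O ring fit, package ballmap and DFT connectivity.
            if self.hbm2_stacks not in (0, 2, 4):
                raise ValueError("unsupported HBM2 configuration")

    cfg = ChassisConfig(pcie_lanes=8, hbm2_stacks=4)
    cfg.validate()
    print(cfg)

The customer expresses intent at this level, while the integration, DFT hookup and I/O ring assembly behind those choices have already been verified as part of the chassis.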

Pros and Cons of Standards
So now we’re truly done, right? Well, not really. Have you considered whether standards, and hence mainstream IP, are actually hurting your ability to compete in the marketplace?
Yes, standards facilitate integration, but at what price? How many features contained in the standard do you really use? How many modes of operation? And while we’re at it, are those modes of operation optimal for your particular architecture? The honest answer to that last question is almost always “no”. Standards represent a certain amount of overhead for any design, which the architect will gladly accept, as long as:
• The overhead is relatively minor
• And/or the cost/risk of developing an optimized IP far outweighs the incremental gain

Risk is typically a determining factor when it comes to key IP, such as a SerDes. Most customers would prefer to use an unmodified, silicon-proven, off-the-shelf copy of the IP; but speaking from experience, even for this complex IP, modifications are quite frequent.

There is a whole cohort of IP, on the other hand, where optimizations are low risk and the benefits are substantial. Consider a data center training and inference ASIC. Chips like this have a large amount of on-die memory, sometimes as much as 512MB! Very frequently, the number of distinct memory configurations is very small, with thousands upon thousands of repeated instances per chip. eSilicon develops its own memory compilers and also has the capability to optimize specific memory instances to target minimum area and power and even optimize the access pattern, as in the example in Table 2. The end result of such optimizations is 38 percent power savings and 16 percent area savings. On a chip bigger than 500 square mm in 7nm technology, this represents more than 35 square mm of saved silicon!
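As a quick sanity check on those figures, one can work backwards from them; the sketch below is just arithmetic on the numbers in the paragraph above, with the die size taken at its quoted 500 square mm lower bound.

    # Back-of-the-envelope check on the memory-optimization figures quoted above.
    saved_mm2 = 35.0        # "more than 35 square mm of saved silicon"
    area_savings = 0.16     # 16 percent area savings on the memories
    die_area_mm2 = 500.0    # "a chip bigger than 500 square mm"

    implied_memory_area = saved_mm2 / area_savings   # memory footprint before optimization
    print(f"Implied memory footprint: ~{implied_memory_area:.0f} mm^2")
    print(f"Share of a {die_area_mm2:.0f} mm^2 die: ~{implied_memory_area / die_area_mm2:.0%}")

In other words, the memories occupy on the order of 200 square mm, a big enough slice of the die that even modest percentage gains turn into serious silicon and power savings.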

The key is to minimize risk while maximizing gain. This is achieved by focusing the changes on the non-critical areas of the design that yield the highest PPA (power, performance, area) improvement. Generations of experience in building the IP and a bullet-proof design and verification methodology do the rest. In exchange, millions of dollars and tens of watts per chip are saved!

Now, we have finally reached the end of the line. We started with the need for vast amounts of IP in modern SoCs and ended up with a key realization — standard IP delivers standard results, optimized IP brings a market advantage.

You will certainly need a mixture of standard IP and optimized IP for every one of your ASICs. Choose carefully which is which, but do not hesitate to dive in when big bucks and PPA are at stake. Do not underestimate the complexity of IP integration: demand a coherent IP platform. Focus on your differentiating logic and let the ASIC Chassis take care of the integration and verification of the infrastructure and interface IP for you. And when all is said and done, sit down and enjoy your silicon, for ASIC design is certainly not for the faint of heart!

__________________________________________________________________________________

eSilicon Headquarters | 2130 Gold Street, Suite 100 | San Jose, CA 95002
Phone: 1.408.635.6300 | www.esilicon.com | sales@esilicon.com | https://star.esilicon.com