Paper Submitted by Bipul Talukdar, Director of Applications Engineering in North America, SmartDV

 

The future of chip design could look entirely different in a few short years as the semiconductor industry witnesses an advancing trend toward free, flexible, community-supported hardware designs for the long tail of new applications based on custom semiconductor devices. Anyone can now take advantage of open-source solutions to design their own CPU, custom accelerator or specialized processor, creating a semiconductor renaissance like no other the industry has seen before. In addition, the availability of high-performance FPGAs with affordable, commercialized flows gives a boost to low-budget, high-volume application markets.

These changes also bring a new level of complexity because such chips need a disparate set of capabilities to support open source, flexible and adaptable designs. A chip that supports open source does not need customized implementations, because open source capabilities and other software or hardware plug-in devices that follow standard protocols will interoperate with it.

It’s unclear whether the semiconductor industry is ready to respond.

In the past, a centralized design automation and IP ecosystem made sense when there were a handful of large, concentrated markets with more homogeneous architectures such as cell phones, PCs, servers, networking components and hard drives. These applications create large, competitive markets that demand the latest in silicon technology and require commercial tools and IP to address performance, price, time to market and risk reduction.

The semiconductor industry knew how to respond.

New chips are upending the general industry order by enabling centralized cloud computing and decentralized, intelligent, connected devices working together to make technology more useful to consumers. They are propelling the industry from millions of early computers to hundreds of millions of PCs to billions of phones to tens of trillions of connected IoT devices. Anything that can be connected and anything that benefits from being smarter with AI, machine learning and software will be connected and will get even smarter. Some of these breakthroughs are a result of open source solutions.

As wonderful as this sounds, there is no “free lunch.” A closer look at the open source phenomenon reveals some yet-to-be-resolved deficiencies in the emerging chip design environment. To be sure, a raft of complex hardware applications has been introduced and made available using free, open source tools with flexible, no-cost or low-cost licensing able to fit into any development flow.

The end goal is not design. It is delivering a manufacturable product that provides some benefit or solution for the target market. That journey from design into manufacturing is neither free nor risk-free, and a respin carries huge costs.

Chip design verification removes risk by uncovering issues that would otherwise result in a manufactured chip that does not perform to specification or perhaps does not work at all. Verification consumes the majority of the project development schedule; estimates suggest that 60% to 80% of a project’s resources are budgeted for verification to ensure success. A failed design is expensive when considering longer time to market, resource costs and the costs associated with respins. A respin on a complex chip manufactured in an advanced process technology can cost more than the investment in the tools and IP used to create the chip design itself.

Large processor companies spend years and many billions of dollars developing verification flows and methodologies for their specific processors and instruction sets. They benefit from time and experience and, even then, notable issues demonstrate how difficult it is to verify CPU designs for every possible scenario. In fact, it is impossible to completely verify a CPU design in a realistic timeframe, even using the most advanced supercomputers. Instead, groups focus their verification efforts on finding the most likely or expected flaws within the time and resources allotted to the project. Even then, bugs and errors may not be found until the chip is fully deployed.

A further consideration is the newness of an open source solution such as RISC-V, which does not have the benefit of field-proven experience; a carefully vetted CPU verification strategy is therefore essential. One challenge is the availability of a “Golden Reference Model” of the CPU, also known as an Instruction Set Simulator (ISS), that fits into an open-architecture ISA flow. Because the ISS must be updated with every change to the ISA, a generatable ISS that supports ISA customization is called for.
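
To make this concrete, a common way to use an ISS as the golden reference model is a step-and-compare checker that locks the design under test to the ISS at instruction-retirement granularity. The sketch below is a minimal SystemVerilog/UVM illustration only: the DPI-C function iss_step() and its return value are hypothetical stand-ins for whatever API the chosen ISS actually exposes, and a real flow would compare far more architectural state than the program counter.

```systemverilog
// Minimal step-and-compare sketch. iss_step() is a hypothetical DPI-C hook
// into the ISS golden reference model; real flows compare registers, CSRs
// and memory effects, not just the retired PC.
import uvm_pkg::*;
`include "uvm_macros.svh"

// Execute one instruction in the ISS and return the retired PC (assumed API).
import "DPI-C" function longint unsigned iss_step();

class iss_compare extends uvm_component;
  `uvm_component_utils(iss_compare)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // Called by a retirement monitor each time the DUT retires an instruction.
  function void check_retire(longint unsigned dut_pc);
    longint unsigned ref_pc = iss_step();
    if (dut_pc != ref_pc)
      `uvm_error("ISS_MISMATCH",
                 $sformatf("DUT retired PC 0x%0h, ISS expected 0x%0h",
                           dut_pc, ref_pc))
  endfunction
endclass
```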

Any new architecture design paradigm creates another challenge because little off-the-shelf IP is available. Bridges and adapters must be added to the list of deliverables to be verified for ultimate open source success.

It won’t be long before verification groups are afforded much-needed confidence in their verification results, combining open collaboration and innovation with commercial verification flows for an open source CPU or custom accelerator. That’s because more vendors are offering a variety of electronic design automation tools, both software and hardware, to support the open source movement.

Given the implementation transparency of open source hardware, libraries of reusable verification components and pre-defined functional blocks called verification IP can accelerate verification sign-off. Verification IP became a mainstay of chip design verification flows after some clever engineers applied the black-box concept to verification.

The result is that verification engineers have access to pre-tested blocks for the protocols, interfaces and memories required to verify their SoC designs, blocks that have been deployed across thousands of projects. Different verification engines such as simulation, emulation and formal verification can be employed in the verification process and, accordingly, verification IP migrates from engine to engine as the process evolves. To fit this flow, verification IP is offered for simulation, emulation, formal verification and FPGA-assisted verification.

Verification IP creates an infrastructure to support industry-standard interfaces and interconnect protocols and offers a known reference against which the design under verification can be compared. Its infrastructure framework or testbench comes with stimulus generators, monitors, scoreboards/checkers and functional coverage models. Open source standards such as RISC-V’s TileLink benefit from available verification IP because it can accelerate implementation and comprehensive verification, and offer faster testbench development, more complete verification with built-in coverage analysis and simplified results analysis.
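
As a simplified illustration of the built-in coverage analysis mentioned above, the fragment below sketches a functional coverage collector for a TileLink-style channel-A transaction. The tl_a_item class and its fields are hypothetical simplifications for this article, not the API of any particular commercial VIP.

```systemverilog
// Simplified sketch of a coverage collector for a TileLink-style A channel.
// tl_a_item and its fields are illustrative, not an actual VIP's API.
import uvm_pkg::*;
`include "uvm_macros.svh"

class tl_a_item extends uvm_sequence_item;
  rand bit [2:0]  opcode;  // e.g. PutFullData = 0, Get = 4 on channel A
  rand bit [31:0] address;
  rand bit [2:0]  size;    // log2 of the transfer size in bytes

  `uvm_object_utils(tl_a_item)
  function new(string name = "tl_a_item");
    super.new(name);
  endfunction
endclass

// Subscribes to the VIP monitor's analysis port and samples coverage.
class tl_coverage extends uvm_subscriber #(tl_a_item);
  `uvm_component_utils(tl_coverage)

  covergroup cg with function sample(tl_a_item t);
    cp_opcode     : coverpoint t.opcode;
    cp_size       : coverpoint t.size;
    opcode_x_size : cross cp_opcode, cp_size;
  endgroup

  function new(string name, uvm_component parent);
    super.new(name, parent);
    cg = new();
  endfunction

  function void write(tl_a_item t);
    cg.sample(t);
  endfunction
endclass
```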

A testbench is complicated and requires a variety of verification IP to generate comprehensive tests and to stimulate and verify different interfaces and standard bus protocols. Most verification IP includes transactions/sequences, drivers, configuration components, a test plan for the specific interface and test suites that connect to the design under test (DUT) inside the testbench to simulate or emulate an IP block or an SoC design, as sketched below.
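
The following is a hedged sketch of the driver side of such a testbench. The bus_if interface, bus_item transaction and the "vif" configuration-database key are illustrative names invented for this example, not part of any standard or product; a real VIP driver would also handle resets, timeouts and protocol corner cases.

```systemverilog
// Illustrative driver skeleton for a generic bus VIP. bus_if, bus_item and
// the "vif" config-DB key are hypothetical names, not a real VIP's API.
import uvm_pkg::*;
`include "uvm_macros.svh"

interface bus_if (input logic clk);
  logic        valid, ready, write;
  logic [31:0] addr, wdata;
endinterface

class bus_item extends uvm_sequence_item;
  rand bit        write;
  rand bit [31:0] addr, wdata;
  `uvm_object_utils(bus_item)
  function new(string name = "bus_item");
    super.new(name);
  endfunction
endclass

class bus_driver extends uvm_driver #(bus_item);
  `uvm_component_utils(bus_driver)
  virtual bus_if vif;  // handle to the DUT-facing interface

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    if (!uvm_config_db #(virtual bus_if)::get(this, "", "vif", vif))
      `uvm_fatal("NOVIF", "bus_if handle not found in the config DB")
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);  // blocking pull from the sequencer
      @(posedge vif.clk);                // drive the pins for one transfer
      vif.valid <= 1'b1;
      vif.write <= req.write;
      vif.addr  <= req.addr;
      vif.wdata <= req.wdata;
      @(posedge vif.clk iff vif.ready);  // wait for the DUT to accept it
      vif.valid <= 1'b0;
      seq_item_port.item_done();
    end
  endtask
endclass
```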

Design and verification groups implementing an open source design have choices to make when architecting their methodology, flow and toolset for CPU verification. Some prefer the Accellera Universal Verification Methodology (UVM) standard, while others may choose plain SystemVerilog or SystemC. Either approach is suitable. Verification IP is a valuable reference for verifying a CPU architecture: it is used to verify system-level functionality and validate target performance by generating application-specific traffic, supplied as industry-standard-compliant, plug-and-play modules in different hardware verification languages (HVLs) and methodologies such as SystemVerilog, SystemC and UVM.
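
For teams that choose the UVM route, the top-level entry point can be as small as the sketch below. The cpu_smoke_test and tb_top names are placeholders, and the test body does nothing beyond keeping the simulation alive; a real test would instantiate an environment containing the agents, scoreboard and coverage components described above.

```systemverilog
// Minimal UVM entry point; cpu_smoke_test and tb_top are placeholder names.
import uvm_pkg::*;
`include "uvm_macros.svh"

class cpu_smoke_test extends uvm_test;
  `uvm_component_utils(cpu_smoke_test)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    phase.raise_objection(this);  // keep simulation alive while the test runs
    `uvm_info("SMOKE", "Reset, boot and stimulus sequences would start here",
              UVM_LOW)
    phase.drop_objection(this);
  endtask
endclass

module tb_top;
  // The DUT, interface instances and config-DB setup would go here.
  initial run_test("cpu_smoke_test");  // or select the test via +UVM_TESTNAME
endmodule
```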

Similarly, there are multiple choices when it comes to simulation platforms, a decision typically made based on either experience or the simulation platform already in use. Newer companies weigh the pluses and minuses of investing in de facto standard simulators from the EDA industry or open source simulators like Verilator. Many complex designs demand the use of hardware emulation or FPGA prototyping to more fully verify the design.

Given this long list of criteria, developing a verification flow for open source CPU designs may seem like an enormous undertaking. Commercial CPU verification platforms are available as complete environments for verifying new open source CPU designs. Most are compatible with SystemVerilog/UVM and C-based flows, as well as all industry-standard simulators and Verilator. Test suites for open source ISAs are supported, along with golden reference models.

 

The promise of free, flexible, community-supported solutions means that nothing will be the same in the semiconductor industry, especially in the verification landscape. That may not be a bad thing, as new applications offer consumers in many different markets more usable electronic devices.

_________________________________________________________________

About Bipul Talukdar

Bipul Talukdar is SmartDV’s director of Applications Engineering in North America. He is an expert in hardware functional verification with a specialty in Verification IP development, formal property verification and hardware emulation. Talukdar’s recent experience is in formal verification of RISC-V based cores and subsystems and coverage-based closure. He holds a Bachelor of Science degree in Engineering, Electronics and Telecommunication from the National Institute of Technology, Silchar in India.