By Tom Anderson, Technical Marketing Consultant, OneSpin Solutions
Just a few years ago, the idea of an open-specification processor architecture with open-source implementations available would have been dismissed by many. Modern processor designs are highly complex, with such advanced features as multi-stage pipelines, multi-level caches, out-of-order execution, branch prediction, and memory pre-fetching. Beyond the hardware design, a huge ecosystem is needed. Reference design kits and software development platforms are essential. Operating systems and applications must be ported to the new architecture. A significant portion of the system-on-chip (SoC) industry must design-in the new processor and validate it in silicon. These challenges have appeared daunting indeed.
However, the introduction of the RISC-V architecture has defied conventional wisdom and is starting to disrupt the processor world. The original design was developed in the EECS Department at the University of California, Berkeley. The instruction set architecture (ISA), the primary processor specification, is now supported by the RISC-V Foundation. Its more than 200 members span semiconductors, systems, intellectual property (IP), software, academia, and more. Clearly there is a great deal of interest in this topic, but the industry has moved well beyond just curiosity. Many RISC-V cores, and even some SoCs built around these cores, are available as open source. Commercial cores also exist, and chips containing RISC-V processors are shipping. Many software titles have already been ported.
It might seem that nothing can stop the ascension of this new ISA, but no evolution happens without some challenges. For RISC-V, one of the biggest questions is how to verify the integrity of cores and the SoCs that contain them. How do core developers demonstrate that their implementations are correct? How do SoC designers ensure that the cores they choose, whether from open-source sites, IP suppliers, or internal development teams, are acceptable? Answering these questions requires thorough verification of design integrity and its four components: functional correctness, safety, security, and trust. This article discusses some of the specific challenges in meeting RISC-V integrity and describes some techniques that can help.
The attraction of the new architecture is clear: anyone can design anything based on RISC-V with no license fees or royalty payments. The ISA is open and available to all. Further, implementations are available from many sources, enabling comparative evaluations by SoC teams and establishing a stronger position for negotiation if choosing a commercial solution. This is in sharp contrast to most established processor architectures, available only from a single vendor that controls the ISA and charges for licensing. On the other hand, established processor families offer huge ecosystems and years of proven silicon. SoC developers give little thought to “re-verifying” a licensed core, trusting that the vendor will perform thorough verification of functional correctness and perhaps some other aspects of integrity.
To build a viable business, RISC-V core vendors must compete successfully with the proven established options. Open-source implementations must be vetted so that SoC developers are comfortable integrating them into their chips. This all comes back to verification. There is no one central team designing and verifying RISC-V cores. For the RISC-V ecosystem to thrive, core suppliers need an independent verification solution to ensure that their designs are compatible with one another and compliant with the ISA specification. Similarly, SoC developers must ensure that the cores they license are fully verified and ISA-compliant. For applications with safety, security, and trust requirements, the other aspects of integrity are also critical. RISC-V developers must ensure not only that their cores do what they are supposed to do, but also that they do not do what they are not supposed to do.
Verification starts with proving that a core implementation complies with the RISC-V ISA. Every required instruction must decode properly, perform the correct function, and complete as specified (location of results, condition flag settings, etc.). Traditionally, compliance is demonstrated by developing a suite of simulation tests that iterate through every instruction in the ISA and check that the results match the specification. Different values of operands and other options must be exercised to provide more coverage of the design. Any interesting corner cases that the verification team can identify (for example, an add operation that overflows) must also be tested. Compliance suites are a well-established part of the verification process. The RISC-V Foundation and other groups are working on such solutions.
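The shape of such a compliance test can be sketched in a few lines of Python. This is a minimal illustration, not a real compliance suite: `golden_execute` stands in for the ISA reference model, and `dut_execute` is a hypothetical hook that would, in practice, drive RTL simulation of the core under test.

```python
MASK = 0xFFFFFFFF  # RV32 registers are 32 bits wide


def golden_execute(op, rs1, rs2):
    """Reference model: what the ISA says each instruction must produce."""
    if op == "add":
        return (rs1 + rs2) & MASK  # RV32I ADD wraps modulo 2**32; no overflow trap
    if op == "sub":
        return (rs1 - rs2) & MASK
    raise ValueError("unsupported op: " + op)


def dut_execute(op, rs1, rs2):
    """Stand-in for the core under test; a real suite would stimulate RTL here."""
    return golden_execute(op, rs1, rs2)


def run_compliance_suite():
    """Iterate directed operand values and compare DUT results to the model."""
    # Directed values: zero, one, sign boundaries, all-ones.
    interesting = [0, 1, 0x7FFFFFFF, 0x80000000, 0xFFFFFFFF]
    failures = []
    for op in ("add", "sub"):
        for rs1 in interesting:
            for rs2 in interesting:
                expected = golden_execute(op, rs1, rs2)
                actual = dut_execute(op, rs1, rs2)
                if actual != expected:
                    failures.append((op, rs1, rs2, actual, expected))
    return failures
```

An empty failure list means every directed case matched the reference model; real suites extend this pattern across the full instruction set and many more operand combinations.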
Demonstration of functional correctness is not the same as proof. Simulation, no matter how extensive, can exercise only an infinitesimal portion of possible design behavior. Even the most carefully crafted compliance suite leaves gaps in design coverage. There are countless cases where specific operand values or corner-case conditions are untested, and some of these may trigger hidden design bugs. Arithmetic operations are especially subject to this problem. A single mis-typed array index in a register-transfer-level (RTL) description can yield a design where unexpected and nonintuitive operands produce the wrong answer. The very nature of simulation, as well as acceleration and emulation, makes it impossible to try every possible case.
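The kind of bug described above can be made concrete with a toy model. The adder below is a deliberately contrived illustration: it mishandles one rare carry pattern, much as a mistyped array index in RTL might, and a random regression is astronomically unlikely to stumble on the one bad operand pattern.

```python
import random

MASK = 0xFFFFFFFF


def buggy_add(a, b):
    """Models a 32-bit adder split into two 16-bit halves, with an
    illustrative bug: one specific carry pattern is dropped."""
    low = (a & 0xFFFF) + (b & 0xFFFF)
    carry = low >> 16
    if (a & 0xFFFF) == 0xFFFF and (b & 0xFFFF) == 0xFFFF:
        carry = 0  # the bug: this rare operand pattern loses the carry
    high = (a >> 16) + (b >> 16) + carry
    return ((high << 16) | (low & 0xFFFF)) & MASK


def random_regression(n, seed=1):
    """Random simulation: the odds of hitting the one bad pattern are about
    2**-32 per vector, so even very large runs will almost certainly pass."""
    rng = random.Random(seed)
    mismatches = 0
    for _ in range(n):
        a, b = rng.getrandbits(32), rng.getrandbits(32)
        if buggy_add(a, b) != (a + b) & MASK:
            mismatches += 1
    return mismatches
```

Directly evaluating the trigger pattern (`buggy_add(0xFFFF, 0xFFFF)`) exposes the wrong answer, while millions of random vectors would report a clean run; that gap between "tested" and "proven" is exactly the problem.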
Formal verification addresses this dilemma by operating in an entirely different way. Formal tools don’t iterate through test cases; they mathematically analyze a design in its entirety. They check the design against assertions, statements of design intent specified by the designers or verification engineers. If a mismatch is found, the formal analysis yields a counterexample showing how an assertion can be violated. Many tools can export counterexamples as simulation test cases so that the design can be debugged and fixed in a familiar environment. When no more counterexamples are found, a formal proof guarantees that there are no bugs remaining in the design that could be detected by the assertions. Simulation-based verification techniques can provide nothing even close to a proof.
In the context of RISC-V verification, the immediate question that arises is whether the ISA specification can be captured in the form of assertions. The answer is yes; a set of SystemVerilog Assertions (SVA) can capture the intended behavior of every RISC-V instruction from decode through completion. Standard SVA can be packaged to provide a way of expressing high-level, non-overlapping assertions that capture end-to-end transactions and requirements. Such assertions can achieve 100% functional coverage of the design by describing entire design transactions in a concise and elegant way, similar to timing diagrams. This method works particularly well for capturing all the rules of an ISA, and in fact a complete set of SVA for the RISC-V instructions is available today.
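The flavor of such an end-to-end property can be sketched in Python rather than SVA. This is a conceptual sketch only: `core_addi` is a hypothetical stand-in for observing the implementation, and the property relates architectural state before and after an RV32I ADDI instruction retires, independent of how many pipeline stages sit in between.

```python
MASK = 0xFFFFFFFF


def sign_extend12(imm):
    """Sign-extend a 12-bit immediate to 32 bits, per the RV32I spec."""
    return (imm | ~0xFFF) & MASK if imm & 0x800 else imm


def core_addi(regs, rd, rs1, imm):
    """Stand-in implementation of ADDI; writes to x0 are discarded."""
    result = (regs[rs1] + sign_extend12(imm)) & MASK
    new_regs = list(regs)
    if rd != 0:
        new_regs[rd] = result
    return new_regs


def addi_property(regs, rd, rs1, imm):
    """End-to-end property: rd holds the specified sum, x0 stays zero,
    and no other register changes."""
    after = core_addi(regs, rd, rs1, imm)
    expected = (regs[rs1] + sign_extend12(imm)) & MASK
    if rd == 0:
        writes_ok = after[0] == 0
    else:
        writes_ok = after[rd] == expected
    others_ok = all(after[i] == regs[i] for i in range(32) if i != rd)
    return writes_ok and others_ok
```

A formal tool, unlike this sketch, would prove the property for all register values, immediates, and pipeline interleavings at once rather than checking sampled cases.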
Given a set of ISA assertions, formal tools can deliver on their promise of thorough verification and achieve full proofs of all assertions on RISC-V core designs. The setup for this analysis includes specifying how the assertions connect to the design, certain design parameters (such as the number of pipeline stages), and which optional parts of the RISC-V ISA have been implemented. This flexibility is critical since RISC-V was architected to be implemented in many ways, with different tradeoffs in power, performance, and area (PPA). Even at this relatively early stage, the cores available span many different microarchitectures for many possible SoC applications. The RISC-V ISA also makes provisions for user extensions such as custom instructions, so the assertion set must be extensible as well.
Figure 1: Formally proving that a core matches the ISA is a key part of RISC-V integrity verification.
Source: OneSpin Solutions
Compliance to the ISA lies at the heart of functional correctness, but there are other aspects as well. Good practice for RTL designers includes writing assertions for design intent, especially at the microarchitectural level. These assertions can also be formally analyzed to find bugs via counterexamples and then prove that no more bugs exist. Other types of static analysis tools can detect many types of design problems even if no assertions at all are specified. These tools should be run on all RISC-V core implementations to check for common issues, including:
- Truncated signals (no sink)
- Incorrect SystemVerilog sensitivity lists
- Violations of synthesis “case” directives
- Improper resolution of “X” and “Z” values
- Undersized or oversized arrays and buses
- Unused portions of RTL (dead code)
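The dead-code check in the list above is, at heart, a reachability problem. The sketch below illustrates the idea on a toy netlist represented as a dictionary mapping each driven signal to the signals it depends on; the signal names are invented for illustration.

```python
def find_unused_signals(netlist, outputs):
    """Return driven signals that no design output transitively depends on."""
    reachable = set()
    stack = list(outputs)
    while stack:
        sig = stack.pop()
        if sig in reachable:
            continue
        reachable.add(sig)
        stack.extend(netlist.get(sig, []))  # walk fan-in of observable logic
    return sorted(set(netlist) - reachable)


# Example netlist: `dbg_cnt` feeds nothing observable, a classic dead-code finding.
netlist = {
    "sum": ["a", "b"],
    "carry": ["a", "b"],
    "out": ["sum", "carry"],
    "dbg_cnt": ["clk"],
}
```

Running `find_unused_signals(netlist, ["out"])` flags `dbg_cnt` as dead logic; production static-analysis tools perform far richer structural and semantic checks, but many reduce to graph analyses of this kind.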
Core providers must run all the formal and static checks to guarantee the correctness of their designs; SoC developers may also wish to re-run these tests as part of their IP evaluation and screening process. This is possible only if a third-party RISC-V integrity verification solution is available for use by both sides of the core IP transaction. When RISC-V cores are integrated into SoCs, additional verification steps are prudent. For example, formal connectivity checking ensures that the processor and the other design blocks are integrated properly. It is quite likely that other forms of formal analysis, including proving compliance to bus protocols and custom assertions in other blocks, will also be run on the SoC design.
As mentioned earlier, design integrity goes beyond functional correctness. Many RISC-V cores are being designed into products with strict functional safety requirements. Autonomous vehicles are the most-discussed safety-critical application today. Even if the design is 100% correct, no one wants a self-driving car to act dangerously due to a random error in the field, such as an alpha particle flipping a memory bit. The ISO 26262 standard requires the inclusion of safety logic to handle random errors, verification of the safety logic, and calculation of the probability of failure. Similar standards impose similar requirements on other high-risk applications such as military/aerospace, medical electronics, and nuclear power plant controllers. Fortunately, formal safety solutions exist to provide the necessary analysis and calculations. They automate a traditionally tedious and manual process, so their use in RISC-V SoC projects is essential.
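The arithmetic behind the required failure metrics is straightforward once fault classification is done; the hard part is the classification itself, which is what formal fault analysis automates. The sketch below uses a simplified form of the ISO 26262 single-point fault metric, and the FIT rates and fault counts are invented for illustration.

```python
def single_point_fault_metric(total_safety_fit, residual_fit):
    """Simplified SPFM: the fraction of safety-related faults that are
    either safe or covered by a safety mechanism (ISO 26262, simplified)."""
    return 1.0 - residual_fit / total_safety_fit


def diagnostic_coverage(detected_faults, injected_faults):
    """Diagnostic coverage from a fault-injection campaign."""
    return detected_faults / injected_faults


# Illustrative numbers: 1000 FIT of safety-related faults, of which
# 5 FIT remain undetected by any safety mechanism.
spfm = single_point_fault_metric(1000.0, 5.0)  # 0.995, i.e., 99.5%
```

In this made-up example, an SPFM of 99.5% would clear the 99% threshold commonly associated with the most stringent automotive safety level; the point is that such metrics are computed from exhaustive fault classification, not hand-waved.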
Security is also a major concern for many applications using RISC-V processors. No one wants a self-driving vehicle (or a pacemaker or other safety-critical design) to be hijacked by agents with malicious intent. Security holes in operating systems and applications get most of the headlines, but hardware vulnerabilities underlie some of these. RISC-V core providers must do their best to ensure that their designs do not contain unintentional flaws that could be exploited by nefarious adversaries. Some of the design problems detected by formal and static analysis tools potentially could be used as avenues to compromise chips and therefore systems. RISC-V providers must run these tools on their cores and integrators must do the same on their SoCs.
Chips in the field can be hijacked not only by accidental security holes, but also by hardware Trojans introduced intentionally at some point in the development process. This is increasingly likely with vertical corporate disintegration, increased outsourcing, and the obsolescence of closed, trusted supply chains. Trojans could be inserted by individuals or by compromised tools, but formal solutions for detecting them work well regardless of source. For example, formal equivalence checking tools can ensure that nothing has been added to the design between trusted RTL and the post-synthesis netlist. This applies to both RISC-V cores and SoCs overall.
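The Trojan-detection power of equivalence checking can be illustrated on a deliberately tiny 8-bit function, where exhaustive comparison stands in for the mathematical engines that prove equivalence on real designs. The trigger pattern and payload below are invented for illustration.

```python
def rtl_f(a, b):
    """Trusted RTL: a simple XOR function over 8-bit operands."""
    return (a ^ b) & 0xFF


def netlist_f(a, b):
    """Post-synthesis netlist with an inserted Trojan: behavior differs
    only for one rare trigger pattern (illustrative)."""
    if a == 0xA5 and b == 0x5A:  # Trojan trigger
        return 0x00              # payload: corrupt the output
    return (a ^ b) & 0xFF


def equivalence_check():
    """Exhaustively compare the two functions; return the first
    counterexample found, or None if they are equivalent."""
    for a in range(256):
        for b in range(256):
            if rtl_f(a, b) != netlist_f(a, b):
                return (a, b)
    return None
```

The check returns the exact trigger pattern as a counterexample. Real equivalence checkers achieve the same completeness on designs with astronomically large input spaces, which is precisely why a Trojan cannot hide behind a rare trigger.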
Equivalence checking is even more important for FPGA designs due to the aggressive optimizations needed to map a design to the fabric while meeting PPA goals. In fact, sequential equivalence checking is required to handle state machine re-coding, logic moving across clock boundaries, and other transformations that alter the state space of the design. Tools exist to handle these checks, verifying that no bugs or Trojans are introduced from RTL to gates to placed-and-routed design, and all the way to the bitstream that programs the FPGA.
Figure 2: There are several locations where hardware Trojans could be inserted into a RISC-V design.
Source: OneSpin Solutions
Establishing trust for the source RTL requires a different verification approach to prove that the RISC-V core does not contain any extra logic beyond the ISA. Capturing the ISA rules with SVA can be done in a specific way that ensures the set of assertions is sufficient to cover the core design and that there is no unverified RTL functionality. Any unexpected functionality in the design, including hardware Trojans, is detected and reported as a violation of the ISA. This includes the discovery of hidden instructions or unintended side effects of instructions. This process enables both core providers and integrators to have the highest trust in the RISC-V implementation. SoC teams can develop assertions for other blocks and use the same technique to establish a similar level of trust.
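The completeness idea can be sketched as a coverage check over the decode space: every encoding the core acts on must be claimed by an ISA assertion, and anything else must behave as an illegal instruction. The opcode sets below are hypothetical and far smaller than the real RV32I encoding space, and `core_decodes` is an invented stand-in for the implementation's decoder.

```python
# Hypothetical subset of major opcodes covered by the ISA assertion set.
ISA_OPCODES = {0x33, 0x13, 0x03, 0x23, 0x63, 0x37, 0x17, 0x6F, 0x67}


def core_decodes(opcode):
    """Stand-in for the implementation's decoder. A compromised core might
    accept an encoding the ISA does not define (here, 0x0B, illustrative)."""
    return opcode in ISA_OPCODES or opcode == 0x0B  # 0x0B: hidden instruction


def find_unspecified_behavior():
    """Report encodings the core accepts that no ISA assertion covers."""
    return [op for op in range(128)
            if core_decodes(op) and op not in ISA_OPCODES]
```

Here the check reports the hidden opcode `0x0B` as unverified functionality. A complete assertion set performs the equivalent analysis over every instruction field and side effect, so any extra logic surfaces as a violation rather than slipping through.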
To meet the integrity standards demanded by SoC integrators, RISC-V processor cores must be formally verified. Formal tools can guarantee that cores implement the ISA exactly, missing no required functionality while not inserting any deliberate or unintended behavior that violates the ISA. This includes screening the design for security vulnerabilities and detecting any hardware Trojans. Formal tools can also analyze the core design for functional safety and calculate the fault and failure metrics required by safety standards. A third-party RISC-V integrity verification solution lets multiple core vendors verify against the same solution (establishing a de facto standard), enables SoC teams to screen potential cores, and fosters a rich ecosystem. The OneSpin RISC-V Integrity Verification Solution satisfies these requirements. To find out more, please visit https://www.onespin.com/solutions/risc-v/.
About Tom Anderson
Tom Anderson is technical marketing consultant at OneSpin Solutions. His previous roles have included vice president of marketing at Breker Verification Systems, vice president of applications engineering at 0-In Design Automation, vice president of engineering at IP pioneer Virtual Chips, group director of product management at Cadence, and director of technical marketing at Synopsys. He holds a Master of Science degree in Computer Science and Electrical Engineering from M.I.T. and a Bachelor of Science degree in Computer Systems Engineering from the University of Massachusetts at Amherst.