Thomas L. Anderson, Vice President of Marketing, Breker Verification Systems, Inc.
This whole planet is buzzing about the emergence of the Internet of Things (IoT) and the constant, complex web of interconnectivity it will enable. Perhaps no aspect of IoT is more personal than wearable technology, which weaves connectivity into the very clothes, shoes and accessories that people wear. The complex system-on-chip (SoC) devices that enable this level of connectedness are hard to design and even harder to verify. In fact, three attributes of wearable technology make it especially challenging for SoC verification teams.
First, wearable technology is likely to communicate in many ways with many different devices. Audio input, some sort of display output and a wide variety of wireless protocols are a given. Audio output, camera input, touchpad input and sensors of many types will also appear in certain classes of wearable items. The potential for many of these communication channels to be active at the same time greatly increases verification complexity. Corner-case design bugs must be found and fixed prior to device fabrication.
This challenge exists today for smartphones, tablets and similar devices. Otherwise reliable products may exhibit buggy behavior when saturated. For example, a cellular connection may be dropped if the user is making a call at the same time a text message arrives, an email message is downloaded, the GPS updates location information and an alert for an upcoming meeting is issued. The mantra for verification is that if something hasn’t been verified, it will have bugs. Unless this real-world scenario is verified before chip fabrication, there’s a good chance it will fail in the field.
Unfortunately, repairing buggy hardware in the field is not feasible. The SoCs and the rest of the electronics are embedded deeply within wearable items and there will be no notion of a manufacturer sending along a new electronics module to be installed by the user. Many bugs that make it into silicon may be fixed, or at least ameliorated, by patches to system or application software. However, software patches to work around hardware bugs are annoying to the user and usually compromise some aspect of system functionality or performance.
Many IoT devices, including wearable technology, will be able to update software via Internet connectivity. As any laptop, tablet or smartphone user knows, software updates interrupt critical functionality, always take longer than desired and frequently require a device reboot with loss of continuity of usage. No one wants an important phone call cut short because of a software update. This is hardly the sort of experience that users will want in technology specifically advertised as always on and always connected.
Finally, it seems likely that users will bond with wearable technology even more than they do with today’s smartphones and tablets. Users will regard this technology as an extension of their intrinsic capabilities and will get quite upset when any interruption or even slowdown occurs. People get frustrated when a physical limitation such as a short-term injury prevents them from going about their normal lives. They will surely be even more frustrated when the IoT-enabled extensions to their bodies prove unreliable.
The expectation of perfect, continuous, predictable connectivity and performance in wearable technology places extreme pressure on the verification team to model and check all the real-world scenarios that could arise in actual usage. Users will have no tolerance for wearable technology that freezes under conditions that were never verified. New verification technologies and methodologies will have to emerge in order to meet this challenge by producing SoC devices whose hardware and software work together seamlessly.
Traditional verification methods are not sufficiently powerful to deliver on this requirement. All major existing methodologies, including the Universal Verification Methodology (UVM) standard, view verification as a testbench problem. The desired behavior within the hardware is defined by coverage points that must be exercised in order for verification to be considered complete. A simulation testbench is developed to automatically apply legal stimulus to design inputs while checking the results at its outputs.
Constraints are placed on design inputs to ensure that the stimulus stays within legal boundaries. For example, if a two-bit input port should never have the “00” value, then a constraint will restrict possible stimulus values to “01” or “10” or “11.” Biases can be used to favor some values over others rather than equal random likelihood. If all (or nearly all) of the coverage points are exercised by the constrained-random stimulus in simulation tests that pass the checks, the design is considered verified.
This approach works well for small intellectual property (IP) blocks and even subsystems. However, it starts to break down with a large design where it’s hard to stimulate deep behavior and associated coverage points from the chip inputs. When the design being verified is an SoC with one or more embedded processors, the testbench approach is inadequate. Since the power of an SoC lies in its processors, trying to verify the design without leveraging the processors is a losing battle.
Virtually all SoC teams leverage the processors in late-stage verification when they run the production operating system and perhaps some applications on the design. This step is usually called hardware/software co-verification, an essential part of the SoC development process.
However, there are several reasons why this is not an ideal way to find remaining bugs in the hardware design. For a start, production software often is not ready until late in the project. It’s preferable to find bugs as early as possible.
Because production software is designed to perform user functions, not to verify the design, it is inefficient at finding bugs. Booting an operating system in simulation may take days or weeks, so hardware/software co-verification is usually performed on an in-circuit emulation (ICE) system or an FPGA-based prototype. Debugging and diagnosing bugs is much harder in a hardware platform than in simulation due to reduced visibility, and production software provides few features to help.
Recognizing the gap between the testbench and ICE, some SoC verification teams have recruited a few of the project’s embedded programmers to write C tests to run on the SoC’s processors in simulation. This better leverages the processors, but hand-written tests tend to be quite simple. Humans are not good at thinking in parallel, so the tests tend to do one or two things at a time. For example, a test might verify that the phone block can both send and receive phone calls.
A modern SoC may have a half-dozen or more processors, with the possibility of many things happening in parallel. Hand-written tests don’t stress cross-processor cache coherency or exercise concurrent operations. They are unlikely to trigger and verify the (call + text + email + GPS + alert) corner-case example described earlier. The upshot is that hand-written C tests running on the SoC’s embedded processors narrow but do not close the gap in full-chip verification.
Fortunately, a technology is available today to greatly improve the verification of complex SoCs, especially those that will be used for IoT and wearable connectivity. Software such as Breker’s TrekSoC family can automatically generate C test cases from graph-based scenario models that capture the design intent and the verification space. These test cases are multi-threaded and multi-processor, running realistic user scenarios based on the functionality of the SoC. They are designed to find all hardware design bugs before the chip is ever built.
The C code running in the embedded processors can communicate with a runtime simulation module via a memory-mapped mailbox. The runtime module handles “events,” such as supplying stimulus to an SoC input port or reading and checking results from an output port. Existing UVM verification components (UVCs) and certain other components in the UVM testbench, including coverage points, can be reused. A set of system services makes test case generation easier since the test cases run on “bare metal” without an operating system.
Scenario models developed for standalone IP block verification can be reused at the subsystem and SoC levels. The same SoC scenario model can be used to generate test cases for every verification platform, from high-level virtual prototypes through register transfer level (RTL) simulation, simulation acceleration, ICE and FPGA prototypes. The result is much more thorough pre-silicon verification. When the actual SoC arrives back from the foundry, the same test cases can be run to validate that the silicon is correct.
SoC verification is a challenge for today’s smartphones and other devices. The expectation of perfection will be even greater for wearable technology likely to be viewed as an extension of its human user. Testbenches and hardware/software co-verification alone are not enough to satisfy verification requirements for SoCs in wearable applications. Only the emerging SoC verification approach of automatically generated C test cases from scenario models can verify real-world conditions well enough to meet end-user expectations.