
Why Care About IP Quality?

Piyush Sancheti, Vice President, Product Marketing, Atrenta Inc.

As an SoC designer, you’re probably chagrined at how IP (third-party and even proprietary) oftentimes impedes – rather than speeds – your design getting to tapeout. Wait! Wasn’t IP supposed to be the panacea for increasingly complex SoC design? Why has it instead turned into a sometimes-nightmarish, arduous series of IP fixes? Why is IP reuse costing design projects more time, more effort and, in the end, more money?

It’s because IP quality varies wildly. We might be able to reuse an IP block across multiple designs…or we might struggle to integrate it into even one. We just never know. We need better IP quality. We need a system to inspect IP blocks against the quality metrics we set before we use them. And we need metrics that let us enforce our quality standards, so we can be confident that an IP block won’t require arduous, time-consuming tweaks and fixes to work in the target design.

To ensure IP quality, our design projects (and hopefully our entire organizations) need to create an IP quality methodology. I have some suggestions on how to get to such a methodology.

First step: decide what the IP is supposed to do. Basically, an IP block has to: 1) meet its functional requirements and adhere to the relevant protocol or spec, and 2) integrate successfully into the target chip or subsystem. High-quality IP will get you to tapeout a lot faster. Lower-quality IP will force you to go in and fix the IP to work in your particular design.

You’ll need a few releases to stabilize support for a spec change in your IP. Multiply this by all the IPs in your design, recognizing that each IP can have more than one problem at a time, and you can see how the situation gets out of control pretty quickly. You can’t re-run a full evaluation on every change, but you also can’t afford to expose your design to incoming changes of unknown quality. So what you need is a scripted system you can run across the entire IP library, detecting what changed and flagging only the changes that look suspicious.
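As a sketch of what such a release-screening script might look like, consider hashing every file in an incoming IP release and diffing against the previous release’s manifest, with extra scrutiny for constraint files. The directory layout, file suffixes and severity labels here are assumptions for illustration, not a standard:

```python
# Minimal sketch of an IP release screener (file suffixes and severity
# labels are illustrative; tune them to your own library conventions).
import hashlib
from pathlib import Path

# Changes to constraint files often ripple into timing/power closure,
# so they warrant a closer look than ordinary RTL edits.
SUSPICIOUS_SUFFIXES = {".sdc", ".upf"}

def manifest(ip_dir: Path) -> dict:
    """Map each file's relative path to a content hash for this release."""
    return {
        str(p.relative_to(ip_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(ip_dir.rglob("*")) if p.is_file()
    }

def flag_changes(old: dict, new: dict) -> list:
    """Return (severity, path) for files changed, added or removed
    between two release manifests."""
    flags = []
    for path in sorted(set(old) | set(new)):
        if old.get(path) != new.get(path):
            severity = "REVIEW" if Path(path).suffix in SUSPICIOUS_SUFFIXES else "INFO"
            flags.append((severity, path))
    return flags
```

Run across the whole library, this gives you a cheap first filter: most releases produce only INFO-level noise, and human attention goes to the REVIEW flags.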

In a perfect world, you would run each incoming release through the full suite of tools used in the design assembly, verification and implementation flow: static quality checks, simulation, synthesis, timing analysis and ATPG. IP development organizations do this, or should.

The challenge for an IP user is that setting up similar checks is really difficult. A full pass could take up to a week to run and is probably redundant with the supplier’s own qualification. Here is where you need to make your tradeoff decisions:

  • You have to do some level of checking; otherwise you are completely exposed.
  • However, no matter how hard you try, some problems will only be detected during design assembly, verification or implementation.
  • So you need to balance how hard you try against the delay in adopting a new release.
    • The tradeoff here is how much delay you’re prepared to tolerate versus the potential cost of a problem you could have caught on inspection.
  • You will probably opt for the best quality assessment you can get quickly, especially if the method can be tuned (with experience) to minimize significant escapes.

So you’ll have to figure out the best IP quality assessment you can get quickly. One option is running production tools to check for quality. Another is running static quality assessments – lint, domain crossing analysis, power constraint validation, testability metrics and SDC quality checks. All of these can be run with minimal setup and short runtime per IP block.
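One way to turn such checks into enforceable, objective criteria is a simple scorecard that compares each check’s violation count against a threshold your project sets. A minimal sketch, where the check names and thresholds are illustrative and the counts would in practice be parsed from lint/CDC/SDC tool reports:

```python
# Toy scorecard for aggregating static IP quality checks.
# Check names and thresholds are invented for illustration; a real flow
# would pull violation counts from its lint/CDC/SDC tool reports.
def score_ip(results: dict, thresholds: dict) -> tuple:
    """Compare violation counts against the allowed maximum per check.

    results:    e.g. {"lint": 3, "cdc": 0, "sdc": 1}
    thresholds: e.g. {"lint": 5, "cdc": 0, "sdc": 0}
    Returns (passed, failing_checks). Checks with no explicit
    threshold default to zero tolerance.
    """
    failing = sorted(c for c, count in results.items()
                     if count > thresholds.get(c, 0))
    return (not failing, failing)
```

A block is accepted only when every check is within budget, which makes the acceptance gate a measurable policy rather than a per-release judgment call.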

What about functionality? Can you really check quickly that the function of an IP wasn’t disrupted in some subtle way? Formal verification may help on small blocks or specific checks, but writing assertions requires significant human effort and expertise. Running the supplier’s testbench seems redundant, and constrained-random verification only goes so far. What you need are more automatic ways to capture the verification intent of the IP, which can be achieved with innovative techniques like assertion synthesis. Fine-grained, high-quality assertions can capture the implicit functional specification of the IP and make SoC-level verification much more predictable.
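To make the idea concrete, here is a toy sketch of the mining step behind assertion synthesis: propose implication-style invariants over pairs of signals, then keep only those that hold on every cycle of every simulation trace. Production tools work on RTL and real waveforms; the signal names and trace format below are invented for illustration:

```python
# Toy assertion mining: infer candidate invariants of the form
# "a == 1 implies b == 1" that survive all observed simulation cycles.
# Trace format (invented): a list of traces, each a list of per-cycle
# dicts mapping signal name to value.
def mine_implications(traces: list) -> set:
    """Return the (a, b) pairs such that whenever signal a is 1,
    signal b is also 1, on every cycle of every trace."""
    signals = traces[0][0].keys()
    # Start from all ordered pairs and prune on each counterexample cycle.
    candidates = {(a, b) for a in signals for b in signals if a != b}
    for trace in traces:
        for cycle in trace:
            candidates = {(a, b) for (a, b) in candidates
                          if not (cycle[a] == 1 and cycle[b] != 1)}
    return candidates
```

Surviving candidates become draft assertions for review; the same prune-on-counterexample idea scales to richer templates (temporal offsets, multi-signal antecedents) in real assertion-synthesis tools.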

Bottom line – you are a design project manager and you need predictability of results so that you can hit your tapeout milestones. Putting in place a methodology based on objective quality criteria that can be measured and enforced will give you better control over the design project schedule because you’ll get fewer surprises from your IP.

IP quality is not something design managers and designers can retrofit into the design flow. What’s needed is a methodology to manage design quality at each stage of the design, from spec to architecture to RTL and all the way down to silicon. It must start from the beginning of a project, when the design methodology is being defined. Otherwise, the designs we see coming just beyond the horizon will invariably fail.

Editorial contact: Liz Massingill, 831-345-4702

