( ESNUG 484 Item 1 ) -------------------------------------------- [03/12/10]

Subject: A first look at stealth start-up NextOp's assertion synthesis

> NextOp (one of the many "stealth" operations) has some promising
> technology in the automatic assertion space, but needs some features
> and a UI to make the tool more interactive with the user.
>
>     - from http://www.deepchip.com/items/dac09-05.html


From: [ The Swedish Chef ]

Hi, John,

Please keep me and my company anonymous.

We've been using NextOp's BugScope assertion synthesis tool for two years
(since early 2008) and have used it in two production designs.  BugScope
takes in our RTL design and testbench as inputs and generates properties
(which we then classify as assertions or coverage points) that help identify
bugs and coverage holes during simulation.  In contrast, Mentor's 0-In
assertion synthesis does not use our testbench; also, 0-In only generates
assertions, not coverage properties.

We started doing assertion-based verification by hand roughly 4 years ago.
Our hand-written protocol checkers in our testbench only checked each
block's inputs and outputs.  Assertions check the functionality INSIDE each
block, so you can see whether the block is working properly as part of the
system.  That is why we switched to them two years later -- we wanted
'white-box' observability into each block instead of just watching the
paths between blocks with interface checkers.

From a design methodology point of view, assertions help to capture the
design intent (acting almost as "executable specifications") and to find
bugs in our design earlier.  Our ideal approach was to have our verification
team write the assertions for the block interfaces, and have our designers
write the assertions for each block inside the design.  If the designers
write assertions as they create their designs, the assertions are like a
living specification of the design.
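
For illustration, a minimal hand-written assertion of this "living
specification" kind might look like the SVA sketch below.  The module and
signal names are invented for this example -- they are not from our actual
design:

```systemverilog
// Hypothetical sketch of a designer-written 'living spec' assertion.
// The FIFO module and signal names are invented for illustration only.
module fifo_intent_checks (
  input logic clk, rst_n,
  input logic push, pop,
  input logic full, empty
);
  // Design intent: never push a full FIFO, never pop an empty one.
  a_no_push_full: assert property (@(posedge clk) disable iff (!rst_n)
    push |-> !full) else $error("push while full");

  a_no_pop_empty: assert property (@(posedge clk) disable iff (!rst_n)
    pop |-> !empty) else $error("pop while empty");
endmodule
```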

However, our deadlines never allowed our designers enough time to write
assertions, and they hesitated to write them because they would have needed
to learn SystemVerilog and PSL.  So the reality was that our designers
specified which features were "assertion worthy", and our verification
engineers developed all of our assertions.

Roughly 60-70% of our design code needed assertions, depending on the type
of design, e.g. controllers need more assertions than memories.  It took us
about 8 hours to write each assertion manually.  (Yes... that sounds like a
lot, more details later.)

When we began our BugScope eval, we only cared about the assertion properties
it generated -- we didn't initially see any value in BugScope's coverage
properties.  Our BugScope methodology:

  1. Get our design and testbench stable via regressions and random
     simulation in Cadence NC-Sim plus Verisity "e".

  2. Input our design RTL (in Verilog) and testbench (in Verilog and
     Specman) into NextOp.

  3. BugScope generates a properties report with both assertions and
     coverage points.  Basically, if a property is not an assertion,
     it points to a coverage hole.  For example, if your original
     testbench never writes zero to register A, NextOp will generate
     a property along the lines of: 'never write zero to Register A'.
     If you then review the spec and see that writing zero is actually
     allowed, you know that, from a coverage point of view, something
     is missing from your regression, i.e. the property is a coverage
     point, not an assertion.  And if a NextOp property doesn't look
     correct, there's an issue in your spec.

  4. Our design and verification engineers review the report, which is
     in 'NextOp English' so engineers unfamiliar with SystemVerilog or
     PSL can understand it.  The readability of NextOp's reports makes
     it easier to get the designers to participate.

     BugScope style code (in "NextOp English"):

                 @a -- value of a at next clock;
            a |-> b -- true if a implies b;
         onehot0(a) -- true if a is one-hot or zero;
        onecold0(a) -- true if a is one-cold or all 1s;
        overflow(a) -- true if a overflows at next cycle,
                       for example, a == 8'hff && @a == 8'h0;
       underflow(a) -- true if a underflows at next cycle,
                       for example, a == 0 && @a == 8'hff;
          stable(a) -- true if a does not change at next cycle,
                       i.e., @a == a;

     Our team classifies which properties are assertions and which are
     coverage points, and also decides whether to keep or ignore each
     property ('don't care').  Once a property is classified as an
     assertion, NextOp records it in a filter file for future runs, so
     you know you have already reviewed and classified it.

  5. Our engineers then patch our testbench to fill the coverage holes
     that BugScope identified.

  6. We use BugScope to output the assertions in standard SystemVerilog
     format for use in our NC-Sim simulation environment (BugScope can
     generate SVA, PSL, Verilog and synthesizable Verilog).  Our engineers
     only need to know enough SystemVerilog to bind the assertions to
     their RTL and then debug them when they fire; it's much less of
     a hurdle to run assertions than to write them.

  7. We iterate between random NC-Sim simulation and NextOp runs.  As we
     make progress patching our coverage holes, each NextOp run outputs
     fewer coverage properties.  When we get to the point
     where virtually all of the properties NextOp generates are assertions,
     it is a strong indication that our coverage holes have been filled.
     The theoretical goal is to have 0 coverage properties.

     This aspect is very useful as it allows us to get better coverage with
     our NC-Sim random simulation, and helps us measure our progress.


Our original hand-written methodology:

Developing assertions manually was a slow process.  It took us roughly
8 man-hours to write and debug each assertion.

  1. The designer reviews the design, and mentally figures out what
     assertions he would like.

  2. The designer writes the assertions (in English) and gives them
     to the verification engineer.

  3. The verification engineer specifies the assertions in System Verilog
     or PSL.

  4. The verification engineer debugs the assertion, and generates error
     cases to see if it is firing and actually doing what it should do.

In contrast, BugScope assertion synthesis does all this automatically, in
about 6 minutes, which is a time savings of about 80x.

Part of the time savings comes from no longer depending on our designer or
our verification engineer to understand the design specification in order
to create each assertion -- BugScope does it.  Additionally, after our
extensive eval, we became comfortable that NextOp's assertions were
'correct by construction', so we no longer need to verify NextOp's
assertions the way we must with our manual ones.

Here is a sample outcome from a BugScope run, though the 'minutes per
assertion' result is higher than our estimated average of 6 minutes:

    DMA module

           Lines of RTL source code:  3,200 lines
    Properties gen'd on Initial Run:  40
               Number of Assertions:  25
           Number of Coverage Holes:  3
                       "Don't care":  12
                  BugScope run time:  8 hrs (using 2 Specman licenses)

These 25 assertions were selected from the 40 generated properties.  We did
not keep the remaining 12 properties because they were deemed not useful.
An example would be assertions that point out a clock should never be "X"
(vs. a "0" or "1").  We concentrated on assertions that we considered to
add value, since we didn't want to overload our simulation runtime with
unnecessary CPU cycles.


Other feedback on BugScope:

   - Tool set up.  Very quick -- we can move it to a different location
     at a different site in about 3 hours.

   - Quality of individual assertions.  NextOp does well here.  What we
     wrote in 5 lines for a particular design feature, NextOp wrote in
     1 line.  NextOp has a comprehensive way of looking at all signals
     affecting an assertion, so it can find the root cause and produce
     more efficient code.  Because NextOp's assertions have fewer lines,
     we don't need to spend 15 to 20 minutes understanding an assertion
     with more complicated code.  We also used NextOp to check modules
     that were already well verified with a comprehensive Specman "e"
     testbench, and it still found RTL issues through coverage holes we
     were missing.  Additionally, because NextOp generates efficient
     assertions, they are more reusable; our more complicated
     hand-written assertions are more likely to need modification to
     adapt to future design changes.

   - Signal-to-noise ratio.  Our ratio of useful assertions to
     uninteresting assertions was about 8:1.  For a sample block during
     our eval, NextOp produced 197 useful assertions and coverage points,
     and 23 correct but uninteresting assertions.  This was a good ratio.
     It's worth noting that NextOp's signal-to-noise ratio depends more
     on the quality of your testbench when you run the tool than on the
     BugScope tool itself.  I recommend waiting to run NextOp until you
     have a stable regression, or you will get a lot of noise.

   - Accuracy of the assertions.  This is something you need to look into
     yourself before you trust the tool.  After using BugScope many times
     over 2 years, we have high confidence in the assertion accuracy;
     but this is something you have to experience first hand to believe.
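
As a rough illustration of the "5 lines vs. 1 line" point above (the
arbiter module and signal names are invented for this sketch): a
hand-coded mutual-exclusion check spelled out case by case, versus the
compact form a tool can emit with an SVA built-in:

```systemverilog
module arb_checks (input logic clk, input logic [2:0] grant);
  // Hand-written style: enumerate every legal grant encoding.
  a_grant_manual: assert property (@(posedge clk)
    grant == 3'b000 || grant == 3'b001 ||
    grant == 3'b010 || grant == 3'b100);

  // Compact equivalent: $onehot0 is true when at most one bit is set.
  a_grant_onehot0: assert property (@(posedge clk) $onehot0(grant));
endmodule
```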


BugScope gotchas:

   - Don't be fooled into thinking Assertion Synthesis will generate 100%
     of your assertions.  You will always need to write some assertions
     by hand.  But with a good methodology and code-writing best
     practices, you can get 70-80% of your useful assertions from the
     tool.  We find that reviewing NextOp's assertions often triggers our
     designers to write related assertions.

   - To improve its signal-to-noise ratio, it would be better if BugScope's
     reports tiered the assertions, with a mode that hides the simplistic
     ones.

   - It would be nice to be able to direct BugScope toward specific
     assertions, e.g. to generate assertions only for a specified set of
     control signals.

NextOp's BugScope assertion synthesis has become part of our verification
best practices.  Although it takes 3 to 4 years in our company for an EDA
tool to be officially included as part of signoff, 'unofficially' BugScope
is already in our signoff.

    - [ The Swedish Chef ]


 Editor's Note: I love getting unofficial hands-on user evals of new EDA
 tools like this!  Hell, NextOpSoftware.com doesn't even have minimal
 info about what their company does on it!  Scoop! Scoop! Scoop!  - John