( ESNUG 487 Item 6 ) -------------------------------------------- [03/01/11]

Subject: (DAC 10 #1)  User benchmark on NextOp BugScope verification tool

> NEXTOP KICKED ASS: In the eyes of the users, the one EDA tool that "won"
> this year's DAC was NextOp's BugScope.  Why?  The majority (58%) of the
> users who saw it at DAC also said they wanted to evaluate it.  Judging
> from the user comments, their next nearest rivals, Zocalo and Breker, so
> far don't seem to have the same initial traction that NextOp has.
>
>     - from http://www.deepchip.com/items/dac10-01.html


From: Vinson Chan <vchan=user domain=altera not mom>

Hi, John,

Altera first purchased NextOp BugScope Assertion Synthesis in late 2009.
BugScope automatically generates functional coverage and assertion 
properties, using information from simulation runs to infer properties 
about our target design's functionality.

Our goals for assertion synthesis were:

   1. Goal: Find and debug issues faster and earlier in our design/
      verification cycle.

      NextOp Result: We have been able to start verification earlier 
      because BugScope can generate assertions and coverage properties to 
      help us find bugs in the design even before our end-to-end checkers 
      and coverage monitors have been completed.

   2. Goal: Highlight corner cases which can be difficult to conceive 
      while developing the functional coverage plan.

      NextOp Result: BugScope needs no user guidance for signals or signal
      types.  Instead, NextOp automatically identifies all relevant
      problem signals.  This helps designers uncover scenarios they did
      not originally conceive.  For example, a property might relate two
      signals that the designer might not have considered together.

   3. Goal: Allow our design and verification engineers to quickly come 
      up to speed on assertions and assertion-based verification. 

      NextOp Result: BugScope's generated properties use primarily 
      Verilog syntax, so our designers could understand them without 
      much prior exposure to assertions (a hand-written sketch of such a 
      property follows this list).  Its automation also reduced their 
      learning curve.

   4. Goal: Provide additional insight and assurance toward achieving 
      our functional coverage goals.

      NextOp Result: As we iterate between simulation and BugScope's
      property generation, the number of coverage properties goes down
      and the properties that BugScope generates become primarily
      assertions.  We see this as a useful part of the process 
      to determine when our verification is complete.
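
As an illustration of what these generated properties look like, here is
a hand-written sketch with invented module and signal names -- not actual
BugScope output.  The property body is an ordinary Verilog condition;
only the thin assert/property wrapper is SVA:

     module pipe_props (
        input logic clk,
        input logic flush,    // hypothetical pipeline-flush request
        input logic commit    // hypothetical instruction-commit strobe
     );
        // A plain Verilog condition relating two signals: commit must
        // never be asserted while a flush is in progress.
        assert property (@(posedge clk) flush |-> !commit)
           else $error("commit asserted during a flush");
     endmodule

Whether a property like this is kept as an assertion or classified as a
cover property is the review step described in the methodology section
below.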

BUGSCOPE BENCHMARK DATA

The benchmark data below is based on a subset of seeds from our
constrained-random simulations.

   1. Set up: 1/2 day training and support.  1 day to set up and run an 
      initial simulation, and generate properties.  Future runs for the 
      same design took 1-2 hours to set up, and we expect the same for 
      new designs.

   2. Inputs: 200 constrained random tests

   3. Lines of RTL Code: 5000

   4. Properties generated: 250, with roughly a 50/50 split of assertions
      and coverage.  This is ~1 assertion for every 40 lines of RTL code.

   5. BugScope PLI overhead: ~15% overhead on simulation to generate 
      NextOp database.

   6. Assertion Quality: We found the quality of BugScope's properties 
      was especially good, meaning they were usable and non-trivial.  We
      added approximately 95% of the 250 generated properties to our 
      simulation.

ASSERTION SYNTHESIS METHODOLOGY

Prior to using NextOp BugScope, we mostly relied on a mixture of directed 
and constrained-random verification with VCS, along with test-plan and 
code/verification reviews, to reach our functional coverage goals.  For 
structural coverage, we used VCS code coverage reports.

Below I have outlined our current methodology with NextOp.

  1. Our verification team links BugScope to our regression database via 
     PLI.  We run BugScope 3-4 times throughout the course of our 
     project, but it is not part of our weekly regressions.

  2. The designers review the properties that BugScope generates and 
     classify them as either assertions or cover properties.  It was 
     easy for our designers to understand the properties; however, it 
     certainly required some deep thought to classify them properly.  In 
     fact, my advice to others using BugScope is to take the time to 
     classify the properties carefully, because it will help you find 
     bugs in your design directly, without re-running the simulation.  
     I have more details on this in the next section.

  3. Our designers give the classified properties back to our verification
     engineers to include in their regression runs.  The verification
     engineers use BugScope to generate assertions and cover properties in
     these industry formats (a hand-written sketch comparing two of them
     appears after this methodology list):

          - Verilog, Synthesizable Verilog
          - SystemVerilog Assertions (SVA), Synthesizable SVA,
            VMM-compliant SVA
          - Property Specification Language (PSL)

  4. Functional coverage is a main factor in our verification signoff, 
     so we use BugScope's coverage properties as a focal point for our 
     design and verification teams to help to identify and close our 
     coverage holes.
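
To make the formats in item 3 concrete, below is a hand-written sketch
(invented signal names, not actual BugScope output) of one simple
property carried in two of those forms: an SVA concurrent assertion, and
a plain Verilog monitor for flows without SVA support.

     module fifo_write_check (
        input logic clk,
        input logic fifo_full,   // hypothetical FIFO status flag
        input logic wr_en        // hypothetical write enable
     );
        // SystemVerilog Assertions (SVA) form of the property:
        assert property (@(posedge clk) fifo_full |-> !wr_en)
           else $error("property violated: write while fifo_full");

        // Plain-Verilog form of the same check, for tools or flows
        // that do not support SVA:
        always @(posedge clk)
           if (fifo_full && wr_en)
              $display("%0t: write while fifo_full", $time);
     endmodule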

DESIGN AND VERIFICATION TEAMS - FINDING BUGS WITH BUGSCOPE
  
The BugScope properties create additional dialogue between our design and 
verification teams.  Beyond the cover-property identification mentioned 
above, which leads to reviews between the design and verification teams, 
we used BugScope to help us find bugs in two primary ways:

  1. Assertions firing during simulation.  This helps the verification 
     team narrow down the source of the bugs and discuss with the design 
     team the circumstances under which the issues occur.

           Example - BugScope property:

                        counter_a >= counter_b

      The BugScope property "counter_a >= counter_b" is based on an RTL 
      assignment involving the expression "(counter_b - counter_a)".  
      After exercising more constrained random tests, the assertion 
      fired, indicating that an unexpected underflow had occurred and 
      exposing a bug.  (A hand-written SVA version of this property and 
      of the one_hot0 property below is sketched after these examples.)

  2. The classification process provided a form of code/design review 
     which exposed bugs directly.  Below are real examples of bugs we 
     uncovered by:

     a. Comparing the BugScope properties to the RTL during classification,
        and determining that they did not match design intent.

           Example - two BugScope properties:

                     signal_a |-> signal_b,
                     next_state != IDLE |-> signal_a

        Module A had the BugScope property "signal_a |-> signal_b", which 
        was classified as a cover property, since signal_a could also be 
        high when signal_b was low.

        Module B had the BugScope property "next_state != IDLE |-> signal_a",
        which was also classified as a cover property; we expected that
        next_state should not move to IDLE until after signal_b is high
        as well.

        With NextOp highlighting the dependency between signal_a and 
        signal_b across Module A and Module B, the designer was prompted 
        to do a code review.  He found that the logic in Module B had 
        only considered the case where signal_a is high together with 
        signal_b, which did not match his design intent that the other 
        combinations should also be handled.

     b. Comparing the BugScope properties to each other during 
        classification.

           Example - two BugScope properties:

                     signal_a == 0
                     one_hot0(signal_x, signal_y, signal_z)

        The property "signal_a == 0" was classified as a cover property 
        as "signal_a" could also be "1".

        The property one_hot0(signal_x, signal_y, signal_z) indicated 
        that these three signals were mutually exclusive to each other 
        when asserted.  However, if signal_a == 0, then signal_x should 
        not be asserted, so this property should not have been detected 
        by BugScope.  This pointed to a bug in our design.
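
For reference, the example properties above could be written by hand as 
SVA in a standalone checker module, roughly as sketched below.  The 
clock, signal widths, and module name are assumptions, and BugScope's 
actual output may differ.

     module example_props (
        input logic       clk,
        input logic [7:0] counter_a, counter_b,    // widths assumed
        input logic       signal_x, signal_y, signal_z
     );
        // Example 1: BugScope inferred that counter_a stays at or above
        // counter_b; this assertion firing under additional constrained
        // random tests exposed the unexpected underflow.
        assert property (@(posedge clk) counter_a >= counter_b)
           else $error("counter_a dropped below counter_b");

        // Example 2b: at most one of the three signals is asserted at a
        // time ($onehot0 also allows the all-zero case), matching the
        // one_hot0() property above.
        assert property (@(posedge clk)
                         $onehot0({signal_x, signal_y, signal_z}))
           else $error("signal_x/y/z mutual exclusion violated");
     endmodule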

White-box assertions vs. Black-box checkers:

Unlike black-box checkers, NextOp's white-box assertions use knowledge of 
the internal design structure and can observe the internal workings of 
the code, like looking through a glass box.  As a result, BugScope's 
assertions give us greater observability into the quality of our stimulus 
and coverage, and can pinpoint bugs in the RTL implementation.  In 
comparison, black-box checkers are concerned only with end-to-end 
functional behavior.
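
A hedged sketch of the difference is below; the names are invented, and
the black-box check is a simplified stand-in for a full end-to-end
checker.

     module observability_sketch (
        input logic       clk,
        input logic       req_in,     // hypothetical interface request
        input logic       grant_out,  // hypothetical interface grant
        input logic [3:0] arb_state   // hypothetical internal state
     );
        // White-box assertion: references internal state directly, so a
        // failure points at the implementation logic that broke.
        assert property (@(posedge clk) grant_out |-> arb_state != 4'd0)
           else $error("grant issued while arbiter state is idle");

        // Black-box style check: uses only interface signals, so it can
        // say that end-to-end behavior is wrong, but not where.
        assert property (@(posedge clk) grant_out |-> $past(req_in))
           else $error("grant without a prior request");
     endmodule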

What BugScope does best:

BugScope is easy to set up, its properties are generated automatically, 
and their quality was unique.  We did find bugs and coverage holes that 
we were unaware of.  Bugs were found during the classification process as 
well as through traditional assertion firing.

Where BugScope needs work:

The properties are generated in a post-processing step, block by block.  
It would be nice to have the capability to generate them on a broader 
scale, such as multiple blocks at a time or even at the top level.  This 
could be handled through improved scripting for batch mode, or with GUI 
capabilities.

TIME SAVINGS

Detecting and pinpointing bugs earlier in the design process, rather than 
waiting until we completed the end-to-end checkers and coverage monitors, 
both shortened our verification process and saved us engineering time on 
debug effort.  The debug time savings are hard to quantify because they 
depend on the number of bugs found over the course of design and 
verification.  The three examples described above probably saved us an 
estimated ~4-6 work-days of debug time compared to not having properties.

Our team was relatively new to assertions, so using BugScope also saved 
us the learning curve it takes to write good-quality assertions.  Our 
engineers were productive with assertion classification within a day.  We 
probably saved ~3-4 work-weeks of training and of writing and reviewing 
quality assertions -- although I suspect we would not have written as 
many assertions as BugScope generated, and would have depended more 
heavily on the end-to-end checkers and coverage monitors.

Altera's IP Protocols team was the first to adopt BugScope, near the end of 
one project that we taped out.  Given our positive experience, we plan to 
expand BugScope usage across teams and projects within Altera.  We expect
BugScope to become a planned part of our verification signoff.

    - Vinson Chan
      Altera Corp.                               San Jose, CA