( ESNUG 558 Item 3 ) -------------------------------------------- [02/19/16]

From: [ Jim Hogan of Vista Ventures LLC ]
Subject: 9 major and 23 minor Formal ABV tool metrics - plus their gotchas

Hi, John,

Here are the 9 major metrics that I feel an engineering team must consider
when choosing what specific Formal ABV tool to use on their project (if any):

     1. Performance/Capacity/Convergence
     2. Debug
     3. Set-Up and Control
     4. Property Types
     5. Standard Language Support
     6. Coverage Mechanisms
     7. Formal/Simulator Interoperability
     8. Formal Engine Access
     9. Customer Tech Support

Here's a generalized F-ABV flow.  Start at "design code" for input.
Notice that "debug" and "coverage" are the final outputs, along with info
on whether the assertions have passed.

Using this framework, below I explain in detail the impact and pitfalls
the 9 major and 23 minor metrics will have on your F-ABV tool choice.

    - Jim Hogan
      Vista Ventures LLC, OneSpin BoD            Los Gatos, CA

         ----    ----    ----    ----    ----    ----    ----

1. PERFORMANCE/CAPACITY/CONVERGENCE

Measuring a formal tool's capacity and performance is tricky, as the tools
are highly dependent on your individual input design code and assertions.

The rate of convergence (the percentage of assertions that actually
complete) is also a factor, because some tools never converge for some
checks.  The convergence rate is based on the effectiveness of the 15 to 20
possible formal engines used in your formal tool -- as well as on your
specific design and the quality of the assertions themselves.  Convergence
can range from 70% right up to 98% (i.e. only 1 in 50 assertions not
completing).

Companies often do a lot of benchmarking to get a good handle on these three
factors.  However, formal is very sensitive to benchmark bias as assertions 
are often written with a specific formal tool in mind.  For example, what 
works in Questa may not work in JasperGold.  Using those same assertions on
a different tool can lead to poor results if the characteristics of the new 
tool are not considered.  

Characteristics which impact capacity, performance, and convergence:

  - Underlying Formal Proof Engines

    Unlike Verilog/VHDL simulation, where you have one core engine for each
    simulator, with formal tools you have a collection of proof engines,
    each leveraging different algorithms.  These engines are applied based
    on the specific design and assertion characteristics for best results.
    
    Some common public algorithms inside formal engines are:

          - Binary Decision Diagrams or BDD
          - "SATisfiability" Proofs or SAT
          - Bounded Model Checking or BMC
          - Craig Interpolation, usually shortened to "Craig"
          - Property Directed Reachability or PDR

    Also, proprietary algorithms for special situations are common.  Often
    the more mature the tool, the more engines with different algorithms it
    will contain.  For example, Cadence JasperGold and OneSpin Verify both
    have approximately 15-20 formal engines/algorithms.

  - Formal Engine Selection

    How a specific formal engine is selected is a key factor.  It used to
    be that engineers did this by hand, choosing from 4 to 8 different
    algorithms with limited information about the best one to use.  Today, some formal 
    tools have an automated engine selector which logically picks from
    ~20 choices -- and some vendors are working on adding heuristic or
    experience-based mechanisms which use both your Verilog/VHDL source code
    and assertions as their basis of formal engine selection.

  - Formal Model, or Data Structure

    The formal model is the proprietary data structure used to store your
    chip design in memory.  It contains your design's state and transition
    information.  These models (or data structures) are often optimized
    to minimize storage requirements -- while still allowing rapid
    access to specific info.

It can easily take 100's of engineering R&D man-years to build an optimal
combination... with the right combination being the "secret sauce" for
consistent performance, capacity, and convergence.


2. DEBUG

Debug was historically a major limitation of formal tools.  For a specific
assertion, the formal tool would cryptically say "pass/fail/issue" with
limited (or no) info on exactly why the design failed.

Today formal vendors are making strides to have their formal tool debug more
closely mirror the debug-friendly Verilog/VHDL simulation environments.

What is important to look for?

  - Graphical Debugger 

    Many Formal ABV tools now offer some sort of GUI with features
    such as: waveform display, source code display with annotation, driver
    analysis, and code navigation.  Examples are the Mentor Questa
    debugger, the OneSpin Debugger, and the Synopsys Verdi environment.

  - Debug Trace Generation 

    A trace is a sequence of events that make up a particular source code
    operation leading to an assertion failure.  Most formal tools can
    generate a trace, or a counter-example, to demonstrate why an assertion
    fails.  

    The engineer can then use a simulation-style debugger to show the
    signals that relate to the failed assertion.  The debugger can be
    proprietary to the formal tool provider or a third party solution
    like Synopsys Verdi.
    The debugger may also include some form of root cause analysis to
    allow the original source of the issue to be discovered quickly.

    Some formal tools can also auto-generate simulation stimulus from the
    trace, so that the engineer can further debug the problem in simulation.

  - HDL Testbench Generation For RTL Simulation

    Some F-ABV tools can generate another type of trace, called a
    "witness", of a specific code operation scenario of interest -- and
    then create a standalone Verilog simulation testbench component
    from it.

    For example, OneSpin Verify can create a witness that may be fed
    directly into a testbench for Verilog simulators like Synopsys VCS,
    Mentor Questa, Cadence Incisive and Aldec Rivera.  This is a fast way
    of identifying important operational details and automatically
    producing a simulation directed test for that detail.


3. SETUP AND CONTROL

How do you measure how long it takes to get a Formal ABV tool up and running
on a chip design?  Some automated parts of tool setup are:

  - Assertion Creation Templates

    Templates that make it easier to write specific assertion types, such
    as FIFO checks, FSM checks, etc.  This can eliminate the 1-2 days it
    can otherwise take to write 10 good assertions for a FIFO test.  (See
    the SVA sketch after this list.)

  - Automatic Clock/Reset Recognition

    Some F-ABV tools will recognize "reset" and clock signals, and make
    them easily controllable.  Although this often takes only 1-2 hours,
    it may have to be redone every time a significant block is added
    to the design.

  - TCL

    All tools today use TCL for scripting; however, due to proprietary tool
    commands, TCL scripts are not interchangeable between F-ABV tools.
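
Here's the SVA sketch promised above -- the kind of FIFO checks an
assertion template generates for you.  The module and signal names
(fifo_checks, wr_en, rd_en, fifo_full, fifo_empty) are hypothetical;
a real template would emit a dozen or more such properties from a few
parameters.

      // Hypothetical FIFO checker module; typically bound to the
      // design under test.  Signal names are illustrative only.
      module fifo_checks (
        input logic clk, rst_n,
        input logic wr_en, rd_en,
        input logic fifo_full, fifo_empty);

        // Safety: never full and empty at the same time.
        a_not_full_and_empty: assert property
          (@(posedge clk) disable iff (!rst_n)
           !(fifo_full && fifo_empty));

        // Safety: no write accepted when full, no read when empty.
        a_no_overflow:  assert property
          (@(posedge clk) disable iff (!rst_n) fifo_full  |-> !wr_en);
        a_no_underflow: assert property
          (@(posedge clk) disable iff (!rst_n) fifo_empty |-> !rd_en);
      endmodule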


4. PROPERTY TYPES

The design intent of a digital chip may be thought of as design properties.
These are the fundamental characteristics of the design that the formal
tool will verify.  For formal verification, these are written as assertions.

Some property types are listed below (a minimal SVA sketch of several of
them follows the list):

  - Safety Checks

    Ensures your design behaves correctly and there are no circumstances
    where an error condition might occur.  Example: Check FIFO control
    logic to make sure there can never be both a "full" and "empty"
    indication at the same time.

  - Activation Checks

    Ensure that specific code sections can be accessed and the verification
    problem is not over-constrained (related to the functional coverage
    below).  Example: For a 30 state FSM, check that all 30 states may be
    reached as expected -- i.e. no mistakes have been made in coding the
    arcs between states.

  - Functional Cover Checks

    Makes sure that a specific operational scenario is actually tested
    during verification.  Example: In a block-to-block handshake, make
    sure that an ACK signal is observed to happen during verification
    every time a REQ is generated and a legal data packet is transmitted.

  - Liveness Checks

    Checks whether some event will eventually happen.  Example: In an
    interrupt controller, make sure that every interrupt signal will be
    processed eventually -- even if it happens after many cycles.

  - Structural Checks

    Makes sure the structure of your design post-synthesis is correct
    (e.g. ensuring your "case" statements in your Verilog source code
    match your design intent), and that there are no structural errors
    due to incorrect RTL coding.

    This is not just wire connectivity checking.  It's also functional
    checking.  Example: You have a 16-entry register array that can only
    legally be addressed by 0 to 15.  Formal structural checks make sure
    that it's never indexed by 16, 17, 18... or any other invalid address.
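
Here's the minimal SVA sketch promised above, covering several of these
property types.  The signals (req, ack, irq, irq_served, state) and the
state encoding S17 are hypothetical stand-ins for the examples in the
list:

      // Safety check: ack is never asserted without a req the cycle
      // before.
      a_safety: assert property
        (@(posedge clk) disable iff (!rst_n) ack |-> $past(req));

      // Activation check: FSM state S17 is reachable at all.
      c_reach_s17: cover property (@(posedge clk) state == S17);

      // Functional cover check: a full req-to-ack handshake occurs.
      c_handshake: cover property (@(posedge clk) req ##[1:$] ack);

      // Liveness check: every interrupt is eventually serviced.
      a_liveness: assert property
        (@(posedge clk) disable iff (!rst_n)
         irq |-> s_eventually irq_served);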

If left alone, an F-ABV tool will verify every possible state of your
design with every possible input.  However, there are some input
possibilities that the engineer knows can never occur in real life -- for
example, a "reset" signal at the same time as an active input.  Your F-ABV
tool run will be a lot shorter if you use constraints to remove these
impossible inputs.

Constraints might include a specific reset sequence -- or even an area of
design operation that you know does not require testing.
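
In SVA, such constraints are written as assumptions.  A minimal sketch,
with hypothetical signal names and opcode encodings:

      // Rule out an impossible input: "reset" never coincides with an
      // active input.
      m_no_input_in_reset: assume property
        (@(posedge clk) rst |-> !data_valid);

      // Exclude an area of operation that doesn't require testing:
      // only legal opcodes are ever driven.
      m_legal_opcode: assume property
        (@(posedge clk) opcode inside {OP_READ, OP_WRITE, OP_NOP});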

The more clock cycles a proof must be tested across, the longer the formal
run will take.  Sometimes it may be too compute intensive to complete a
particular proof.  For example, a formal tool might report that "within 100
clock cycles, a specific signal combination will never occur".  These are
known as bounded proofs.  It may be possible to prove a specific check
within, say, 10 clock cycles in 2 hours of tool execution; but to prove it
over an unbounded time range might take weeks of tool execution.

For many projects a bounded proof might do the job; and it can
significantly cut down your overall verification time.
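
One way to see the bounded-vs-unbounded difference in SVA terms -- a
sketch with hypothetical signals.  (Tools can also bound the proof's
exploration depth directly, which is a separate mechanism.)

      // Bounded: ack must follow req within 10 cycles.  Typically
      // provable in a short formal run.
      a_ack_bounded: assert property
        (@(posedge clk) disable iff (!rst_n) req |-> ##[1:10] ack);

      // Unbounded: ack must eventually follow req, with no cycle
      // limit.  This liveness proof may take far longer -- or never
      // converge.
      a_ack_unbounded: assert property
        (@(posedge clk) disable iff (!rst_n) req |-> s_eventually ack);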

Caveat: Ask your F-ABV vendor which of the above property types they support.
They may not have some of them.


5. STANDARD LANGUAGE SUPPORT

Industry-standard languages for formal verification are now highly
evolved.  Still, having choices is important.

  - Chip Design Languages

    All the major Formal ABV tools support the Verilog and System Verilog
    design languages.  Support for VHDL and SystemC/C++ is iffy.

  - Assertion Languages		

    System Verilog Assertions (SVA).  System Verilog is now used by about
    3/4ths of the industry.  SVA is part of the SV IEEE 1800 standard and
    is familiar to Verilog users.  It fits in with the Universal
    Verification Methodology (UVM) standard, which is generally accepted
    as the testbench methodology of choice.  The SVA format is considered
    the most powerful, and arguably the most complicated, of the HDL
    assertion formats.

    Property Specification Language (PSL) is an Accellera standard, commonly
    used with VHDL designs.  PSL is older and has mostly been replaced by SVA.

    C-Asserts are part of the ANSI-C standard and are associated with
    SystemC/C++ design verification.  Unlike the other assertion languages,
    C-asserts are combinatorial, so they require extra code for sequential
    checks.  (See the sketch after this list.)

    VHDL Assertions are a part of the VHDL language, which is still used
    by VHDL-centric product teams like the USA DoD, some FPGA users, and
    a number of design teams mostly in Europe and Japan.

    Proprietary assertion formats.  There are companies that use internal
    assertions connected with certain tools, such as Intel ForSpec and 
    IBM Sugar languages.
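
To illustrate the combinatorial-vs-sequential point from the C-asserts
item above: a C-style assert checks a condition at a single instant --
roughly a SystemVerilog immediate assertion -- while an SVA concurrent
assertion can span clock cycles.  A sketch with hypothetical signals:

      // Immediate assertion: a purely combinational check -- roughly
      // what a C-assert gives you.
      always_comb
        assert (!(fifo_full && fifo_empty));

      // Concurrent assertion: a sequential, multi-cycle check -- grant
      // must arrive 1 to 3 cycles after request.  Doing this with
      // combinational asserts alone requires hand-written state.
      a_grant_timing: assert property
        (@(posedge clk) disable iff (!rst_n) req |-> ##[1:3] grant);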


6. COVERAGE MECHANISMS

Coverage is used to measure your verification progress.  It's usually
expressed as a percentage.  For example, for a specific 100 lines of
Verilog code you hit 88% coverage during verification.  Coverage comes in
two flavors:

Code coverage is the common coverage metric used for behavioral/RTL/gate-level
Verilog/VHDL simulation.  An example is 2,378 lines out of 5,800 code lines
are covered -- 41% coverage.  Code coverage is inherently harder to measure
during formal verification than RTL simulation, because the formal algorithm
generally works in the state-space rather than on lines of Verilog.

Functional coverage is the percentage of your design intent that has been
verified.  It's easier to track during formal as it relates directly to the
assertions that you're using.  Caveat: functional coverage is only as good
as the assertions you've written for your design.  Getting 100% functional
coverage on an incomplete (or badly written) set of assertions is classic
Garbage In, Garbage Out.

Both coverage types are desirable metrics for formal.

When considering formal coverage, it is important to differentiate between
"controllability" and "observability".

  - Controllability is activation of code lines or design functionality.
    In Verilog/VHDL simulation this is a measure of how much your
    stimulus touches lines of code or functions.  In formal this
    relates to sections of the design that can be "seen" by the tool
    in the state-space.

    If you "over-constrain" your F-ABV tool (to save on tool runtime), it's
    possible that it'll miss important design functionality in your chip.

  - Observability is a measure of the Verilog/VHDL source code lines or
    functions that are being checked by the checkers.  This is important
    for both simulation and formal, and measures how well possible bugs
    are detected by your testbench checks, or assertions.

    For example, if an assertion was checking function "ZZZ" described
    using 4 lines of code, then those 4 lines of code are "observed"
    by the assertion.

Effective formal coverage is focused on observability metrics, which are
applied to both source code (lines, expressions, etc.) and functionality.
It's a direct measure of how effective your current set of assertions is.

For example, you might receive a report that "one specific PSL assertion
touched 27 lines in your source code", or that "95% of the lines in your
source code were touched by one or more SVA assertions."

Below are some of the primary formal coverage types.  

  - Assertion Coverage

    Assertion Coverage is a simple indication of assertion pass/failure,
    bounded proofs (e.g. "proven over 20 clock cycles only"), and other
    quality metrics.  Its limitation is that it only provides information
    on the status of your current set of assertions, and not the coverage
    of the design code or functionality.

  - Cone of Influence (COI) Coverage

    This is based on the design logic that drives a particular signal.
    COI coverage suggests that a particular code line is observed by the
    assertion, as well as all the lines of code that influence this
    observed line.  This is a somewhat conservative metric, which is
    reasonably easy to compute; however, it suffers from an accuracy
    issue: some of the design logic may be optimized away by the various
    formal engines.

    In addition, suggesting that a line of code is fully verified because
    it influences another line can be questionable.  For this reason the
    COI method has been refined by some vendors to focus on the proof itself.
    For example, the Cadence "Proof Core" coverage metric.  

  - Mutation Analysis

    If a line of design source code that is covered by an assertion is
    changed, then the assertion should be triggered.  This is the principle
    behind mutation coverage, which tweaks lines of code one-by-one in
    your design to see if those changes trigger your testbench checkers
    or assertions.

    If all the lines in a design are changed, or mutated, one-by-one then
    you can arrive at a measure of coverage for the entire design.  For
    example if 90% of the line changes triggered assertions, then your
    design is 90% covered.

    The mutation technique can be applied equally well to simulation and
    formal, producing a consistent coverage model between the two.  It's
    very accurate but requires a lot of CPU time, as the verification must
    be rerun many, many times.  Mutation analysis is used extensively in
    software testing, and was originally made popular for hardware design
    by Certess and their tool Certitude (now Synopsys).  (See the sketch
    after this list.)

  - Fault Observation Coverage

    Fault observation coverage takes mutation coverage a stage further.
    Instead of changing the lines of code, faults are applied to specific
    points in the design to see whether they trigger assertions.  Your
    formal tool applies those faults to your formal model (the state-space
    of your design contained in computer memory) rather than your original
    Verilog/VHDL source code -- thus taking less CPU time: a few formal
    runs in state-space replace the many repeated simulation runs that
    classic mutation coverage takes.  This also avoids some of the false
    negatives that can occur with mutation analysis.

    Observation coverage only works in formal (not simulation), but it
    outputs a code coverage metric that can be related to simulation
    coverage very easily.
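
Here's the mutation sketch promised above -- a hypothetical RTL line,
its mutated form, and an assertion that should catch the change:

      // Original RTL line (hypothetical):
      //   assign gnt = req & !busy;
      // Mutated form -- the mutation tool flips '&' to '|':
      assign gnt = req | !busy;

      // An assertion observing this logic now fails (gnt can rise with
      // no req when busy is low).  If no checker fires on the mutation,
      // the line is not truly covered.
      a_gnt_needs_req: assert property
        (@(posedge clk) gnt |-> req);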


7. FORMAL-SIMULATION INTEROPERABILITY

Because Formal ABV today is used alongside simulation, the interoperability
between the two is important.  A major issue is understanding which parts
of your design are covered by the simulation or formal tools.  This has
been a problem because the coverage models between the two are different.  

A number of simulators -- Synopsys VCS, Cadence Incisive, Mentor Questa,
Aldec Rivera, Silvaco Silos, Fintronic Finsim, Synapticad Verilogger,
Winterlogic Z01X, Tachyon CVC and others -- all have links to formal,
mostly through their coverage mechanisms.

Caveat: your formal tool along with your simulator will output a boatload of
coverage metric data.  This coverage information needs to be loaded into a
common database and viewed with a verification planning tool to assess what
overall progress you're making on your chip.  It is too easy to lose data
between the two different methods if you don't do this!

Some vendors allow their databases to be read and written in a limited
fashion by your formal and simulation tools, including third-party ones;
but you must ask each vendor exactly which tools it does or does not support.

The Unified Coverage Interoperability Standard (UCIS) is the industry's
attempt to combine different sources of coverage into one cohesive database,
using a common API.  There are varying degrees of vendor support for UCIS.

Most of the EDA companies have their own verification flows and coverage
databases for their own tools only.  

Caveat: You must look beyond a UCIS Support "check-box" to understand if a
vendor allows UCIS data from 3rd party tools to be written in or read out
of their databases.  This lets you choose the best simulation and formal
technology for your designs regardless of vendor.


8. ENGINE ACCESS

Formal vendors offer different ways to access their formal engines.  This is
for acceleration through parallel execution, increasing CPU capacity, and/or
remixing of licenses for access flexibility.  

The engine access mechanisms include:

  - Parallel Execution

    Running a single RTL simulation job across multiple machines is very
    difficult due to the serial nature of the simulation algorithm.
    However, formal can do parallel execution to accelerate runtimes by
    running different assertions on different machines.  For example,
    if 5 machines are available and you have a formal job with 5 assertions,
    each assertion may be run independently on a different machine, making
    the entire job run in the time it takes to test the slowest assertion
    of the 5.  With a larger number of assertions, they may be grouped
    depending on the machine loading, yielding an almost linear speed-up.

  - Cloud Access

    Using the sea of machines in the cloud to do parallel execution lets
    you effectively cut your largest formal job's runtime to only as long
    as its slowest single assertion takes.

  - Token or Standard Licensing

    Standard licensing means that each F-ABV tool has only one key and
    that the tool can only run one job on whatever machine you want the
    job to run on.  (You effectively pay by # of licenses you use.)

    Token licensing lets multiple copies of the F-ABV tool run multiple
    different jobs on whatever mix of machines you want the jobs to run
    on.  (You effectively pay by # of tokens your total run uses.)

    Token licensing lets you easily swap in and out different formal
    engines and different formal apps on a whim.  It's amazingly flexible.
    In addition, during project crunch time, it's trivial to get more
    F-ABV cycles on demand simply by buying more tokens -- not easy to
    do with standard licensed tools.


9. CUSTOMER TECH SUPPORT

Formal Verification has become much easier to use in the last few years. 
However, creating assertions and running the tools still requires a certain
level of expertise.  Depending on your internal expertise, you will need
access to formal application engineering experts...  Understanding the
capability of the vendor's AE support is crucial.  Caveat emptor.

Application Engineers (AEs) who are expert in formal verification will
have spent years gaining the necessary experience to show others how to
be productive with the tool.  As such, it is important that your formal
vendor maintains a specialized group of formal AEs on whom you may call
for assistance.

Additionally, some vendors offer consulting services to actually provide
the verification itself using their F-ABV tools.  Other vendors will
contract with outside 3rd party consultants to ensure they are trained and
certified on using their tools.  Noted consultants include: Oski
Technologies, TVS, and Paradigm Works.

Formal Apps automate a specific formal task, so when using them, often a
lower or minimal level of AE assistance is needed.

Caveat: it's crucial that you determine before you buy any formal tool
exactly what type of customer support you will actually get, how they will
be contacted, and what additional costs can and will be incurred from
using customer support.  Setting up a new formal tool can be very tricky,
especially for newbies.  Debugging formal problems can be very time and
resource intensive.  Be sure to know what your exact support system will
be before you buy any F-ABV tool.

         ----    ----    ----    ----    ----    ----    ----

In ESNUG 558 #4, I discuss Formal App metrics in detail.  In ESNUG 558 #5,
I show how both the Apps and major ABV vendors fit against these metrics.

    - Jim Hogan
      Vista Ventures LLC, OneSpin BoD            Los Gatos, CA

         ----    ----    ----    ----    ----    ----    ----

Related Articles

    Jim Hogan on how "this is not your father's formal verification"
    Where Formal ABV whomps HDL simulation for chip verification
    9 major and 23 minor Formal ABV tool metrics - plus their gotchas
    16 Formal Apps that make Formal easy for us non-Formal engineers
    Hogan on Cadence, Mentor, OneSpin, Real Intent, Synopsys formal
