( DAC'19 Item 10b ) ----------------------------------------------- [03/04/20]

Subject: MENT InFact ties with CDNS Perspec for PSS is Best of 2019 #10b

INFACT TIES WITH PERSPEC: A big upset this year in the PSS Wars.  In 2017
and 2018, Mentor InFact horribly stumbled in the user survey results
compared to Cadence Perspec and Breker -- leaving InFact in an
embarrassingly distant 3rd place behind Perspec -- and even Breker!
But this year's user comment word count on PSS tools broke out to:

  Cadence Perspec 2019: ################################# (2,145 words)

    Mentor InFact 2019: ################################ (2,082 words)

   Breker TrekSoC 2019: ## (119 words)

That puts both Perspec and InFact into a statistical tie for 1st place;
and poor Breker TrekSoC is left waaaaaay behind in a distant 3rd place.

        ----    ----    ----    ----    ----    ----    ----

NOT QUITE PSS: From the InFact user comments this year, I get the sense
that Mentor InFact is not so much pure PSS as it is "a Mentor tool that
uses PSS technologies" to get 100% coverage closure through duplication
reduction and constraint solving.
      
That is, Ravi's InFact is a much narrower, specific-use tool overall,
compared to Anirudh's Perspec, which is a general-purpose PSS tool.  Ravi
is focused on getting pain-in-the-ass coverage closed.  Anirudh is focused
on providing "all things PSS" to the users in a PSS smorgasbord strategy
-- and letting the users customize super flexible Perspec to their own
whims and tastes.  These two very different approaches might be why InFact
and Perspec are now tied for 1st place.  It's morphed into apples and
oranges.

        ----    ----    ----    ----    ----    ----    ----
        ----    ----    ----    ----    ----    ----    ----
        ----    ----    ----    ----    ----    ----    ----

      QUESTION ASKED:

        Q: "What were the 3 or 4 most INTERESTING specific EDA tools
            you've seen this year?  WHY did they interest you?"

        ----    ----    ----    ----    ----    ----    ----

    InFact from Mentor 

    It has good PSS support and, most importantly, a bridge to UVM
    and System Verilog.

    Recently they have added C generation capabilities, making it very
    compelling.

        ----    ----    ----    ----    ----    ----    ----

    Mentor InFact is another great product that helps with cutting down
    stimulus generation, and we have seen that first hand.

    Another interesting area is the use of Deep Learning in EDA.  From
    code linters to design aids, I am very curious to see what new features
    these tools can provide.

        ----    ----    ----    ----    ----    ----    ----

    The InFact stimulus optimizer from Mentor-Siemens has been the most
    interesting tool for me this year. 

    I had a basic introduction to the tool last year and decided to follow
    up on it this year.  After digging deep and getting help from one of the
    Mentor specialists, I was able to get it running in our projects easily.

    I was surprised how useful this coverage toolset was for us.  Coverage
    closure has been a thorny issue inside our company.  Testbenches and
    stimulus are becoming quite complex, so a directed testplan is not
    enough when it comes to achieving 100% coverage.  This is where
    InFact became very productive for us.

    Having a compiled library from Questa simulation is a prerequisite for
    using InFact.  There are three main InFact tools that I have used.

      1. I started out with "qcc" because I didn't have proper coverage
         (coverpoints, crosses, covergroups, etc.) ready yet.  The qcc tool
         requires all the testbench libraries and a coverage strategy file
         in CSV format.

         The coverage strategy is easy to write for people who are starting
         out creating coverage.  The tool outputs a coverage file that
         contains all the coverage-related items specified in the coverage
         strategy, plus a uvm_subscriber that can be connected to the driver
         port to sample transactions (a sketch of such a pair appears after
         this list).  Once the file was created, I adjusted it to our
         coverage requirements.

      2. The "qso" tool from InFact is main one which takes either coverage
         strategy CSV file directly or the coverage file as input to create
         a stimulus generator.  It is an object file that needs to be
         included and instantiated in the Questa simulation preferably in
         the stimulus generator.  In any type of uvm transaction generation
         this replaces constrained randomization of transaction packet.

          In our case, our testbenches generate stimulus through a
          uvm_sequence, and I replaced our constrained randomizer with the
          InFact generator (a fuller sequence sketch appears after this
          list).  For example:

          Instead of:

                          if(!packet_t.randomize())

          this line is used:

                          infact_gen_h.ifc_fill(packet_t);

          Once the simulation runs, the InFact engine works in the
          background (within the simulation) to figure out the optimal way
          of maximizing stimulus coverage while maintaining randomization.

          With the Mentor coverage report I was able to see the difference
          between the InFact and non-InFact stimulus counts.  It was very
          close to the claim made by Mentor (requiring 1/10th the stimulus).

         Furthermore, InFact becomes more efficient as the cross coverages
         become more complex.

      3. There is another tool called "qsa" which analyzes the constraints
         of the transaction objects and provides the number of unique
         stimuli required to cover them.  It is a very useful tool for
         understanding the scope of the coverage.  It is also quite useful
         for figuring out faults with constraints when they don't work
         as intended (a small example appears after this list).
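
    To make item 1 concrete, here is a rough sketch of what such a
    covergroup-plus-uvm_subscriber pair looks like in SV/UVM.  The class,
    packet, and field names (pkt_coverage, my_packet, len, kind) are
    placeholders I made up, not actual qcc output:

        // Placeholder sketch only -- not actual qcc output.
        // A covergroup over a couple of transaction fields, wrapped in
        // a uvm_subscriber so it can be hooked to an analysis port and
        // sample every transaction the monitor/driver publishes.
        class pkt_coverage extends uvm_subscriber #(my_packet);
          `uvm_component_utils(pkt_coverage)

          my_packet pkt;

          covergroup pkt_cg;
            cp_len  : coverpoint pkt.len  { bins small = {[1:64]};
                                            bins big   = {[65:255]}; }
            cp_kind : coverpoint pkt.kind;
            x_lk    : cross cp_len, cp_kind;
          endgroup

          function new(string name, uvm_component parent);
            super.new(name, parent);
            pkt_cg = new();
          endfunction

          // write() is the analysis export every uvm_subscriber provides
          function void write(my_packet t);
            pkt = t;
            pkt_cg.sample();
          endfunction
        endclass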
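
    And for item 2, here is roughly where that one-line swap lands inside
    a typical uvm_sequence body.  Only the infact_gen_h.ifc_fill() call is
    from the flow above; the sequence, generator, and packet class names
    (pkt_seq, infact_gen, my_packet) are placeholders, so check the InFact
    docs for the real class names:

        // Placeholder sketch; only ifc_fill() comes from the flow above.
        class pkt_seq extends uvm_sequence #(my_packet);
          `uvm_object_utils(pkt_seq)

          // InFact-generated stimulus object, assumed to be constructed
          // and configured elsewhere in the testbench.
          infact_gen   infact_gen_h;
          int unsigned num_pkts = 200;

          function new(string name = "pkt_seq");
            super.new(name);
          endfunction

          task body();
            my_packet packet_t;
            repeat (num_pkts) begin
              packet_t = my_packet::type_id::create("packet_t");
              start_item(packet_t);
              // Old way:  if (!packet_t.randomize()) ...
              // New way:  the InFact engine fills in the next field
              // values, steering toward unhit coverage instead of
              // relying on pure constrained random.
              infact_gen_h.ifc_fill(packet_t);
              finish_item(packet_t);
            end
          endtask
        endclass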
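
    And for item 3, here is a quick invented example of the kind of
    constraint bug that a pre-simulation stimulus count exposes.  Nothing
    below comes from qsa itself; it just shows how one extra constraint
    can silently collapse the solution space:

        // Invented example -- a constraint that doesn't work as intended.
        class cfg_txn;
          rand bit [3:0] burst_len;
          rand bit       is_wrap;

          // Intent: wrap bursts are limited to power-of-2 lengths.
          constraint c_wrap { is_wrap -> burst_len inside {2, 4, 8}; }

          // Oops: this later constraint forces every burst to wrap, so
          // only 3 of the 19 otherwise-legal combinations are reachable
          // -- a unique-stimulus count that flags the problem right away.
          constraint c_mode { is_wrap == 1; }
        endclass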

    I have also dabbled in the PSS language, and at the moment I have only
    tried the PSS tool from the InFact suite.  I haven't spent enough time
    to try it and possibly compare it against Cadence's Perspec offering.

        ----    ----    ----    ----    ----    ----    ----

    This year, the most interesting tool for me would be Questa InFact.

    It is not only useful for our SoC high-level test scenario creation 
    using the new Accellera PSS Standard, but InFact also effortlessly
    helps us to get exhaustive coverage using multiple test scenarios.

    InFact employs an efficient rule/graph-based technique to generate
    stimulus targeted at coverage goals.  This reduces our test redundancy
    on the way to complete coverage.

    It has a unique focus on coverage closure: its test generator zeroes
    in on the coverage targets.

    So, I believe the main value propositions for UVM users to consider
    InFact are:

    a. Lower learning curve for SV/UVM users by importing into InFact
    b. Complements and further leverages existing UVM assets
    c. Simplifies scenario creation, so engineers need not learn the
       complex UVM environment and VIP implementations

        ----    ----    ----    ----    ----    ----    ----

    Hi, John,

    I would like to answer the question on the best/most interesting EDA
    tool with InFact from Mentor.

    We have used InFact in our latest PCIe-based DMA design with great
    success in the following 3 areas:

        1. Regression runtime reduction by 30x

           We have a regression suite that would take up to 150 hours
           of random runtime to hit 95+% functional coverage for our
           PCIe Gen3 DMA design.  The ease of InFact integration through
           the UVMF allowed us to deploy InFact in 2 days.

           The full regression that hit the 100% functional coverage goal
           was done in 5 hours of total runtime.  In the end, the regression
           fit neatly within our nightly regression window, allowing the
           regression to track daily RTL development.

        2. Explore our functional coverage to best fit our project needs

           Before using InFact, I used to limit our functional covergroups
           to a few hundred bins so that our random regressions could
           complete coverage in a reasonable amount of time.

           With the InFact Constraint Explorer tool, we can now handle
           covergroups/crosses with bins in the 10,000s, so our coverage
           can be a better match to our functional testing objectives
           (a sketch after this list shows how fast crosses blow up).

        3. Eliminate long cycles of tweaking for hard-to-hit coverage

           Since InFact is effective at hitting all reachable coverpoints,
           we don't have to spend a ton of human time in cycles of tweaking
           constraints -- and debugging any issues due to such changes.

           This gives us cleaner, more maintainable random constraints for
           future projects.
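
    To put rough numbers on point 2: crossing two modest coverpoints
    already blows past what a hand-pruned random regression could digest
    in a reasonable time.  The covergroup below is invented purely for
    illustration, not taken from our testbench:

        // Invented illustration of how cross bins explode.
        covergroup dma_cg with function sample(bit [5:0] len,
                                               bit [7:0] addr_lsb);
          cp_len  : coverpoint len      { bins l[] = {[0:63]};  } //  64 bins
          cp_addr : coverpoint addr_lsb { bins a[] = {[0:255]}; } // 256 bins
          x_la    : cross cp_len, cp_addr;  // 64 * 256 = 16,384 cross bins
        endgroup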

    With these advantages, we are true fans of InFact and will definitely
    explore more advanced uses of it in the future.

        ----    ----    ----    ----    ----    ----    ----

    Mentor InFact.

    It's more than just portable stimulus -- it's about coverage closure!

    John, please keep me anonymous.  Our corporate types don't like us
    sharing experiences even when it's just industry best practices.  ;^)

    InFact is one of the most interesting tools out there using portable
    stimulus (PSS) tech.  That's because InFact focuses on solving a huge
    problem facing our verification teams: closing functional coverage.

    Our mandate is to get 100% functional coverage.  Getting the first
    95% is pretty straightforward.  But reaching that last 5% is really,
    really difficult.  That last 5% is where we spend a lot of our time.
    It usually happens towards the end of a project, with our managers
    very angry because it's pushing out our schedule.

    Our verification process is pretty typical.  We start with stimulus
    generated by constrained random test benches.  We keep going with
    that, finding bugs, and all is well and good.

    Then we start hand-writing functional coverage to cover all the
    interesting stimulus that we want to see in that design.  Some of the
    functional coverage points are really hard to hit, so we run multiple
    overnight regressions with different seeds to get coverage up.

    Then we manually write directed test cases to hit some of the really
    hard-to-hit corner cases.  In addition, we take the design, break it
    up into smaller chunks, and run formal verification tools on those.

    Then we have to merge all of this coverage together.

    As you can see, like many, we have to jump through all kinds of hoops
    to close coverage, typically adding several weeks to our initial
    verification loop.  And with all the inefficiencies, successive
    regression loops take over a week to validate new RTL fixes.  This
    impacts regression efficiency not only in time, but also in hardware
    (wasted CPU cycles due to redundancy) and people (manually developed
    tests) resources.

    There was a critical missing link in our flow.  We have this functional
    coverage we are trying to close using completely random stimulus, and
    all this random stimulus has no idea what our functional coverage goals
    are.  What we need is a closed-loop verification system where the
    stimulus is knowledgeable about the functional coverage we are trying
    to hit.  And that is exactly what InFact does.

    FIRST USE OF PSS IS EXPENSIVE

    PSS sure has created a lot of buzz in the verification community.  Its
    abstract and declarative nature makes it great and compact for creating
    configurable and reusable test scenarios.  But PSS being a new language
    and a new methodology, it has massive startup and tool costs.  It is
    often hard to get our management to invest that PSS cost in today's
    project when it won't break even and pay off until future projects.

    So, expect the PSS adoption curve to be much like System Verilog (SV),
    UVM and VIP -- taking many years to go mainstream.

    This is where I think Mentor has taken a unique (and smarter) approach.
    InFact encapsulates PSS technology in such a way that we did not have
    to learn a new standard (or proprietary) language.  We can use test
    benches in our existing SV/UVM methodology.  That saves a lot of time
    and effort, significantly reducing startup costs.  Also, by incorporating
    technology to accelerate coverage closure, it brings in a very positive
    ROI right in the first project it's used in.

    InFact reads in our SV/UVM testbench along with our coverage models.
    It then uses proprietary algorithms to traverse the internally generated
    PSS-style graph to come up with unique sets of stimulus that meet our
    specified coverage goals.  What I mean by unique is that it eliminates
    the redundancies inherent in constrained random test methodology.

    PRUNES REDUNDANCIES BY 10X

    While other tools claim to have the ability to analyze coverage pre-
    simulation like InFact does, it's still left to the user to manually
    prune the redundancies and manually guide the stimulus generation
    process to get to acceptable coverage.  InFact completely automates
    this, removing the need for human intervention.  (See ESNUG 581 #3)

    Also, InFact just introduced a new "coverage creator" app that lets
    users further guide the coverage efficiency by defining coverage more
    abstractly and handles complex cross coverage that is a bear to write
    manually in SV/UVM.  Input a CSV spreadsheet, with no new language to
    learn again, and out comes an optimized covergroup.
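
    To give a feel for why complex crosses are a bear to write by hand,
    here is a small invented covergroup of my own (not the coverage
    creator's output): every carve-out needs its own binsof expression,
    and a realistic covergroup needs dozens of them.

        // Invented illustration of manual cross pruning in SV.
        covergroup xfer_cg with function sample(bit [1:0] mode,
                                                bit [3:0] size,
                                                bit       secure);
          cp_mode : coverpoint mode;
          cp_size : coverpoint size { bins s[] = {[0:15]}; }
          cp_sec  : coverpoint secure;
          x_all   : cross cp_mode, cp_size, cp_sec {
            // carve out combinations the design can never produce
            ignore_bins no_secure_big = binsof(cp_sec)  intersect {1} &&
                                        binsof(cp_size) intersect {[9:15]};
            ignore_bins mode3_small   = binsof(cp_mode) intersect {3} &&
                                        binsof(cp_size) intersect {[0:3]};
          }
        endgroup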

    Also, what seems unique about InFact is its generation runtime model.
    InFact acts as a plug-in solver, an alternative to the System Verilog
    random constraint solver built into the VCS/Incisive/Questa simulators.
    This means stimulus gets generated on-the-fly as simulation is running
    with no noticeable overhead, so we don't have to wait around for
    stimulus generation to finish before we start running simulation.
    This is especially important for larger complex test scenarios which
    would require long generation times and possibly 100's-to-1,000's of
    large SV stimulus files to manage in our regression flow.  And InFact
    even manages stimulus across parallel jobs running on the grid, again
    making sure no redundancies get through and that there's no need for
    the user to manually partition tests.

    We were using VCS simulators from Synopsys, so a simulator-agnostic
    solution was important to us.  InFact supports the popular enterprise
    simulators with its on-the-fly plug-in runtime model.

    However, since it was pretty easy to switch simulators, we migrated to
    Mentor Questa Sim for the functional coverage part of our regression
    with InFact.

    By using InFact in this "first use model", we saw a significant
    reduction in the time to first hit coverage.  Also, we saw our
    regression turnaround time go from 8 days to 1-2 days.

    We have now added InFact into our division's verification infrastructure
    so other projects can take advantage of the efficiency boost.  The
    Mentor technical folks have been really helpful in making sure our use
    models are reusable and widely adoptable.

    Going forward, we plan to build on this foundation.  The next step is to
    augment our SV/UVM test bench methodology with the PSS language to
    specify more configurable "virtual sequences".  And the beauty is that
    it plugs in naturally to the coverage closure use model we have already
    established.

    So my vote for best product of 2019 is Questa InFact coverage closure.

        ----    ----    ----    ----    ----    ----    ----

    We use InFact because it heavily trims away (20x less stimulus) the
    massive amounts of duplication from invalid crossings in our
    constrained random runs.

        ----    ----    ----    ----    ----    ----    ----

    Our coverage closure is MUCHO faster since we started using InFact, so
    that's my nomination for best of this year.

        ----    ----    ----    ----    ----    ----    ----

Related Articles

    CDNS Perspec ties with MENT InFact for PSS is Best of 2019 #10a
    MENT InFact ties with CDNS Perspec for PSS is Best of 2019 #10b
    And weak user turnout for Breker this year is Best of 2019 #10c
