( DAC'20 Item 02a ) ----------------------------------------------- [02/19/21]

Subject: Cadence vManager saves 1 hour/day/engineer is Best of 2020 #2a

TIME IS MONEY: Cadence vManager is all about "metric-driven" verification.
Set up your verification plan and your metrics to track the requirements
in your spec.  Then run regressions, do failure analysis & triage, and
optimize coverage -- all while you're tracing your metrics against each
requirement until you hit coverage closure weeks or months later.

The "winner" is the one who can get verification closure done the quickest,
so they can tape-out, manufacture the SoC, sell it on the world market as
they laugh all the way to the bank -- while his competitor is still
trying to get his own SoC closed.  In the chip business, Time is Money.
     
So *anything* that speeds up verification gets *everyone's* attention.

And with 6 users saying this about vManager, it gets noticed.

  "Savings work out to 1 man-year less for each 12 man-year project."

  "Compared to how we used to verify, vManager saves each engineer
   at least 1 hour a day, every day."

  "For our projects that run for 12 months or more, I'd estimate our
   time savings just for failure analysis alone is 20% less."

  "I would say vManager saves us 1/2 an hour to 1 hour per project
   engineer per day."

  "Reduced our verification closure effort for our configurable
   IP by ~15%."

        ----    ----    ----    ----    ----    ----    ----

TECHNICAL NOTES: Three big things stood out about vManager technically.

   - users noted that "the vManager of today is vastly different than
     the vManager of 2016."

   - vManager can now manage Incisive, Xcelium, Palladium, Protium,
     JasperGold runs -- plus crazy analog stuff like Virtuoso ADE,
     Spectre, BDA AFS-XT, HSPICE, Silvaco for SPICE regression runs!

   - one user discussed vManager in an all-AWS cloud installation.

        ----    ----    ----    ----    ----    ----    ----
        ----    ----    ----    ----    ----    ----    ----
        ----    ----    ----    ----    ----    ----    ----

      QUESTION ASKED:

        Q: "What were the 3 or 4 most INTERESTING specific EDA tools
            you've seen in 2020?  WHY did they interest you?"

        ----    ----    ----    ----    ----    ----    ----

    We use vManager to create our verification plan, manage regressions,
    and collect metrics from our Xcelium and JasperGold runs.

    We use it for multi-site collaboration and to do analysis across a
    number of SoC development sites throughout North America.  

    Initially, vManager replaced our ad-hoc regression scripts.  Over
    time, it also replaced our distributed test plan with an annotated 
    verification plan.  

    Our team has transitioned from a "testcase-driven" verification closure
    methodology to a "metric-driven" verification methodology -- meaning we
    now use our coverage metrics to guide our verification closure process.

    MAN-HOUR SAVINGS 

    Our resource savings work out to 1 man-year less for each 12 man-year
    project.  In addition, compared to how we used to verify, vManager
    saves each engineer at least 1 hour a day, every day.

    VERIFICATION PLANNING

    We develop our verification plan (vPlan) for our specification in the 
    vManager tool.

    The "vPlan Authoring" feature lets us associate/connect each chapter in
    our vPlan to a defined set of coverage metrics, functional coverage 
    group, code coverage hierarchy, and assertion coverage.

    We can also automatically trace the regression/coverage results from
    our vPlan back to our high-level requirements.  This reduces the
    workload for standards like ISO 26262.  

    The vPlan spec annotation feature shows connections between 
    specification and verification results.  Specification portions that
    change are highlighted and may be flagged for attention.  We can analyze
    our regression results against our vPlan to determine which features 
    within the architecture spec are well tested, and which are not.  

    Note that vManager also has an open API interface.  I haven't used it 
    yet, but I expect it will later prove useful for configurable designs.  

    LAUNCHING & MANAGING REGRESSIONS

    Prior to vManager, we had a team of full-time engineers maintaining our
    custom script environment.  With vManager we were able to 100% re-deploy
    one engineer because vManager automatically did all this for us:

         - Launching regressions and automatically rerunning failed 
           jobs to extract additional debug data.  

         - Acting as a central tool to launch and manage our regression 
           runs for Xcelium and JasperGold, with an interface to our 
           compute resource manager.

         - Confirming whether a new file was checked into the Jenkins 
           repository properly.  (vManager must first be linked to 
           Jenkins.)

    FAILURE ANALYSIS & ASSIGNING FAILURES

    vManager categorizes the regression logfile warning & error messages
    for further analysis.  Then we search and filter to find the data we
    need.

    We also use it to assign specific failures to individual engineers. 

    One big plus is its metrics tracking process.  vManager's
    regression-level debug helps identify trends -- and even points out
    the fastest path to recreate failures.

    For this feature set, the ROI is highest on longer projects.  For our
    projects that run for 12 months or more, I'd estimate our time savings
    for failure analysis alone at about 20%.

    COVERAGE 

    1. Priorities & Test Weighting

    We use vManager's functional coverage annotations to focus on our
    design's high priority features first.  

    vManager lets you use a weighting factor to automatically 
    measure/compute weighted coverage metrics.  I haven't used weighting,
    but I think it could be quite handy for automotive functional safety 
    verification.  ISO 26262 requires a weighting to be assigned to each 
    functional block based on the total physical area of the circuitry.  
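
    The weighted-coverage arithmetic is simple to sketch.  Below is a
    minimal Python illustration of area-weighted coverage, with made-up
    block names and weights -- it is not vManager's implementation:

```python
# Weighted coverage: each block's score counts in proportion to its
# weight (e.g., physical area, as in the ISO 26262 practice above).
def weighted_coverage(blocks):
    """blocks: list of (coverage_pct, weight) tuples."""
    total_weight = sum(w for _, w in blocks)
    return sum(cov * w for cov, w in blocks) / total_weight

# Hypothetical blocks: (coverage %, area-based weight)
blocks = [(90.0, 4.0),   # large block, well covered
          (50.0, 1.0)]   # small block, half covered
print(weighted_coverage(blocks))  # 82.0
```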

    2. Coverage Refinement 

    We can refine our coverage by using vManager's built-in mechanism
    to eliminate portions of our design (or testbench) that are not
    needed, including managing coverage exceptions.

    This is quite useful for IP level verification; configurable IPs 
    require more sophisticated coverage refinement, so thoroughness is 
    important.

    This refinement feature reduced our verification closure effort for
    our configurable IP by ~15%. 
          
    3. Aggregation & Analysis

    The tool automatically merges coverage across multiple tests and 
    regressions, with detailed analysis spanning all your coverage types. 
    Because it's a single platform, we can collect & sample from 1,000's
    of simulations.  

    I'm encouraged about the potential for machine learning and AI, but we
    haven't experimented with it.  

    REPORTING  

    For reporting key results and progress, vManager has dynamic web access
    (interactive webpages) along with static HTML reporting.  It's easy to
    see commonly used data.  

    The reporting features are an integral part of vManager flow, including
    a dashboard for showing data over time.  We can check if our failed 
    tests are declining and our coverage metrics are going up.

    VMANAGER & METRIC-DRIVEN VERIFICATION ACROSS OUR TEAM

    Metric-driven verification has been key to us succeeding in verification
    for all our DUTs, including sub-blocks, blocks, IPs, sub-systems and 
    SoCs.  At each level, we've been able to maintain high verification 
    quality and project schedules when we use coverage metrics to track the
    progress of our verification.

    Regression management is a complex process that requires stitching 
    multiple tools together.  This is where vManager serves as a needed 
    central verification management tool -- it provides a relevant
    interface for each flavor of user.

       - For the design engineer, it simplifies the task of 
         prioritizing the debug scenarios.

       - The verification lead benefits from the data tracking tools.  

       - Architects and design teams can seamlessly review 
         specifications to test coverage scenarios.

       - When needed, design teams can zoom in on evidence of coverage
         by tracing a functional feature to coverage metrics.

    Once you get it installed and your team fully trained -- which takes
    some time -- I very highly recommend the vManager environment for any
    serious SoC verification project.

        ----    ----    ----    ----    ----    ----    ----

    Cadence vManager has everything design and verification engineers need
    to do the analysis required to give the thumbs up for tape-out.   

    We use it with Xcelium -- although you can also use it with other 
    simulators, as well as with Jasper and Palladium.

    The vPlan Verification Plan 

    I'm a huge proponent of vManager's vPlan and its annotations for
    multiple reasons.  

    1. Paints a picture for Executive Management

    It addresses one of the bigger headaches I used to have when meeting 
    with our executive management and trying to give them a verification
    update.

    In a way, a vPlan is a DV engineer version of a design specification.

    I can take all the functional elements of a specification, put them into
    a verification view and then map everything I need to do to prove that
    the particular feature in our SoC has been: 

       - Completely exercised 
       - Partially exercised
       - Not exercised

    Plus, all the supporting data/evidence needed to prove each feature, 
    such as:

       - The specific testcases 
       - The coverage elements associated with it 
       - The functional block 
       - The code coverage associated with it
       - The functional coverage elements I wrote specifically for this
         feature and how they are covering it.

    It lets me paint a picture for executive management that shows the
    health of the design I'm verifying.  And it's organized automatically.
    I don't
    have to open a bunch of random, scattered reports and then try to link
    them all together.

    2. For our DV Engineers, it does Authoring & Analysis

    When I go through my vPlan and the annotations, I get a second chance to
    look at the numbers themselves to understand where the holes are, and 
    determine how I might go back to try to address them.

    Further, rather than only looking at the top number, I can focus on 
    sections.  For example, I might see that one block was only 80% covered.
    I can take a quick look to see why and determine how I might improve
    it, e.g., by randomizing something differently.

    vPlan Authoring 

    I am a big proponent of using vManager's GUI to do "vPlan Authoring". 
    To import an external plan, you need to know exactly what vManager
    expects the format to be, so that the process will be clean.

    vManager has a REST API, but we've never needed to use it.  It's always
    worked out-of-the-box for us.  Cadence already had tools for 
    what we needed.

    Failure analysis 

    Failure analysis for vManager improved a lot between 2017 and 2019. 
    It's more flexible and easier to implement now.

    It's for the DV engineer power user and not the SoC designer.
 
    The DV engineer can look at the log files that the simulator generated
    and determine what data they want to capture.   Then use vManager to:

       - Collect regular expressions that they want to use to parse
         the log files.  

       - Specify the regular expression they want to match for each 
         filter, assign the severity level (warning, error, or fatal),
         and specify the text they want to print to the triage window.
         (Plus a couple more verbosity options.)

         The tool is flexible.  It can have as many of these files as
         you want, and a specific set of filters for each testcase or
         regression.  It's not limited to pre-defined filters.

       - It can then start sorting on the errors pulled from those 
         filters.  So, instead of looking at a number of tests to try to
         figure out why they passed or failed, the tool provides a
         mechanism to see all the first failures, and which test cases 
         had those failures.
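
    The filter-and-sort flow described above can be sketched in a few
    lines of Python.  The regexes, severities, and triage strings below
    are illustrative assumptions, not vManager's actual filter format:

```python
import re

# Each filter: (regex, severity, text for the triage window).
# These patterns are examples, not a pre-defined vManager set.
FILTERS = [
    (re.compile(r"UVM_FATAL"), "fatal",   "UVM fatal"),
    (re.compile(r"UVM_ERROR"), "error",   "UVM error"),
    (re.compile(r"\*W,"),      "warning", "simulator warning"),
]

def first_failure(log_lines):
    """Return (severity, triage_text, line) for the first line that
    matches an error/fatal filter -- the 'first failure' view above."""
    for line in log_lines:
        for pattern, severity, text in FILTERS:
            if severity in ("error", "fatal") and pattern.search(line):
                return severity, text, line.strip()
    return None

log = ["*W, some warning", "UVM_ERROR @ 100ns: bad packet"]
print(first_failure(log))
# ('error', 'UVM error', 'UVM_ERROR @ 100ns: bad packet')
```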

    3. For our SoC Designers it does Failure Triage

    Our typical SoC designer's use model for vManager is:

       - Launch a regression or look at a regression that vManager 
         launched automatically.

       - Check the regression's pass/fail status.

       - Based on the triage information from vManager, rerun the test 
         in the designer's own environment (using SimVision or Indago)
         and do the analysis there.  

    The tool has a mechanism to do all this automatically.  

    The time savings are significant -- easily weeks off our debug
    schedule.  Instead of having a designer or DV person do triage and
    then restart the simulations, we had vManager do it all automatically.
    If a test failed, it was rerun so our SoC designer would have a
    waveform to view in SimVision or Indago the next morning.  


    First Time Fears 

    Our initial fear was whether vManager was smart enough to figure out
    what it should and shouldn't automatically re-run with waveforms.

    When we introduce waveforms, our disk footprint goes through the roof.
    If the tool doesn't do it right, and/or it doesn't get cleaned up
    properly, we could end up wasting terabytes of disk space from useless
    regressions.

    Turns out that we were wrong.  vManager did the right thing.


    Launching Regressions

    We use the vManager plug-in for Jenkins a lot.  We like having Jenkins
    make all the scheduling decisions while vManager does the regression
    running.

    In our industry, everyone develops their own way to launch simulations.
    They might use a Python system, an API, or "make" files.  This is
    because SystemVerilog simulators have 100's of switches, which you
    sometimes must configure to launch the simulation exactly the way
    you want.  

       - The goal is to make invoking a simulation as easy a step 
         as possible.  

       - This removes the burden of having everybody on the team learn
         every Xcelium switch needed to run a SV simulation.  

       - To do this, engineers have a "runner flow", which is a script
         or layer that sits on top of Xcelium and hides the complexity
         of building the Xcelium simulation command.  It might accept
         simple words, such as "run this test", and then the script under
         the hood figures out all the switches that need to be applied to
         Xcelium to run that test.  
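
    A "runner flow" of this kind can be sketched as a thin wrapper that
    expands a test name into a full command line.  The switches below
    are assumptions for illustration, not a verified Xcelium invocation:

```python
# Hypothetical "runner flow": turn "run this test" into a full
# simulator command line so users never touch the raw switches.
BASE_SWITCHES = ["-64bit", "-access", "+rwc"]   # assumed project defaults

def build_sim_command(test_name, seed=1):
    cmd = ["xrun"] + BASE_SWITCHES
    cmd += ["-svseed", str(seed)]
    cmd += ["+UVM_TESTNAME=" + test_name]
    return " ".join(cmd)

print(build_sim_command("smoke_test", seed=42))
# xrun -64bit -access +rwc -svseed 42 +UVM_TESTNAME=smoke_test
```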

    Engineers need a way to pass information between their "runner flow"
    and vManager.  However, an existing regression "runner flow" is
    unlikely to fit precisely into vManager unless it was specifically
    designed to do so.  

    So, what usually happens is that a power user must write a gasket
    script to connect vManager to their runner.  As part of this, they may
    make some concessions, e.g., because there may be certain things that
    vManager doesn't support.  

    vManager has a lot of versatility, and so far, every system/pre-existing
    "runner flow" I've seen can be interfaced to vManager.  It provides
    pretty much everything needed to do so.  

    Coverage & Test Refinement

    vManager's test/coverage refinement methodology has improved drastically
    between 2017 and 2019 in terms of controlling how you refine your
    coverage or test databases.  

    Now we can have multiple refinement files, open and close them at any
    time, and manage refinements our own way -- as opposed to having only
    one refinement file per session.  

    If an engineer doesn't want certain functionality covered, they can 
    exclude it and/or ignore specific parts of their code where they are not
    using that functionality.  Both the design team and the verification
    team can now go in and mark items that: 

       - are failing
       - are not covered 
       - are not meant to be covered 

    The marks are very easy to find in vManager's dashboard views.  Then,
    when we refine our coverage, all the numbers recalculate, and those
    elements are no longer part of our coverage numbers.  And it's all done
    under the hood.  

    This is managed with an extra file layer when we load in our coverage
    database.  The file specifies which lines and/or sections of
    code we are refining and why.
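
    Conceptually, the refinement file is a list of excluded items plus a
    reason, and the numbers are recomputed with those items removed.  A
    hypothetical sketch (not the actual vManager refinement format):

```python
# Hypothetical refinement entries: (coverage item, reason).  Excluded
# items are dropped from both numerator and denominator.
REFINEMENTS = [
    ("dut.dbg_unit",   "debug logic unused in this configuration"),
    ("dut.spare_regs", "not meant to be covered"),
]

def refined_coverage(hits):
    """hits: dict of coverage item -> covered (True/False)."""
    excluded = {item for item, _ in REFINEMENTS}
    kept = {k: v for k, v in hits.items() if k not in excluded}
    return 100.0 * sum(kept.values()) / len(kept)

hits = {"dut.core": True, "dut.bus": True,
        "dut.dbg_unit": False, "dut.spare_regs": False}
print(refined_coverage(hits))  # 100.0 after refinement
```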

    Coverage Aggregation 

    vManager handles coverage for a particular block or various blocks for
    one session very well -- including merging coverage.

    My one sticky point with vManager is the concept of hierarchical merge 
    when we have multiple sessions across multiple levels of hierarchy of 
    the same design.  For example, if I have 5 regressions from a block,
    4 regressions from a subsystem, and 2 regressions from a chip -- and I
    want to know the aggregate coverage of everything together, the flow
    is not great.  It is very hard to understand how we are supposed to
    use vManager to merge everything properly in these situations.

    So, what our engineers typically do is merge the results manually and 
    then bring them back into vManager.  This works, but it puts the onus
    on our engineers to know exactly how all our merge conditions fit
    together before vManager can hold our system together.  
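
    The manual merge amounts to a union of hit coverage bins across
    sessions, with each bin keyed by its full hierarchical path.  A
    conceptual sketch with made-up paths, not vManager's actual merge
    semantics:

```python
# Union-merge covered bins from several sessions.  Keying bins by full
# hierarchical path lets block-, subsystem-, and chip-level runs combine.
def merge_sessions(sessions):
    merged = set()
    for covered_bins in sessions:
        merged |= covered_bins
    return merged

block  = {"chip.sub.blk.cg_a", "chip.sub.blk.cg_b"}
subsys = {"chip.sub.blk.cg_b", "chip.sub.cg_bus"}
chip   = {"chip.cg_top"}
print(len(merge_sessions([block, subsys, chip])))  # 4 unique bins
```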

    Cadence needs to do some additional work to correct this.

    HTML Reports 

    The static HTML reports are great.  Our senior management loves them. 
    They're formatted well.  They navigate well.  They display well.  They
    are very easy to control and automate, and are completely flexible in 
    terms of what information we include and how much detail.

    Some of our managers would like to see Cadence add the ability to 
    generate the same information as a PDF or CSV file.

    Some people did take advantage of vManager's web portal.  However, 
    mostly they preferred having the HTML reports emailed to them, as they
    know they will definitely be able to view them, rather than trying to 
    figure out if they can connect to the company's network.  (The network
    can be a limiting factor both when traveling and working from home.)

    Learning Curve & Support

    Our SoC designers use vManager and it saves them a lot of time.  Prior to
    vManager, they had a hard time going in on their own and understanding
    where their coverage numbers were -- especially on the RTL code side.

    The tool does so many things that there is about a 2-week learning curve
    for a new user to get to the point where they are comfortable enough to
    begin using it on their own.  

    The good news is that it's a one-time effort.  And once they get there, 
    they don't want to go back to the old way.

    Cadence support is really good for vManager.  I have never stumped them
    on any problems I've had.  And they are very open to user suggestions.

    Conclusion

    I would recommend vManager 100% if your engineering team or company is
    at the point where managing data is now a bottleneck.

    For a small company running only one or two projects, it can be too 
    early to adopt it.  There is some upfront investment, and the company 
    is likely to have higher priority problems to solve first.  

    As the projects get bigger, and there's more data to manage, then
    vManager will add a lot of value to your company.  

    And for anybody who hasn't used vManager, or looked at it, since 2016
    or earlier, the incremental gains since are huge.  I was away from it
    for two years and I didn't even recognize it when I came back to it in
    2017.  There is a night and day difference in terms of what it's
    capable of now.

    And the good thing is there's still room for improvement.  It still
    has places to go.

    The next logical step is the machine learning part.  And once that's
    in, I don't see how anyone could get by without this tool.

        ----    ----    ----    ----    ----    ----    ----

    Cadence vManager runs regressions defined in a vsif-file and then
    analyzes coverage.  It also lets you define verification plans and
    associate those plans to attributes to track your functional coverage.
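
    For readers who haven't seen one, a vsif is a plain-text file of
    nested containers (session, group, test) with attribute/value pairs.
    A minimal sketch from memory -- the attribute names here should be
    checked against Cadence's vManager documentation before use:

```
// Hypothetical minimal vsif -- attribute names are illustrative.
session my_regression {
    top_dir: $ENV(WORK)/regress;    // where results are written
};

group smoke {
    test basic_bringup {
        count: 5;                   // run 5 seeds of this test
        sv_seed: random;
    };
};
```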

    We use it with Incisive/Xcelium and Virtuoso.  

    For Virtuoso we use Verifier to do our analog verification.  It does
    SPICE regressions for stuff like testing a voltage range in a circuit,
    or overshoots, etc.  We run it with Spectre, but it runs any
    OASIS-compliant SPICE like BDA AFS, Synopsys SPICE, Silvaco, etc.

    For Incisive/Xcelium, vManager does all the equivalent tasks but for
    digital testing.  But since Questa/VCS/Xcelium have conflicting
    coverage models, vManager realistically only takes CDNS SV simulators.

    Below is the vManager functionality we use.  

    Verification Plan

    We generate a vPlanx from our own vPlan XLSX spreadsheet and analyze our
    results against our vPlan.  

        - We currently read vManager's generated results files into our
          requirements management (RM) tool.  

        - With our next generation tool suite for RM, we want to use 
          vManager to automatically synchronize the results back to
          the RM.  (We haven't tested it yet).

    vManager also has an API; so far, we've only used it for our 
    check-the-checker flow to convert expected fails to a pass in the
    regression.
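
    That expected-fail conversion boils down to flipping the status of
    runs that are supposed to fail.  A conceptual sketch assuming a
    simple results dict, not vManager's actual API:

```python
# "Check the checker": a test *expected* to fail (to prove a checker
# fires) is reported as a pass when it fails, and as a fail when it
# unexpectedly passes.
def apply_expected_fails(results, expected_fails):
    """results: dict test_name -> 'pass' or 'fail'."""
    out = {}
    for test, status in results.items():
        if test in expected_fails:
            out[test] = "pass" if status == "fail" else "fail"
        else:
            out[test] = status
    return out

results = {"t_ok": "pass", "t_neg": "fail"}
print(apply_expected_fails(results, {"t_neg"}))
# {'t_ok': 'pass', 't_neg': 'pass'}
```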

    Launching Regressions

    We use vManager to launch regressions and re-run fails; we can use the
    same settings for both.  We do not need different/additional runs to 
    collect additional debug data.

    The tool has an interface with our LSF resource manager.

    Failure Analysis

    As part of our daily regression analysis, we use vManager to collect 
    warning & error messages from logfiles and categorize them for analysis.
    It sorts the fails by time and provides the seeds for detailed analysis
    to the engineers.

         Regression analysis time savings: ~10%

    Coverage Refinement 

    We primarily use vManager's coverage refinement to exclude parts of the
    design that are not used in the current configuration.  These refinement
    files are very useful for documenting our coverage exceptions.

         Regression analysis time savings: ~5% 
                         
    Coverage Optimization

    vManager checks which tests contribute most to our coverage goals, and 
    then increases executed seeds -- or adds more randomization to the 
    tests/sequences that are not performing as expected in terms of 
    coverage.  

         Regression analysis time savings: ~2% to ~5%
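
    One way to picture this optimization: rank tests by the coverage
    bins only they hit, then give more seeds to the strong contributors
    (or rework the weak ones).  The data and scoring below are
    illustrative, not vManager's actual algorithm:

```python
# Score each test by the number of coverage bins that only it hits.
def unique_contribution(test_bins):
    """test_bins: dict test -> set of bins it covers."""
    scores = {}
    for test, bins in test_bins.items():
        others = set().union(*(b for t, b in test_bins.items() if t != test))
        scores[test] = len(bins - others)
    return scores

test_bins = {"t1": {"a", "b", "c"},   # hits "a" uniquely
             "t2": {"b", "c"},        # nothing unique
             "t3": {"c", "d"}}        # hits "d" uniquely
print(unique_contribution(test_bins))  # {'t1': 1, 't2': 0, 't3': 1}
```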

    Coverage Aggregation & Analysis 

    There are multiple features here that we find useful: 

       - vManager's GUI merging of coverage for several regression
         sessions.

       - Detailed coverage analysis, including covergroups, toggle, 
         block, expression, assertion, FSM.  It's our main tool for
         code coverage.  

       - Defining the collection and sampling across all our simulations
         in one system.  (This has good potential to be used with machine 
         learning speed ups.  ML could prioritize the found patterns by
         severity.  It's not doing that now, but we see it coming.)

    HTML Reports / Webpage

    We use its HTML reporting to share regression and coverage results.

    Room for Improvement

       - The vManager verification plan generation and updates were not
         that straightforward for us when we tried Cadence's "vPlan
         authoring" a couple of years ago.  Synchronizing the plan with
         PDF updates was work-intensive.

       - It would also be nice to be able to generate vPlanx files (XML
         schema or DTD definition would be nice to have for file-based
         flows).  Although I'm not 100% sure how the linking to 
         requirement management tools like Doors would work.  

       - We use vManager with Jenkins.  However, the Jenkins plug-in is 
         not working with our vManager setup at the moment, so it took 
         us a lot of effort to get everything running.

    Conclusion

    I'd strongly recommend vManager for regression execution and coverage 
    analysis.  Being able to map from existing coverage to an existing 
    verification plan is easy and very useful.  

       - I'd estimate that vManager saves our team 10% of our regression
         and coverage analysis effort now that we've had it for a couple
         of years.

       - I would say vManager saves us 1/2 an hour to 1 hour per project
         engineer per day.  

    Note that not everything we want works out-of-the-box with vManager;
    our setup cost per project is roughly 1 to 2 man-weeks.

        ----    ----    ----    ----    ----    ----    ----

    My current company uses vManager and we are happy with it.  To give
    some context on how much it has evolved:

       - Years ago, in my prior company, I owned our regression 
         management methodology/tool and Cadence pitched us eManager 
         (a predecessor to vManager).  

       - My reaction back then was:  "Why the heck would you even want 
         to use this?"  It seemed to me to be very cumbersome and limited
         in what it could do, compared to what we had at the time.  We 
         didn't buy eManager.  

       - Then, at my next company, we also used an in-house tool to
         do our regression management.

    So, although I agreed to look at vManager when someone on my team at my
    current company suggested it, my memories from eManager weren't that
    great.

    But now that we've used vManager for a while, it's a no-brainer.  It's
    super flexible.  We just retrofitted it to our own internal tool that 
    launches our simulations, and we didn't even need to change anything.  

    My attitude has now changed to: 

        "Why would you manage your own regression tool when you can get 
         this tool to do it all for you?" 

        "Even if initially you only use it for regressions, later you 
         can use all its extra features/functions."  

    There are still things that Cadence can improve, such as the tool's 
    pre-processor and error messages, or providing easier advanced filtering
    options, but I certainly would still recommend vManager.

        ----    ----    ----    ----    ----    ----    ----

    We've used Cadence vManager for about 10 years for verification 
    planning and metric analysis.  It serves as our central portal for our
    entire team's verification efforts -- from planning to execution to 
    reporting and closure.  The vManager of today is vastly different than
    the vManager of 2016.

    We use vManager to manage our verification efforts for Xcelium,
    Palladium, Protium, JasperGold, and some of our internal tools.

    vManager has something called "vPlan" which we use to develop our 
    verification plan.  Part of this is:

       - vPlan Authoring, which lets us connect specific metrics, such 
         as test pass/fail and coverage, to a specific feature in our vPlan.
         We can do drag and drop mapping of our verification plan to 
         verification tools.  It also supports team collaboration on 
         the plan.

       - Traceability to the high-level requirement management systems 
         that we use for our ISO 26262/IEC 61508 projects.  This required 
         a lot of in-house effort to get working, though we were able to
         use Cadence's open API to write connectors to our internal
         requirement management SW.

       - vPlan Analysis, which we use to analyze results against the 
         vPlan to determine which features are well tested and which 
         aren't.  

       - vPlan Spec Annotation.  We load a spec in PDF format and
         connect sections (text/pictures) to specific items in the plan;
         vManager maps the connections between specification and 
         verification results.  Specification portions that change
         are highlighted and can be flagged for attention.

         We use this, but more recently, our teams have been shifting
         towards using a 3rd party requirement management tool and
         linking our vManager verification plan to that, rather than
         to a PDF spec.

       - Coverage Refinement.  We also use vManager's coverage refinement
         to eliminate design & testbench portions that are not needed,
         including automatically excluding connected coverage.  

          This is absolutely necessary for any team that wants to close
          coverage before release.  It lets you manage/document exceptions.

         Room for improvement: I'd like Cadence to offer a built-in 
         workflow to enable review of the refinements/exceptions at the 
         end of the process. (Instead of using the tool in the same 
         mode as we do during the actual coverage analysis.)

       - Coverage Aggregation & Analysis.  This is another critical 
         capability for closure.  We use it regularly for:  

           - Automated coverage merging across tests & regressions
  
           - Coverage analysis for different types of coverage

           - Data collection & sampling across all simulations 

          Our shift to using a tool like vManager as a central portal to 
          all regression data ~10 years ago was important.  Today, I see
          these aggregation/analysis features as standard capabilities
          that any verification management tool would need to offer.  

         We use vManager to aggregate coverage across traditional SV
         simulation (Xcelium) and formal analysis (JasperGold), to give 
         a more complete view of our verification progress and let
         us (excuse the marketing term) "shift left" on coverage closure.

         We now have a more holistic view rather than disjoint tools.  

          Recently, we've been looking at adding Palladium/Protium runs
          into vManager.  We're not there yet, but our management likes
          that this will be possible -- and that it's a fully vetted
          extension to our overall vManager use.

       - HTML Reporting/Dynamic Web Reports.  In general, vManager's 
         reporting features work very well.  

          Room for Improvement: I'd like to see its verification plan and
          final annotated vPlan report made more shareable for review.  
          Its dynamic web reports are nice, but we'd like to be able to
          send other team members/management a link to a specific section.

    I'd recommend vManager to anyone already invested in CDNS verification 
    tools.  It would be of less value to those with a lot of VCS or Questa
    licenses given the lack of interoperability of coverage analysis
    between vendors.

        ----    ----    ----    ----    ----    ----    ----

    We use Cadence vManager as our verification and regression cockpit.  We
    use it to manage and control our Xcelium regression runs, collect 
    results, and track pass/fail progress over time.  We also use it to
    track our progress against our functional coverage goals.  

    All-cloud verification environment 

    First, it is worth noting that our entire verification environment is in
    the AWS cloud.  We have a full environment there -- Xcelium, vManager,
    our verification IP, etc.

    Cadence maintains the full environment, so we don't need to install the
    software or update it.  
 
    Central database 

    vManager lets us store all the relevant simulation results in a 
    central database.  Having our entire team all contributing to the same
    database makes a big difference.

    For example, we can split the coverage analysis across engineers.  Then
    later we merge the coverage to get the overall coverage number.

    Since we have a smaller team and one development site, we give everyone
    the same access.  For larger, more geographically distributed teams, 
    Cadence also has permissions/restrictions you can deploy.
 
    Failure Analysis 

    vManager's set up for analyzing passing/failing testcases is powerful.

       - If a testcase fails, we can just click to rerun it with 
         different configuration for more debugging detail -- including
         waveforms.

       - We can also do an automatic rerun, with different parameter 
         settings.  For example, we can rerun 10 failing tests while 
         generating waveforms.  
 
    This is very powerful and keeps us from exceeding memory limits.  
                    
    Functional & Code Coverage Analysis

    Cadence has quite a good integration for coverage analysis.  Based on
    the testcase results, you can:

       - Merge specific runs.

       - Do a top-down analysis of the coverage to see where you may 
         have holes.

       - Navigate through hierarchy down to sub-blocks to see where 
         coverage doesn't meet your expectations.

       - Refine your coverage.

    This feature definitely saves us engineering effort to improve our 
    coverage over time.  I would estimate we get a 10%-15% time savings,
    as we also have some scripting, though not a lot.  
 
    Reporting

    We also use vManager to generate static HTML reports.  Cadence has 
    enhanced the reports so you can represent data trends and changes over
    time.

    We can look at:

       - Coverage goals -- is this progressing over time?

       - Regressions -- are the number of failed simulations going down 
         over time?

    We can generate reports at the project level, as well as for sub-systems
    and sub-blocks.

    This feature is important so we can see if we are making progress with
    our verification plan.  Otherwise, we'd have to do this manually or 
    write scripts for it.

    I would recommend vManager to other teams, as it is something our 
    team needs for managing our verification plan in terms of regressions 
    and functional coverage.

    At this point it would not be efficient to try to develop something
    new in-house from the ground up to do this.

        ----    ----    ----    ----    ----    ----    ----

Related Articles

    Cadence vManager saves 1 hour/day/engineer is Best of 2020 #2a
    CDNS Xcelium-ML gets 3x faster regressions is Best of 2020 #2b

Copyright 1991-2024 John Cooley.  All Rights Reserved.