( DAC'18 Item 2a ) ------------------------------------------------ [12/14/18] 

Subject: CDNS Perspec is crushing it in the PSS Wars is #2 for Best of 2018

THE 800 LB GORILLA: Now in its 2nd year, when it comes to the PSS Wars, the
hard data shows CDNS Anirudh's Perspec is now clearly the dominant player ...

  PSS Tool Word Counts in DeepChip "Best of" 2017 vs. 2018 EDA User Survey:

  Cadence Perspec 2017: ############################### (3,097 words)
  Cadence Perspec 2018: ##################################### (3,712 words)

   Breker TrekSoC 2017: ############### (1,476 words)
   Breker TrekSoC 2018: #### (381 words)
                             
    Mentor InFact 2017: . (24 words)
    Mentor InFact 2018: #### (437 words)

 SNPS Mystery PSS 2017: ## (188 words)
 SNPS Mystery PSS 2018: . (18 words)

Or perhaps another way of saying it is Perspec is growing, while Breker is
shrinking?
         
         
Regardless, the rumored Perspec logo count is impressive: Qualcomm, Samsung,
Intel, ARM, ST, Ericsson, Infineon, Mediatek, Renesas -- and my spies tell
me that AMD just switched from Breker to Perspec recently, too!  Ooops!

(But the world is NOT 100% Perspec.  My spies also say HiSilicon, IBM, and
Cavium *renewed* their Breker licenses.  This PSS War is nowhere near over.)

        ----    ----    ----    ----    ----    ----    ----
        ----    ----    ----    ----    ----    ----    ----
        ----    ----    ----    ----    ----    ----    ----

      QUESTION ASKED:

        Q: "What were the 3 or 4 most INTERESTING specific EDA tools
            you've seen this year?  WHY did they interest you?"

        ----    ----    ----    ----    ----    ----    ----

    Cadence Perspec

    Below is my input on trying out Perspec, once the setup was done:

        - It was easy to create the scenarios

        - Debugging was easier.  You could look at a failed task in the 
          graphic view.

        - Finding coverage holes was good.

    Next, I need to know how much setup is required for Cadence Perspec.

    I've tried to use Breker for SoC verification and it was too much 
    investment to set it up.

        ----    ----    ----    ----    ----    ----    ----

    We use Cadence Perspec and are expanding our usage.  

    Our main growing application is HW verification at the top level,
    building data models and concurrency tests.  

    We also keep our interconnect-level verification as a legacy application.

        ----    ----    ----    ----    ----    ----    ----

    Cadence Perspec provides a test generation framework for I/O device
    testing across different platforms (simulation, emulation, silicon).

    We did our evaluation based on the ability to use it across simulation,
    emulation, and silicon.  

    That's its most important feature for us.

        ----    ----    ----    ----    ----    ----    ----

    We did an eval of Cadence Perspec with good results. 

    Our chip is an SoC, with more than 10 ARM cores that run in parallel in 
    any combination and talk to the hardware.  

    This makes it extremely challenging for us to verify our SoC:

        - If we need to write firmware, we must think of all the 
          randomization.  When we write directed tests, how do we know 
          they cover what we want?

        - We must verify our SoC across multiple platforms:  Simulation,
          Emulation and FPGA.

    Based on our evaluation so far, we are hopeful that Perspec can handle 
    these SoC-level challenges.  

        1. Perspec generated more tests in a shorter time than our 
           manual approach.  

            - Manual test development.  Normally, this is linear.  

              In one manual test development situation, we covered 
              50 scenarios in 20 weeks.

            - With Perspec.  The initial 'grunt' work to create/capture 
              our constraints, plus our model development, took 10 weeks
              total.  After that setup was done, it was fast to generate 
              the tests, i.e.  we generated 50 scenarios in the 
              following 2 weeks.  
              
              So, with Perspec, we covered 50 scenarios in only 12 weeks.

        2. Perspec is more productive for us.

            - The model developer knows the SoC architecture in detail 
              and the test writer knows the use case in detail.

            - Capturing the constraints and models, and using them across
              different verification platforms allows both engineers 
              to be more productive.

        3. Perspec's GUI makes your models like a compressed library.  
           It's very visual -- you can drag and drop the various bits 
           and pieces of the complete scenario.  This also reduces 
           human error.  

    Perspec uses constraint-based input, which is now the Accellera standard 
    (vs. graph-based input).

    The constraint-based input has a very positive impact on SoC 
    verification.  If you just do block level verification, you have a clear
    boundary.  For example, if two blocks A&B communicate, they must follow 
    a specific protocol.  

    You can think of the blocks as "atoms".  The "actions" are what you can
    do to the atoms.  

        - You can apply multiple actions to an atom.  Running multiple
          actions on one atom is not a big deal.

        - However, when you must bring multiple atoms into the picture,
          their actions overlap.  E.g.  8 atoms and 12 actions.  A 
          constraint-based approach is more suited for this, because 
          when you connect multiple atoms, your constraints are stricter.

    The constraints matter a lot.  With Perspec, you capture your 
    constraints in a table format in a central location (a CSV file).  It
    helps you to come up with better coverage, including corner cases.  

    It also reduces human error.
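
    To make the atoms-and-actions idea concrete, here is a rough sketch in
    the PSS DSL; the component, action, and field names below are
    hypothetical, not taken from any real Perspec model:

        component dma_c {              // a block -- an "atom"
            action xfer_a {            // something you can do to it
                rand bit[31:0] src_addr;
                rand bit[31:0] dst_addr;
                rand bit[15:0] size;

                // Connecting atoms makes the constraints stricter,
                // e.g. a transfer must be legal for both endpoints:
                constraint legal_c {
                    size in [1..4096];
                    src_addr != dst_addr;
                }
            }
        }

    A tool like Perspec then solves these constraints to pick legal
    combinations of actions across all the connected atoms.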

        ----    ----    ----    ----    ----    ----    ----

    Perspec + VCS

        ----    ----    ----    ----    ----    ----    ----

    I had a chance to use Cadence's Perspec System Verifier at DAC and here
    is my short feedback.  It's mostly on Perspec, but some points are on 
    the Portable Stimulus standard in general.  

    Perspec is Cadence's Portable Stimulus offering - the tool analyzes your
    portable stimulus models and generates tests for the environment you are
    interested in.  

    For those who don't know yet, Portable Stimulus is now an official 
    standard from Accellera.  I like to think of the standard as providing
    the ability to specify two complementary parts:

        - Test intent = scenarios you want to verify.  Think: 
          constraints, coverage...

        - Test realization = underlying services that scenarios are 
          built on.  Think: hardware/software interface, interaction 
          with test-bench components...

    Perspec is fully PSS 1.0 compatible -- which means it supports both the
    DSL and the C++ input approaches described in the standard.
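
    As a rough illustration of that intent/realization split, here is a
    hedged PSS DSL sketch; the timer_set_period() call is invented for
    illustration:

        component timer_c {
            action prog_timer_a {
                // Test intent: constraints define the legal scenarios
                rand bit[31:0] period;
                constraint period_c { period in [100..10000]; }

                // Test realization: the underlying service the scenario
                // is built on, here a target-template exec emitting C
                exec body C = """
                    timer_set_period({{period}});
                """;
            }
        }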

    Some Perspec features worth mentioning:

        - After the user solves the PS model, Perspec shows the 
          scenario(s) as a graph - making it very easy to see what the
          tool has actually generated.  

        - The user can also see the input coverage and generate 
          additional scenarios as needed.  

        - Perspec generated code is instrumented with enough debug 
          hooks to integrate with Cadence Indago.

        - Portable stimulus models can be used to verify designs across
          different environments, such as a Virtual Platform, Emulator,
          or real Si ("horizontal" portability).  Since Perspec 
          generates bare-metal C code that executes on a control 
          processor (real or virtual), all these environments can be
          targeted.

    While the PS models can also be used to verify designs at 
    block / subsystem / system level ("vertical" portability), personally,
    I think the sweet-spot for PS is with complex systems with lots of
    interaction between components that can be either in hardware or 
    software.  

    Enabling users to think in terms of scenarios (as opposed to 
    traditional CDV) is very effective for verifying such complex 
    systems.  On top of that, being able to specify the hardware/software
    interface and generate the low-level drivers will be very powerful.
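
    As a sketch of what "thinking in scenarios" looks like, a compound PSS
    action can schedule other actions; this builds on the hypothetical
    timer_c sketch above, and dma_c here is equally made up:

        component dma_c {
            action xfer_a { rand bit[15:0] size; }
        }

        component soc_c {
            timer_c timer;
            dma_c   dma;

            // One scenario: program the timer while four DMA transfers
            // run -- the tool solves the legal orderings and values
            action stress_a {
                activity {
                    parallel {
                        do timer_c::prog_timer_a;
                        repeat (4) { do dma_c::xfer_a; }
                    }
                }
            }
        }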

        ----    ----    ----    ----    ----    ----    ----

    We've been using Perspec simply for random parameter testing so far, but 
    now we're working on making the best use of Perspec's benefits, that 
    is, generating random test scenarios.

    To enjoy those benefits, the abstraction level of the models matters a 
    great deal, and it's not easy to know the appropriate level for various 
    test scenarios.

    Also, we would like to see some more features on Perspec:

        - Eliminating the gap between the SV level and the C level

        - Extracting a quantitative indicator of route coverage

        - Visualizing the hierarchical structure of macro actions and tokens

    I think that hardware-component-based modeling will be practical for 
    reuse.  It does require strong modeling skills to create models from 
    scratch; I hope the PSS standard can break through that barrier.

        ----    ----    ----    ----    ----    ----    ----

    I evaluated Cadence Perspec.  You enter test scenarios described in PSS
    and the tool outputs C/C++ or SV tests; Perspec also lets you edit your 
    portable stimulus.

    I'm interested in porting stimulus between pre-silicon verification and
    software testing.   It's a higher-level abstraction of the configuration
    and operation scenarios.  We use Cadence Xcelium, Palladium, Protium, 
    but we haven't yet integrated Perspec into our current production 
    workflow.  It is on our roadmap, but we haven't done so due to schedule
    considerations.  

    Below is my experience from my eval:

        - Perspec saves time because it allows the test writer to 
          abstract the use case at a higher level.  In terms of errors,
          it's just like raising the abstraction level in any 
          programming language - the errors just move from hand-written
          code into the reused code.  
 
        - Debug and coverage logging for the generated C tests have the
          same pros and cons as for traditional verification coverage.  
          The coverage will be as good as how you defined the buckets.

        - Both Perspec's constraint-based input and a graph view are 
          required to generate a real-life scenario.  The graph is just
          a subset of constraints -- to be exact, it's a sequential 
          constraint.  (See the sketch after this list.)

        - In terms of constraint solving, Perspec is a turbo version of
          the good old Specman igen engine.
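
    A minimal sketch of that sequential-constraint point, in PSS DSL
    terms; write_a and read_a are hypothetical actions:

        component io_c {
            action write_a { }
            action read_a  { }

            action seq_a {
                activity {
                    do write_a;    // the graph edge says this must
                    do read_a;     // complete before this one starts
                }
            }
        }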

    New Perspec updates since my eval:

        - Perspec's new GUI is much better compared to a half year ago.
          It now has the same GUI style as the new Cadence tools. It is
          okay for visualizing small scenarios, and not as good for 
          visualizing extremely complex scenarios.

        - Perspec's new integration with Cadence vManager (for combined
          coverage analysis) fits nicely with the pre-silicon 
          verification work flow.   However, our post-silicon SW testing
          won't use vManager.

    Cadence supporting the Accellera Standard is important.  There would be
    no way to get our company to adopt a portable stimulus tool if it did 
    not support the PSS standard.   We've learned to stay away from 
    short-lived proprietary products.

    If PSS can generate all possible legal scenarios, there is no more need
    to rely on random-constraint generation; the coverage result is just a
    progress tracking system.  In other words, it replaces the
    old-fashioned Excel spreadsheet that auto-fills from the test runs.

    No one really does these tasks manually anymore.  The real comparison 
    is doing it in PSS or some in-house ad-hoc system.   Our job is to test
    the chip -- not build tools -- and Perspec would definitely save us time
    building tools.

        ----    ----    ----    ----    ----    ----    ----

    My company uses Perspec for system level verification.  My input 
    below is based on attending a presentation -- I'm not a user.

    It helps us to cut down our verification efforts by automatically
    creating test scenarios which we previously did manually. 

    It also helps us because it reduces human error. 

        ----    ----    ----    ----    ----    ----    ----

    Cadence Perspec is a useful tool.  It helps us enhance verification 
    quality and reduce our test scenario development effort.  

    If a manager has vision, using Perspec can change the existing 
    verification flows.  It not only increases quality, but also enhances 
    company productivity in both the pre-silicon and post-silicon stages.  

    Unfortunately, Cadence charges for the tool license too early -- it's 
    hard to persuade our managers to put extra resources into a new 
    methodology before Perspec completely replaces the original working 
    model and shows its capability.

        ----    ----    ----    ----    ----    ----    ----

    Perspec offers a constraint solver over a regression.  This means that a 
    team can define a set of required functional coverage and solve, offline,
    that a regression, should it be run, will at least do XYZ.  This 
    capability is interesting, not principally for obtaining efficient 
    regressions or coverage closure, but for defining/refining regressions 
    offline.  

    Doing this kind of regression discovery offline is a hard requirement
    when the emulation, verification, and post-silicon validation teams have
    pre-existing work practices, system constraints, and business realities.  

    It seems, then, that using Perspec for offline regression definition 
    between these teams can uniquely enable cross-team, data-driven test 
    closure.  Some of the most expensive issues in SoCs span these teams, so
    a tool that tightens collaboration appears invaluable.  

    Porting stimulus between simulation and emulation via Perspec may run 
    into practical issues around controllability between SoC/subsystem 
    simulation/emulation, at least for first passes, before 
    emulation/simulation architectures are adjusted (or not) to meet these
    requirements.  

    It may be interesting to use Perspec to define a set of regressions for
    simulation, observe that consequential coverage (internal DUT events 
    that cannot be directly spelled out for Perspec) also happens, then 
    blindly reapply some of these regressions to emulation/validation 
    systems where visibility of consequential coverage is lost or expensive.  

        ----    ----    ----    ----    ----    ----    ----

    I use Cadence Perspec System Verifier -- a system modelling and test 
    generation tool.  Below is my feedback.

    TEST QUALITY

    The quality of the tests that Perspec generates depends on the quality 
    of the models that you use and the capabilities within the testbench
    that the tests are going to run on.  The higher the quality, the more 
    complex the resulting test, which can lead to long runtimes for an RTL
    simulator.  

    This is an area where Perspec has an advantage: tests with long runtimes 
    can be easily regenerated for use on an emulator or FPGA platform with
    little or no loss of test functionality or intent.  

    TEST COMPLETENESS

    The test completeness is also variable and dependent on the coverage 
    model attached to each model.  

    At the SoC or System level, traditionally there has not been an easy way
    to collect functional coverage.  This has made it difficult to tell when
    verification is complete.  

    The Specman-style functional coverage groups provided by the Perspec 
    environment go a long way toward solving this problem.  Now there is a way
    to track completeness or to answer the questions "Are we done yet?" and
    "Have we checked all functions and interactions thoroughly?" at the SoC
    or System level.  

    Before this, there was no easy way of answering those questions.
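
    For readers who have not seen one, here is roughly what a PSS-style
    covergroup attached to a model looks like; the names and bins are
    invented for illustration:

        covergroup xfer_cg (bit[15:0] sz) {
            size_cp : coverpoint sz {
                bins small  = [1..256];
                bins medium = [257..1024];
                bins large  = [1025..4096];
            }
        }

        component dma_c {
            action xfer_a {
                rand bit[15:0] size;
                constraint size_c { size in [1..4096]; }
                xfer_cg size_cov(size);  // sampled per generated action
            }
        }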

    TEST PORTABILITY

    Perspec's models are portable in three ways:

    1. IP to Block to SoC (vertical reuse)

    We are interested in "Portable Stimulus" from the IP level and multi-unit 
    level (what we refer to as cluster level) up to the SoC level.  We have 
    not yet integrated any of our IP testbenches into Perspec to allow for 
    IP to multi-unit to SoC portability, but we are getting ready to do this.

    2. Across Platforms (horizontal reuse)

    Each test encapsulates an "intent" which we would like to keep as we 
    port it across levels and platforms.  Perspec allows this to happen
    almost seamlessly.

        - We can write a test and run it at the cluster level to check
          a given function and then regenerate and run that same test 
          at the SoC level in an RTL simulator.  

        - When the test becomes too complex and takes too long to run on
          our RTL simulator "platform", we move the test to our emulator.

        - So, we debug and develop on our RTL simulator, and then we
          build in more complex intent which requires a faster platform
          such as an emulator.  

        - In fact, we now build tests and run them directly on our 
          emulator; only when we have issues we would like to investigate
          thoroughly do we run on the RTL simulator.  

    This ability to move horizontally, from platform to platform, without
    losing intent, is very powerful and makes us much more productive.  

    It allows us to make the best use of the advantages of each platform: RTL
    simulators offer high visibility into what is going on in the DUT but
    are very slow, while the emulator runs almost 6,000 times faster, but 
    offers lower visibility.  

    Although we are still in a trial phase, and running on RTL that was 
    verified using our old methods, we are able to see the huge productivity
    improvements we would have made if we had used Perspec -- especially in 
    the areas of stress, integration, power, and system verification.

    3. Across a family of devices (lateral reuse)

    Vertical reuse is not new, but different verification teams will do it 
    differently.  Each group may develop their own method, which may not 
    scale well to a different project or a different family of devices.  

    PSS will solve this problem by allowing a robust method of creating tests 
    which can be reused vertically across different levels, horizontally 
    across different platforms, and laterally across different derivatives 
    of a family of devices.

    Even with Perspec's current functionality, we've been able to achieve
    lateral reuse across different derivatives in a family of devices.  
    This is especially useful in multi-core devices where the lower 
    derivatives have fewer cores or configurations, all of which need to 
    be delivered in a timely manner.

    DEBUG AND COVERAGE LOGGING

    This is one of the impressive things about Perspec.  The generated 
    coverage and the simulated coverage correlate perfectly.  

        - If you generate 500 tests to give you 100% coverage, and all
          your tests pass in the simulation, you will have 100% runtime
          coverage without question. 

        - If some of your tests fail, then it depends on whether or not
          your list of tests was generated in an optimum way (using FILL
          or by iteratively tweaking constraints and incrementally 
          generating the tests).

        - If your generation is optimum, then even a single test 
          failure will lead to a hole.  There will be coverage holes
          that only the failing test can fill, but since it is failing
          it cannot cover the hole.  

        - If your generation is not optimum, and you have generated more
          than enough tests to hit coverage then you may still have 
          100% coverage even with a few tests failing.  You may want 
          to do this sort of generation if you do not have a robust 
          coverage model and want to encounter design behavior that has
          not been thought of or captured in the coverage model (This is
          the traditional approach).  

    Granted, we may simply not yet have encountered a case where the 
    generated test varies from the simulated test such that the intent of 
    the test is lost or the coverage collected differs.  We are still 
    learning about the limitations of Perspec, but so far so good.

    ADDITIONAL PERSPEC FEATURES

    Perspec's GUI is very helpful, allowing us to visualize the test in 
    different ways: scheduling, data flow, and overall function.  Perspec 
    allows us to view this information after generation and post simulation
    using the test log file.

    Perspec is integrated with Cadence Indago, but I have not used this 
    much.  I find I'm able to rely on the generated test, the 
    post-generation UML diagrams and the log file to do all the debug 
    necessary.

    Perspec handles constraint solving in the same way that Specman does but
    with a minimalist approach to reduce test count.  The constraint solver
    tackles constraints in the order they are read in.  Temporary 
    constraints can be specified using soft constraints which can be 
    overridden by a hard constraint.  A soft constraint will only be honored
    if the solver can come up with a solution, while a hard constraint must
    be honored, otherwise an error will be issued.  

    Perspec introduces the concept of a default constraint, which is like a
    soft constraint but must be honored if no other soft or hard constraints
    exist on the attribute.  The final, random value of each attribute is
    obtained from the solution space resulting from the amalgamation of all
    constraints up to the point of generation.  
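
    A hedged sketch of that precedence in the PSS DSL; 'default' is a
    standard PSS constraint form, while 'soft' as described above follows
    SystemVerilog-style semantics and may be a tool-level extension:

        component cfg_c {
            action cfg_a {
                rand bit[31:0] mode;

                // Hard constraint: must be honored, else an error
                constraint hard_c { mode in [0..7]; }

                // Default constraint: used only when no other soft or
                // hard constraint touches 'mode'
                constraint dflt_c { default mode == 0; }
            }
        }

    A later extension, such as an extend action cfg_c::cfg_a adding a
    hard constraint mode == 5, would simply override the default.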

    LEARNING CURVE & LIBRARIES

    If you have not used Perspec before, it can take a long time to develop
    the models and have them interact with each other inside a test.  The 
    number of combinations of events increases greatly for each model 
    included in your overall test.  Compared to a directed test where the
    number of combinations of events is very limited, the time it takes to
    implement a Perspec test can seem prohibitive.  

    This, however, improves with experience.  The more you know about 
    Perspec the faster you will be able to develop tests.  I recommend full
    training with lots of labs before even attempting to write a Perspec 
    test.

    To help reduce your set up time, Perspec has multiple libraries, such as
    ARM, low power, memory modeling, system modeling, coherency...  Of 
    these, we use the System Modeling library.  This library contains
    methods and actions considered to be commonly used in an SoC environment
    e.g. copy_data and move_data [from one location to another].  This 
    allows for shorter test development times.
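
    As a sketch of how such a library action might be traversed -- the
    copy_data action name comes from the text above, but its field names
    (like 'size' here) are assumptions:

        component mem_test_c {
            action mem_test_a {
                activity {
                    // traverse the library's copy_data action with an
                    // inline constraint on the (assumed) size field
                    do copy_data with { size in [64..4096]; };
                }
            }
        }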

    PERSPEC'S CONSTRAINT-BASED INPUT WITH GRAPH VISUALIZATION

    Perspec has a constraint-based input, where you textually describe the
    system resources, and it then uses a constraint solver to identify the
    legal scenarios, which can then be visualized as an annotated graph.  
    Perspec also allows input via a GUI, a "scenario builder", which gets
    internally translated to the constraint-based input.

    In my view, the constraint-based input is the best way to write your 
    stimulus, provided you have understood how the scenario can be 
    visualized.  The graph view is always there to double check what you 
    think has been implemented.  

    This is much like synthesis for RTL.  An experienced designer can look
    at RTL and draw out how the connections between the logic gates will look
    after synthesis, whereas an inexperienced designer may want to visualize
    the logic gates before implementing the RTL.

    ACCELLERA'S PSS STANDARD

    Cadence announced that they support Accellera's PSS standard.  A 
    standard always helps to collect the best ways of doing what needs to be
    done.  For example, we need to know how to write our IP use cases so 
    that they can be reused vertically at the cluster or SoC levels.  

    The standard will be the amalgamation of all the methods tried in the
    industry and should help us get it right the first time.

    It is my view that tools similar to Perspec are going to be the 
    mainstream in the next few years.  

    VERIFICATION NEEDS THIS PRODUCTIVITY BOOST

    Verification has needed a productivity boost for some time now.  The 
    greatest worry for project managers has always been: "How long is it 
    going to take to verify this part?"  

    The design flow is quite efficient these days, but traditional 
    verification is fraught with inefficiencies especially when it comes to
    coverage closure and planning.  

        - At the IP level, the complexity of current Specman "e" or 
          SystemVerilog Testbenches can sometimes have an unwelcome 
          effect on schedules.  

        - At the cluster or SoC level, the problem is usually with too
          little verification; bringing in an "e" or SystemVerilog 
          Testbench can make the problem worse by introducing unnecessary  
          complexity so teams usually stick to directed tests.  

    Perspec seems to solve all these problems of inefficiency and helps 
    manage complexity by allowing a sort of hybrid approach where the good
    parts of complex HVLs (e.g. coverage tracking) are kept, while the slow,
    compute-hungry aspects are replaced by parameterizable directed tests 
    with inbuilt scheduling and resource management.  

    This, coupled with the numerous ways in which the models can be reused 
    for generation at different testbench levels, on different platforms, and
    on different product derivatives, gives a huge productivity boost 
    compared to traditional methods.

        ----    ----    ----    ----    ----    ----    ----

    Cadence Perspec - an interesting idea, and what looks like a repurposing
    of the Specman "e" language.  It takes the advantages of "e" for random
    generation and test modification, and brings them to the system level.

    But it is a superset of the Portable Stimulus standard from Accellera; I
    wonder whether that will go the way of the UCIS standard, where each of
    the EDA Big 3 does its own thing, pays lip service to being compliant,
    and has no interest in driving it forward.

        ----    ----    ----    ----    ----    ----    ----

    Perspec

        ----    ----    ----    ----    ----    ----    ----

    CDNS Perspec

        ----    ----    ----    ----    ----    ----    ----

Related Articles

    User buzz on Siemens/Solido machine learning is #1 for Best of 2018
    CDNS Perspec is crushing it in the PSS Wars is #2 for Best of 2018
    Breker TrekSoc is slipping in the PSS Wars is #2b for Best of 2018
    Siemens/MENT InFact fails in the PSS Wars is #2c for Best of 2018
    Again Spies hint Synopsys is making a PSS tool, but it's slideware

