( DAC'20 Item 03d ) ----------------------------------------------- [05/21/21]

Subject: Ascent Lint "designer intent" and pre-submit is Best of EDA #3d

SPYGLASS NO MORE: Synopsys is very quietly migrating its SpyGlass lint users
over to "VC SpyGlass Lint".  While they are keeping SpyGlass in the name, it
is actually a completely new product.  The original SpyGlass is basically
being end-of-lifed!

My spies say Real Intent is seeing new Ascent Lint evals as a result.  Which
is no surprise -- anytime companies migrate their users to a new product,
the users take a close look at their other options.  It happened when Aart
tried to migrate his IC Compiler and Magma BlastFusion user base over to
ICC2; it created an opening for Anirudh's Innovus to be benchmarked.  Which
is why Prakash is seeing a sudden spurt in Ascent Lint evals.  (Oops, Aart!)

        ----    ----    ----    ----    ----    ----    ----

PRAKASH'S KILLER CUSTOMER SUPPORT: Beyond evals, the other thing that keeps
chip designers choosing Real Intent vs. SNPS/CDNS/MENT is the extraordinary
customer support they provide -- which creates intense customer loyalty.
(Notice how his customer support is cited whether they're talking about tool
set-up, benchmarking, selecting rule sets, etc.)

   "The tool set-up was relatively straightforward.  That allowed us, with
    our Real Intent AEs help, to become quickly productive with a very 
    short learning curve.  The Real Intent AEs really make for a positive
    customer experience -- they know the product well and were our best 
    advocate."

   "We compared Ascent Lint with Mentor, Blue Pearl, and SpyGlass a few 
    years ago.  We chose Real Intent and have been very happy with its
    performance, runtime, and support.  Real Intent's support is the
    best I have ever experienced in EDA."

   "This low noise differentiator comes from working closely with the
    Real Intent support engineers to find a rule set mix for us that
    matches our design style.  That level of engagement is not something
    that I find in general in the in the EDA industry."

   "I give Ascent Lint a thumbs up.  The thing that makes it stand out most
    to me is Real Intent's customer service.  Whenever we have a question or
    an issue, they are very responsive -- which is something we really 
    appreciate.  Prakash runs a tight ship there!"

   "I must give a shout out to Real Intent's phenomenal customer support."

   "Whenever we run onto an issue, Prakash's people treat us like we're
    Apple or Intel, when we're a tiny Tier 3 customer.  Whatever he sells,
    we'll buy just because we know his guys will make sure it works."

        ----    ----    ----    ----    ----    ----    ----

INFERRING DESIGNER'S INTENT: The second "new thing" that caught my eye was
a user reporting that Ascent Lint looked into the RTL context to infer actual
design intent -- and then made checks based on that larger context.  Whoa!
   "What we really use Ascent for is to identify where the designer's
    intent deviates from his/her code.  It's able to look at a particular
    structure and say, 

       "I see what you're doing, but this is what's actually going to
        happen.  Is that really what you want?"

    It could be something as subtle as a width mismatch, such as trying to
    assign a 64-bit quantity with only 62 bits and discovering that two bits
    are missing.  Or inferred latches."

Wow!  How linting has changed!
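
To make that concrete, here is a minimal made-up SystemVerilog sketch of the
width-mismatch case the user describes (the module and signal names are
hypothetical, not the user's actual code):

    // Hypothetical example: a 62-bit value is assigned to a 64-bit target.
    // SystemVerilog silently zero-extends the top two bits -- the kind of
    // subtle mismatch a linter asks the designer to confirm.
    module width_intent_demo (
      input  logic [61:0] partial_sum,   // only 62 bits wide
      output logic [63:0] result         // 64 bits expected
    );
      assign result = partial_sum;       // RHS width 62 != LHS width 64
    endmodule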

        ----    ----    ----    ----    ----    ----    ----

NO LINT, NO SUBMIT: Something new came up this year: in addition to using
Ascent Lint for block- and chip-level linting sign-off, multiple companies
now require their designers -- from Day One of a chip -- to run and pass
Ascent before they can submit their RTL code into their repository.
  "No one can submit their code without running Ascent Lint.  The tool is
   fast, so it makes this feasible.  All designers must run and pass these
   checks (all fixes made) before they push "submit" and their code can go
   into the repository."

  "It's now even a requirement for our designers to run Ascent *before*
   any repository submit.  No submit gets into the repository without
   being lint clean first."

And the designers actually LIKE this policy because linting early like this
saves them TIME.
  "At first we thought they'd rebel at this manditory extra step, but our
   designers like this approach of identifying problems early -- when 
   they are really thinking about what's going on and where they are in
   the best position to address them.

  "My guys estimate Ascent cut our verification time by roughly 5 weeks
   because it found design bugs quickly thus reducing our simulation debug
   time.  Why?  We get initial results from it in seconds.

        ----    ----    ----    ----    ----    ----    ----
        ----    ----    ----    ----    ----    ----    ----
        ----    ----    ----    ----    ----    ----    ----

      QUESTION ASKED:

        Q: "What were the 3 or 4 most INTERESTING specific EDA tools
            you've seen in 2020?  WHY did they interest you?"

        ----    ----    ----    ----    ----    ----    ----

    Our designers use Real Intent Ascent Lint as part of our pre-submit
    procedure for our SystemVerilog RTL, where we do key RTL checks
    before we check the code in.

    There are several automatic procedures that each designer must launch
    and run on their design module before their code goes into our
    repository.

        - A linting check.  No one can submit code without running 
          Ascent Lint.  The tool is fast, so it makes this feasible.

        - We also have some quick functional tests.  

    All designers must run and pass these checks (all fixes made) before
    they push "submit" and their code can go into the repository.  

        ----    ----    ----    ----    ----    ----    ----

    We use Ascent to lint our RTL designs for syntax and coding errors,
    in the same way a writer uses a spell/grammar checker.  

    Ascent analyzes our mixed-signal neural network chip.  We start using
    it at the block level and apply it all the way up our design hierarchy
    to the full-chip level.

    Rule Set Configuration and Editing

    Real Intent's Ascent rule set is incredibly large and robust; it
    covers things we didn't even imagine.

    We needed to determine which rules were relevant for our design.

        - Our Real Intent AE helped find the most common configurations
          and relevant rules for our design style.  That AE relationship
          was crucial for us when we formulated our rules deck.

        - The rules can be chip-specific and/or coding-style-specific, 
          just as people have grammar patterns, like how Yoda in Star 
          Wars flips words around.

    Ascent Lint's Rule Configuration Editor lets us select/edit rules and
    turn off the irrelevant ones in our rule set.

      Rules Configuration.  It was very easy for us to use the GUI
      to configure a new ruleset.  

        - The GUI shows us the available syntax and check options that
          the tool supports, such as naming conventions, etc.

        - If we use a text editor to edit the ruleset, we must already
          know the syntax and options for each rule in advance to 
          properly configure a rule.  So, the GUI was a much easier
          way for us to do this!

      Rules Editing.  Real Intent produces a text ruleset file, so we can
      then use either the GUI or a text editor to modify it.  

        - We used an Emacs text editor to make simple tweaks and the GUI
          for more complicated modifications.

        - An example of something we might tweak in the rules: the
          configuration for the "deep macros check" (macros calling macros,
          aka nested macros), which might allow only 3 levels of nesting or
          over 10 (see the sketch after this list).

        - The macros expand out -- an analogy would be someone in line 
          at a movie theater letting more and more people cut in front of
          them.  As the macros expand, it can make debug harder, so we 
          use Ascent Lint to limit how deep we want our macros to go 
          before flagging a lint violation.
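
    As a hedged illustration of what "macros calling macros" looks like, here
    is a minimal made-up SystemVerilog sketch (the macro and module names are
    hypothetical) of nesting three levels deep -- the kind of depth a
    configurable "deep macros" rule would count:

        // `SCALE_8X expands through `SCALE_4X and `ADD_SELF -- three levels
        // of nested macros before reaching raw RTL.
        `define ADD_SELF(x)  ((x) + (x))                 // depth 1
        `define SCALE_4X(x)  (`ADD_SELF(`ADD_SELF(x)))   // depth 2
        `define SCALE_8X(x)  (`ADD_SELF(`SCALE_4X(x)))   // depth 3

        module macro_depth_demo (
          input  logic [7:0]  a,
          output logic [10:0] y
        );
          assign y = `SCALE_8X(a);   // expands 3 macro levels deep -- a
                                     // "deep macros" limit could flag this
        endmodule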

    Ascent's Violation Report

    We first generate all the messages/violations in the report and then
    filter out the unimportant ones.

    This is also where the Ascent Lint GUI is extremely helpful, as it
    categorizes the violations as errors, warnings, and info.  We can
    focus on the violations in that order and apply labels/tags to fix,
    defer, or waive them.

    Ascent Lint has detailed messages for the violations.  Prior to Ascent,
    our most problematic violations were related to combinatorial feedback.

        - Our prior linter would not identify the actual feedback loop;
          it would only warn that a combinatorial feedback problem existed.

        - Ascent gives us specific details to help find the feedback
          loops; for example, it identifies the feedback path in
          a nested conditional (see the sketch after this list).

        - We can also categorize violations as errors/warnings to be fixed.
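
    To show the kind of structure meant by "a feedback path in a nested
    conditional", here is a minimal made-up SystemVerilog sketch (the module
    and signal names are hypothetical, not this user's design):

        // Hypothetical combinational feedback: 'y' drives itself through
        // the inner else branch, with no register anywhere in the path.
        module comb_loop_demo (
          input  logic       sel_a,
          input  logic       sel_b,
          input  logic [3:0] din,
          output logic [3:0] y
        );
          always_comb begin
            if (sel_a) begin
              if (sel_b)
                y = din;
              else
                y = y + 4'd1;   // <-- y feeds back into its own driver
            end
            else begin
              y = 4'd0;
            end
          end
        endmodule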

    Waivers Migrate Upwards

    We can take waivers and reuse them at block level, block cluster level,
    and the chip level.  This is Ascent's ability to handle recursiveness
    (meaning when things point back to themselves).  

    Because we are able to specify waivers at the lowest block level and 
    reuse them, we avoid having to create new waivers as we traverse up the
    design hierarchy -- they just automatically migrate up through the
    hierarchy.

    Debug

    We use Real Intent's iDebug GUI for debug.  It's straightforward; it 
    lets us look at our source code in their window panel or their editor.
    We used both views.

        - For something quick, the window GUI is good.  

        - If we need a more exhaustive search or probing, we can use the
          integrated Emacs search features.

    Ascent Lint provides lots of details on the errors it finds.  For
    example, for a design problem where the right-hand side (RHS) of an
    expression is 30 bits and the left-hand side (LHS) is only 4 bits,
    Ascent will identify the bit-width assignment mismatch.

    TIP: Use their iDebug GUI to view the results.  The learning curve is
    short; once you are familiar with the GUI for one tool, it's the same
    for the other tools (e.g., Ascent Lint, Meridian CDC, and Meridian
    RDC).  This definitely helps our productivity.

    Ascent Lint More Bang-for-Buck Over Formal

    Ascent Lint's RTL linting static checks give us the best bang for the
    buck vs. using formal.  Our experience with formal (like JasperGold) is
    that we can spend a long time on analysis -- and still not get a
    definitive "Yes" or "No" answer, only an indeterminate response.

    We think highly of Ascent Lint and readily recommend it.

    The tool set-up was relatively straightforward.  That allowed us, with
    our Real Intent AEs' help, to become quickly productive with a very
    short learning curve.  The Real Intent AEs really make for a positive
    customer experience -- they know the product well and are our best
    advocates.

        ----    ----    ----    ----    ----    ----    ----

    Real Intent Ascent Lint checks for common code problems quickly and 
    easily.  It includes a set of built-in rules that can be individually
    enabled or disabled.

    We compared Ascent Lint with Mentor, Blue Pearl, and SpyGlass a few 
    years ago.  We chose Real Intent and have been very happy with its
    performance, runtime, and support.  Real Intent's support is the
    best I have ever experienced in EDA.

    The tool's inputs are the RTL code and the .f file list.

    We review Ascent's report as a text file and use the GUI for waivers.

        - The RTL bugs it reports are things like code syntax errors,
          missing resets, unconnected module ports, assignment-width and
          connection-width mismatches, logic loops, etc.  (The missing-reset
          case is sketched after this list.)

        - Left clicking on the report in the GUI opens the file at the
          point of the error.

        - Right clicking on a report in the GUI opens the waiver wizard
          and allows us to waive single reports or wildcard waive 
          multiple error reports.
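
    A minimal made-up SystemVerilog sketch of the missing-reset case called
    out above (the module and signal names are hypothetical):

        // Hypothetical example: 'count' is state with no reset branch, so
        // it powers up as X -- a common lint finding.
        module missing_reset_demo (
          input  logic       clk,
          input  logic       rst_n,    // reset pin exists but is never used
          input  logic       inc,
          output logic [7:0] count
        );
          always_ff @(posedge clk) begin   // no 'if (!rst_n)' branch here
            if (inc)
              count <= count + 8'd1;
          end
        endmodule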

    Capacity/Performance 

    We design FPGAs, and we run Ascent Lint at both the block level and the
    full-chip level.  It took one engineer only 1-2 days to get everything
    scripted to completely automate the linting process.

        - The largest design that we've run Ascent on was ~2M gates.  

        - Ascent's runtime to lint that was under 3 minutes.  

    Rule Set

    My team has a limited set of guidelines; Ascent Lint covers them.

    It's very easy to find the relevant rules, as their rules are all
    listed in the Ascent manuals.

    We can turn off irrelevant rules to reduce warnings.  We edit the policy
    file directly to do this.  (I've heard Real Intent has a configuration
    editor, but I've never used it.)

    Ascent has significantly lower noise than other SNPS/MENT/BP tools we
    tried.  Additionally, it filters/organizes the violations as Errors,
    Warnings, and Info better than SNPS/MENT/BP.  We review its findings;
    we focus on errors first, then warnings, then info.

    Waivers

    What we like about Ascent waivers:

        - Its GUI makes it easy to add comments to waivers -- this is
          critical for our design reviews.

        - Ascent Lint auto logs who waived it plus the time/date.

        - The waivers are portable, so we can use the same waivers at 
          the block-level and chip-level.

    In our experience, compared to SpyGlass, Mentor, and Blue Pearl, the
    Real Intent Ascent linter has the best QOR on the market, phenomenal
    support, fast run times, and a very low noise/false-alarm rate.

    I highly recommend it.

        ----    ----    ----    ----    ----    ----    ----

    We've re-adapted our team's design methodology *away* from linting only
    at later specific points in time.  Instead, we pushed our designers to
    run Real Intent Ascent Lint at the very first design entry -- and then
    have them constantly re-lint throughout the design process.  

    It's now even a requirement for our designers to run Ascent *before*
    any repository submit.  No submit gets into the repository without
    being lint clean first.  

    At first we thought they'd rebel at this mandatory extra step, but our
    designers like this approach of identifying problems early -- when 
    they are really thinking about what's going on and where they are in
    the best position to address them.

    "Design Intent" Checker

    We see Ascent Lint as an "intent checker": it checks whether our code
    (written to be simulation- and synthesis-friendly) is acting the way
    the designer was expecting.

    While Ascent Lint does catch things like syntax errors, which are 
    helpful, what we really use it for is to:

        Identify where the designer's intent deviates from his/her code.

    Ascent Lint is able to look at a particular structure and say, 

       "I see what you're doing, but this is what's actually going to
        happen.  Is that really what you want?"

    It could be something as subtle as a width mismatch, such as trying to
    assign a 64-bit quantity with only 62 bits and discovering that two bits
    are missing.  Or inferred latches.
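
    To illustrate the inferred-latch case, here is a minimal made-up
    SystemVerilog sketch (hypothetical module and signal names, not this
    user's code) where a combinational block quietly needs a latch:

        // Hypothetical example: 'grant' is not assigned on every path
        // through the block, so the old value must be held in a latch --
        // almost never what the designer intended, and a classic lint hit.
        module latch_infer_demo (
          input  logic       req_a,
          input  logic       req_b,
          output logic [1:0] grant
        );
          always_comb begin
            if (req_a)
              grant = 2'b01;
            else if (req_b)
              grant = 2'b10;
            // missing final 'else grant = 2'b00;' --> latch inferred
          end
        endmodule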

    To do all this, Ascent's inputs are rule sets and the design source
    code.  It reports all the rule violations as a list of warnings
    and errors.

    It's a Productivity tool, NOT a Sign-off tool

    Even though Ascent is our #1 most widely used tool in our EDA tool set,
    and all our code must be Ascent lint-clean to even submit into the
    repository -- we do NOT consider Ascent Lint to be a "sign-off" tool.

    This is because we don't use it to catch every possible error.  One
    engineer may have an exotic lint escape that they want to have a
    checker for.  But there are times when that checker is too expensive
    from a runtime or memory point of view -- so we don't need to include
    a check if it's too exotic -- such as a once-in-20-years occurrence.

    That's not an appropriate "cost" for all our designers who are running 
    Ascent Lint constantly.

    So, we are cognizant of the noise level for our rule set, and not just 
    the "completeness".  We have a bit of a luxury if we don't use Ascent 
    Lint as a sign off tool.  We can optimally balance our signal to noise
    ratio if we don't require it to check everything exhaustively.

    Ascent's checks usually overlap with something else.  For example,
    simulation will look at patterns and catch many of the same issues, but
    it does that in a much less efficient -- and more expensive -- way.

    We ran an informal internal study that looked at the amount of time it
    took us to fix a bug using Ascent Lint vs. during simulation.  In some
    cases, it cost 100X as much -- in machine time and man-hours -- to find
    and debug an issue in simulation versus finding it earlier during RTL
    linting.

    Ascent lets us find issues in simulation, synthesis, CDC, DFT, etc.
    during design entry, where they're easiest to find and fix, before they
    infect more things downstream.

    TIP: We've found that CDC analysis tools also catch some of the
    same errors that Ascent does.

        - However, once again we must wait for somebody else later on
          down the process to run Meridian CDC.  By then it may take
          several iterations to get a bug fixed versus finding and
          fixing it closer to design entry.

        - Of course, it's not a complete overlap, as Meridian CDC finds
          other errors.

    So, we view Ascent Lint as a productivity-enhancement tool rather
    than a sign-off tool.

    We make this distinction because it's an art form to get the rule set
    to balance with the best signal to noise ratio -- and we have more 
    flexibility if we let go of trying to catch absolutely everything.

    This low-noise differentiator comes from working closely with the
    Real Intent support engineers to find a rule set mix for us that
    matches our design style.  That level of engagement is not something
    that I find in general in the EDA industry.

    Our Real Intent AE will come to me and ask me:

        "How's this check working out for you? How can we improve it?"

    They are pulling at the same time we're pushing, which leads to a good,
    curated rule set that matches our design methodology.  We get into 
    philosophical discussions about not just what is the best code for our
    current design, but that this check is trying to protect our design 
    from changes that occur in subsequent iterations.

    We are happy with Real Intent Ascent Lint; it fits well with our
    methodology.  Plus, I really like Real Intent's level of support and
    attention.

        ----    ----    ----    ----    ----    ----    ----
 
    Real Intent Ascent Lint is a good, friendly RTL linting tool.  We use it
    as a classic front-end code cleaner -- usually before synthesis.  It
    also often finds issues in our netlists even after synthesis.

    Ascent runs fast and its reports are friendly and clear.  

    Elaboration usually runs for only a few minutes; on large datapath
    designs it may take a long time.  Here are our run times and server
    memory usage for our larger designs.

        Block 1 -- Control block

           Size: 54K instances (synthesis result)
        Runtime: 2 min
         Memory: 0.5 GB

        Block 2 -- a few memories and heavy datapath

           Size: 2.5M instances and a few memories (synthesis result)
        Runtime: 52 min
         Memory: 39.0 GB

        Block 3 -- 74 memories and heavy datapaths

           Size: 1.3M instances and 74 memories (synthesis result)
        Runtime: 24 min
         Memory: 35.6 GB

        Block 4 -- 69 memories and heavy datapaths

           Size: 1.1M instances and 69 memories (synthesis result)
        Runtime: 33 min
         Memory: 34.0 GB

    Ascent Lint's basics: 

        1. Rule set

           - Built-in rules for coding guidelines, including finding the
             relevant rules for our own coding guidelines.

        2. Noise reduction

           - Filtering results by errors, warnings, and info.  

           - Tuning the rules, such as allowing exceptions.

           - Turning off irrelevant rules using the rule configuration 
             editor to avoid flagging issues not of interest.  (Though
             we mostly use the same set of rules.) 

        3. Waivers

           - Add comments for a waiver or to be fixed/deferred.

           - Automatic time and date logging -- who did it and when.

           - Waivers - portable/reusable at block-level and chip-levels.

        4. Debug 

           - Custom views and cross probing to RTL design source 

    Ascent Lint also gives details on the errors, such as covering both 
    left-side and right-side widths; this is a bit complicated to use.

    SUPPORT: Real Intent's AEs assisted us during tool setup.  We needed
    some scripts and had an issue with recompiling libs and some issues
    with VHDL multi-library support, but their AEs resolved them all in a
    short time.

    I definitely recommend Ascent Lint.  Low noise and user friendly.  

        ----    ----    ----    ----    ----    ----    ----

    Ascent Lint makes sure our RTL code and/or design netlist doesn't have
    gross errors that would trip up other tools down the design flow chain.
    I run Ascent at the block level; it only takes minutes -- the runtime
    is never an issue.

    Our goal is to use Ascent to catch errors quickly instead of finding
    some 'stupid' mistake late that could have been caught and fixed earlier.

        - That saves us hours (or even days) of run and debug time.

        - It also saves us expensive PnR licenses.  

    Ascent helps us ensure good synthesis code.  Also, our PnR tools want
    clean input -- or it's pretty much garbage in, garbage out.

    I sometimes also run our PnR gate-level output through Ascent Lint.

        - There are certain design problems that can creep in and not get
          caught right away, so I want to verify that an output netlist
          from the synthesis or PnR tools is free of certain problems.  

        - I run Ascent Lint when I'm trying to debug something to help
          isolate problems.  I occasionally find something -- minimally
          it helps me rule things out.  For example, Ascent Lint checks
          for undriven outputs and inputs that never get used (sketched
          below).  These two gotchas can cause fatals when integrating
          at the chip level.
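
    A minimal made-up SystemVerilog sketch of those two gotchas (the module
    and signal names are hypothetical):

        // Hypothetical example: 'mode' is an input that is never read, and
        // 'status' is an output that is never driven -- exactly the kind of
        // finding that can blow up chip-level integration.
        module integration_gotchas_demo (
          input  logic [1:0] mode,      // unused input -- never read below
          input  logic [7:0] data_in,
          output logic [7:0] data_out,
          output logic       status     // undriven output -- no assignment
        );
          assign data_out = data_in;
        endmodule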

    I like Real Intent Ascent Lint better than Synopsys SpyGlass Lint, which
    I used at my prior company.  I was impressed with Real Intent's debug
    GUI -- it's very user friendly and makes it very easy to get a good
    overview of the health of our designs.  E.g., it shows:

        - all the rules we ran
        - all the waivers
        - which rules are waived
        - how many items are waived

    I recommend Ascent.  Its user-friendliness -- including how it reduces
    false positives/noise -- is why.  That is where the innovation in
    linting is.

        ----    ----    ----    ----    ----    ----    ----

    We run Ascent on our chips' SystemVerilog code.  We chose it a few
    years ago over other linting tools for a few key reasons.

        1. Ascent Lint was the fastest tool we tested.  That was
           important because we require that our designers run and pass
           the linting checks before they would be permitted to submit
           their code to our database.

        2. Ascent Lint had all the rules and the checks that we wanted.  

        3. I was very happy with Real Intent's customer support team.
           They are extremely responsive and give us good clarity on
           what they can deliver and when -- as well as why something is
           not feasible.

    We run all our synthesizable SystemVerilog code through Ascent, and it
    reports the code issues/errors.

    We also use it for stylistic checks, so having our designers use Ascent
    also helps us enforce better code writing.  When the tool calls out an
    engineer every time he/she writes goofy code, he/she sees it's clearly a
    waste of time to keep doing it.  It's a fast way to learn.

    Ascent basically takes two different things as input:

        - The actual design code -- in our case, SystemVerilog.

        - The configuration for how we want lint to run.  Prakash did
          this part well; it was straightforward to set up the Tcl files
          and was about as easy as I can imagine an RTL lint tool could
          be for this.

    The most challenging thing with any lint tool right now is its noise
    level.  We feel like with Ascent we've found a happy balance:

        - If a rule gives us 1000 failures, we turn it off.  If there 
          are stylistic rules that are very difficult to always follow,
          we'll turn them off to avoid getting a ton of false failures.

          We may even turn off some important rules like "type defs".  
          They can be dangerous, but if you flag every single one, it 
          becomes very noisy.  

        - The ideal rules only flag things that are actual errors.

        - Then you have the middle ground.  This is where a rule is 
          useful but might flag 30 false failures.  I.e., it's a little
          annoying, but we can easily write 30 inline waivers, so it's 
          acceptable.

    I don't notice duplicate reporting of the same error with Ascent Lint.
    That's important.

    Ascent Runtime

    Our designers must signoff with Ascent -- with no failures -- before
    they can submit their code.  Every single time.  That's why we really 
    cared so much about it running fast.  Designers don't want to sit
    around for an hour each time.

        - When we run our design on a 200-line module, Ascent Lint is 
          nearly instantaneous; it takes only seconds.  

        - And if we run it across a lot of 200-line files, it can run
          them in parallel, so we can get the result across all of them 
          in seconds (in concept, it may add a bit of time, but not
          anything our engineers see).  This is very valuable.

        - If we run it on our hardened floorplan, it might take a couple
          minutes, which is also fast.  

        - Only if we run it across the whole chip hierarchically does it
          take a couple of hours.

    We find 99% of design errors by running the separate files.  However, a
    few things may come up when we run the whole thing at once; for example,
    we have run the whole top-level at once and seen that a particular
    signal is never used.  We only do top-level runs at key milestones.

    I give Ascent Lint a thumbs up.  The thing that makes it stand out most
    to me is Real Intent's customer service.  Whenever we have a question or
    an issue, they are very responsive -- which is something we really 
    appreciate.  Prakash runs a tight ship there!

        ----    ----    ----    ----    ----    ----    ----

    My guys estimate Ascent cut our verification time by roughly 5 weeks
    because it found design bugs quickly, thus reducing our simulation debug
    time.  Why?  We get initial results from it in seconds.

    We do RTL linting with Ascent after we do our RTL design.  We then
    iterate to refine our RTL and the linting waivers.  The iteration
    cycles for linting, debugging, and adding constraints or waivers are
    very short.

    We always start using Ascent Lint with block-level runs.  We do that
    98% of the time when we lint.  Only later do we do full-chip-level
    sign-off.

        ----    ----    ----    ----    ----    ----    ----

    I must give a shout out to Real Intent's phenomenal customer support.

        ----    ----    ----    ----    ----    ----    ----

    Whenever we run into an issue, Prakash's people treat us like we're
    Apple or Intel, when we're a tiny Tier 3 customer.  Whatever he sells,
    we'll buy just because we know his guys will make sure it works.

        ----    ----    ----    ----    ----    ----    ----

    We use Ascent Lint in our VCS / Palladium flow to keep both Synopsys
    and Cadence honest.

        ----    ----    ----    ----    ----    ----    ----

    It's our policy to use outside 3rd party SW to check our main flows.

    That's why we use Ascent and Questa to check our VCS/PT/DC/FC flow.

        ----    ----    ----    ----    ----    ----    ----
        ----    ----    ----    ----    ----    ----    ----
        ----    ----    ----    ----    ----    ----    ----
SYNOPSYS SPYGLASS LINT USER COMMENTS

        ----    ----    ----    ----    ----    ----    ----

    We're looking at all the linters now that Synopsys did a massive
    rewrite of SpyGlass.

        ----    ----    ----    ----    ----    ----    ----

    I nominate SpyGlass to keep my SNPS FAE happy.

    He's a good guy who's supported me since the Ajoy Atrenta days.

        ----    ----    ----    ----    ----    ----    ----

    Hey, John,

    Do you have any user benchmarks comparing VC SpyGlass Lint vs.
    original Spyglass vs. Ascent Lint vs. Questa AutoCheck?

    We know that AutoCheck is a formal ABV app, but it's the
    closest thing that Mentor has to a linter.

        ----    ----    ----    ----    ----    ----    ----

    Synopsys Atrenta Spyglass

        ----    ----    ----    ----    ----    ----    ----

    Did Cadence discontinue their HAL linter?

        ----    ----    ----    ----    ----    ----    ----

    Verdi and SpyGlass for us.

    We've easily done over 20 full production SoCs using those
    two over the years.

        ----    ----    ----    ----    ----    ----    ----

    We get Spyglass and Verdi as part of our SNPS bundle every
    three years.

        ----    ----    ----    ----    ----    ----    ----

    VC SpyGlass Lint

        ----    ----    ----    ----    ----    ----    ----

Related Articles

    Real Intent low noise/multimode Meridian RDC gets the Best of 2020 #3a
    Users choosing Meridian CDC over Spyglass CDC gets the Best of 2020 #3b 
    Real Intent Verix CDC true multimode analysis gets the Best of 2020 #3c 
    Ascent Lint "designer intent" and pre-submit gets the Best of EDA #3d

"Relax. This is a discussion. Anything said here is just one engineer's opinion. Email in your dissenting letter and it'll be published, too."
This Web Site Is Modified Every 2-3 Days
Copyright 1991-2024 John Cooley.  All Rights Reserved.
| Contact John Cooley | Webmaster | Legal | Feedback Form |

   !!!     "It's not a BUG,
  /o o\  /  it's a FEATURE!"
 (  >  )
  \ - / 
  _] [_     (jcooley 1991)