( ESNUG 581 Item 3 ) ---------------------------------------------- [03/23/18]
Subject: User reports InFact PSS prunes coverage space for 30X sim speed-up
With InFact's graph-based approach:
- They can derive the global scheme from the graph.
- They treat each of the graph nodes as a compute entity.
    - When you drill down into a particular graph node, you can
      enter/execute any code you want inside it, i.e. each compute
      node can act like its own procedure.
They organize it to make it more manageable for a human designer.
In my last section I'll discuss how you define your test intent with InFact
so your coverage space doesn't explode, along with debug and coverage.
- [ Light Bulb Man ]
User on why MENT InFact leads graph-based PSS
From: [ Light Bulb Man ]
Hi John,
Here's my part 3.
DEFINING TEST INTENT SO YOUR COVERAGE SPACE DOESN'T EXPLODE
"Full coverage" is a very qualified phrase. The most difficult question to
answer in verification is:
"Are we done?"
Even for something as simple as a single 6-sided die used in a board game,
you could test all 6 of its values and be done. Sadly, in the hardware
design world, there is the troubling concept of time delay.
So maybe a roll of 6, followed by a roll of 3 with a 1 second delay in
between, doesn't cause your board game system to fail. But if you just
happen to delay 4 seconds between the rolls -- then it fails. Because
time delays are infinitely variable, you can't test for all possibilities.
You don't know when to stop.
So, we bound our goals. When we say we have 100% coverage, we mean we've
met the coverage goal that we set out to cover. We consider those goals
to be representative of the scenarios we want to test the device for.
There are always cases that you cannot anticipate, but it would be
impractical to cover them all.
When you are trying to meet coverage, the number of tests could very easily
explode.
In terms of pre-runtime coverage, InFact reads in all your System Verilog
code and estimates the coverage space -- the number of scenarios -- to
help you plan a coverage goal that stays within usable limits.
This is another thing I love about InFact.
Mentor's InFact team looked at our coverage plan the first day they visited.
Our engineers were eager, so they were crossing everything. For example,
take 3 parameters A, B, and C, with 8, 9, and 15 possible values
respectively. If you cross them all you have
(A = 8) X (B = 9) X (C = 15) == 1,080 total possible combinations
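In plain System Verilog covergroup terms, that full cross looks roughly
like this -- the class and variable names are my illustration, not our
actual testbench:

    class txn;
      rand bit [2:0] A;                      // 8 possible values (0..7)
      rand bit [3:0] B;                      // 9 possible values (0..8)
      rand bit [3:0] C;                      // 15 possible values (0..14)
      constraint legal { B inside {[0:8]}; C inside {[0:14]}; }

      covergroup cg;
        cpA:   coverpoint A;
        cpB:   coverpoint B { bins b[] = {[0:8]}; }
        cpC:   coverpoint C { bins c[] = {[0:14]}; }
        AxBxC: cross cpA, cpB, cpC;          // 8 x 9 x 15 = 1,080 cross bins
      endgroup

      function new();
        cg = new();
      endfunction
    endclass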
In our initial attempt to use InFact's cross coverage feature, we crossed
12 variables. The number of bins reported by the tool wrapped around
the screen 3 or 4 times. Our regression ran for 4 weeks with under
1% coverage before we killed the run.
Mentor's R&D engineer who did the demo had a strong math background. He
explained why our regression would be running for 10^66 trillion years
before the coverage number would be met. He told us that:
"The number of coverage points you are trying to hit was
larger than the number of all of the atoms in the known
universe. (10^78 to 10^82 total.)"
As newbies we didn't know that InFact could tell us up front whether our
coverage space was overly ambitious. We have since learned to have InFact
consider only the crosses that are significant. If we consider A, B, and
C, and A and C are related, we cross them. But if B does not have a tight
logical relationship with the rest of the group, there's no reason to
throw B into the cross. That simple optimization alone easily cuts the
run count 90% or more.
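As a sketch, swapping the covergroup in the txn class above for this one
keeps B as a standalone coverpoint and only crosses the related pair:

      covergroup cg;
        cpA: coverpoint A;
        cpB: coverpoint B { bins b[] = {[0:8]}; }  // covered alone, not crossed
        cpC: coverpoint C { bins c[] = {[0:14]}; }
        AxC: cross cpA, cpC;                       // 8 x 15 = 120 cross bins
      endgroup

The cross shrinks from 1,080 bins to 120 -- right around the 90% cut in
run count mentioned above.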
InFact will take a first attempt, and alert us if the coverage space is too
large. We then go through the next iteration to more thoroughly examine the
relationships between all the terms to bring it down to a reasonable space.
For example, if you cut the number of cases from 100 down to 50, your
runtime goes from 6 hours down to 3 hours.
BIG WARNING!: only your design engineer can identify which SoC components
are related. You *must* include the logical relationships in your coverage
strategy -- or you could inadvertently under-constrain -- causing your
input space to explode.
I specify my valid & invalid constraints for InFact using System Verilog.
They are strictly functional. I tell it valid U.S. paper money notes are
$1, $2, $5, $10, $20, $50, or $100. InFact then infers that there is no
such thing as a $3 note -- so it prunes it -- plus $4, $6, $7, $8, $9,
$11, $12, $13, etc. ... out of the coverage space to make it a lot
smaller. It becomes a targeted space with only the valid cases.
InFact then looks at the constraints and uses the constraint information
to prune out known invalid crossings. "All money transactions will be in
whole dollar amounts" tells InFact to prune out all cases that involve
U.S. cents, nickels, dimes, quarters, or half dollars.
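Here's roughly how those two constraints look in System Verilog -- the
class and field names below are mine for illustration:

    class cash_txn;
      rand int unsigned note;     // note face value, in dollars
      rand int unsigned amount;   // transaction amount, in cents

      // valid U.S. paper notes only -- a $3 or $7 note is never generated
      constraint valid_notes { note inside {1, 2, 5, 10, 20, 50, 100}; }

      // "all money transactions will be in whole dollar amounts"
      constraint whole_dollars { amount % 100 == 0; }
    endclass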
This valid/invalid pruning can trim the coverage search space by 99%+.
Say you wish to simulate rolling a die with 6 faces to get a number, where
it's important for your brute force algorithm/testing to see all 6 faces
of the die -- because if you do not test the "die has been equal to 2"
case, it might later cause your system to crash.

When you toss the die, it will take many more than 6 tosses for all 6
faces to show up, even with a fair die -- e.g. maybe the 1 comes up
3 times first.
    - InFact avoids duplication. Its solver keeps a record of all the
      cases that have already been generated, so it doesn't repeat
      them; i.e. once it has generated a 1, the next time it generates
      cases it will only pick from 2 to 6, etc. (A sketch of this
      behavior follows this list.) This is where I ran simulation with
      and without InFact on the same project, and InFact achieved the
      same level of coverage with a 30X reduction in runtime.
    - Constrained random has a lot of duplication. It uses a random
      number generator that is equivalent to binary coin tosses,
      repeated billions of times a second. A constrained random
      algorithm that randomly gives out numbers from 1 to 6 could give
      the numbers 1 and 2 many, many times. It might take many more
      runs before you hit all 6 different values.
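InFact's solver is proprietary, but plain System Verilog's randc gives a
feel for the non-repeating idea on a single variable -- a randc variable
cycles through every legal value before repeating any:

    class die;
      randc bit [2:0] face;                   // cyclic random: no repeats
      constraint six_sided { face inside {[1:6]}; }
    endclass

    module tb;
      initial begin
        die d = new();
        repeat (6) begin
          void'(d.randomize());
          $display("rolled %0d", d.face);     // all 6 faces in 6 rolls
        end
      end
    endmodule

The difference is that InFact tracks non-repetition across whole
generated scenarios, not just one field.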
Based on your constraints, InFact derives what they call reachable graphs.
The reachable space is a lot smaller, so we don't have to handle all of the
extra massive bad cases that [Spider Man] mentions.
InFact is highly efficient at this, and it's all done under the hood.
Again, I've found this InFact pruning tends to speed up my simulations
by 30X.
INFACT DEBUG
There are two aspects of debugging with InFact:
1. Debugging your design following the simulation runs.
I use Questa Visualizer for debug, which is integrated with
InFact -- so I don't need to go out of one tool and into
another to debug my code. Most System Verilog debuggers
are on par with each other, but Questa Visualizer seems better
integrated with my UVM testbench than others.
2. Debugging InFact's generated tests/stimulus.
       Because InFact does so much under the hood once you've input
       your base application-specific constraints, debugging the
       InFact-generated tests can be a bit challenging if the tool
       doesn't work the way you want.

       Fortunately, InFact has GUI-based visual debug that makes test
       case debug quite intuitive. You can run the simulator and it
       will bring up graphs of the generated tests and trace the
       particular node alongside the simulation.
When we first learned to use InFact, we used its graph-based GUI debug to
make sure it was working correctly. As we became familiar with how InFact
generates its stimulus, we no longer had to debug most of the generated
test cases.
INFACT COVERAGE
InFact's pre-runtime coverage is not an estimate, it's a mathematically
precise analysis of the coverage space. It takes the invalid constraints
into account to avoid any work on "unreachable space". Thus, InFact's
pre-runtime and runtime coverage perfectly correlate -- they are one and
the same.
The invalid constraints, combined with the way InFact enumerates crossings,
rule out the combinations that are impossible to get -- so your sim runs
only work towards the reachable coverage goals.
In all 6 projects where I used InFact, the coverage space got fully covered
or enumerated every time. There's no such thing as cases "never executed"
or skipped because all the logically invalid cases were ruled out during
the pre-analysis phase. In the runtime phase, InFact works through those
possible cases and executes them all.
The InFact runtime engine has a manager. My largest execution spaces
may have 30,000 cases total. The InFact manager farms out the cases to
multiple simulations running in parallel -- each program is a thread.
Let's say there are 10 different simulation engines out there running the
tests. We call them "worker threads", as you can think of 10 workers,
where the manager dispatches one case to worker 1, one case to worker 2,
all the way to worker 10.
If there are 200 jobs, some jobs will take longer than others. The InFact
manager doesn't just give 20 jobs to each worker, but uses a model where:

    - For the first 10 jobs, it dispatches one job to each worker.

    - As soon as a worker completes a task, the manager dispatches it
      another job, and so on, until all 200 jobs are complete -- such
      that in the end it evens out the runtime across all the worker
      threads for maximum speedup. (See the sketch below.)
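That dispatch model is a classic self-scheduling work queue. Here's a
minimal System Verilog sketch of the idea -- my model of the behavior,
not InFact's actual manager:

    `timescale 1ns/1ns
    module dispatch_sketch;
      mailbox #(int) jobs = new();

      task automatic worker(int id);
        int job;
        while (jobs.try_get(job)) begin
          #($urandom_range(1, 10));           // stand-in for one sim case
          $display("worker %0d finished job %0d", id, job);
        end
      endtask

      initial begin
        for (int j = 1; j <= 200; j++)        // queue up all 200 jobs
          jobs.put(j);
        for (int w = 1; w <= 10; w++) begin   // spin up 10 worker threads
          fork
            automatic int id = w;             // capture the loop index
            worker(id);
          join_none
        end
        wait fork;                            // all 200 jobs complete
        $display("regression done");
      end
    endmodule

A slow job simply holds one worker while the other nine keep draining the
queue, which is what evens out the total runtime.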
InFact has ongoing coverage monitoring, such that when it gets to say task
180 out of 200, it shows that it's finished 90% of the coverage.
It lets us specify the maximum number of errors for a simulation run before
it will halt the simulations. We don't want it to stop at the first error
because we might want to see more data, but we also don't want to
accumulate so many errors that they overwhelm a human designer trying to
debug them -- e.g. up to 15 errors. If all the checks are successful, it
continues; but if it runs into 15 errors, it stops the simulation.
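For comparison, plain UVM has a similar error-budget knob. Assuming a
standard UVM 1.1/1.2 library, it can be set in a base test like this
(or from the command line with +UVM_MAX_QUIT_COUNT=15,NO):

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class base_test extends uvm_test;
      `uvm_component_utils(base_test)

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      function void start_of_simulation_phase(uvm_phase phase);
        super.start_of_simulation_phase(phase);
        // keep going until 15 UVM_ERRORs have accumulated, then quit
        uvm_report_server::get_server().set_max_quit_count(15);
      endfunction
    endclass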
Once everything goes through smoothly, we say it "passed". That means that
we sent all those scenarios/data sets and the whole simulation environment
accepted the input and executed it successfully.
CONCLUSION
InFact has *eliminated* two of my normal tasks for a typical verification
environment:

    1) debugging the constraints to increase coverage, and

    2) writing my own coverage model -- I now trust InFact's
       pre-coverage analysis so much that I only write logical
       constraints, and then use the InFact output coverage space
       as my coverage target.

The combination would typically take 1/2 of my verification team's time,
which justifies our buying the tool. And we already see its value today,
independent of what is going on with the PSS standards.
I recently recommended InFact to another project lead. His team has
already successfully deployed it on their own projects.
- [ Light Bulb Man ]
---- ---- ---- ---- ---- ---- ----
Related Articles
Mentor InFact PSS user convinces boss, cuts project time in half
User kickstarts InFact PSS by using *only* System Verilog as input
User reports InFact PSS prunes coverage space for 30X sim speed-up
Cooley schooled by user on why MENT InFact leads graph-based PSS
Mentor InFact was pretty much an early No Show in the PSS tool wars