( ESNUG 581 Item 1 ) ---------------------------------------------- [03/23/18]
Subject: Mentor InFact PSS user convinces boss, cuts project time in half
CONCLUSION
We are very happy with InFact and I would definitely recommend it to other
users. The learning curve is gentle, the GUI is really comfortable to
work with, and we don't have to learn a new language.
 - InFact's biggest benefit is that with 50,000 cover points it can close
   coverage 10X faster by eliminating redundancies. (I would say
   10,000 coverage points is the breakpoint where it makes sense.)
- The second benefit is it helps us to capture more design bugs
because InFact automatically generates a lot of corner cases that
we'd normally miss with human coded testbenches. We have caught
certain corner cases that we missed with traditional methods.
- A third very good advantage, it minimizes the number of lines of
testbench code we have to write by hand, because we're moving to
its graph-based approach. That makes it easier to maintain and
pull together across multiple projects.
Right now, what we have with InFact and System Verilog and UVM works. If,
a few years from now, PSS comes into play and everyone is adopting it, at
some point we might consider moving to the standard's template/language,
which Mentor says they will support.
In the meantime, InFact is straightforward and useful for us without
having to wait for the PSS standard.
- [Spider Man]
User on why MENT InFact leads graph-based PSS
EDITOR'S NOTE: Now a 2nd user supporting Ravi Subramanian's DAC claim that
his InFact PSS is kicking ass. (I may owe an apology on this.) - John
From: [ Light Bulb Man ]
Hi, John,
Can you please keep my name and my company name out of this? Thanx.
I liked [Spider Man]'s InFact story in ESNUG 580 #3. Here's ours.
Roughly 2.5 years ago we evaled Questa InFact, and we have used it in 6
production projects since then. We also use Mentor Questa and Synopsys
VCS for simulation.
I liked it when [Spider Man] said InFact gives 10X faster coverage. He
is right at the granular point tool level, but for us the question was:
"How will using InFact speed up our overall project times?"
What we found was that using InFact on an overall project basis gets us
full coverage in 1/2 the time we used to take before.
Not that granular 10X, but an overall 2X.
---- ---- ---- ---- ---- ---- ----
WHAT WE DID BEFORE USING INFACT
Prior to using InFact, our verification methodology involved partitioning
the effort as follows:
1. We would first model the basic design intent, e.g. simple
things, such as making sure that we don't send in a packet
that has an incorrect checksum. We had to capture this first,
so that when we start to verify the design we didn't include
illegal/invalid inputs.
2. We'd then run a massive number of simulations using different
random seeds, and hope that the random simulation constraint solver
would explore all the input space and cover the area that we want.
3. Because some of the branches or cases are hard to hit -- those
vaguely defined in terms of the built-in VCS/Questa constraint
solver -- we'd hand craft constraints to steer the simulation
engine toward those hard-to-hit areas.
Unfortunately, the moment that we touched the code base, we risked
introducing inadvertent errors -- that would require rework and
costly debug.
Further, even with all our "guidance" for the constraint solver, often the
code base would be bloated and cryptic. (e.g. why would you have to solve
that variable before another variable?) All this would take lengthy
iterations and engineering man-hours.
---- ---- ---- ---- ---- ---- ----
JUSTIFYING INFACT TO MY MANAGER
With InFact, all we really need to do is to capture the intent specific to
our design. I can just concentrate on putting in the functional constraints
that logically relate to my project.
InFact understands the relationship between the different variables in the
large input space. It automates the test generation so that we explore the
input space effectively -- giving us much faster turnaround time to cover
the same space.
A few years ago, my boss pulled me aside and asked me,
"What is the return on investment for this tool? If I were to
open my own pocket to pay for it, why would I do it?"
To convince him, I gathered data on how many total engineering man-hours
that we spent -- and could potentially save -- if we only had to spend our
engineering time developing the functional constraints, and then cranked
through InFact to get the coverage we wanted.
For a typical 6-month project with 200,000 to 300,000 gates, I estimated
we could save 2-3 months, i.e. almost half.
Based on that my boss was able to justify this tool purchase.
---- ---- ---- ---- ---- ---- ----
OUR PILOT PROJECT WITH INFACT
When we first evaluated InFact 3 years ago, we tried to do it on a
legacy project. Unfortunately, we could not retrofit our original test
bench to effectively evaluate the tool.
I realized that sometimes you must change your approach to successfully
adopt a new technology. So, I ultimately decided to use/evaluate InFact on
a new project where I could create the test bench from scratch. This was a
better fit because it was early enough that we could morph our design to
fit a different style.
For the eval, we were given the same number of engineers and timeline as
our prior project. We took the first step (designed the project and the
application-specific functionality), and we had InFact generate stimulus
(it guided the simulator) to hit our coverage objective.
As a result, we got the full "half time" of savings that I expected. We
finished so early that I called my contact at Mentor and jokingly asked:
"What do I do with the rest of the time?"
I did not expect a serious answer, but he was resourceful, and said:
"Why don't you use the rest of your compute resources/time and
turn on the stimulus engine to let it run free? Let it chase
corner cases? We'll call that the bug hunting mode."
We did just that and found a few hard-to-hit scenarios, with barely enough
time left to finish. With our previous approach, we would never have found them.
I've done 6 projects with InFact now. Now I only need to input the
functionality specific to my design and skip the coverage guidance step.
I conservatively believe using InFact *also* saves us 50% of our setup time.
When we did the coverage guidance manually, we also had to do additional
test bench debug to ensure we didn't accidentally introduce a bug into our
verification code -- eliminating that is a major time savings. Although on
our later projects we never compared InFact to a traditional constrained
random approach, I do see we don't need as many resources as we used to.
I also ran simulation with and without InFact on the same project. Since
InFact's optimization cuts down the repetition in simulation runs, we got
the same level of coverage with a 30X reduction in runtime.
---- ---- ---- ---- ---- ---- ----
SIMULATOR CONSTRAINT SOLVING
Our engineers who use InFact come from a background where constraint solvers
and all the guidance that I mentioned earlier are seen as a headache.
They think of "modeling" as guiding the solver.
A constraint tells the System Verilog simulator:
1. What to do.
      Example: For an even parity generator, you count the number of 1s
      so that the total comes out even. The extra "parity" bit on the
      bus is what makes it even parity or not. You can randomly generate
      the input bits, let's say 8 bits. The 9th bit is the parity bit,
      so you set the constraint such that when you add all the bits, the
      sum of the data bits plus the parity bit is even.
2. What not to do.
      Example: to exclude an illegal action, such as making sure the
      opcode field for a processor never takes an invalid code.
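As a rough illustration of those two kinds of constraints -- in Python rather
than System Verilog, with a hypothetical opcode set and made-up function names:

```python
import random

VALID_OPCODES = {0x00, 0x01, 0x02, 0x03}   # hypothetical legal opcode set

def random_even_parity_word(width=8):
    """'What to do' constraint: pick random data bits, then force the
    parity bit so the total number of 1s (data + parity) is even."""
    data = [random.randint(0, 1) for _ in range(width)]
    parity = sum(data) % 2          # 1 iff the data has an odd number of 1s
    return data + [parity]          # 9-bit word with even overall parity

def random_valid_opcode():
    """'What not to do' constraint: never emit an illegal opcode."""
    return random.choice(sorted(VALID_OPCODES))

# Every generated word satisfies the even-parity constraint.
assert all(sum(random_even_parity_word()) % 2 == 0 for _ in range(1000))
```

In a real System Verilog testbench both constraints would live in a `constraint`
block and the simulator's solver would enforce them; the sketch just shows the
two roles -- forcing a property to hold, and excluding illegal values.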
In computer software, there are two main language domains.
  1. Procedural. Most computer languages, such as Fortran and Java,
     are procedural: you explicitly tell the computer how to
     execute and return the result.
      Example: Let's say you want to take two numbers and compute the
      remainder. 12 modulo 3 would be 0, because when you divide
      12 by 3 the remainder is 0, but for 13 divided by 3, the
      remainder would be 1, and so on. Your procedural code could
      go into a loop and take the first number, 12 for example, and
      successively subtract 3 from it until the result is less than
      the second number 3, and that is your remainder.
  2. Goal-oriented. This is a different paradigm, used in Artificial
     Intelligence languages such as Prolog. A goal-oriented paradigm just
     tells the computer "I want *this* result" and leaves it up to the
     computer to try to come up with the solution itself.
      Example: "I don't care how you perform it, but get me the remainder
      that satisfies this mathematical property" -- those properties are
      called constraints. Then the program can do multiple subtractions,
      or just divide and take the remainder.
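The two paradigms above can be sketched side by side in Python -- the
"goal-oriented" version here is only a toy brute-force search standing in
for a real solver:

```python
def remainder_procedural(a, b):
    """Procedural: spell out the exact recipe -- subtract b until a < b."""
    while a >= b:
        a -= b
    return a

def remainder_goal_oriented(a, b):
    """Goal-oriented (toy search): state only the property the answer
    must satisfy -- 0 <= r < b and (a - r) divisible by b -- and let
    the 'solver' find any r meeting it."""
    return next(r for r in range(b) if (a - r) % b == 0)

assert remainder_procedural(12, 3) == 0
assert remainder_procedural(13, 3) == 1
assert remainder_goal_oriented(13, 3) == 1
```

Both give the same answers, but the procedural version dictates the steps
while the goal-oriented version only states the property -- which is exactly
the trade-off discussed below: the search can be slow when the property is
expensive to check.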
We first used constraint solving for very simple cases, such as on the
send side randomly generating the content of a network payload, then
calculating the checksum across the whole payload. On the receiving side
we get the payload, add it all up, and compare checksums.
Let's say you send an email to me, and somewhere along the way the word
"modeling" gets changed -- say an "s" is tacked on the end. Because of that
extra "s", when you add everything up, the checksum comes out different.
Because the original checksum is sent with the data across the network,
when I receive it, in that very simple case I can detect that somebody has
messed up the message to me.
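A minimal sketch of that checksum idea, using a simple additive checksum
(the exact checksum algorithm is my assumption, not what their testbench used):

```python
def checksum(payload: bytes) -> int:
    """Simple additive checksum: sum of all bytes, modulo 256."""
    return sum(payload) % 256

def send(payload: bytes):
    """Sender: the checksum travels across the network with the data."""
    return payload, checksum(payload)

def verify(payload: bytes, received_checksum: int) -> bool:
    """Receiver: re-add the bytes and compare against the sent checksum."""
    return checksum(payload) == received_checksum

msg, chk = send(b"modeling")
assert verify(msg, chk)               # intact message checks out
assert not verify(b"modelings", chk)  # the extra "s" changes the checksum
```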
When constraint solving first came out, we would just set the goal and turn
the constraint solver loose to figure out a way to achieve the end result.
Initially people saw a huge benefit. But over time people realized that
there's a reason why you hear more about Java and C/C++ than about
Prolog. Sometimes the solver got to the right answer, but sometimes it
didn't get there in a reasonable time. But if you spell out the exact
recipe in a procedural way for how to compute the result, the computer
could solve it much faster.
The quick fix is to guide/assist the System Verilog constraint solver.
However, engineers see having to add guidance as a pain -- plus humans adding
solver guidance roughly doubles the total number of man-hours a verification
project takes.
In contrast, InFact's killer graph-based constraint solver means I do NOT
have to guide/assist the System Verilog constraint solver. I only set out
objectives (goals) of what I want to solve -- and InFact does the graph or
derivation of the path to get there, as well as manages all the dependencies
and redundancies.
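InFact's actual engine is proprietary, but a hand-wavy sketch of why graph
traversal beats random seeding on redundancy: compare systematically walking
every path of a small decision graph against drawing random paths until all
have been seen (a toy model, not InFact's algorithm):

```python
import itertools
import random

# A toy stimulus space: 3 binary decisions -> 8 distinct paths (scenarios).
DECISIONS = 3
ALL_PATHS = set(itertools.product([0, 1], repeat=DECISIONS))

def graph_walk():
    """Systematic traversal: each path is visited exactly once."""
    return list(itertools.product([0, 1], repeat=DECISIONS))

def random_seeds(rng):
    """Constrained-random style: draw random paths until all 8 are hit,
    counting how many (mostly redundant) draws that takes."""
    seen, draws = set(), 0
    while seen != ALL_PATHS:
        seen.add(tuple(rng.randint(0, 1) for _ in range(DECISIONS)))
        draws += 1
    return draws

systematic = len(graph_walk())              # 8 runs, full coverage, no repeats
randomized = random_seeds(random.Random(1)) # >= 8; on average ~22 (coupon collector)
```

The gap widens fast as the path count grows, which is consistent with the
10X-to-30X reductions reported above -- random seeding keeps re-drawing
already-covered scenarios, while a graph walk never repeats one.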
After we adopted InFact, we no longer do extra constraint guidance.
In my next section I will cover Mentor InFact modeling and libraries, and
detail how their hierarchical graph-based approach works. (See ESNUG 581 #2.)
- [ Light Bulb Man ]
---- ---- ---- ---- ---- ---- ----
Related Articles
Mentor InFact PSS user convinces boss, cuts project time in half
User kickstarts InFact PSS by using *only* System Verilog as input
User reports InFact PSS prunes coverage space for 30X sim speed-up
Cooley schooled by user on why MENT InFact leads graph-based PSS
Mentor InFact was pretty much an early No Show in the PSS tool wars