( ESNUG 442 Item 9 ) -------------------------------------------- [03/24/05]
Subject: ( ESNUG 441 #2 ) Cadence Comments on the "DFT Compiler MAX" News
> For the Synopsys test rivals (Mentor, Cadence, LogicVision, etc.), how
> do you see this DFT Compiler MAX announcement?
>
> I'm really stumped here and would like your thoughts on this.
>
> - John Cooley
> ESNUG/DeepChip.com Holliston, MA
From: Paul Estrada <estrada=user domain=cadence spot calm>
Hi, John,
You are ever the skeptic. Yes, this appears to be news. Synopsys
previously offered compression only through its SoCBIST product, which
is a "heavy" IP-oriented approach that doesn't seem to have done very
well in the market despite their initial claims for the technology.
With this announcement Synopsys is introducing a modified version of
broadcast scan (a.k.a. Illinois scan). That's the same approach that
we've offered in Encounter Test (formerly IBM Test) for years. We've
stuck with a pure fan-out approach (i.e., we literally tie N scan chains
to the same scan input pin). This works because there are so few "care
bits" in any vector set.
The classic theoretical problem with this approach (known as the
correlation problem) is that it puts a big burden on ATPG to be able to
get high coverage. Empirically, this is not nearly as big of an issue as
those using other approaches would have you believe because of the
dispersion of the scan chains across today's complex ICs. Synopsys'
"Adaptive Scan" adds "combinational decompression structure" circuitry
at the inputs, which helps to de-correlate the input data. This adds
extra logic and complexity, which should be totally unnecessary in the
vast majority of cases if you have a strong ATPG engine. Fortunately,
the Encounter Test True-Time ATPG engine (for stuck-at and transition
coverage) is strong enough that this has not been a problem at all for
us. In fact, it's a great way for us to show how much more powerful our
ATPG engines are than others on the market.
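
For comparison, a combinational decompressor in the generic sense is just
a small spreading network between a few scan-in pins and many more
chains. The XOR network below is a made-up toy, not a description of
Synopsys' Adaptive Scan circuit; it only shows how such extra input logic
lets different chains receive different bits from the same few pins (the
de-correlation they are after), at the cost of added gates:

    # Generic toy of a combinational input decompressor: 3 scan-in
    # pins drive 4 chains through a small XOR spreading network.
    # Hypothetical wiring for illustration -- NOT Adaptive Scan.
    PIN_TAPS = [
        (0,),        # chain 0 <- pin 0
        (1,),        # chain 1 <- pin 1
        (0, 1),      # chain 2 <- pin 0 XOR pin 1
        (0, 1, 2),   # chain 3 <- pin 0 XOR pin 1 XOR pin 2
    ]

    def decompress(pin_bits):
        """One shift cycle: expand 3 pin bits into 4 chain bits."""
        out = []
        for taps in PIN_TAPS:
            bit = 0
            for pin in taps:
                bit ^= pin_bits[pin]
            out.append(bit)
        return out

    # Pure broadcast forces every chain to the same value each cycle;
    # the XOR spread can put distinct values on different chains.
    print(decompress([1, 0, 1]))   # -> [1, 0, 1, 0]
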
Looking at some of the Synopsys test marketing claims:
1-pass Test Compression Synthesis: What does this actually mean?
Encounter Test has always had a 1-pass flow, and I don't know why any
supplier would offer a multiple-pass flow. I think you can include this
in your marketing-speak bucket.
10-50x Compression Ratios: These numbers sound reasonable, and we can
certainly achieve these ratios. However, when it comes to compression
ratios, it's buyer-beware time. If your scan tests take 50% of your
total time on the tester, then your marginal return on additional
compression falls off rapidly as you asymptotically approach 50% savings
(2x = 25% savings, 5x = 40%, 10x = 45%, etc.). A little bit of
compression goes a long way. My guess is that less than 10% of designs
at 130 nm and below today have any compression. It certainly makes sense
to get some compression on every chip. BTW, you get the first 2x
compression for free with Encounter Test with guaranteed zero impact on
fault coverage (since it's just a MISR at the output).
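
The diminishing-returns arithmetic is easy to check yourself: if scan is
a fraction f of total tester time, Nx compression saves roughly
f * (1 - 1/N) of the total. A quick Python check using the 50% figure
above (a back-of-the-envelope sketch that ignores any compression
overhead):

    # Total tester-time savings from Nx scan compression when scan
    # occupies `scan_frac` of total tester time.
    def total_savings(n, scan_frac=0.50):
        return scan_frac * (1.0 - 1.0 / n)

    for n in (2, 5, 10, 50):
        print("%3dx -> %4.1f%% total tester-time savings"
              % (n, 100 * total_savings(n)))
    # 2x -> 25.0%, 5x -> 40.0%, 10x -> 45.0%, 50x -> 49.0%
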
5X Test Vectors: Warning, warning, warning. Synopsys is claiming that
adding delay fault models increases the pattern count to "5 times the
number of vectors that would have been required." Ouch! That is way
high and indicates
that you may have a serious problem with your ATPG tool. There's no
reason for any design team to accept this. Encounter True-Time Delay
Test routinely generates >95% stuck-at and >80% (typically >90%) delay
fault coverage with less than 2x vector increase (and remember, we give
you the first 2x in compression for free).
High Fault Coverage: Since Synopsys opened the door, I have to get on my
soapbox here. Transition fault coverage based on delay test is the new
metric of merit for nanometer designs since delay faults are becoming
dominant and stuck-at coverage is relatively easy. However, not all
transition fault coverage is the same. Transition faults that some
tools report as covered are routinely missed, even by at-speed tests.
Yes, it's true, and we've proven it. It's the
long-path/short-path problem. If a transition is on two or more paths,
the ATPG engine will most likely cover it on the short timing path which
has slack (because that path is easier to control). So, for example, a
20 ps transition on a path with 200 ps of slack can have a defect that
slows the transition by a factor of 10x (to 200 ps) and still be
undetectable (still 20 ps of actual slack). Some vendors will claim that
"n-detect" approaches (covering transitions n different ways) address
this. It helps, but it is not deterministic and requires extra vectors.
The real metric of merit is "transition defect" coverage, which is the
ability to catch defects of a particular size. Encounter True-Time Delay
Test is the only ATPG tool that uses actual design timing information to
set the transition test times -- including the ability to test
non-critical paths faster than at-speed. We've blown customers away with
this by showing that we catch transition defects that other tools missed
despite claiming to have covered them. True-Time is so compelling that the
other ATPG vendors are going to have to try to copy it. (Mark my words
here.) We use a built-in timing analyzer which has a number of
advantages. As a short-cut, they'll probably tie in an external timing
engine. Expect this to be slow and produce lower quality results.
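
The long-path/short-path arithmetic is worth working through. In a rough
model, a delay test on a given path only catches a defect whose added
delay exceeds that path's slack at the test speed, which is exactly why
testing non-critical paths faster than at-speed shrinks the escape
window. A simplified Python sketch of that model (a simplification for
illustration, not the True-Time algorithm):

    # Rough model: at a given test clock, a delay defect on a path is
    # detected only if the extra delay it adds exceeds the path's
    # slack at that clock.  Simplification for illustration only.
    def detected(extra_delay_ps, slack_ps):
        return extra_delay_ps > slack_ps

    # The example above: a 20 ps transition tested on a short path
    # with 200 ps of slack.  A defect that slows the transition 10x
    # (20 ps -> 200 ps) adds 180 ps and still escapes at-speed.
    print(detected(extra_delay_ps=180, slack_ps=200))   # False: escapes

    # Test the same path faster than at-speed so its slack drops to,
    # say, 50 ps, and the identical defect is now caught.
    print(detected(extra_delay_ps=180, slack_ps=50))    # True: caught
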
No Impact on Design Timing: This is a gimme. Compression is on the scan
chains and out of the timing path.
No Impact on Design Physical Implementation: Uh, this is a bit of
marketing-speak. Obviously, you can't add logic to a design (which their
compression does) without impacting the physical design. That said, the
less logic, the less impact, and they probably have a lot less logic
with this approach than they had with SoCBIST. I must point out that our
simple fan-out approach affects wiring (as all compression techniques
do), but we add no logic at the inputs. Zero.
In summary, DFT Compiler MAX is news. The news is that Synopsys is
following our technical lead. I encourage any design team interested
in DFT Compiler MAX to check out Encounter Test and other competitive
tools at the same time. We love evaluations because we win them.
- Paul Estrada
Cadence Design Systems San Jose, CA