( ESNUG 365 Item 11 ) -------------------------------------------- [02/15/01]
Subject: ( ESNUG 363 #9 ) PrimeTime 2000 SPF Timing Models Too Optimistic
> With the new 2000 versions of pt_shell, we recently moved to doing timing
> with an SPF (our designs had been too large to attempt this before).
> Doing this, however, has some implications.  PrimeTime claims to calculate
> an effective capacitance from the distributed RC network in the SPF.  For
> this it uses an algorithm which assumes that cell delay increases
> monotonically with input transition time and output capacitance, and that
> this relationship is expressed in the cell delay table.  If it is not so,
> PrimeTime says it found a negative driver resistance and failed to compute
> the effective cap, in which case it will use the lumped capacitance.
> This is optimistic for best case conditions (that is, if we care!!)
>
> This is a pretty bad assumption (and we know it!! after doing a lot of
> spicing!!)
>
> I reported it to our Synopsys AE.  They first didn't believe me; then, when
> I gave them the library example, they wanted a test case, which we couldn't
> give them because the library is from Avanti.  So we asked them to come
> on site.  They have been missing since then.
>
> - Ajay Kumar Sinha
> Silicon Systems India
From: David Chiappini <David.Chiappini@matrox.com>
Hi John,
I just wanted to thank Ajay for sharing this problem.  I've been fighting
with it for a little while now.  It should help to have some "global"
confirmation.
- Dave Chiappini
Matrox Graphics Inc. Montreal, Canada
---- ---- ---- ---- ---- ---- ----
From: "Thomas D. Tessier" <tomt@hdl-design.com>
John,
Just want to say that it is good that Ajay has dug so deeply into this
issue.  I ran into something like this as well: I could NOT get PrimeTime
and my foundry's delay calculator to track using the same SPEF file (see
the details in an upcoming SNUG 2001 paper).  I even found a path that I
gave my foundry vendor to SPICE up, and they indicated that the foundry
delay calculator was correct.  Unfortunately for us, they would not provide
us with the SPICE deck for this path, so I had no way of getting this data
to Synopsys.  I would say yes, Synopsys is aware, but I feel the user base
is currently being blocked by either other EDA tool vendors' or the
foundries' lack of information flow.
Tom Tessier
t2design Inc. Louisville, CO
---- ---- ---- ---- ---- ---- ----
From: Shane Dowd <smd@jennic.com>
Dear John,
This is one of the first pieces of information that I have found regarding
issues and problems with PrimeTime and delay calculation.
We are using SPF and SPEF from xCalibre to allow post-layout delay
calculation in pt_shell (2000.11 now).  Initially I had assumed that things
were going according to plan, but when you look in a little more detail you
discover the problems.  I can only say that the RC-004 warnings that you
are seeing are the same type of warnings that I am seeing within pt_shell.
My issue is that I have an I/O library with operating conditions set to
WCCOM, NCCOM, and BCCOM, and a standard cell core library with operating
conditions set to fast, typical, and slow.  For chip-level delay
calculation I would expect to set the operating conditions to those of the
I/O library; however, when I do this I get the RC-004 issues for the core
cells.
Initially I thought this was related to a units issue within the SPF and
SPEF, but I have since confirmed that this is not the case.  In fact, if I
set the operating conditions to the core levels (even though the
current_design is the chip level), I do not get the RC-004 warnings.  If I
allow PrimeTime to use the default operating conditions (i.e., I do not set
any using pt_shell commands), I get the same RC-004 issues.
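For reference, the kind of pt_shell sequence I am using looks roughly like
this (just a sketch; the library, design, and file names are placeholders,
not our real ones):

    set link_path "* io_lib.db core_lib.db"
    read_verilog chip_top.v
    link_design  chip_top

    # back-annotate the extracted parasitics
    read_parasitics chip_top.spef

    # case A: chip-level conditions taken from the I/O library
    #         (what I would expect to use, but it triggers RC-004
    #         on the core cells)
    set_operating_conditions -library io_lib WCCOM

    # case B: conditions taken from the core library instead
    #         (no RC-004, but then the I/O voltages are wrong)
    # set_operating_conditions -library core_lib slow

    report_timing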
By Synopsys' own admission, PrimeTime cannot handle multiple operating
conditions.  However, I have no confidence in the SDF being generated, as
the operating conditions used are those of the core (with incorrect voltage
settings).  The only advice given by Synopsys is to check for similar
problems with the library vendor (still waiting for answers).  This must be
the case for everyone using a COT flow and DSM technologies, so why is it
so difficult to get information?  I would be interested to hear whether
Ajay has similar issues with multiple libraries and operating conditions.
- Shane Dowd
Jennic Ltd.
---- ---- ---- ---- ---- ---- ----
From: [ One Of The Lessor Pokemon ]
Hi, John,
Anon please.
I saw Ajay's note in your ESNUG post.  Here's what I understand:
1. You do need the cell delays to be monotonically increasing with output
   capacitance.  This is, I believe, quite a realistic requirement.  If for
   some reason your library is not monotonic with increasing load, then I
   think it's a library problem.
   I guess that for the above you will end up getting "RC-004 ... failed to
   compute c-effective" or some message of that sort.
2. Now, from what I know, the library tables do NOT have to be monotonic
   with increasing input transition time.  This has something to do with
   the way library cells are characterized; e.g. you might measure your
   cell delay from the 60% point of the input to the 50% point of the
   output.  In this case, if your cell trips early, you can have negative
   cell delays.
   In this case, with non-monotonic slopes, I believe that you will not get
   the "failed to compute C-effective" stuff.
- [ One Of The Lessor Pokemon ]
---- ---- ---- ---- ---- ---- ----
From: [ PrimeTime Tech Support ]
Hi John,
Ajay's problem is a known issue in the 2000.05 PrimeTime release and occurs
when:
1. Detailed parasitics (SPEF, DSPF) are backannotated.
2. Hold/min analysis is being performed.
3. PrimeTime is unable to create a driver model for its RC delay
calculation because the cell library data appears inconsistent
with rules of physics.
This issue has been resolved in the 2000.11 release of PrimeTime.
Here's some background ...
A cell's library data is created by simulating the cell with a set of
different output capacitances.  If the output capacitance is increased,
it will take more time to charge up, and the delay and slew times will
both increase as well.  This sensitivity to output capacitance is used
by PrimeTime to set the drive resistance in its driver models used in
RC delay calculation.  Therefore, PrimeTime uses this physics rule and
requires that the delay and slew in the cell's library data both
increase as the output capacitance increases.  A violation of this rule
in the library data prevents PrimeTime from creating a driver model,
since the drive resistance would not be positive.  A failure to create
a driver model subsequently prevents any calculation of C_effective.
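As a toy illustration of that rule (plain Tcl, made-up numbers, and not
PrimeTime's actual algorithm): roughly speaking, the drive resistance
comes from the slope of delay versus output cap, so a dip in the table
makes that slope, and hence the resistance, negative.

    # made-up delay (ns) vs. output cap (pF) points; note the dip at 0.040 pF
    set caps   {0.010 0.020 0.040 0.080}
    set delays {0.120 0.150 0.145 0.210}

    for {set i 1} {$i < [llength $caps]} {incr i} {
        set dcap [expr {[lindex $caps   $i] - [lindex $caps   [expr {$i - 1}]]}]
        set ddel [expr {[lindex $delays $i] - [lindex $delays [expr {$i - 1}]]}]
        # delay ~ R * C, so each segment slope approximates a drive
        # resistance; a negative slope means no valid driver model
        puts [format "segment %d: slope %.2f ns/pF" $i [expr {$ddel / $dcap}]]
    }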
When PrimeTime is unable to calculate C_effective, it attempts to use
fallback data so that it can still perform a calculation (although it
will not be as accurate). In the 2000.05 PrimeTime release, C_effective
would default to C_total (or lumped capacitance) for both setup (max)
and hold (min) analysis. When doing setup analysis, this will lead
to a pessimistic analysis since C_total will lead to longer delays
than with a true C_effective. However, for hold analysis ("best case"
as Ajay calls it), this will lead to slightly optimistic results.
This is the problem Ajay describes and is found in the 2000.05 release.
This issue of slightly optimistic results for hold analysis was
resolved in the 2000.11 PrimeTime release. In 2000.11, C_effective
falls back to 0, not C_total, during hold analysis. This will lead to
shorter delays than with a true C_effective and hence more pessimistic
results for hold analysis, not optimistic results as before. In many
cases, this may be very pessimistic since the true C_effective may
have been closer to C_total than 0, so the delay used will be much
smaller than the true delay. Nevertheless, it is still pessimistic.
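As a rough illustration of the directions involved (made-up numbers only;
interconnect delay scales roughly as the drive resistance times the
capacitance seen, so kilohms times picofarads gives nanoseconds):

    set r_drive 1.0   ;# kohm, made-up
    foreach {label c} {C_eff 0.08 C_total 0.10 zero 0.00} {
        puts [format "%-8s %.2f pF -> delay ~ %.3f ns" $label $c \
                 [expr {$r_drive * $c}]]
    }
    # setup/max fallback to C_total:           longer delay  -> pessimistic
    # hold/min  fallback to 0 (2000.11):       shorter delay -> pessimistic
    # hold/min  fallback to C_total (2000.05): longer delay  -> optimistic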
Our strong recommendation is to use 2000.11 PrimeTime when you backannotate
detailed parasitics. You will not get these optimistic hold time
calculations should PrimeTime be unable to create a driver model.
- [ PrimeTime Tech Support ]
---- ---- ---- ---- ---- ---- ----
From: Ajay Kumar Sinha <ajay@siliconsystems.co.in>
Hi, John,
After a lot of test cases and sessions with Synopsys AEs, here is what I
gathered:
1. There is still no support for libraries characterized at different
   trip points.  Maybe in the future; till then, bug your library vendors.
2. RC-004 messages will/have disappeared in PrimeTime 2000.11 for the
   following:
   - negative delay numbers in the library tables (even when they are for
     increasing caps; the only requirement is that the relationship
     between increasing cap and delay be monotonically increasing)
   - non-monotonic or monotonically decreasing relationships between
     input transition time and delay
   (I haven't tested these; maybe in a week or so.)
   But RC-004 will still appear if the relationship between output cap
   and delay is constant, monotonically decreasing, or non-monotonic.
   This is because of the way the drive resistance calculation is done:
   it relies on the fact that a 1% increase in Ceff will give you an
   increased delay, and if it can't find that in the table, it fails.
   The workaround for libraries which have a constant delay over a range
   of cap values is either to
   - change the library by putting increasing digits in the least
     significant digit of the delay values (see the small sketch after
     this list), OR
   - change the table to a scalar table.
   There is no workaround for libraries where the delay is decreasing
   (maybe by an insignificant amount, but nevertheless decreasing) or is
   non-monotonic (e.g. for a range of cap values the delay increases, but
   for a certain cap it goes down, and unfortunately your output cap falls
   between these).  Maybe we should contact noel.menezes@intel.com and ask
   him to enlighten us.  He was one of the authors of the theory which
   PrimeTime uses to calculate Ceff.
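Here is a small sketch of that first workaround in plain Tcl (the numbers
are invented, not from any real library): take a row of the delay table
that is flat over several cap points and nudge each non-increasing value
up by one count in the least significant digit so the row becomes strictly
increasing.

    set row   {0.250 0.250 0.250 0.251 0.260}   ;# invented delay row (ns)
    set fixed {}
    set prev  -1.0
    foreach d $row {
        # bump any value that does not increase over its neighbour
        if {$d <= $prev} { set d [expr {$prev + 0.001}] }
        lappend fixed [format %.3f $d]
        set prev $d
    }
    puts "original: $row"
    puts "fixed:    $fixed"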
Also, there were a few other issues with PrimeTime which I filed with
Synopsys and which are now STARs.  Just wanted to share these:
1. PrimeTime is not writing the SDF out correctly (this happens whether
   you have backannotation or not).  It is treating 1'b0 and 1'b1 in the
   netlists as nets and then calculating a delay number for the
   interconnects.  These interconnects are basically a combination (as in
   probability theory) of the pins connected to 1'b0 and 1'b1.  Even with
   SPFs, PrimeTime is timing these arcs.  It should ignore these nets
   (treat them as constants).  This is a STAR now.
2. PrimeTime seems, at random, to not annotate nets with long names from
   the SPF.  This is critical for long single-fanout nets (unfortunately,
   ours happen to be CLK nets!!).  This is a STAR now.
3. The report_net and report_annotated_parasitics commands give different
   numbers of annotated nets/parasitics.  This is also a STAR now.
There is also an issue of PrimeTime writing out edges for IOPATH delays
for non-sequential cells -- for example, for the select-to-output path of
a MUX.  Can people enlighten me on this?  This of course happens even
after I switch on the -no_edge option in write_sdf.  I am still
investigating this and do not know if the problem is with the library or
with PrimeTime.  The reason this is important is that the latest versions
of NC-Verilog (almost the de facto standard for gatesims) do not honour
such constructs!!
Finally, a point which I hope people are not forgetting: SDFs generated
from SPFs need to be written with the identical constraints with which
timing is done on the SPF -- else results from gatesims (annotated with
the generated SDF) will not match the PrimeTime results.  This is because
of multiple paths and the distinctions made by case analysis.  Of course,
you also need to keep the numeric precision the same, or paths right at
the edge will fail because of rounding errors.
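Roughly, what I mean is something like this in pt_shell (a sketch only;
the constraint script, port, and file names are placeholders):

    # same session, same constraints, and same case analysis for both
    # the SPF-based timing run and the SDF that the gatesim will use
    read_parasitics chip_top.spf
    source constraints.tcl                       ;# the STA constraints
    set_case_analysis 0 [get_ports test_mode]    ;# placeholder example
    report_timing
    write_sdf -no_edge chip_top.sdf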
- Ajay Kumar Sinha
Silicon Systems India