Editor's Note: I'm chasing down two stories for my SNUG'03 Trip Report.
If you could answer either of these questions, I'd greatly appreciate it.
1.) If you're an Apollo/Astro user, what's the difference in performance
that you've seen running Apollo vs. Astro? How many instances flat
can you run in 24 hours in Apollo vs. Astro? What was the migration
from Apollo to Astro like for you? Painful? Easy?
2.) If you're a PhysOpt user, do you use PhysOpt as RTL-to-placed-gates
or as gates-to-placed-gates? Why?
And, as usual, if you need to be anon, just say "Make me anon" and I'll
honor it. Again, I'd greatly appreciate your help on these 2 questions.
- John Cooley
the ESNUG guy
( ESNUG 411 Subjects ) ------------------------------------------ [04/23/03]
Item 1: ( ESNUG 410 #2 ) Magma BlastFusion, Useful Skew, Scan, and OCV
Item 2: ( ESNUG 409 #1 ) Matt Weber's Paper About On-Chip Variation (OCV)
Item 3: ( ESNUG 406 #11 ) Conflicting Floorplan Compiler User Impressions
Item 4: Boston SNUG Call-For-Papers Abstracts Are Due Monday, April 28th
Item 5: Ersin's 18 Tips and Tricks and Gotchas for Avanti AstroRail Users
Item 6: A Boatload Of Readers React To Aart's VHDL End-of-Life Notice
The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com
( ESNUG 411 Item 1 ) -------------------------------------------- [04/23/03]
Subject: ( ESNUG 410 #2 ) Magma BlastFusion, Useful Skew, Scan, and OCV
> Using the tool in Magma, it gives excellent results, yet, from discussions
> with other Magma users, I know that some users disable the feature! I
> believe they disable it because they really don't understand the concept;
> or they have learned how to minimize clock skew and they don't want that
> hard-earned skill to be redundant.
>
> - Simon Matthews
> Paxonet Fremont, CA
From: Howard Landman <howard=user domain=riverrock.org>
Hi, John,
You can count me as a recent convert to useful skew. When I first
encountered the practice, it gave me the shivers because it added a
whole set of unwelcome complications to the timing analysis, and the
engineers doing it were (I thought at the time) treating this a little
bit cavalierly.
A B C
------- ------- -------
---|D Q|-- slow logic --|D Q|-- fast logic --|D Q|---
| | | | | |
| | | | | |
clock ----|> | --|>o-|>o----|> | ----|> |
| ------- | ------- | -------
| | |
--------------------------------------------
However the EDA support for useful skew has gotten much better in recent
years, and I now believe that it can and should be used to improve
timing. I've even seen cases where Magma BlastFusion has successfully
laid out and met timing on netlists with "impossible" timing constraints,
such as an input delay longer than the clock cycle time. (Hey, no problem,
just skew the receiving FF a little - as long as you don't create a hold
time problem!) This rather startled me the first time I noticed it, but
I'm getting used to it now. Probably in a couple more years I won't be
able to tolerate a tool that CAN'T do it.
I could still see disabling the feature if I were using a clock distribution
strategy, like a grid, which was inherently impossible to tweak that way.
But if I'm letting Magma do the clocks, there's no obvious reason to cripple
the BlastFusion tool.
Simon also wondered:
"One person though did express the idea that useful skew could make
on-chip timing variations worse. Do you have any thoughts on this?"
to which Jack Fishburn replied:
"There's no reason that a useful-skew configuration will have more, or
less, timing variation than a zero-skew configuration."
I have to disagree slightly with Jack here. If the useful skew is
created by adding more delay, such that the insertion delay of the clock
is increased, then one should expect that the OCV on that path will also
increase (possibly linearly with the delay, or possibly that divided by
the square root of the number of stages, depending on the exact nature
of the OCV source - see my earlier ESNUG 409 #1 post about this).
- Howard A. Landman
Riverrock Consulting Fort Collins, CO
---- ---- ---- ---- ---- ---- ----
From: Thomas Moehring <thomas.moehring=person company=infineon jot yon>
Hello John,
My company is using Magma's BlastFusion tool which supports useful skew.
It can add significantly to the timing optimization capabilities, e.g. to
automatically create early or late clocks for I/O registers, or, as in
Simon's example, steal some time from the fast path to relax the required
time of the slow path.
On the other hand, with full scan path design as we normally do, useful
skew can become very painful. Depending on the architecture of the scan
flipflops, any local clock skew between two adjacent flipflops in the
scan chain can cause, or increase, a hold violation, which will require
delay buffers for fixing. Applying useful skew extensively can result in
tons of delay buffers. In that case you may decide to switch off useful
skew, or at least restrict it to some 50 or 100 psec.
My approach using Magma BlastFusion is:
- insert the initial clock buffer tree ("run route clock ...")
- analyze timing violations, both setup and hold
- per clock, decide on the best strategy for clock tree optimization
("run gate clock ..."), minimize skew, or meet target latency, or
allow for useful skew, or none
Yes, sometimes no optimization is the best. Happy balancing!
- Thomas Moehring
Infineon Munich, Germany
---- ---- ---- ---- ---- ---- ----
From: Jon Stahl <jstahl=user company=avici aught prawn>
Hi, John,
Magma's use of useful skew does have the potential to make on-chip variation
worse, but mostly in comparison to a balanced clock tree (BCT) structure.
As an example, I taped out a chip in 2000 in 0.25 um applying "useful" skew
(not with Magma, but with the former Ultima ClockWise tool). I estimated
that this approach saved us up to ~1 nsec of worst-case slack and thousands
of nsecs of TNS, and certainly months off of our timing closure schedule.
At that time, zero-skew clock trees were typically built using a "balanced
clock" tree approach. This was popular because of its simplicity, ease of
implementation, and the fact that most former EDA tools were incapable of
the timing accuracy required to construct a zero-skew pure buffer tree.
In the BCT paradigm, the number of buffers from the root of the tree to any
leaf is constant. Further, BCTs were nominally built with a huge buffer at
the tree root driving large wire(s), perhaps big buffers at the first level,
and then smaller buffers at subsequent levels. Thus the depth of the tree
was shallow, and the clock path divergence small -- not only in number of
different levels, but in the number of cut points at which any two flops
were on different branches.
Downsides included adding balance loads, pre-planning the clock wires and
perhaps the root buffer (if an IO slot driver), power consumption, and EM
issues due to high current drive cells.
In 2000, the majority of ASIC vendors did not perform OCV analysis, both
because the processes didn't require it, and the commercial EDA tools
didn't support it.
These days, modern processes necessitate OCV constraints on the order
of 10-12%, and most tools support the analysis. Further, the tools are
now capable of building zero-skew pure buffer trees. And with bigger
designs having many more clock leaves, the depth of the tree and magnitude
of divergence are very large.
On a recent 0.18 um design with ~60K flops and a worst case 5 nsec insertion
delay, taking 10% OCV into account we had the potential for up to 0.5 nsec
of additional constraint.
Would implementing useful skew have made this worse?
Perhaps, but if we assume 1 nsec of extra skew and complete divergence,
we only lose back 100 psec (1 nsec * 10%). So it's a net win, unless
the variation happens to hit you on a cut point for an orthogonal tight
path where intentional skew can't provide additional margin -- a very low
probability scenario, I would think.
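The arithmetic above is easy to sanity-check; a minimal sketch using only the
example numbers quoted in the text:

```python
# Back-of-the-envelope OCV arithmetic, using the example values from the text.

insertion_delay_ns = 5.0   # worst-case clock insertion delay (0.18 um design)
ocv_fraction = 0.10        # 10% OCV constraint

# Additional timing constraint from OCV on the clock insertion path:
ocv_penalty_ns = insertion_delay_ns * ocv_fraction   # 0.5 ns

# If useful skew adds 1 ns of intentional skew with complete divergence,
# the extra OCV cost is only 10% of it, so the skew is still a net win:
extra_skew_ns = 1.0
lost_margin_ns = extra_skew_ns * ocv_fraction        # 0.1 ns (100 ps)

print(ocv_penalty_ns, lost_margin_ns)
```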
- Jon Stahl
Avici Systems N. Billerica, MA
( ESNUG 411 Item 2 ) -------------------------------------------- [04/23/03]
Subject: ( ESNUG 409 #1 ) Matt Weber's Paper About On-Chip Variation (OCV)
> Paul Zimmer asks: "If your path goes through 20 or so elements, and the
> variation is sort-of random, isn't all-min vs all-max a little extreme?"
>
> Yes it is. The mathematics of it would be that if you had N stages which
> all had equal delay, and you model OCV as independent random variables,
> then the variance goes up linearly with N and the standard deviation goes
> up as the square root of N. So you'd expect 16 equal stages to be about
> four times as bad as one stage.
>
> However any deviation from the above assumptions could make it worse. For
> example, some of the variation can be systematic rather than independent.
> Or not all the stages could be equal in delay. And some of these sources
> of variation are linear, not square-root. So a simplified but moderately
> realistic model of OCV might look something like:
>
> On-Chip Variance (OCV) = a*D + (b*D)/sqrt(N)
>
> where D is the total path delay from the point at which the two paths
> diverged. It seems that most tool (and silicon) vendors simplify this
> further to just a linear model. This isn't quite right, but it's not
> too unreasonable.
>
> - Howard A. Landman
> Riverrock Consulting Fort Collins, CO
From: Philippe Duquennois <philippe.duquennois=user domain=philips got mom>
Hello John,
I am writing about this OCV because I believe that EDA vendors should
investigate and develop what I would call statistical timing analysis.
Perhaps they already are? Various papers have been written on OCV in
transistor devices. One even correlates Idsat with the location of the
device on the reticle, and mentions a 15% variation of Idsat on the same
die (IEEE Transactions on CAD of ICs & Systems, Vol 21, No 5, May 2002,
page 544). That paper explains in much more detail what I am writing
below...
For a path with N stages at 0.15xD each, the total absolute error on
the delay is indeed sqrt(N) x 0.15 x D, but the relative error delta(D)/D
(or OCV factor) on the path delay is:
    sqrt(N) x 0.15 x D / (N x D)  =  0.15 / sqrt(N)
So the variation (std deviation) on each element is "smoothed" when there
are many elements in the path. This relationship holds (by the normal law)
when N is large enough (about 20) and/or the individual errors follow a
normal distribution. If N is smaller, the Student's t distribution should
be used, but the difference is small if N > 10.
For example, a path with 25 elements should have an OCV of 0.15/5 = 3%,
while a path with 16 elements should see about 4%.
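These figures can be reproduced in a few lines; a minimal sketch of the
0.15/sqrt(N) relationship, assuming independent, identically distributed
per-stage variations as in the derivation above:

```python
import math

# Relative OCV (delta(D)/D) on an N-stage path when every stage has an
# independent 15% (1-sigma) delay variation, per the derivation above:
#   absolute error = sqrt(N) * 0.15 * D
#   relative error = sqrt(N) * 0.15 * D / (N * D) = 0.15 / sqrt(N)

def path_ocv(per_stage_sigma, n_stages):
    return per_stage_sigma / math.sqrt(n_stages)

print(path_ocv(0.15, 25))   # ~0.03, i.e. 3% for a 25-element path
print(path_ocv(0.15, 16))   # 0.0375, i.e. ~4% for a 16-element path
```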
Today's commercial tools (PrimeTime, Pearl) apply the OCV value we specify
to entire paths without considering the number of their elements. This is
very pessimistic and unrealistic. However, Murphy's Law is always around,
and it may happen that all the elements in a path go in the same direction.
And if you have many paths in your design that are close to the required
period, this probability is increased. Now we are talking about confidence
levels... I hope that future tools will consider these effects when doing
STA, which will then be known as Statistical Timing Analysis. ;-))
- Philippe Duquennois
Philips Semiconductors Palaiseau, France
---- ---- ---- ---- ---- ---- ----
> Matt Weber writes "IBM is the only ASIC vendor I know of that requires
> this analysis". I know of at least one other, but I'm under NDA. :-)
> His discussion of common path pessimism removal was excellent, and this
> issue is critical in OCV analysis; so I'd recommend everyone read it
> through carefully and make sure they understand it.
>
> - Howard A. Landman
> Riverrock Consulting Fort Collins, CO
From: David Norris <david.norris=person company=legerity wrought qualm>
Hi John,
I'm interested in the Matt Weber OCV article referred to in Howard's reply.
Where can I find the article?
- David Norris
Legerity Austin, TX
---- ---- ---- ---- ---- ---- ----
From: Matt Weber <matt=man company=siliconlogic caught psalm>
Hi John,
I thought I'd add my Boston SNUG'02 paper and presentation to this
ongoing discussion on On-Chip Variation (OCV) in ESNUG. Those of us
doing IBM ASICs have dealt with OCV for years, so I thought I'd add
our experiences to this thread.
My paper starts by describing what on-chip variation is, its sources,
and the problems it can cause. It shows how to enable OCV analysis in
PrimeTime and what the resulting timing reports look like. An important
part of OCV analysis is "clock reconvergence pessimism removal," and
I've included a description of what that is and why it is important.
(Just in case its usefulness is not obvious from the 43 letter PrimeTime
variable name that turns it on.)
Most static timing analysis today does not include on-chip variation
analysis. However, as clock frequencies continue to increase and process
geometries continue to decrease, I think it will become more common. I
hope my paper can help people avoid some headaches as they make that
transition.
- Matt Weber
Silicon Logic Engineering, Inc. Eau Claire, WI
[ Editor's Note: Matt's paper is #44 of DeepChip Downloads - John ]
( ESNUG 411 Item 3 ) -------------------------------------------- [04/23/03]
Subject: ( ESNUG 406 #11 ) Conflicting Floorplan Compiler User Impressions
> In Floorplan Compiler, don't ever put a notch in the bottom left corner of
> your block (ie: don't make a rectilinear block that would notch out the
> traditional origin) if you are going to be using Physopt. Ex:
>
> ************************
> * *
> ***** *
> * *
> 0,0 ********************
>
> We had a block that looked like this and PhysOpt (no matter what version)
> completely fell apart on it.
>
> - Russell Petersen
> Scientific Atlanta Lawrenceville, GA
From: Maynard Hammond <maynard.hammond=user domain=sciatl taut sawn>
Hi, John,
We just released our first 0.13 um, 6-layer-metal chip, floorplanned with
the Synopsys Floorplan Compiler tool. Using FPC we were able to create a
very dense chip. Individual block utilization numbers were between 85%
and 95%.
These results were better than expected. Traditionally, our blocks have
been less utilized (70 - 85%). Our design has approximately 2.2 million
instances in it. We have several clocks ranging from 27 MHz to 250 MHz in
the design.
Before FPC, we traditionally used a standard ASIC flow. We would generate
gates and throw them over to the foundry for layout, route, etc. It wasn't
working, because timing iterations and design cycle times were going up.
We saw that channel widths were increasing, too. We determined that we
needed to control more of the flow. We felt this was especially true when
we went to a 0.13 um flow. As the designers of the blocks we also felt we
had more insight as to how best it should be floorplanned.
Because of this, we started looking into floorplanning tools. Ultimately,
we chose to team up with Synopsys. We liked the ability to insert
feedthroughs and to try an abutted flow.
To build the chip the new way, we used Design Compiler and Module Compiler
to synthesize all blocks. We brought the gates into FPC for floorplanning.
DC was used to group and ungroup blocks into their physical blocks. We
divided the chip into 7 large and 5 small hierarchical sub-blocks. The
individual blocks were farmed out to multiple engineers. Each engineer
floorplanned (FPC) and generated his placed gates with PhysOpt. Common
libraries were used by all 4 tools. The blocks with placed gates and the
top-level floorplan were passed to our foundry for routing with Avanti
tools. Our foundry accepted our placed-gates from PhysOpt and the top-level
floorplan from FPC.
We used an abutted flow. I believe this helps us at 0.13 um. I didn't have
to worry about long parallel routes through channels at the top level. FPC
inserts feedthroughs to eliminate over-the-block routing or channel routes.
It adds the necessary ports and nets to each sub-block automatically. This
gives PhysOpt and Astro the ability to work on placement, routing and
signal integrity issues at the block level. This is probably what made our
overall chip utilization so high. Previous chips used channels, which
erode overall utilization with their low cell count.
We did run into problems. Most were quickly fixed. Our biggest issue with
Synopsys tools is their ability to generate good budgets. This was not just
an FPC issue; it was a DC/PrimeTime issue too. Budgeting errors caused us
the most problems and cost time. We also had difficulty with FPC producing
structures that PhysOpt couldn't use. (See Russ's ESNUG 406 #11 comments
on rectilinear blocks with a notch out of the origin.) It's ironic that FPC
can write PDEF that PhysOpt can't read, or reads in incorrectly. We even
found a case where one tool would write PDEF that couldn't be read back by
the same tool.
We knew that Floorplan Compiler was new. Synopsys linked us up with their
factory so that as issues came up, we could quickly get a work-around or new
code. Synopsys has been very responsive.
Synopsys has just come out with version 2003.03. They have told us it has
new, improved budgeting algorithms, but I haven't had a chance to test
them yet. We also haven't tried to rerun the notched block. They have
improved PDEF over time, and I hate to criticize when I don't know if
there are still issues.
Anyway, our chip has more than 9.7 million gates (~20% memory). Once we
had the initial floorplan we were able to handle RTL changes and ECOs of
the chip -- often in just hours. From final RTL freeze to tape-out was
less than 3 months. FPC was a key tool in our flow. We're pleased with
our final results.
- Maynard Hammond
Scientific Atlanta Lawrenceville, GA
---- ---- ---- ---- ---- ---- ----
From: Neel Das <neel.das=person company=corrent sought brawn>
Hey John,
Any word on where all the Floorplan Compiler folks were hiding during the
San Jose SNUG'03? I was looking for them all around the conference area
as well as at the R&D mixer hall for quite some time. Not even the folks
at the mother-flow (aka Galaxy) table seemed to have seen them! Finally,
I managed to talk to *one* person from the FPC team!
Golly! Could it be that they're all working to help the tool mature and
fix bugs? There was a distinct lack of user FPC papers as well.
What's up, Synopsys? psst: is Jupiter winning the battle of the planners?
- Neel Das
Corrent Corp Tempe, AZ
( ESNUG 411 Item 4 ) -------------------------------------------- [04/23/03]
From: Brian Kane <bkane=user domain=cognio yacht alm>
Subject: Boston SNUG Call-For-Papers Abstracts Are Due Monday, April 28th
Hi John,
The deadline to submit abstracts for the 5th Annual SNUG Boston is Monday,
April 28. The conference dates are September 8-9, 2003. Everything you
need to know about submitting an abstract (but were afraid to ask), is
available at http://www.snug-universal.org/northamerica/na_boston.htm
Remember... we just need an abstract by Monday, a paragraph or two that
explains your proposed topic.
- Brian Kane, Boston SNUG Tech Chair
Cognio Gaithersburg, MD
( ESNUG 411 Item 5 ) -------------------------------------------- [04/23/03]
From: Ersin Beyret <ersin=person domain=reshape shot pomme>
Subject: Ersin's 18 Tips and Tricks and Gotchas for Avanti AstroRail Users
Hi John,
I work at ReShape. We are a new EDA company building a hierarchical chip
implementation system using Synopsys and Cadence physical design tools.
I've just completed embedding AstroRail into our system, which has already
been used on a 10 million gate SoC.
My overall impression of AstroRail is good. Compared to MarsRail it is
definitely superior. However, there are still issues to deal with and to
work around, especially if you have a sizeable chip:
Dealing with Memory and Runtime issues during AstroRail EM/IR analysis:
a) I've observed that if you run any of the "po" commands (e.g.,
poPowerAnalysis, poPGExtraction, poRailAnalysis) successively, the
memory used by the previous command is not released and the next
AstroRail command just builds on it. This can easily eat up the
memory resources of a server (especially during full chip runs)
unnecessarily and cause AstroRail to start swapping. The workaround
is to divide your power/EM/IR analysis into small chunks that
contain no more than one "po" command.
This effectively means you will "save and quit" after almost each
AstroRail "po" command and start a fresh AstroRail execution for
the next one.
For example, one power rail's full chip IR analysis (poRailAnalysis)
takes about 9 G of memory (even after using the hierarchical abilities
of AstroRail). If you run it back to back (VDD first and then GND in
the same run), AstroRail will try to allocate 18 G of memory. Even
if you have a 16 G memory SUN machine, this run will need to swap.
If you divide the run, your total runtime will be much shorter with
the same hardware resources.
b) The Linux binary of AstroRail seems more efficient in terms of memory
usage. If your runs can be met by a Linux server, use Linux machines.
They are much faster CPU-wise anyway. Another advantage of dividing
into small chunks is that some of those chunks can be small enough
(memory-wise) to be handled by a Linux server. (In our system
we have the ability to choose a Linux vs. a SUN machine per stage/chunk.
Hence, besides minimizing the need to swap, we can further reduce
runtimes by configuring some of the jobs to be sent to Linux servers.)
c) Use AstroRail's hierarchical capabilities to reduce memory usage. We
have enhanced our library creation utility to generate white-box
models for all hard macro IP, and our chip build system to generate
white-box models for all sub-blocks. This means, during full chip
analysis, this data will not be re-calculated but simply be re-used.
Since most of the data is pre-processed, you have less runtime and
memory usage during full chip analysis. With hierarchy, besides
preventing re-calculations, you also get two things: the possibility
of parallelism and smaller jobs.
d) Polygon shapes in CONN views chew up memory during poRailAnalysis.
Generate CONN views for only the layers of significant importance to
the power grid. (e.g., metal1-2-3 may be ignored for a 6 layer process)
e) With AstroRail, in order to properly use CONN views for hard macro IP,
you need to generate a CSF (current source file), an ASCII file
attached to the CONN view. After generation, you can see them under
the CONN directory as cellName:versionName_1. However, the number of
current sources in a CSF affects memory usage (think of a SPICE run
that has a resistance network and current sources at each node).
Hence you need to be careful when using the poGenCurrSource command.
By default it creates a current source at each metal1-metal2
intersection. You may want to use the intersections of the highest
two metal layers instead, which will probably give fewer current
sources. The power consumption value of the IP is by default divided
evenly among these current sources.
f) Use poCreateConnViewByHerc to generate your CONN view if you have a
Hercules license. It runs very fast and with low memory footprint.
It is also smarter when figuring out hierarchical connectivity.
g) Branch off non-critical stages into separate execution paths. E.g.,
our system uses AstroRail's reporting abilities to generate various
reports. All these can be done separately from the main execution
branch and save turnaround runtime for the main run.
h) Divide VDD and GND rail analysis into separate execution paths. Since
they use the same database, you can't really parallelize this, but the
separation provides on-demand execution where you can totally ignore
the GND rail for the first runs (e.g., in order to test the mechanics
of your flow, doublecheck the library data needed by power/EM/IR runs,
etc.). Later on you can independently pick and choose which rail you
want to analyze.
i) Also, you may want to use different voltage supply values for power
consumption, EM, and IR analysis. We have three totally independent
flows for that.
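Tip (e)'s even division of a macro's power among its current sources means
the per-source current (and the size of the resistance-network problem)
depends directly on how many intersections you pick. A sketch with
illustrative numbers -- the power and voltage values below are assumptions,
not tool defaults:

```python
# Current per source when a hard macro's total power is divided evenly
# among n_sources current sources, as tip (e) describes.
# The power/voltage numbers below are illustrative assumptions.

def per_source_current(power_w, vdd_v, n_sources):
    total_current_a = power_w / vdd_v    # I = P / V
    return total_current_a / n_sources   # even split across all sources

# A 0.5 W macro on a 1.2 V supply:
# metal1-metal2 intersections could mean thousands of sources...
many = per_source_current(0.5, 1.2, 10000)
# ...while the top two metal layers might give only a handful:
few = per_source_current(0.5, 1.2, 100)
print(many, few)
```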
A flat full-chip run to analyze power consumption and then the VDD rail
with AstroRail initially took us about 27 hours on a 16 G mem, 900 MHz SUN
machine (GND analysis runtime would be a similar amount). This was mostly
due to running out of memory during rail analysis, the consequent swapping,
and unnecessary recalculations. After my enhancements above, our runtime
to prepare white-boxes of all sub-blocks, analyze full chip power
consumption, and then the VDD and GND rails combined is only about 5 hours.
Since the run is divided into small chunks, rerunning rail analysis (e.g.,
for testing new p/g pad points) means running only the last segment, which
has about 45 minutes of turnaround time.
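The chunking strategy above can be driven by a small wrapper that launches
each "po" command in a fresh process, so each chunk's memory goes back to
the OS when it exits. This is only a sketch: the chunk script names and the
"astrorail_batch" launcher are hypothetical placeholders for however
AstroRail is actually invoked at your site.

```python
import subprocess

# Launch each AstroRail "po" chunk as a separate OS process so the memory
# held by one command is released before the next chunk starts.
# CHUNKS and the "astrorail_batch" launcher below are hypothetical
# placeholders for your site's actual scripts and invocation.

CHUNKS = [
    "power_analysis.scm",   # poPowerAnalysis only
    "pg_extraction.scm",    # poPGExtraction only
    "rail_vdd.scm",         # poRailAnalysis for VDD only
    "rail_gnd.scm",         # poRailAnalysis for GND only
]

def run_chunks(chunks, launch=None):
    """Run each chunk in its own process; return the list of chunks run."""
    if launch is None:
        # Placeholder launcher; replace with your real AstroRail command.
        launch = lambda c: subprocess.run(["astrorail_batch", c], check=True)
    done = []
    for chunk in chunks:
        launch(chunk)       # fresh process -> heap is freed when it exits
        done.append(chunk)
    return done
```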
Bogus/Unrealistic results during AstroRail EM/IR analysis:
a) MarsRail used to have a bug that generated bogus EM violations
whenever a metal piece imperfectly overlaps another metal piece on the
same layer, causing a small overhang. AstroRail does not seem to have
this problem. However, it still seems AstroRail has issues with
45 degree polygon shapes, the same as MarsRail did.
These kinds of bogus violations can only be identified visually, and if
you have a big chip, you may end up with hundreds of such violations
that you need to visually sort through to find the real ones.
Browsing using a full chip EM map is slow & frustrating.
Similarly, when poDisplayVoltageMap is executed, it takes a while to
load for a big chip.
b) I've found that AstroRail's current density calculation is sometimes
not reasonable. If you would like to doublecheck AstroRail, you can
find a region in your chip that has a single std cell connected to a
power rail, display power map, display EM map, hand calculate the
current density and compare with the number that AstroRail gives.
c) If you are using user-defined tap points and your tap points happen to
coincide with a block boundary (possibly with any cell boundary),
these points are ignored during AstroRail EM/IR analysis. You get a
warning for this in the log file, but it is very easy to miss. For
example, the AstroRail log file from your first run might be OK, but
in the next run some of the cells may have shifted so that their
boundaries now overlap with your tap points. You may not feel the need
to look at the log file again, but you should.
d) If you have a CONN view but didn't explicitly generate a CSF, AstroRail
seems to silently ignore that hard macro during EM/IR. Make sure to
have a CSF file for *each* CONN view. To test this, you can do a block
level IR run on a block that has a hard macro. Run it with and
without a CSF and compare the IR maps.
e) If, while defining hard macro power consumption, you use
defineCellInstancePowerByMaster or defineCellInstancePowerByInstance,
you should always double check that the values are actually being
recognized by dumping the instance power summary file.
Also, to verify that these values are actually being used during
EM/IR analysis, isolate a small test case (e.g., the block that
contains them) and run IR analysis with the value and value*1000.
If your results are not different, check if you have current sources
defined for the hard macros by using poDumpCurrSource. If not,
then create one using poGenCurrSource and try the experiment again.
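The hand calculation suggested in (b) is just the cell's current divided by
the rail's cross-sectional area. A minimal sketch with assumed numbers --
the current, rail width, and metal thickness below are made up for
illustration, so substitute your library's real values:

```python
# Hand-check of current density for one std cell on a power rail, as
# suggested in (b). All values are assumed for illustration; use your
# library's real metal thickness and the cell's actual current.

cell_current_a = 150e-6      # 150 uA drawn by the cell
rail_width_m = 1.0e-6        # 1.0 um wide power rail
metal_thickness_m = 0.5e-6   # 0.5 um metal thickness (process dependent)

cross_section_m2 = rail_width_m * metal_thickness_m
current_density_a_per_m2 = cell_current_a / cross_section_m2

# Convert to mA/um^2 to compare against the EM limit in your design rules:
print(current_density_a_per_m2 / 1e9)   # approx 0.3 mA/um^2
```

Compare the printed number against what AstroRail's EM map reports for the
same rail segment; a large mismatch is worth investigating.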
Other Misc Gotchas while using AstroRail:
a) poPVTSDFOut is available in the GUI but not functional.
b) poGenCurrSource generates CSF files that contain weird characters
in the "Cell Name" field of the header section of the file, where the
comments are. Sometimes a carriage return is one of these characters,
which breaks the line in half; the second half is then no longer a
comment and causes a syntax error in poPGExtraction. We postprocess
our CSF files before poPGExtraction to eliminate this problem.
c) poCreateConnViewByHerc does not honor layer ignoring. That is,
even if you specify in AstroRail's GUI that certain metal layers
should be ignored, it silently processes them anyway. As a result,
your CONN views silently contain metal layers that you do not want
(e.g., lower metal layers), which increases memory usage (due to the
higher polygon count) and runtime. To work around this, we remove
those metals from the CONN view with a Scheme script afterwards.
d) If your power/ground pads are one level of hierarchy down in a separate
block, you need to use user-defined tap points corresponding to the
bondpads. However, AstroRail has a silent bug here. It seems to use
the bondpads, but the connection between the padblock and the coreblock
is not properly made during simulation. This can be caught if you zoom
in on the block boundary in the IR-drop map and look at the color
change from one block to the other. If the color change is abrupt,
there is a chance this is happening. When properly connected, the
color change in the IR map should be gradual. We pick our tap points
on the power/ground ring instead.
Also, one thing I'd like to note is that AstroRail doesn't provide a
"differential IR map", which would flag local peaks of IR drop in the
chip. This could help debug localized p/g connection problems which
cannot easily be seen in a conventional IR map, since they would be
crowded out by the surrounding data. Nor would they be caught easily
by the ASCII output that AstroRail provides, especially if the values
involved are below threshold. Even if the absolute IR values fall
within spec, relative data like this can be very beneficial.
- Ersin Beyret
ReShape, Inc. Mountain View, CA
( ESNUG 411 Item 6 ) -------------------------------------------- [04/23/03]
Subject: A Boatload Of Readers React To Aart's VHDL End-of-Life Notice
> Aart replied that his R&D group wasn't developing any new VHDL-based
> tools, but he also said it will take years to phase out VHDL because
> leaving customers in the lurch would be bad form. In short, he wasn't
> abandoning VHDL as much as promoting System Verilog. "This is a big
> statement. We are putting the Synopsys weight behind this language for
> RTL plus design," said Aart. "I do believe in the long term, though,
> that System Verilog will be the dominant language."
>
> So after years of the Verilog vs. VHDL wars, in one speech, Aart had
> kicked VHDL out of the big-money ASIC flows. And VHDL was now the new
> Latin; a dead language supported only by a few obscure holdouts in the
> small-money FPGA world. Verilog (make that a beefed-up Verilog) had won.
>
> - from http://www.eedesign.com/columns/industry_gadfly/OEG20030407S0056
From: David Bishop <david.bishop=user company=kodak naught lawn>
I was there when Aart gave this speech, and I got the same message.
However, I saw this as a fairly desperate attempt by Aart to get System
Verilog acceptance. Verilog just isn't a system level tool. People just
don't do very many ASICs any more. They do FPGAs. FPGAs are far more
cost effective these days. Most FPGA designs are done with VHDL.
- 3 years ago, my group did 5 ASICs, and 20 FPGAs
- 2 years ago, we did 2 ASICs and 25 FPGAs
- last year, we did 0 ASICs and 20 FPGAs (some big ones)
- so far this year, 3 ASICs (all small ones) and probably at least
30 FPGAs. One of the ASICs we are doing will most likely be an FPGA.
The NRE necessary for the Synopsys tools just is not worth it, especially
when there are now other companies that have better and cheaper synthesis,
and when you can use older technologies that do not require the
sub-micron tools.
I really think that Aart is blowing smoke here. However, as someone who
has landed on his feet several times now, I plan to hedge my bets.
VHDL is by no means Latin yet. There are still plenty of VHDL legions out
there counterattacking the Verilog Visigoths.
- David Bishop
Kodak Rochester, NY
---- ---- ---- ---- ---- ---- ----
From: Guido Kinast <guido.kinast=person domain=siemens lot psalm>
Hi John,
When I read the "VHDL, the New Latin" headline of your email I thought:
"Wow, VHDL will be the source of many new developments in the EDA world.
It will be used on and on."
After all, Latin was the starting point for a lot of modern languages - and
it's still used after more than 2,500 years! But the article argued the
contrary - quite odd. So please make sure you pick the right comparisons
in the future.
- Guido Kinast
Siemens
---- ---- ---- ---- ---- ---- ----
From: Neel Das <neel.das=man company=corrent sought prom>
Hi, John,
Just read your piece on Aart's VHDL announcement. Guess it's up to us, the
user community, to incentivize Synopsys and other EDA vendors one way or
the other.
His speech from a few years ago about Windows-support for EDA tools comes
to mind. Anyone remember Aart's "DesignWare will rule the world" speech?
- Neel Das
Corrent Corp. Tempe, AZ
---- ---- ---- ---- ---- ---- ----
From: Brian Dickinson <bdickins=user domain=esperan brought gone>
Hi John,
Many thanks for the heads-up on Aart De Geus' latest pronouncement. It
caused great amusement here in the office, particularly in the glee with
which you report the death of VHDL & your choice of two completely unbiased
commentators in Cliff and Stu.
A few intriguing questions are raised. Doubtless better qualified engineers
will debate Aart's assertion that System Verilog is 100% backwards
compatible, but I'd like to know how Aart expects the thousands of European
and US VHDL designers to react? Are we supposed to be thrilled at having to
switch languages? Presumably Synopsys' decision has nothing to do with the
completely lame VHDL simulator they are currently offering... Maybe this is
just another one of those Synopsys "Hype Today, Gone Tomorrow" initiatives
like Behavioral Compiler, SystemC Synthesis, etc etc.
Finally shame on you for dissing FPGA designers. For your punishment I
suggest you learn by heart Richard Goering's "EDA isn't just ASIC's" article
http://www.eedesign.com/columns/tool_talk/OEG20030127S0022
Seriously tho' - many thanks for the effort you put in on ESNUG & DeepChip.
Without you we would all be a lot less informed...
- Brian Dickinson
Esperan, LTD.
---- ---- ---- ---- ---- ---- ----
From: [ The Iraqi Information Minister ]
John,
This is not for attributed publication.
VHDL may be dead.
Verilog may be dead.
But you forget the obvious possibility: Synopsys may be dead. Many people
are looking at Synopsys and simply dumping it. Way too expensive; the
budget to maintain their stuff is a bankbuster. Way too hard to use.
Takes way too much training. And takes way too much effort to keep
competence at it.
One possible future is that they will wither, selling ultra-expensive tools
to a small niche of engineers, which will drive the price even higher; the
documentation will get worse, and Synopsys will be in a death spiral.
- [ The Iraqi Information Minister ]
---- ---- ---- ---- ---- ---- ----
From: Steve Weir <weirsp=person domain=atdial.net>
John,
I think the handwriting has been on the wall for VHDL for a long time.
Now I just wish we could get past some of the things in Verilog that are
still a grand pain in the rear, but trivial in VHDL - if, as always seems
to be the case with VHDL, verbose.
- Steve Weir
---- ---- ---- ---- ---- ---- ----
From: [ Afraidy Cat ]
Keep my identity to yourself, Cooley. I may need a job at Synopsys some
day. I walk the walk, having taught a grad VHDL class (admittedly with
increasing Verilog content) at a local university for 7 years.
// point-counterpoint mode ON
John, you ignorant slut. You forget your history. Had there been no IEEE
VHDL then Verilog would have remained a proprietary property of Cadence,
licensed out to a few start-ups at usurious rates, and the entire industry
that guys like us make a living off of would have turned out far smaller
and less innovative, though certainly more profitable for Cadence.
It was the positive threat of a growing, open-market, diverse, cheaper
VHDL-based toolset that nudged a reluctant Cadence into relinquishing
Verilog to the IEEE (and btw THAT's why Joe calls VHDL a $400M mistake)
and setting the stage for... System Verilog as a standard. And every bit
as complicated as VHDL. :-(.
And another thing, the VHDL LRM sets the standard for defining a bullet-
proof execution model. We don't see VHDL simulators getting different
results too often. Aart can say what he wants. He might even be right.
But it's the marketplace that decides these things. One language is a
utopian dream of tool developers. As soon as you declare victory somebody
pops up with a newer, better one. Go figure.
// point-counterpoint mode OFF
- [ Afraidy Cat ]
---- ---- ---- ---- ---- ---- ----
From: [ TI Used To Make Bic Pens You Know ]
John,
If you allude to this don't use my name, there are too many "politicals"
here at Texas Instruments that I would have to answer to.
Not that I can dance, but I did a little jig when I read your VHDL is Dead
column in EE Times yesterday. It took a little bit of time, but it's good
to see consolidation into a space where there really was no room for two
languages. That's my opinion only, not shared by everyone here.
I'm going to have to do some reading up on System Verilog, but I suspect
that System Verilog has many constructs that are much higher level than
Verilog (anyone could assume this). Doesn't this mean SystemC and Vera
might be in danger?
Specifically, if you can now code your verification environment in System
Verilog, then why oh why would you want to bind in another language
simulator through the PLI to do verification?
Guess who I'm questioning?
- [ TI Used To Make Bic Pens You Know ]
---- ---- ---- ---- ---- ---- ----
From: Sue Vining <s-vining=engineer company=ti grot guam>
John,
Until Verilog supports function overloading, operator overloading, strings,
pointers, enumerated types, constrained-range integers, aliases, records,
pass by reference to "procedures", wait on signal for time, wait until
condition for time, asserts, mapped procedure and function calls, and
whatever other features I cannot think of off the top of my head, VHDL
will not go away.
Much of the extensive effort to create dedicated languages for assertion
and test are for Verilog users. VHDL can easily accomplish most of these
tasks with wait and assert statements.
VHDL is to Verilog, what Verilog is to schematic capture.
- Sue Vining
Texas Instruments
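[ Editor's Note: For readers who haven't touched VHDL in a while, here's a
  minimal, hypothetical sketch (the entity and signal names are made up for
  illustration, not taken from Sue's designs) of a few of the features she
  lists that plain Verilog lacks: enumerated types, records,
  constrained-range integers, built-in asserts, and timed waits. - John ]

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity features_tb is
end entity;

architecture sim of features_tb is
  type state_t is (IDLE, RUN, DONE);     -- enumerated type
  type packet_t is record                -- record (no Verilog equivalent)
    addr : integer range 0 to 255;       -- constrained-range integer
    data : std_logic_vector(7 downto 0);
  end record;
  signal state : state_t := IDLE;
begin
  process
    variable pkt : packet_t := (addr => 42, data => x"A5");
  begin
    state <= RUN;
    wait for 10 ns;                      -- timed wait
    assert state = RUN                   -- built-in assert statement
      report "state did not advance" severity error;
    state <= DONE;
    wait for 10 ns;
    assert pkt.addr = 42 severity failure;
    wait;                                -- suspend the process forever
  end process;
end architecture;
```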
---- ---- ---- ---- ---- ---- ----
From: Fred Hinchliffe <f.hinchliffe=user domain=att.net>
Hi, John,
I remember attending government sponsored VHSIC workshops in the days before
the IEEE 1076 VHDL standardization effort, 1984-1986 time frame. Prabhu Goel
was there, tirelessly promoting Verilog on his own time, and presumably at
his own expense, to whoever was willing to spend an hour in a classroom
listening to his maverick ideas. I think it was when the initial hack at
VHDL was rejected by the design community, before the IEEE standardization
got underway, that Verilog achieved enough strength to finally make it,
although it may not have been apparent at the time.
The full unfolding required Cadence to acquire Gateway and required Synopsys
to be founded and grown to size. It says two things (at least): the effect
of initial conditions on events far distant in time can be huge; and the
effects of a determined visionary can also be huge.
- Fred Hinchliffe Still River, MA
---- ---- ---- ---- ---- ---- ----
From: Aime Watts <aimew=person domain=sprintmail nacht tom>
John,
I just read your article (INDUSTRY GADFLY: "VHDL, the new Latin") and was
wondering about the impact of such a bold move on the DoD industry, which
at last glance was exclusively VHDL. It was my understanding that the DoD
used it because of its simplicity (one language does it all) and for
maintenance. Do you think that multi-billion dollar industry will have
any say on what lives and dies?
- Aime Watts Somewhere, NH
---- ---- ---- ---- ---- ---- ----
From: [ A Cadence Employee ]
John, please keep me anon.
A few points:
1) Does the US Department of Defense still insist on VHDL? They're pretty
"big-money", are they not?
2) Synopsys != World
3) Everyone knows that Synopsys lags behind the other big two when it
comes to mixed-language simulation. Synopsys was hardly ever at the
forefront of VHDL simulation anyway. Now they're just admitting that
they never really liked it much anyway. (Ironic, considering who
used to own Verilog.)
Even those who use VHDL for RTL often tend to use Verilog for netlists,
simply because it is more compact, so they won't lose any back-end stuff.
- [ A Cadence Employee ]
---- ---- ---- ---- ---- ---- ----
From: Howard Wanke <hjwanke=man domain=hotmail clot balm>
John,
Given what I have seen regarding VHDL, Verilog and System Verilog, I think
it will be years before System Verilog can even dream of replacing VHDL.
This assumes that System Verilog is totally backwards compatible with
Verilog, otherwise it won't replace Verilog either.
- Howard Wanke
Boeing
---- ---- ---- ---- ---- ---- ----
From: Gregg Lahti <gregg.lahti=user company=microchip knot awn>
Hi, John,
Great article, once again you're on the money. It was pretty evident from
Aart's presentation that VHDL is now legacy baggage in the RTL/design
methodology. Someone in the audience asked Aart about VHDL enhancements
and referred to the EU community asking for enhancements from Accellera.
True?
I would suspect you'll get a backlash from the VHDL supporters on this
one. :^)
- Gregg Lahti
Microchip Technology, Inc. Chandler, AZ
---- ---- ---- ---- ---- ---- ----
From: Tim Davis <timdavis=person company=aspenlogic spot gone>
Hi, John,
My opinion is that the productivity gap in EDA was caused by Synopsys not
VHDL. If we are lucky, neither VHDL nor Verilog nor their derivatives will
stand the test of time. However, if VHDL is in fact the new Latin, then it
will far outlast Verilog. Latin has defined the scientific and cultural
words of society for millennia and doesn't show any signs of being replaced.
No country uses it as its official language; however, it is still taught in
high schools and centers of higher learning all across the United States.
In contrast I suspect that French classes in public high schools will see
a dramatic decline.
- Tim Davis
Aspen Logic, Inc.
---- ---- ---- ---- ---- ---- ----
From: Mike Ciletti <ciletti=student university=eas.uccs.edu>
Yay, John!
Ding, dong, the wicked VHDL witch is dead!!!
- Mike Ciletti
University of Colorado Colorado Springs, CO
---- ---- ---- ---- ---- ---- ----
From: Oren Rubinstein <orubinstein=engineer company=nvidia hot bomb>
John,
I think VHDL has been dead for a while. This was just the official act of
signing the death certificate. :-)
- Oren Rubinstein
Nvidia
---- ---- ---- ---- ---- ---- ----
From: [ Chicken Man ]
This is the anonymous chicken man again ..
Remember John, that this was an American conference. Aart wouldn't have
said the same thing at a European conference. I personally believe that
this is Synopsys' marketing, which wants people to buy the newer, more
expensive tools, in pretty much the same way Microsoft builds the Windows
hype in the OS world asking customers to upgrade.
That said, we're evaluating a new version of the Presto DC VHDL Compiler.
Our first impression is that elaboration times and memory requirements are
significantly better, although some bumps need to be ironed out.
At a European conference, I'm sure Aart would have highlighted this and
downplayed the Verilog-only future. One of the reasons why Synopsys is
doing well, is that they are smart business people. They wouldn't want
to kill the hen that lays the golden egg in a hurry.
- [ Chicken Man ]
---- ---- ---- ---- ---- ---- ----
From: Ludovic Rota <lrota=european company=integration not calm>
John,
This is a very good opportunity for a new company to grow and replace
Synopsys in supporting VHDL tools. Personally, knowing both Verilog and
VHDL, I will not give up VHDL because of a few meaningless words spoken by
an arrogant CEO.
For me, this is just the latest action in the new cold war taking place
between Europe and the USA; as everyone knows, VHDL is extremely popular in
Europe, and Verilog is barely known. When the brainless, childish CEO
generals decide to stop the war of words and grow up, the world will surely
be a better place to work.
With all the genius of humankind, there are better things to do, so much
still to create, even in the ASIC world.
I bless the "free world" of the GPL, where we can avoid such people as Aart.
He surely has a lot of time to waste. I don't.
- Ludovic Rota
Integration Associates, Inc. Mountain View, CA
---- ---- ---- ---- ---- ---- ----
From: Ashish Kulkarni <kashish=engineer domain=crosswinds.net>
Hey John,
Back in the mid-90s, when I was a fulltime Masters student at the Indian
Institute of Science, Bangalore, we had to take a course in digital systems
design with VHDL as the main RTL language. I had just resigned from a SW
job for a major telecom company and I had some professional experience
learning the flexibility of C & C++. During the course, I was quite
frustrated having to use VHDL as I found it had too many structural layers.
Quite simply, VHDL was like going to a government office for a routine task.
I never got to liking VHDL, but I completed that course. If I had a choice
between VHDL and Sanskrit, I would have chosen Sanskrit!
My impression is that VHDL was popular in Europe because the European ego
always has to have a standard different from the rest of the world.
I have been using Verilog for the last 5 years. I'm excited about the new
features in the upcoming System Verilog. I think it's time to "retire" VHDL.
- Ashish Kulkarni
Hughes Network Systems
---- ---- ---- ---- ---- ---- ----
From: [ A Synopsys Employee ]
Hi John,
Another analogy -- you can think of VHDL as Verilog's sister, who wasn't
sexy enough to catch the attention of the typical male engineer. :)
- [ A Synopsys Employee ]
---- ---- ---- ---- ---- ---- ----
From: Daniel Leu <daniel=person company=inicore naught con>
John,
Verilog only won by adding all of VHDL's features to itself. So it's not
really Verilog that won. :)
- Daniel Leu
Inicore
---- ---- ---- ---- ---- ---- ----
From: Sreesa Akella <akella=student school=engr.sc.edu>
Hi John,
My name is Sreesa Akella. I receive your mails from time to time. Your
recent email about "VHDL, the new Latin" hit me really hard, as all
throughout my graduate study I have been led to believe that VHDL is here
to stay for a long time. This is why I concentrated all my energies on this
language and never really learned or designed much in Verilog. Now after
reading your email I feel that I need to change my approach and completely
shift my energies to Verilog. Do you think this would be a smart move?
- Sreesa Akella
University of South Carolina
---- ---- ---- ---- ---- ---- ----
From: John Sanguinetti <jws=person company=forteds fought bon>
Hi, John,
VHDL has been on life support for several years now. All Aart did was to
pull the plug.
- John Sanguinetti
Forte Design Systems San Jose, CA
---- ---- ---- ---- ---- ---- ----
From: Justin Spangaro <justin=engineer company=spangaro thought lawn>
Hi John,
Landmark article. Good one.
- Justin Spangaro Australia
---- ---- ---- ---- ---- ---- ----
From: Aran Idan <aran.idan=person domain=flextronics cot yawn>
Yes, John, but who speaks Latin these days?
- Aran Idan
Flextronics Semiconductor
---- ---- ---- ---- ---- ---- ----
From: Francis Wolff <wolff=professor university=eecs.cwru.edu>
Hi John,
What happened to SystemC?
- Frank Wolff
Case Western Reserve University
---- ---- ---- ---- ---- ---- ----
From: Juergen Baesig <juergen.baesig=docktor university=fh-nuernberg.de>
Dear John,
Did Aart de Geus say something about the future of SystemC?
- Juergen Baesig
FH-Nuernberg Germany
---- ---- ---- ---- ---- ---- ----
From: Takashi Hasegawa <thasegaw=engineer company=jp.fujitsu jot don>
Hi John,
How about SystemC ?
- Takashi Hasegawa
Fujitsu Akiruno Technology Center Japan
---- ---- ---- ---- ---- ---- ----
From: Alan Strelzoff <astrelzo=user domain=cadence fraught qualm>
John,
Do you think this means that SystemC also will have only a temporary life?
I think that once Synopsys gets System Verilog out, they will begin to
de-emphasize SystemC as well, and position System Verilog as "all that you
need" in a single language. What do you think about this?
- Al Strelzoff
Cadence
---- ---- ---- ---- ---- ---- ----
From: Herbert Kargan <hkargan=person company=emsdevelopment not fawn>
Hello John,
Years ago, when Ada was an up-and-coming shooting star in the software
world, it was said that C and C++ were dead - to be buried in the same
family plot as Basic. Where is Ada now? Used in military applications.
C, C++ and Basic in one form or another abound. This is a case of the
development SW
companies trying to drive the market. While it may come true they may have
to wait until the present day engineers have gone to their final reward. To
paraphrase Mark Twain "The rumors of VHDL's death may be exaggerated."
- Herbert Kargan
EMS Development, Inc.
---- ---- ---- ---- ---- ---- ----
From: Ron Goodstein <rongood=user domain=world.std aught pomme>
John,
VHDL is mandated by the US government for defense work. It is also the
choice for FPGA design. It has always been second-tier to Verilog, and will
continue to be. It may take 20 years for it to fade out, but it will be
around for a long time, as it has been around for a long time now.
- Ron Goodstein
First Shot Logic Massachusetts
---- ---- ---- ---- ---- ---- ----
From: Andy D Jones <andy.d.jones=engineer company=lmco pot mom>
Hi, John,
You failed to mention that an HDLcon '01 presentation by MGC CEO Wally
Rhines showed that VHDL was the #1 language for FPGA development, and
that Verilog was not even #2! (Verilog was #3, behind PALASM)
But, you also consider VHDL "a dead language supported only by a few
obscure holdouts in the small-money FPGA world". Interesting, especially
since Synopsys' own FPGA Compiler II is a distant 3rd in the FPGA synthesis
tool market. I guess it is a small-money market for Synopsys. That's what
they call markets where their tools aren't doing very well. I'd bet a few
other vendors would beg to differ on the classification of that market.
I'm sure productivity will increase when the Verilog users have access
to the same language features VHDL has had for a decade or more. Just
think where they'd be if they'd used VHDL all along!
- Andy D Jones
Lockheed Martin Moorestown, NJ
---- ---- ---- ---- ---- ---- ----
From: Paul Ramondetta <paul.w.ramondetta=user domain=lmco hot bomb>
John,
I guess Synopsys is following the old GE edict...
If you're not #1 or #2 and you can't fix it, get out of the business!
- Paul Ramondetta
Lockheed Martin Moorestown, NJ
---- ---- ---- ---- ---- ---- ----
From: Dave Chapman <dave_c=user domain=goldmountain wrought balm>
Hi, John,
I saw Aart give the lunch-time talk at the OpenVera Developer's Forum
earlier today. He started by saying that it's not true that VHDL is "like
Latin", and that they would continue to support it for at least 5-10 years.
It definitely sounded like "Goodbye" to VHDL.
One of the slides he put up showed that VHDL goes away in 2003. What caught
my eye was that it also shows Vera, OVA, and PCL going away in 2004.
There were 47 people there (including Synopsys employees & panelists). Last
year, I counted 89. It doesn't look good for Vera or VHDL.
- Dave Chapman
Goldmountain Consulting Sebastopol, CA
---- ---- ---- ---- ---- ---- ----
From: Jim Evenstad <jevenstad=person company=tsi naught tom>
Mr. Cooley,
I just read your article "VHDL, the new Latin", in the latest EETimes. I am
an FPGA designer and haven't done an ASIC in 20 years. We exclusively use
VHDL, and I am responsible for recommending PLD development trends; this is
very important to us. I was disturbed by your article quoting Aart de Geus
about the demise of VHDL. Was your closing paragraph cynicism or is it what
you believe will happen? Do you see System Verilog taking over FPGA design
or will it stay with ASIC people? If so, what time frame?
- Jim Evenstad
TSI, Inc. St. Paul, MN
---- ---- ---- ---- ---- ---- ----
From: [ A Synopsys Employee ]
Hi John,
It's not absolutely true that Synopsys' R&D group isn't developing any new
VHDL-based tools. They are, in fact, developing a *new* Presto VHDL Compiler.
It's no big secret that it is being developed from scratch.
Keep me anon please, if you decide to publish this.
- [ A Synopsys Employee ]
---- ---- ---- ---- ---- ---- ----
Editor's Note: Go to http://www.DeepChip.com/gabe_vhdl.html if you want
to see another more personally humorous story relating to this. - John
============================================================================
Trying to figure out a Synopsys bug? Want to hear how 16,683 other users
dealt with it? Then join the E-Mail Synopsys Users Group (ESNUG)!
!!! "It's not a BUG, jcooley@TheWorld.com
/o o\ / it's a FEATURE!" (508) 429-4357
( > )
\ - / - John Cooley, EDA & ASIC Design Consultant in Synopsys,
_] [_ Verilog, VHDL and numerous Design Methodologies.
Holliston Poor Farm, P.O. Box 6222, Holliston, MA 01746-6222
Legal Disclaimer: "As always, anything said here is only opinion."
The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com