Editor's Note: Sorry for the late restart of ESNUG for 2001. When I did
that Physical Synthesis Tape-Out Census and Recount, it totally messed up
my consulting schedule for the month of December. I've just now caught
up (after doing a lot of hustling!). Life's back to normal now...
- John Cooley
the ESNUG guy
( ESNUG 363 Subjects ) ------------------------------------------- [01/25/01]
Item 1 : ( ESNUG 338 #1 ) Customer Endorsement Of Chip Architect Retracted
Item 2 : ( ESNUG 362 #2 ) Golson Warns Of One Minor Gotcha With DC Presto
Item 3 : You *Must* Re-read PDEF Into Your DB Before Using PhysOpt 2000.11
Item 4 : Newbies Struggle Trying To Control VCS from the PLI Interface
Item 5 : How To Tell A Device From The Extracted View In Cadence Diva LVS
Item 6 : Stupid Default DC Clock Settings And Tcl [get_object_name $thingy]
Item 7 : Can DC Shell Circuit Optimizations Be Disabled For IBM Designs?
Item 8 : ( ESNUG 362 #8 ) Janick Spanked For Dissing Superlog & C++ Design
Item 9 : PrimeTime 2000 Uses Dangerously Optimistic Models For SPF Timing?
Item 10 : Has Anyone Used RealChip? Are They Good Or Bad News To Work With?
Item 11 : Our Cadence Hierarchical PBOPT Physical Chip Design Results Suck!
Item 12 : ( ESNUG 361 #1 ) Thad Follows Up On His 2 PKS TSMC Tape-Outs
Item 13 : How Can I Now Model Dual-Edge Triggered Flip-Flops In LIBERTY?
Item 14 : How To Insert Pass-Thru Signals In Hierarchical Physical Designs?
The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com
( ESNUG 363 Item 1 ) --------------------------------------------- [01/25/01]
Subject: ( ESNUG 338 #1 ) Customer Endorsement Of Chip Architect Retracted
> We decided to look at Chip Architect because of certain key features.
> Like a lot of other folks these days, we also believe that hierarchical
> layout is the way to go. Chip Architect promised a natural way to do
> this, starting not only at the gate level, but the ability to do planning
> at the black box and RTL level, too. The tight integration between
> placement & timing that Chip Architect promised really got our attention.
>
> - Jon Stahl
> Avici Systems N. Billerica, MA
From: Jon Stahl <jstahl@avici.com>
Hi John,
I ran into problems very late in the game with Chip Architect and spent some
time working with Synopsys to try and resolve them, and finally gave up.
I had to use a combination of various other tools to get timing closed and
the design taped out.
For those interested, the major problem that I ran into was an involved one
and may no longer apply: although the placement Chip Architect produced was very
good, it did not completely meet timing and the ECO timing improvement
features of the tool were broken.
I managed to write a complex script to have Chip Architect do
repeater insertion, but it would only work after I flattened the entire
design (a hierarchical attempt only produced core dumps). This resulted in
timing being met, but led to further misery as the tool only had the ability
to produce a flat netlist from the flattened physical hierarchy and did not
keep separate logical and physical views. This in itself was ugly, but only
a show-stopper when we attempted to run several other tools which couldn't
handle a totally flat netlist for a design of that size.
So my apologies to the community for those doing due diligence on tools and
only seeing my pre-tape-out good news report without the later accompanying
bad news. It took a while for the dust to settle, and my job was to get the
design out.
I have heard that Chip Architect may be re-targeted as a front end to
Physical Compiler (which wasn't available at the time of my original
posting) and priced accordingly, as Synopsys makes its attempt to produce
a full P&R solution. But we are not currently using the tool.
- Jon Stahl
Avici Systems N. Billerica, MA
( ESNUG 363 Item 2 ) --------------------------------------------- [01/25/01]
Subject: ( ESNUG 362 #2 ) Golson Warns Of One Minor Gotcha With DC Presto
> Presto makes me nervous. Changing the way MUXes and arrays are translated
> to gtech sounds like two different circuits to me. Infer enumerated types
> should be defaulted to off to be compatible with earlier releases.
>
> - Dennis Milton
> Stratus Computer Marlboro, MA
From: Steve Golson <sgolson@trilobyte.com>
Hi, John,
The Verilog netlist reader available with the new Presto HDL Compiler is
great! It's much faster and the memory requirements are far less than the
old netlist reader.
However, after you've read in your netlist, the current_design points to the
*first* module read from the netlist file, not the last one! This is
backwards from how the old HDL compiler worked. So, be sure to do a
current_design to the top-most module after you read in the file.
As explained in the 2000.05 release notes, here's how you enable the new
netlist reader:
set enable_verilog_netlist_reader "true"
read -f verilog -netlist my_huge_file.v
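For anyone cutting and pasting, the complete sequence looks like this (the
top module name "my_top" is just a placeholder for your own design):

```tcl
set enable_verilog_netlist_reader "true"
read -f verilog -netlist my_huge_file.v

# Presto leaves current_design pointing at the *first* module in the
# file, so move it to your real top before doing anything else:
current_design my_top
```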
Enjoy!
- Steve Golson
Trilobyte Systems
( ESNUG 363 Item 3 ) --------------------------------------------- [01/25/01]
From: [ A Boston Synopsys PhysOpt AC ]
Subject: You *Must* Re-read PDEF Into Your DB Before Using PhysOpt 2000.11
Hi John,
I just wanted to give you a heads up on a change that will affect nearly
every Physical Compiler customer. With the 2000.11 (PSYN2.0) release, the
database schema has been changed to handle distance units down to 0.001
microns. When PDEF is read in, Physical Compiler will automatically extract
the appropriate database unit to use with no user interaction required.
You must re-read PDEF into your design DB before using the 2000.11 release!
We have also tried to be consistent in terms of units used throughout the
UI. Therefore, the following variables and commands have changed from
using design database units (typically 0.01um) to always using 1um:
set_wire_capacitance_multiplier
set_wire_resistance_multiplier
set_wire_min_capacitance_multiplier
set_wire_min_resistance_multiplier
physopt_horizontal_capacitance_multiplier
physopt_horizontal_min_capacitance_multiplier
physopt_horizontal_resistance_multiplier
physopt_horizontal_min_resistance_multiplier
physopt_vertical_capacitance_multiplier
physopt_vertical_min_capacitance_multiplier
physopt_vertical_resistance_multiplier
physopt_vertical_min_resistance_multiplier
lbo_vertical_capacitance
lbo_horizontal_capacitance
lbo_vertical_resistance
lbo_horizontal_resistance
If your scripts use any of these commands or variables you must change their
value to reflect the fact that they now use units of 1um. Thus, you will be
changing these values by multiplying by 100x. If you forget to do this,
expect your timing numbers to be very small.
You must make sure that your RC values are specified using distance units of
1 micron!
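To make the 100x change concrete, here's a hypothetical before/after (these
values are made up; only the scaling matters):

```tcl
# 2000.05 and earlier: multipliers in design DB units (0.01 um here)
set_wire_capacitance_multiplier 0.0002
set_wire_resistance_multiplier  0.0030

# 2000.11 (PSYN2.0): same physical values, now per 1 um, so 100x larger
set_wire_capacitance_multiplier 0.02
set_wire_resistance_multiplier  0.30
```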
- [ A Boston Synopsys PhysOpt AC ]
( ESNUG 363 Item 4 ) --------------------------------------------- [01/25/01]
Subject: Newbies Struggle Trying To Control VCS from the PLI Interface
> I am attempting to drive VCS directly from another application using the
> PLI but I am having trouble actually getting the simulator to run for a
> specified length of time, e.g. a clock cycle. I am fairly new to the PLI
> but I can talk to and read stuff from VCS. Is there a PLI routine which
> will allow the simulator to be run for a specified time?
>
> - Richard Wilkinson
> Cambridge Silicon Radio UK
From: Petter Gustad <pegu@dolphinICS.no>
It has been a while since I did this, but if memory serves me right you call
tf_setdelay with a given delay, and your PLI routine's misctf function will
be called back (with a reactivate reason) after the given simulation time.
- Petter Gustad
Dolphin ICs Norway
---- ---- ---- ---- ---- ---- ----
From: Srinivasan Venkataramanan <srini@realchip.com>
I, too, am a newbie to PLI and am first focusing on VPI (PLI 2.0) which is
not supported by VCS yet. So the solution that I offer here is more
bookish than from experience. I think what you need is a way to
synchronize your PLI code to the Verilog simulation. In PLI 1.0 there are
"misctf routines" for this purpose. These can be called for several
reasons (like end of compile, end of simulation, a value change on an
object, etc.). If you have Sutherland's book with you, please refer to
Chapter 12 for more.
I think you will need the routines like tf_setdelay (or tf_setlongdelay
etc.) for your purpose.
- Srinivasan Venkataramanan
RealChip Chennai (Madras), India
( ESNUG 363 Item 5 ) --------------------------------------------- [01/25/01]
Subject: How To Tell A Device From The Extracted View In Cadence Diva LVS
> I am having trouble with Cadence LVS. I constantly get the error message:
>
> ( sbmix2 schematic ) in library learning has been
> changed since it was last extracted.
> si: Netlist did not complete successfully.
> End netlist: Jan 8 17:38:42 2001
>
> The schematic is not changed after the layout was extracted and it does
> not help saving the schematic. Does anybody know what my problem is?
>
> - Joacim Rolsson
From: atl@cray.com (Tony Laundrie)
You have to run Check-And-Save, not just Save.
On a related note, occasionally I use SKILL to make tiny edits to schematic
cellviews (like changing text on a sheet symbol), and don't want those
changes to force a new Check-And-Save. In that case, I run:

    dbReplaceProp( cv "lastSchematicExtraction" "time" getCurrentTime() )
- Tony Laundrie
Cray
---- ---- ---- ---- ---- ---- ----
From: Pratyush Kamal <pratyush.kamal@philips.com>
The netlist generator looks at the time stamp of the extraction, so it may
be that a cell you are using in your design comes from some other library
where it has been changed but not yet re-extracted. I would suggest
generating the netlist using CIW -> File -> Export -> CDL (and doing the
same to stream out the GDS2 if you are working in Cadence), and then running
LVS from the UNIX command line.
- Pratyush Kamal
Philips Research Laboratories Eindhoven, Netherlands
---- ---- ---- ---- ---- ---- ----
From: Edward J Kalenda <ed@kalenda.com>
The use of the term extracted is unfortunate. The schematic editor does an
"extract" during the check and save command. What it extracts, I don't
know. The datestamp set by check and save is compared to the last modified
datestamp set by the editor. If modification is after extraction you get
that message. It has nothing to do with the Diva extracted view.
That message can be even odder and more confusing when you are netlisting an
extracted view. Since the extracted view is structurally the same as a
schematic view, just without the interconnect lines being drawn, the same
netlisting code is used. If you manually edit the extracted view, the
lastSchematicExtraction property needs to be set. I don't recall how you
keep the last modified time from being changed by the setting of the
lastSchematicExtraction.
- Ed Kalenda
---- ---- ---- ---- ---- ---- ----
From: jynx@thepdu.org (Greg Michaelson)
Do you have any opinions on how to get LVS to tell the difference between a
device from a schematic and a device from an extracted view? I am currently
using a CDF parameter that I set to '1' for schematic and '0' for extracted
but I figure there has got to be an easier way... Any ideas?
- Greg Michaelson
---- ---- ---- ---- ---- ---- ----
From: Edward J Kalenda <ed@kalenda.com>
In the property comparison function the layout device properties are always
the first parameter. In the combine function there is no indication of
which netlist is being processed.
Why do you need to know this?
- Ed Kalenda
---- ---- ---- ---- ---- ---- ----
From: jynx@thepdu.org (Greg Michaelson)
Currently, we check a parameter to determine whether the device is from a
schematic or extracted view. Then we build the comparision lists based on
the value of this parameter. How can I build a comparison list without
knowing whether the device is from a schematic or extracted view?
I have attached a text file containing an example of this parameter use:
;Here is the list building function:
(procedure buildbjt(d1)
  (prog (dvcnt outlist)
    if(d1->m != 0 then  ;schematic device
      dvcnt = d1->m
      outlist = list(list(concat(d1->pmodel) list(list(list(1 1) dvcnt))))
    else  ; layout device
      outlist = list(list(concat(d1->pmodel) list(list(list(1 1) 1))))
    )  ;if m != 0
    return(outlist)
  )
)

;This is the compare function which calls the list building function:
(procedure fooComparebjt(l1 s1)
  (prog (tmpString)
    if(l1->dvl then
      l1->dvl = evalstring(l1->dvl)
    else
      l1->dvl = buildbjt(l1)
      l1->id = vtcElmIdNum++
    )
    if(s1->dvl then
      s1->dvl = evalstring(s1->dvl)
    else
      s1->dvl = buildbjt(s1)
    )
    tmpString = compareLists(l1->dvl s1->dvl 'bjtCompare 'bjtSelect)
    if(tmpString == nil then
      return(nil)
    else
      sprintf(tmpString " id#: Q-%L %s" l1->id tmpString)
      return(tmpString)
    )
  )
)

;This is the LVS command calling the compare function:
compareDeviceProperty("Npn" fooComparebjt)
I'm not looking for answers by any means. I'm just looking for another way
of doing this stuff. The LVS checking is a legacy thing here that has been
passed down over the years with no one taking the time to ask 'What the
heck is going on here?' Luckily, that's my job... HA!
- Greg Michaelson
---- ---- ---- ---- ---- ---- ----
From: "Gerry Vandevalk" <van@americasm01.nt.com>
In my "DIVA" LVS code, I have found that the first item is always the
extracted rep, and the second is the schematic. (i.e. from the beginning
of an LVS deck:
lvsRules(
rDefault=1.0
minWidth=0.8
minLength=1.4
putd( 'compareCap nil )
procedure( compareCap( extracted schematic)
...
if(schematic->tolerance tolerance=float(schematic->tolerance) tolerance=1.0)
...
The compareCap code in our system "knows" that the tolerance CDF property
only exists on "schematic" views and not on "extracted" views. This has
always worked for me. YMMV.
- Gerry Vandevalk
Nortel Networks Ottawa, Canada
---- ---- ---- ---- ---- ---- ----
From: jynx@thepdu.org (Greg Michaelson)
Thanks, Gerry. That is exactly what I am getting at: the fact that you need
to have a parameter in order to distinguish whether a device is taken from
the schematic or the extracted view. I was just hoping I wasn't missing
something obvious about this whole deal.
- Greg Michaelson
( ESNUG 363 Item 6 ) --------------------------------------------- [01/25/01]
From: Rod Ramsay <rod.ramsay@intel.com>
Subject: Stupid Default DC Clock Settings And Tcl [get_object_name $thingy]
Dear Santa John,
I have been a very good boy this year and these are the things I wanted
for Christmas from Synopsys.
First, how come I always have to set_dont_touch or set_ideal or whatever for
my clock trees when you freely admit that Design Compiler can't balance
clock trees anyway? Couldn't it be the default to leave clock networks
ideal and use unset_dont_touch clk? Whew.
Second, why on earth would I ever want a reference to a list? Since there
is no pointer arithmetic or operations I could perform on it anyway, why
should I have to use [get_object_name $thingy] at all? Just give me a
function to get the reference [get_object_reference] and return the object
by default. I'll never use get_object_reference anyway. DuhHUH.
- Rod Ramsay
Intel
( ESNUG 363 Item 7 ) --------------------------------------------- [01/25/01]
From: Daniel Geist <geist@il.ibm.com>
Subject: Can DC Shell Circuit Optimizations Be Disabled For IBM Designs?
Good Day John,
My name is Daniel Geist. I am in charge of the development of IBM's model
checking tool which is used both in IBM and also sold to a few customers
outside IBM. We have a problem with a few external customers. We have our
own Verilog compiler for Solaris but our VHDL compiler runs exclusively on
AIX. Porting the VHDL compiler to Solaris would be a major effort, so we
suggest that our customers who use Synopsys in their design flow do the
following:
1. Use dc_shell to compile their code and translate it to a very
simplified gate-level VHDL.
2. Use a small utility we supply to read in the simple format into
our tool.
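For step 1, the dc_shell side of such a flow is roughly the sketch below.
This is my reading of Daniel's description, not IBM's actual script (the
design and file names are placeholders), and as he notes, even with
change_names the tool can still rename internal signals:

```tcl
read -f verilog design.v
current_design top

# Translate rather than optimize; restructuring is what loses signal names.
compile -map_effort low

# Force legal, consistent names before writing out the simplified VHDL.
change_names -rules vhdl -hierarchy
write -format vhdl -hierarchy -output design_simple.vhd
```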
There are two problems with this solution.
A. Minor: It's not seamless and from time to time the details have to be
reworked due to changes in newer versions of dc_shell.
B. Major: No matter how much we have tried to control dc_shell it
continues to lose signal names from the original design in the
translation. Since our tool produces counter-examples (error traces),
the users that have to debug them are quite annoyed since they have
to guess what the trace implies about the missing signals (which
get renamed to some generic name). It can prolong the debugging time
from minutes to hours.
The Synopsys support team has looked into this problem (B.) and basically
said there is no good solution. We also tried using dc_shell to translate
to Verilog and using our Verilog compiler. That actually improved the
situation considerably. However, Synopsys support said that it's not the
way to do it, which worries me.
I thought before we tried to go into another perhaps costlier solution your
readers may be able to assist in this matter.
- Daniel Geist
IBM Haifa Research Lab. Haifa, Israel
( ESNUG 363 Item 8 ) --------------------------------------------- [01/25/01]
Subject: ( ESNUG 362 #8 ) Janick Spanked For Dissing Superlog & C++ Design
> Taken from co-design's white paper "Evolving the Next Design Language",
> here are the "new features" that Superlog adds to Verilog:
>
> Software aspects:
>
> - User defined types
> - Enumeration
> - Structures
> - Pointers
> - Recursion
> - Stack, heap
> - Strings
>
> System & Verification aspects:
>
> - Dynamic processes
> - Protocol checking
> - Behaviors to I/O ports
> - Extended FSMs
> - Hierarchical accesses
> - Dynamic arrays and queue
>
> Woohoo! Verilog has been evolved to the software engineering standards
> that prevailed in the eighties!!!! Break out the champagne! That may
> represent progress on the design side, but for verification, it does not
> even come close to Vera or Specman. I'll stick with real progress,
> thank you.
>
> - Janick Bergeron on loan at
> Qualis Design Corporation Grenoble, France
From: "Simon J. Davidmann" <simond@co-design.com>
John,
I would like the chance to comment on Janick Bergeron's recent SUPERLOG
comment in ESNUG.
I was somewhat surprised by many comments made, given that Janick has no
first hand experience of the SUPERLOG language or products, and has never
even requested that we provide him a copy, unlike many other people in the
industry.
His comments somehow contradict that age old saying "you can't judge a book
by looking at the cover." This is exactly what Janick has done. Clearly,
from the technical comments he makes, he does not know the content of
SUPERLOG, and has therefore made some guesses. It's sort of like announcing
a victor in an election - before even looking at the votes :-)
I do though agree with some of what Janick says with regard to C usage and
moving forward in general. Presumably, if he "discovers" that we do have
many of the capabilities that he brings up, he would look more favorably on
the language that others have started to use productively.
Just think (and I know Janick would want this): imagine that the features
and technology in Vera and Verisity were included AS PART of your HDL
simulator, with a several X SPEED IMPROVEMENT over your old
simulator/testtool combination, that works SEAMLESSLY WITH C/C++. Then
dream that you already are familiar with most of the syntax, and in fact
that it is the same language that you wrote your design in... That is
SUPERLOG.
What others have done is create a new syntax with some nice capabilities
for verification specifically (and good luck to them). However, this is
one small, slightly sideways, step.
Evolving a language to have all the features for advanced hardware system
design and verification - now THAT'S VISION - and that's what the creators
of SUPERLOG have accomplished. The fact that they have modelled it on
languages that designers already know just makes it easier.
I am sorry that Janick has only glanced at the cover. I will let him have
an early copy of the book when it is published.
- Simon Davidmann, President & CEO
Co-Design Automation, Inc.
---- ---- ---- ---- ---- ---- ----
From: Lauro Rizzatti <lrizzatti@get2chip.com>
Hi John.
I enjoyed reading Janick Bergeron's piece in ESNUG, "Janick Pisses On
Superlog". Janick writes superbly and his pieces are really entertaining.
Still, I believe this time he went over the edge.
Hardware verification languages are evolutionary not revolutionary as he
stated. They are an excellent example of how the human genius can adapt and
come up with interesting ways to overcome obstacles: minimum code for
maximum efficiency.
If today's verification engines (simulators and accelerators) were capable
of processing millions of cycles per second on multi-million gate designs
driven by traditional testbenches (Verilog/VHDL), I wonder whether users
would embrace special languages for help. Today, they don't have a choice
if they want to see the results of their verification efforts in their
lifetime. Ironically, despite their code efficiency, hardware verification
languages slow down the verification engines even further.
As for that pond scum turning into an IPO, I would keep an eye on Superlog,
who knows?
- Lauro Rizzatti
Get2Chip
---- ---- ---- ---- ---- ---- ----
From: David C Black <dcblack@qualis.com>
John,
I would like to respond to the Superlog comments. At Qualis, we take
the independent viewpoint on design methodologies/tools, and that
means we get to play devil's advocate from time to time. For example,
I respect Janick's engineering abilities; however, I disagree with
his comment that "revolutions are necessary" and the implication that
Superlog is a minor evolution. Superlog is quite a rich language
extension over Verilog. There are some more basic issues to consider.
First, we know that it's not the language that makes an activity like
verification successful. It's the methodologies and application that
dictate success or failure. At its core we can do everything with
Verilog or VHDL just as they stand. We also know that language and
tool features can greatly simplify the implementation of a particular
methodology. For example, who needs 'for loops', why not just use
while loops? So adding features to a language can be extremely useful.
The other extreme is adding so much syntax to a language that it
becomes difficult to use. Some of these new "verification" languages
tend towards that latter extreme. Some find Specman Elite to be too
much language with an insufficient gain for the large learning
investment required. For example, there's much hoopla about temporal
expressions, but anyone can do similar things in Verilog. With
Superlog's "process" statement (similar to Vera's "fork/join none"
construct) you can have checks overlapping in time (think of it as
firing the same always block concurrently). In many cases, the extra
syntax is just something more to learn, but unnecessary.
Second, evolution is the way of our species. Evolution is a good way
to preserve investments in ideas that are known to work. Too many
companies have spent much good time and money creating designs with
Verilog and VHDL to want to rewrite them. Certainly, new designs
might want to use new language features, but not if they won't
interoperate smoothly with old code. Reuse was a buzzword of the
recent past that continues to make sense. If we evolve our languages,
reuse is more achievable. Simply, extend existing engines to
understand new language features while preserving the old.
Another problem with revolution is due to the heavy investment on the
personnel side of the equation. Those companies that buy into the new
languages may later regret it if they have pressure to change back. An
investment in learning a revolutionary language is expensive in both
time and personnel. Is the industry really going to change completely
to the new verification language, or will this be a small blip? Will
the new languages really be standardized? How much do you stand to
lose if you're wrong?
It might be argued that the new verification languages allow the old
code to continue, because they interface to the existing simulators
via PLI and compiler calls. So we can continue developing with the
existing HDL's. On the other hand, it has been shown that these are
the self-same interfaces that have been a bottleneck for some
methodologies that attempted to use the PLI and C code for the
interface.
Also, these new revolutionary languages are narrow in the sense that
they improve only the verification side of the equation. To get more
productive (faster time to market), we need to improve all aspects of
design. There is a definite need to raise the bar. Yes, improve
verification capabilities. In addition, we should raise the synthesis
capabilities.
Behavioral Synthesis is still in its infancy (relative to the use of
RTL). New language constructs will allow this technology to go much
further. Perhaps SystemC is a good way to go. Perhaps we can leverage
all the existing C++ tools from the software industry.
I think more companies will be interested in an incremental evolutionary
change to their toolset and employee investments. In-house tools that
manage aspects of HDL's won't have to be rewritten.
I for one am in favor of evolutionary changes and the approach Superlog is
taking seems good.
- David C. Black
Qualis Design Austin, TX
---- ---- ---- ---- ---- ---- ----
From: Stefen Boyd <stefen@boyd.com>
John,
While I agree that there are currently more advanced features in Vera and e,
I have seen that the value of these features is offset by two factors.
First, there is the language barrier. This is less significant for
companies that have the luxury of dedicated verification teams who are
trained in these languages. But for companies who have their engineers do
both verification and design, getting those hardware engineers to embrace
a completely new language will be slow and difficult.
Secondly, even if the engineers are pushed into using these new languages,
only a few verification oriented engineers will ever exploit the language.
Here lies the problem. A few (sometimes only one) verification engineers
will create a nifty, complex environment that no one else understands.
Worse, that one engineer who understands the environment is a consultant
and the company is left with a wonderful environment that it can't maintain.
I personally don't want to be called back to support something because it
couldn't be maintained by the client. My idea of repeat business is *new*
work, not maintaining an environment they don't understand.
I've done some elaborate Vera environments, but if only one or two engineers
left the company, they would have no one left who would understand it enough
to enhance or adapt it to a new project.
If it had been a Superlog environment, it may not have had all the object
oriented software feel, but it would have been maintainable by the rest of
the team.
- Stefen Boyd
---- ---- ---- ---- ---- ---- ----
> And what is the big deal with C and C++ anyway? C was designed 30 years
> ago. C++ 20 years ago. They may be the most widely known languages
> today, but they are also the most widely abused, spaghetti coded,
> shoot-yourself-in-the-foot, memory-leaking, obfuscated languages ever
> used. I do not find the "I already know the language" argument
> convincing. What you already know is the *syntax*. You do not know the
> intricacies or process for using it within the context of the
> SystemC/Cynlib/whatever class library. THAT you have to learn and THAT
> is what takes time. Learning the syntax of a new language (such as Vera
> or e) takes only a few hours. You won't require any more time to learn
> how to use it than how to use that other C++ class library. And what's
> a few hours in a 1-year project?? Especially, if you end up with a more
> efficient environment in the long run?
>
> - Janick Bergeron on loan at
> Qualis Design Corporation Grenoble, France
From: Mark Glasser <mxg@cadence.com>
John,
Well, we're getting into religion here and that's not something that
can be supported or refuted on logical grounds. Language wars have
been going as long as computer languages have existed. Way back
assembly programmers would duke it out with Fortran programmers over
essentially the same issues that we have now: which language has the
"better" compute model and which has the "better" set of features.
Apparently Janick Bergeron has taken sides, which is unfortunate
because it muddies his position as an objective industry
observer. John, you, on the other hand, take pot shots at all the EDA
companies equally and don't appear to favor any one over the other.
It's that (appearance of) unbiased editorial that gives your
newsletter the prestige it commands today.
At the risk of escalating the language wars, here are some bullets
about why we think C++ is the "better" choice for testbenches:
* C++ is a general purpose programming language. You can do anything
with it. You can connect any C/C++ based system to it. That
includes golden models, non-synthesizable behavioral models
(memories, queues, custom functions), testbenches, ISSs, etc.
* C++ is truly object oriented.
* C++ is standardized and is in widespread usage all over the computer
industry. You can easily find C++ programmers and you can easily
find books and classes on C++ and related topics. Further, tools
that generate, manipulate, and instrument C++ are also widely
available from many vendors which is not true for proprietary
languages such as Vera or e or Superlog.
Some drawbacks to C++:
* It's not synthesizable (yet). For testbenches that's not necessary
(yet). However, work is going on in this area in both industrial and
academic settings. I don't know if anyone is working on
synthesizing Vera or e.
* The basics of the language are straightforward to learn and use, but
things can get tricky. A C++ programmer needs to have a good handle
on constructors, destructors, operator overloading, inheritance, and
a number of similar topics that may not be familiar to most hardware
designers. Of course, any language that is truly object oriented
will sport similar features and thus require the same level of
knowledge.
Of course, TestBuilder isn't a language, it's a class library for
building testbenches. It supports a large number of features, too
many to list here. You have to look at the whole package and not just
the underlying language to determine if it's going to solve your
verification problem. While no system is perfect, we believe that
TestBuilder in conjunction with the rest of the Verification Cockpit
does a good job of covering the functional verification problem space.
Some comments on Janick's comments:
* While TestBuilder's current random constraint mechanism is fairly
simple, the upcoming December release will have a more advanced
constraint mechanism that supports constraint expressions with the
operators &&, ==, and !=. An even more sophisticated true
constraint solver is under development for release next year.
* Transaction Explorer (TXE), part of Cadence's Verification Cockpit
(as is TestBuilder) is all about functional coverage. It supports
querying for functional coverage from multiple simulation runs. You
can save query results and read them back for display in the GUI.
Query results can be further combined to form queries of arbitrary
complexity. We are adding a sophisticated merging capability that
will merge query results from separate runs. This additional
capability will also be available next year. I like Janick's
succinct description of the requirements. In one sentence he's
captured the essence of TXE.
* The argument that just because an idea is old it's no good is
getting shopworn. Yes, C/C++ is relatively old compared to other
things but I don't think that has any bearing on the applicability
of the language to the problem. Semiconductors were invented in the
1940s. Does that make them obsolete? Fourier transforms were
invented a couple of centuries ago, but we still use them in
designing communication systems. So what? The really good ideas
withstand the test of time.
* His comment that learning the syntax of a language is trivial is
misleading. He's correct that the syntax is easy to learn and that
the semantics and application are more difficult. What he fails to
point out is that this is true for e and Vera as well as C++. Since
most engineers have at least C if not C++ in their backgrounds,
learning the additional semantics of TestBuilder will be as easy as
learning e, if not easier. As I see it there's essentially no
difference in the time or effort it takes to learn e or
TestBuilder/C++.
Janick rambles on about revolution and evolution without making a
relevant point. No doubt, new ways of thinking about electronic
systems will necessitate new ways of thinking about the tools to build
them. That doesn't imply anything about the current tools competing
in the marketplace. If we look at the history of computing languages
and systems, we see that non-proprietary ideas are more stable and
longer-lived than proprietary ones.
Go to http://www.testbuilder.net and see for yourself.
- Mark Glasser, Engineering Director
Cadence Design Systems, Inc.
( ESNUG 363 Item 9 ) --------------------------------------------- [01/25/01]
From: Ajay Kumar Sinha <ajay@siliconsystems.co.in>
Subject: PrimeTime 2000 Uses Dangerously Optimistic Models For SPF Timing?
Hi John
With the new 2000 versions of pt_shell, we recently moved to doing timing
with an SPF (our designs had been too large to attempt this before). Doing
this, however, has some implications. PrimeTime claims to calculate an
effective capacitance from the distributed RC network in the SPF. For this
it uses an algorithm which assumes that cell delay increases monotonically
with input transition and output capacitance, and that this is expressed
in the cell delay table. If it is not, PrimeTime says it found a negative
driver resistance and failed to compute the effective cap, in which case
it falls back to the lumped capacitance. This is optimistic for best-case
conditions (that is, if we care!!)
This is a pretty bad assumption (and we know it!! after doing a lot of
SPICEing!!)
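The monotonicity PrimeTime relies on can be checked mechanically against a
library's delay tables. Here's a minimal Python sketch; the 2-D delay table
below is entirely invented for illustration, not taken from any real library:

```python
# Hypothetical 2-D cell delay table indexed by (input transition, output load).
# Values are delays in ns; all numbers are made up for illustration.
delay_table = [
    # load:  0.01  0.05  0.10  (pF)
    [0.10, 0.15, 0.22],   # input transition 0.05 ns
    [0.12, 0.18, 0.26],   # input transition 0.20 ns
    [0.11, 0.21, 0.30],   # input transition 0.50 ns  <- non-monotonic entry
]

def is_monotonic_increasing(table):
    """Check that delay never decreases as transition or load increases --
    the property the effective-capacitance algorithm assumes."""
    for row in table:                       # along the load axis
        if any(b < a for a, b in zip(row, row[1:])):
            return False
    for col in zip(*table):                 # along the transition axis
        if any(b < a for a, b in zip(col, col[1:])):
            return False
    return True

print(is_monotonic_increasing(delay_table))  # -> False (0.12 -> 0.11 at min load)
```

A table that fails a check like this is exactly the kind that would trip the
"negative driver resistance" fallback Ajay describes.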
I reported it to our Synopsys AE. At first they didn't believe me; then,
when I gave them the library example, they wanted a test case, which we
couldn't give them because the library is from Avanti. So we asked them
to come on site. They have been missing since then.
I just wanted to know if other people have encountered this in PrimeTime
2000, and does Synopsys acknowledge the issue?
- Ajay Kumar Sinha
Silicon Systems India
( ESNUG 363 Item 10 ) -------------------------------------------- [01/25/01]
From: [ Grover, from Sesame Street ]
Subject: Has Anyone Used RealChip? Are They Good Or Bad News To Work With?
Hi John,
First off thanks for ESNUG over the years!
Second no names on this one, we're currently negotiating with the company
mentioned below.
We've been doing ASICs for years here at [ Name Deleted ] with VLSI/Philips
and OKI and are looking at other options. So the question is: Has anyone
used RealChip and what has their experience been? Areas of interest: NRE,
availability of IP, layout iterations, time in layout, ECO problems, test
issues, proto and volume delivery, piece price, responsiveness, and ability
to deliver on promises made. Any other comments greatly appreciated.
- [ Grover, from Sesame Street ]
( ESNUG 363 Item 11 ) -------------------------------------------- [01/25/01]
From: Chris Simon <Chris.H.Simon@gd-is.com>
Subject: Our Cadence Hierarchical PBOPT Physical Chip Design Results Suck!
Hi, John,
I've never written to ESNUG before so I'm not quite sure if this is the
right place to ask this, but I'm desperate so here goes.
We're in the process of designing 3 standard cell ASICs and decided to use a
hierarchical approach for the physical design. The reasons for this are
many and varied, and I won't go into them here.
We use Cadence BuildGates (Ambit-RTL) to do synthesis, but also use Synopsys
in the flow to insert scan. All of this is done on a block level, where a
block can include anywhere from 4,000 to 50,000 instances as well as SRAMs
and register files. We then go to Cadence Logical Design Planner to
floorplan the block and place the cells using Qplace and QPOPT (their
replacement for PBOpt). We then ship the block off to a third party to
insert clocks, route the block, and extract parasitics. They use CTgen,
Silicon Ensemble, and HyperExtract. For the few blocks that we've completed
we've had very good luck achieving timing closure at the block level (even
though we aren't using Physical Compiler or PKS). This is one of the
reasons we chose the hierarchical approach.
As we get a good idea of the block sizes and shapes we floorplan the top
level, with around 15 blocks and the I/O cells. We ship the top level off
to the third party and they insert clocks at the top level. So far so good.
Now they try to fix the timing on the long nets between blocks using QPOPT,
and the results are horrendous. At this point we would have already
generated timing models for each of the blocks, and they are inputs to the
QPOPT runs. In many different attempts at top level timing optimization,
QPOPT has not been able to put in an appropriate number of buffers/repeaters
to achieve reasonable timing. I did some experimentation with long nets and
various numbers of buffers and found that I should be able to go 5 mm in
about 1.2 ns even with a less than optimum repeater scheme. QPOPT isn't
even getting close. When we talked to Cadence R&D about this they basically
said that QPOPT isn't intended to do this type of optimization.
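As a rough sanity check on that 5 mm / ~1.2 ns figure, here's a
back-of-the-envelope Elmore-delay sketch of a long net split into equal
repeated segments. Every wire and repeater parameter below is an invented
assumption for illustration, not library data:

```python
# Back-of-the-envelope Elmore delay for a long net driven through n repeaters.
# All parameters are illustrative assumptions, not real library values.
R_WIRE = 0.08     # ohm/um (wire resistance per micron, assumed)
C_WIRE = 0.2e-15  # F/um   (wire capacitance per micron, assumed)
R_BUF  = 1000.0   # ohm    (repeater drive resistance, assumed)
C_BUF  = 5e-15    # F      (repeater input capacitance, assumed)
T_BUF  = 0.05e-9  # s      (repeater intrinsic delay, assumed)

def net_delay(length_um, n_segments):
    """Total delay of a wire of given length split into n equal segments,
    each driven by its own repeater."""
    seg = length_um / n_segments
    r, c = R_WIRE * seg, C_WIRE * seg
    # Per segment: driver R charges the wire plus the next stage's input cap,
    # plus the distributed wire RC, plus the repeater's intrinsic delay.
    per_seg = T_BUF + R_BUF * (c + C_BUF) + r * (c / 2 + C_BUF)
    return n_segments * per_seg

for n in (1, 2, 4, 8):
    print(f"{n} segment(s): {net_delay(5000, n) * 1e9:.2f} ns")
```

With these assumed numbers the sweep bottoms out around two repeated
segments at roughly 1.2 ns over 5 mm, in the same ballpark as Chris's
experiment, so an optimizer that can't get close really is leaving a lot
on the table.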
So my question is, how are other people doing the timing optimization
(buffer and repeater insertion) at the top level for a hierarchical physical
flow? I've heard a lot of talk about hierarchical physical design being
necessary as designs get larger, and I've seen a couple of recent articles
on hierarchical design flows in ISD magazine, but they don't say much about
this topic. I tried to reach the authors of these articles to question
them, but had no luck. Would your readers have any answers here, or is
hierarchical flow still a pretty rare thing?
By the way, you're providing a great service to those of us out here trying
to get the damn tools to do what we need. Keep up the good work.
- Chris Simon
General Dynamics Information Systems Minneapolis, MN
( ESNUG 363 Item 12 ) -------------------------------------------- [01/25/01]
Subject: ( ESNUG 361 #1 ) Thad Follows Up On His 2 PKS TSMC Tape-Outs
> I'm a chip designer but my sidebar task here at Geocast is to also be the
> methodology guy, too. I set up our PKS flow here and we recently did two
> 200 Kgate tape-outs to test the PKS flow with our in-house custom standard
> cell library. We used MOSIS TSMC 0.18 and TSMC's CyberShuttle service.
>
> - Thad McCracken
> Geocast Networks Systems, Inc. Beaverton, OR
From: Thad McCracken <thad@geocast.com>
Hi John,
I don't often see people sending you followups on chips they've taped out,
so I'm not sure if this is something you'll be interested in. Nevertheless,
I thought I'd send it and let you decide. :)
We got our chip back about a month ago and have had enough time to really
wring it out by now.
Functionally the chip came up w/ no problems (which shouldn't be a huge
surprise given that most of the logic was second generation). We were
able to fault grade the chip (to 99.78% - hooray for full-scan and BIST)
and get functional vectors passing very quickly w/no problem.
Perhaps more interesting, however, is the correlation between the speed at
which the die ran and pre-tapeout analysis of the same. We obviously wanted
to make sure we were very close here as we get ready to produce larger chips
using the same PKS flow and TSMC 0.18 process.
As mentioned in my previous write-up, our original timing target for this
chip was 160 MHz, and post-route timing came in 160 ps off this target, or
156 MHz. We expected the chip to run at this frequency for the following
operating conditions (those at which we characterize our cell library):
- typical process
- 1.62V
- 110 deg C
We duplicated these conditions to the extent possible in the lab, and
found the chip to run up to 163 MHz given the following:
- 1.62V (measured at perimeter of die)
- 48 deg C (measured on surface of die)
Characterization of the ring oscillator on the die, and comparison of the
results to SPICE sims, put the die within measurable limits of typical
process. SPICE sims of this same ring oscillator for speed vs. temperature
(at 1.62 V), followed by a linear fit of that data, were used to
extrapolate an expected frequency of operation at a die temp of 110 deg C.
That frequency was 151 MHz.
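The extrapolation step Thad describes is just a least-squares fit of the
SPICE (temperature, frequency) points. A small Python sketch; the data
points below are invented for illustration, and only the method follows
his description:

```python
# Linear fit of hypothetical (temperature, ring-oscillator frequency) SPICE
# points, then extrapolation to the 110 deg C die-temperature target.
sims = [(25, 170.0), (50, 162.5), (75, 155.0), (100, 147.5)]  # (deg C, MHz), assumed

n = len(sims)
sx  = sum(t for t, _ in sims)
sy  = sum(f for _, f in sims)
sxx = sum(t * t for t, _ in sims)
sxy = sum(t * f for t, f in sims)

# Least-squares slope and intercept of frequency vs. temperature.
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

f_at_110 = slope * 110 + intercept
print(f"extrapolated frequency at 110 deg C: {f_at_110:.1f} MHz")
```

With real sim data in place of the invented points, the same two lines of
algebra give the expected silicon frequency at any die temperature.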
So our silicon appears to run within ~3% of the expected frequency, given
the same operating conditions (and extrapolation where they couldn't be
duplicated or measured).
Obviously we were really happy w/ these results, and feel very good about
our tool flow and cell library going forward. Our experiences with TSMC
were very good as well.
- Thad McCracken
Geocast Networks Systems, Inc. Beaverton, OR
( ESNUG 363 Item 13 ) -------------------------------------------- [01/25/01]
From: Keith Howick <keith.howick@siliconmetrics.com>
Subject: How Can I Now Model Dual-Edge Triggered Flip-Flops In LIBERTY?
John,
I'm having difficulty discovering how to model some odd types of functions.
I was wondering if you know of a resource or web-site that has examples of
modeling such functions in LIBERTY. My current quest is for a dual-edge
triggered flip-flop.
- Keith Howick
Silicon Metrics Corp.
( ESNUG 363 Item 14 ) -------------------------------------------- [01/25/01]
From: Jeff Winston <jeff.winston@conexant.com>
Subject: How To Insert Pass-Thru Signals In Hierarchical Physical Designs?
Hi John - hope all is well with you - got a question for ESNUG...
We are having our first experience with a large hierarchical design and are
trying to find the best way to handle global routing of signals that cross
blocks. We don't think the top-level routing ability of Apollo will do a
good enough job, and it also won't allow us to place buffers or flops where
we need them. We are thinking instead of embedding pass-thrus in different
blocks where needed to better facilitate the travel of these global signals
across the chip. As best we can tell, this will require adding the
pass-thru connectors (and flops if we use them) to the RTL of the blocks
containing the pass-thrus. However, we'd prefer not to mess directly with
the RTL that the designers are working with.
One idea we had was to put wrappers around the top-level RTL blocks and add
the pass-thrus (and flops if needed) to the wrappers. One could envision
developing a scripting environment to make this less painful. However,
this will still change our hierarchy and require a fair amount of effort.
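The wrapper-scripting idea Jeff mentions could be sketched as a small
generator: given a block's port list and the feed-throughs it must carry,
emit a Verilog wrapper that instantiates the block unchanged. All module
and port names here are hypothetical:

```python
# Sketch of a wrapper generator: instantiate the original block untouched
# and add feed-through ports at the wrapper level, so the designers' RTL
# never has to change. Names are invented for illustration.

def make_wrapper(block, ports, passthrus):
    """ports: list of (direction, name) for the real block;
    passthrus: list of (in_name, out_name) feed-through pairs."""
    decls = [f"  {d} {n};" for d, n in ports]
    decls += [f"  input {i};\n  output {o};" for i, o in passthrus]
    heads = [n for _, n in ports] + [n for pair in passthrus for n in pair]
    conns = ", ".join(f".{n}({n})" for _, n in ports)
    feeds = [f"  assign {o} = {i};  // feed-through, no logic" for i, o in passthrus]
    return (f"module {block}_wrap({', '.join(heads)});\n"
            + "\n".join(decls) + "\n"
            + f"  {block} u_{block} ({conns});\n"
            + "\n".join(feeds) + "\nendmodule\n")

print(make_wrapper("blk_a",
                   [("input", "clk"), ("output", "dout")],
                   [("pt_in", "pt_out")]))
```

A flop variant would emit an always block instead of the assign; either
way the original block's RTL and port list stay untouched, which was the
whole point.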
We were wondering if anyone else had come across this problem and how they
solved it.
- Jeff Winston
Conexant Systems
( ESNUG 363 Networking Section ) --------------------------------- [01/25/01]
San Diego, CA -- Intel Wireless Cellular Communications Group seeks 2 to 3
Jr/Sr ASIC Designers. No headhunters, please. "ihab.mansour@intel.com"
============================================================================
Trying to figure out a Synopsys bug? Want to hear how 11,000+ other users
dealt with it? Then join the E-Mail Synopsys Users Group (ESNUG)!
!!! "It's not a BUG, jcooley@world.std.com
/o o\ / it's a FEATURE!" (508) 429-4357
( > )
\ - / - John Cooley, EDA & ASIC Design Consultant in Synopsys,
_] [_ Verilog, VHDL and numerous Design Methodologies.
Holliston Poor Farm, P.O. Box 6222, Holliston, MA 01746-6222
Legal Disclaimer: "As always, anything said here is only opinion."
The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com