Editor's Note: I've been having a real adventure trying to find a new
home for www.DeepChip.com. Got a number of offers, but I'm being
constantly reminded that there are no free lunches in life. More than
once, I get close, but when I check it out, there's some unacceptable
hidden agenda that pops up. And having an EDA vendor do it REALLY
clouds things. So now I'm talking to the many on-line and hard copy
trade publications and inadvertently getting to see their dark sides. My
best insight came last night when I received the following e-mail:
"Sorry for not getting back to you sooner but I'm afraid that we can't
host ESNUG. Unfortunately, we are dependent on sponsor revenue and
your freewheeling, no-holds-barred approach is simply too risky."
I won't tell you who wrote this. But, in my own weird way, I'm personally
very, very proud of that rejection.
- John Cooley
the ESNUG guy
( ESNUG 346 Subjects ) ------------------------------------------- [3/00]
Item 1: ( ESNUG 345 #1 ) Magma Sorta Worked; PhysOpt Not; But Who Cares?
Item 2: Seeking Experiences With Commercial C++-to-Verilog Porting Tools
Item 3: And I'm Seeking User Experiences On Verilog-to-C Conversion Tools
Item 4: ( ESNUG 345 #5 ) EDA Vendor Wishes No Anon ESNUG Customer Letters
Item 5: ( ESNUG 341 #1 ) PhysOpt, Gambit, Scheme, PDEF, & Avanti Saturn
Item 6: Customer Script Shows Memory Leak In report_power In DC 99.10-04
Item 7: You Won't Find Cadence Dracula DRC Decks Floating Around "Free"
Item 8: ( ESNUG 345 #10 ) Six More User Letters On C/C++-Based HW Design
Item 9: Freeware Emacs Vera Mode Editor Now Available At www.emacs.org
Item 10: How To Find The Number Of Instances Of Each Module Inside Of VCS
Item 11: ModelSim's Tcl/Tk vs. Cadence/Synopsys/Exemplar's Tcl Interfaces
Item 12: Cadence 4.3 DFII Includes PIPO For Their GDS-II In/Out Streaming
Item 13: DC & Other SNPS Tools Can't Read The SDF File It Just Wrote Out !
Item 14: Probable Mis-install Causes Cadence cdsSpice To Not Display HSPICE
Item 15: 0-in Now, And The Ethical Problems Of Kurt Baty Owning 0-in Stock
The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com
( ESNUG 346 Item 1 ) --------------------------------------------- [3/00]
Subject: ( ESNUG 345 #1 ) Magma Sorta Worked; PhysOpt Not; But Who Cares?
> I'm just now getting a peek at wiring congestion within the IBM tools.
> Although I did not run a separate PhysOpt run to specifically address
> congestion, nothing I see today looks unreasonable or any worse than
> our previously released ASICs. The 5% timing goal achievement is close
> enough from my standpoint. The assumptions made during RC extraction are
> gross, and the assumptions made about global wiring obstructions are
> incomplete. The IBM tools are fully capable of taking the PhysOpt output
> and completing the P&R task.
>
> I can say from experience that I'm now at least one month, maybe two,
> ahead of where I would have been without PhysOpt and it only cost me a
> week's worth of work. This result has definitely generated a stir around
> here at Cray/SGI.
>
> - Roger Bethard, Design Engineer
> Cray/SGI Chippewa Falls, WI
From: [ Tony, the Tiger ]
Hi, John,
If you decide to publish this, please do not publish my name, my username,
or my company name. We're in hip-deep with 3 of the 4 companies mentioned,
and I'll be in a world of hurt if they saw me saying this. Thank you.
I've worked in layout automation and chip design for over 10 years. Of all
the engineers I know working _physical_ design on ASICs and processors, most
won't even look at PhysOpt because they don't want to correlate a placement
engine that doesn't have its own router (making me wonder how PhysOpt even
qualified in the GDS II category since it can't get to polygons under its
own steam). It seems to me that the lack of a router in PhysOpt leaves a
lot of
heavy lifting to the Avanti/Cadence/IBM back-end flows (try notch-filling,
antenna checking, and cross capacitance, for instance!)
I'm not talking out of my ass here, John. We looked at PhysOpt. We
couldn't even get PhysOpt to run -- but that may be more symptomatic of
where we are on the Synopsys support/sales pecking order than the actual
quality of the tool.
Who is trying PhysOpt? Most (not all) of the people I've found tempted by
PhysOpt are logic designers who have been brainwashed by the Synopsys sales
machine into believing that they can close aggressive layout timing from
within the physically antiseptic confines of Design Compiler.
These are the same Synopsys people who told us tightening our wireload
models would give us good timing correlation. Now they've figured we need
to see some layout, but not all of it. For crying out loud, look at real
timing! Tune your wireload models, back-annotate your loading, check
your Static Timing Analysis reports, but for God's sake, GO LOOK AT THE
LAYOUT. It's not evil. It's just design data.
The people I've seen eager to get on next generation physical design tools
are those stuck with a non-timing-driven placement tool in 0.35 or 0.25um.
Talk about taking a knife to a gun fight! No wonder these people cry for
a better solution. I don't doubt that PhysOpt could beat the pants off of
congestion-based placers like the old LSI PD or the old Cell3. We made
40-60 trips through LSI Logic's CMDE no-clue-about-timing placer to close
our aggressive 0.35um ASICs. We doubted we would be able to close
the designs at all. It was design hell. BUT, once we got _timing-driven_
placement flows in place, every single ASIC has closed timing in one pass
through the final layout flow using standard Avanti & Cadence software.
It wasn't a slam-dunk. It wasn't like we took the bright-colored wrapper
off the tools and found ourselves instantly enveloped in design nirvana. We
had to change the way we structured our projects. We had to put a lot more
up-front emphasis on the development of _quality_ timing constraints. But
to get out of that nasty timing thrash, it was more than worth it.
Anyway, I got some tantalizing experimental results with Magma on a nasty
0.35um design. The Magma results were faster, denser, and produced with
less synthesis effort. Why am I _not_ pushing Magma for my next set of
ASICs? Because it's not that simple:
- Cycle time. Magma tromped every non-timing-driven placer on these
0.35um designs. But when I matched it up against a timing-driven
placer, it did only *slightly* better (5% better vs. 10-20% better).
- Density. I'm pretty much pad-limited in 0.18um, and I'm going to
be pad-limited out the wazoo in 0.15um. Density's nice, but I'm not
going to re-make my flow to get it. Zero points for Magma here.
- Synthesis effort. OK, our DC resynthesis effort to close timing
*without* a timing-driven placer was brutal. But since we:
1) ramped up *with* timing-driven placement,
2) partitioned our designs, and
3) emphasized good timing constraints,
our gate resizing and buffer insertion have closed 99% of our paths.
DC with timing-driven placement works. Could Magma have reduced our
synthesis effort even further? Likely. How much further? Don't
know. Don't care.
There's also the TTTMF factor -- Time To Trash My Flow. My friendly
neighborhood ASIC vendors just adopted timing-driven placement in the last
year, even though this technology ramped into the industry 5 years ago.
These guys are allergic to physical design solutions that haven't been
road-tested. Who road-tests new physical design technology? Processors.
Take a look at plots of Merced, UltraSparc III, Athlon -- how much place &
route do you see there? I'm thinking 10%, maybe 15% of their logic, tops,
and the number will almost certainly go down the further they go past 1 GHz.
Are they going to want to road-test new P&R solutions for something that
makes up 5% of their chip? I wouldn't.
Bottom-line: If I'm smart about it, I can close timing with what P&R tools
I've got today. I see a lot of promise in Magma, but I can't guarantee a
payoff worth its stabilization cost. And I don't see any processor guys
interested in getting the kinks out. So the way I see it, all these new
physical synthesis tools are a waste of time.
- [ Tony, the Tiger ]
( ESNUG 346 Item 2 ) --------------------------------------------- [3/00]
From: Dan Joyce <dan.joyce@compaq.com>
Subject: Seeking Experiences With Commercial C++-to-Verilog Porting Tools
Hi, John,
I am starting on a project where a team of architects, familiar with C++,
are writing all the ASIC functionality in C++ (for simulation speed). Then
they are handing it off to a team (me and others) to port to Verilog. I was
wondering if you could provide me with a list of commercially available
tools that can port C++ to Verilog, and some idea of who has the lion's
share of the market. Also, any info on strengths and weaknesses of these
tools would be great.
- Dan Joyce
Compaq Austin, TX
( ESNUG 346 Item 3 ) --------------------------------------------- [3/00]
From: Rich Johnson <richj@fc.hp.com>
Subject: And I'm Seeking User Experiences On Verilog-to-C Conversion Tools
I'm looking into the possibility of making a C code imitation of a Verilog
model for the purposes of performance enhancement. I'm curious if there
are any tools available that could do the job. I would appreciate any
leads that any of you could give me.
- Rich Johnson
Hewlett-Packard Fort Collins, CO
( ESNUG 346 Item 4 ) --------------------------------------------- [3/00]
Subject: ( ESNUG 345 #5 ) EDA Vendor Wishes No Anon ESNUG Customer Letters
> Please keep me anonymous on this one!
>
> We use ClearCase for ASIC design & it works great. We had used DesignSync
> from Synchronicity on one project and it was a disaster. DesignSync is
> like an early beta version of RCS that they charge a lot of money for! We
> had problems with corrupted databases and lost files. In ClearCase we see
nothing like this. It just works in the background as a good revision
> control system should do. In the past we have used RCS on many projects
but it started to consume a lot of the designers' time to manage the system
> and create scripts so we started to look around for alternatives. After
> the DesignSync mistake we have settled on ClearCase, which is also the
> system our software designers are using.
>
> - [ We Got Burned, Too ]
From: Uri Farkash <urif@sd.co.il>
Hello, John,
I have followed your ESNUG forum for a long time and I appreciate many of
the Q&A raised there. However, I have to admit that there are too many
anonymous inputs published in ESNUG. In my opinion, and in the opinion of
others I have discussed it with, this contributes to a depreciation of the
credibility of the stuff you publish.
Could you find a way in which anonymous stuff will not be sent to people
who are not interested in that kind of stuff ?
- Uri Farkash
Summit Design EDA, Inc. Israel
( ESNUG 346 Item 5 ) --------------------------------------------- [3/00]
Subject: ( ESNUG 341 #1 ) PhysOpt, Gambit, Scheme, PDEF, & Avanti Saturn
> Perhaps my knowledge of the basis of PhysOpt placement is incorrect. Did
> this placer come from technology developed by Gambit? What kind of
> utilizations do you typically achieve, or, what kind of transistor
> densities do you usually achieve?
>
> I was also wondering what the total placement times were for the block(s)
> in question. I can see how the INITIAL placement for PhysOpt or a variety
> of other placers would be better than Avanti. Avanti's initial placement
> frequently is not that great. But in our evaluations (admittedly some
> time ago) the Avanti placer achieved the best final placement in the
> shortest amount of time.
>
> - Nick Summerville
> Ford Microelectronics Colorado Springs, CO
From: Bob Prevett <prevett@nvidia.com>
I'm not sure where Synopsys got their placer from. We achieved
utilizations ranging from approximately 85% to 95%, depending on the block.
We found that the timing and congestion driven placer for PhysOpt took
longer than the Avanti non-timing-driven placer. Sorry, I don't have data
comparing run times for PhysOpt against the Avanti timing-driven placer.
> Keep me anon. I work in the Methodology Development group at LSI Logic.
> I read Bob's review of Synopsys PhysOpt in ESNUG and am very interested in
> finding out more of your issues in working with an Avanti backend. Were
> there issues with getting the PDEF into Avanti? Was it translated via
> SCHEME? Did both cell placement and global routing information transfer
> from PhysOpt into Avanti? Did PhysOpt have a good understanding of
> blockage due to memories, power rails, etc? Also, he didn't mention
> whether his "old" flow utilized Avanti Saturn for physical re-synthesis.
> Did it?
>
> - [ An Engineer At LSI Logic ]
Well, I'm a design engineer, not a layout engineer, but I'll tell you what I
do know about this issue. We had to write Scheme scripts to be able to dump
out a version 3.0 PDEF file for PhysOpt to read in, since the Avanti PDEF
files were dumped out in version 2.0. An apps engineer from Synopsys helped
set this Scheme script up for us.
PhysOpt uses congestion analysis in addition to timing analysis to generate
the placement. However, global route information/estimates used by PhysOpt
for the congestion analysis are not dumped out. So, only placement
information, in the form of PDEF files, was transferred between the Avanti
and PhysOpt tool sets. All routing was done in Avanti Apollo.
Before running the PhysOpt placement, we used Avanti to do the floorplan for
the layout partition. This includes blockages due to memories and other
custom macrocells. Any blockages needed to relieve routing congestion in
known problem areas are also added at this floorplanning stage.
This floorplan with the various blockages is read into PhysOpt via the PDEF
file.
We did experiment with Avanti Saturn in the old flow; our results were not
that great at that time. However, that was almost a year ago, and it's
likely that Avanti has improved this tool since then.
- Bob Prevett
NVIDIA Santa Clara, CA
( ESNUG 346 Item 6 ) --------------------------------------------- [3/00]
From: Gerd Jochens <Jochens@OFFIS.de>
Subject: Customer Script Shows Memory Leak In report_power In DC 99.10-04
Hi John,
We are using DC 99.10-04 in Tcl mode and the report_power command for
power characterization of DW components for an LSI library. The power
characterization requires several calls to the report_power command. We
have found that report_power does not de-allocate memory, which leads the
program to abort on larger characterization runs.
I have attached a simple DC test script to demonstrate the problem.
# program to demonstrate memory leak produced by report_power
# create module from the DesignWare library
elaborate DW02_mult -arch csa -lib DW02 -update \
    -param "A_width = 4, B_width = 4"
current_design
compile -map_effort medium
ungroup -all -flatten
change_names -rules vhdl -hier
# calculate power (loop forever to expose the leak)
while { 1 == 1 } {
  set_switching_activity -period 10 -toggle_rate 0.7 [all_nets]
  report_power
}
Start the script and observe the process's memory use: even for this small
component, each report_power call requires several more Mbytes of
additional memory.
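The watch-the-memory experiment generalizes to any repeated call. As a
rough Python illustration (nothing to do with DC itself; the leaky function
here is invented), one can sample an allocation counter around repeated
calls and flag per-call growth:

```python
import tracemalloc

# Sketch of a per-call leak detector.  leaky_report() stands in for a
# command that never frees what it allocates; clean_report() allocates
# but frees everything.  Both are made-up examples.
_cache = []

def leaky_report():
    _cache.append([0] * 1000)   # keeps a reference forever -- a "leak"

def clean_report():
    list(range(1000))           # allocates, but everything is freed

def grows(fn, calls=5):
    """Return True if memory use grows roughly linearly with call count."""
    tracemalloc.start()
    fn()                                   # warm-up call
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(calls):
        fn()
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before > calls * 1000   # threshold: ~1 KB per call
```

The same idea applied externally (polling the dc_shell process's size once
per loop iteration) is what the script above is meant to expose.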
- Gerd Jochens
OFFIS Research Institute Oldenburg, Germany
( ESNUG 346 Item 7 ) --------------------------------------------- [3/00]
Subject: You Won't Find Cadence Dracula DRC Decks Floating Around "Free"
> Is it possible to get a sample/template rules file for verification of
> designs using Cadence Dracula? If yes, please let me know.
>
> - Hartej Singh
From: Grant Erwin <grant_erwin@halcyon.com>
It depends on who you are. If you are, for instance, a customer of a major
merchant foundry (e.g. TSMC), then you can most likely get their DRC deck.
Consider, however, that DRC decks contain very precise and proprietary
information about a process, and that coding and debugging a DRC deck
represents perhaps hundreds of hours of expensive work, and you'll see
why "real" decks just aren't floating around free.
- Grant Erwin Kirkland, Washington
( ESNUG 346 Item 8 ) --------------------------------------------- [3/00]
Subject: ( ESNUG 345 #10 ) Six More User Letters On C/C++-Based HW Design
> Having actually used C to design hardware a few times, I'm not
> impressed with Synopsys SystemC. The "design with C" tools fall into
> 3 categories:
>
> (1) "Let's make C look/behave enough like Verilog/VHDL so that
> Verilog designers can use C". SystemC and CynApps are in
> this category. This category is distinguished by user-guides
> which explain an execution model and explain how to express
> non-blocking assignments.
>
> (2) "Let's add something to C to make it a HW design language".
> Handel-C is my favorite example - if you're one of the 10k
> people who know CSP, it may be the tool for you.
>
> (3) "Let's figure out how to translate a C subset into gates."
> Currently, the big player in this area is C-Level Design.
> (I'm getting close to releasing a product which competes
> with their RTL-C product.)
>
> Category (1) tools promise faster simulation and easier integration with
> a world simulated in C/C++. I don't think that the former advantage is
> sustainable -- the "fast" HDL simulators can use the same shortcuts.
>
> - Andy Freeman
From: leonid@vnet.ibm.com (Leonid Gluhovsky)
Andy, could you please elaborate on your Category (3)?
I understand that if we take some hardware block, & look at the computation
it does in one cycle, we can express this computation as C functions which
take the block's inputs and current state as arguments, and produces the
block's outputs and next state. Such C function(s) can be written in purely
sequential style, without explicit parallelism.
I don't understand how one proceeds from this point. Suppose there are two
such functions, modelling two neighboring hardware blocks. How is it
possible to hook them together to get a model of the parent block? If at
least one of the blocks is a Mealy machine, it's impossible to compose them
as black boxes and still keep purely sequential style.
Category (1) has a solution for this (& that solution has the many problems
which you mention). What is the solution in category (3)?
- Leonid Gluhovsky
IBM
---- ---- ---- ---- ---- ---- ----
From: Andy Freeman <anamax@earthlink.net>
Leonid,
The problem comes from part of the "suppose", namely the quest to map
each Verilog module/block onto a single C procedure. The RTL-C paper on
the C-Level Design web site shows how they use multiple C procedures to
model/express a single Verilog module. That example also shows how they
implement a hierarchy.
If you design with registered inputs or registered outputs, this scheme
requires two procedures per module. There are registering schemes which
let you use a single procedure per module. If you let an electron cross
multiple block boundaries between clock events without going through a
storage element, in some cases you get to use more than two procedures per
block.
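The two-procedure split can be sketched roughly like this (Python standing
in for C; the two blocks below are invented for illustration, not taken
from the RTL-C paper):

```python
# Each block gets a combinational ("output") procedure and a sequential
# ("next state") procedure.  Both stay purely sequential, yet two Mealy
# blocks compose: the parent calls outputs in dependency order within a
# cycle, then updates all state at once.

def b1_out(inp, st):       # block 1, Mealy output: depends on input now
    return (st + inp) % 4

def b1_next(inp, st):      # block 1, next-state function
    return (st + 1) % 4

def b2_out(inp, st):       # block 2, output comes from its register
    return st * 2

def b2_next(inp, st):      # block 2 registers its input
    return inp

def parent_cycle(x, s1, s2):
    """One clock cycle of the parent: b1's output feeds b2."""
    y1 = b1_out(x, s1)     # evaluate outputs in dependency order...
    y2 = b2_out(y1, s2)
    return y2, b1_next(x, s1), b2_next(y1, s2)   # ...then latch all state
```

Because output and state-update are separate procedures, the parent can
interleave them within one cycle; that is exactly the composition a single
procedure per Mealy block can't express.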
My solution is not completely different.
- Andy Freeman
---- ---- ---- ---- ---- ---- ----
From: [ Been There, Done That At SUN ]
John:
You might be interested to know that this C++ HW design fad is getting some
support within SUN Microsystems. Things tend to repeat themselves here
because you constantly have young guys coming in and they've never had the
previous experience. And the lessons from one generation often don't get
passed to the next. I have no motivation to tell the young guys what we
learned the first time we tried C-based design because they think I'm an
old fart anyway. They'll just have to learn this mistake on their own.
- [ Been There, Done That At SUN ]
---- ---- ---- ---- ---- ---- ----
From: [ An Ex-Hitachi RISC Designer ]
Hi, John, (anon please)
Several years ago I worked at Hitachi on their RISC processor. Our RTL was
developed in C and the scheduling of modules was done by the designer. Our
bit masking was a little different since all our regs were unsigned longs
and everything had to be masked for any operation. The design was latch
based and all latches were defined separately from the library. So, the
designer only partitioned his design into combinational or state machine
modules and called all the RTL modules (functions) in the order of
execution.
Many C constructs such as "while", "for", and "case" were not used. Only
nested "if else" constructs were allowed.
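A rough sketch of that style (Python standing in for C; the tiny ALU below
is invented, not Hitachi's code): every reg is masked to the register
width after every operation, combinational functions use only nested
if-else, and the designer calls them in execution order before latching.

```python
MASK = 0xFFFFFFFF    # every reg is an unsigned long: mask after every op

def alu_comb(op, a, b):
    # combinational module: nested if-else only -- no while/for/case
    if op == 0:
        r = (a + b) & MASK          # add, wraps at 32 bits
    else:
        if op == 1:
            r = (a - b) & MASK      # subtract, two's-complement wrap
        else:
            r = (a & b) & MASK      # bitwise and
    return r

def cycle(latches, op, a, b):
    # one simulation cycle: call RTL functions in order, then latch
    result = alu_comb(op, a, b)
    latches['acc'] = result         # latch update at end of cycle
    return latches
```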
It was a fast cycle-based simulator. All C functions were converted to
PLAs and boolean equations, which were read in by dc_shell. It was linked
and DC generated an EDIF file.
To me, there is no question of whether the future is a C cycle-based
simulator. It's only a matter of how soon. (But I must add that Hitachi
did abandon C-based HW design due to the complexity of supporting it for
their SH4 processor.)
- [ An Ex-Hitachi RISC Designer ]
---- ---- ---- ---- ---- ---- ----
From: Daghan Altas <daltas@yahoo.com>
Dear John,
We met at the SNUG'99 Boston Conference. We were the lucky team who won
the PlayStation in the drawing. I hope you remember me. I was reading your
comments on C++ based languages. These are my insights. At one point you
mentioned:
"And, yes, these guys tried designing hardware with C years ago."
There is a common misperception about C++ and C. The fact that they start
with the same letter does not mean that they are the same thing. If you
look at http://www.systemc.org, you will realize that object-oriented
programming is the "new continent" waiting to be discovered. I don't think
anyone did any black magic. People just realized the problem and put the
pieces together. Again, it is not C at all. It is C++. We are not talking
about mere syntactical changes, but rather a whole new perception based on
CLASSES. I know it sounds like the "same old story". But, weren't people
making the same kind of arguments for VHDL and Verilog 10 years ago?
C++ can be as physical and electrical as it can get. VHDL and Verilog are
also object-oriented languages, though they hide it very well.
- Daghan Altas
McGill University Montreal, Canada
---- ---- ---- ---- ---- ---- ----
From: Tom Coonan <tcoonan@subasic.sciatl.com>
John,
Another point I worry these C/C++ efforts might miss is that it isn't just
the Verilog/VHDL languages -- it's also the supporting tools/utilities that
have grown up along with each of the languages. When I use Verilog, I'm
also using Verilog's VCD capability and associated waveform viewers. When
I use C++, I usually expect a superior source level debugger. Likewise,
when I use MATLAB/Simulink, I don't just want the matrix features in that
language -- I want MATLAB's overwhelming library of existing functions and
its superior ability to plot data in beautiful and complex graphs. So,
it's not *just* the HDL language itself; it's the supporting environment,
tools, and utilities they have. Any unified language must bring along all
these assets, otherwise it's a step backwards and I'd rather continue to
"switch gears" between my MATLAB/C++/Verilog tools.
Hey, let's just rename our old-fashioned MATLAB/C++/Verilog schemes to
something like "Open SoC Hybrid Component-based EDA Solutions" and then
they'll become hip!
- Tom Coonan
Scientific Atlanta
( ESNUG 346 Item 9 ) --------------------------------------------- [3/00]
From: Reto Zimmermann <reto@synopsys.com>
Subject: Freeware Emacs Vera Mode Editor Now Available At www.emacs.org
There is a simple Emacs Vera Mode available now. The mode includes the
following features:
- Syntax highlighting
- Indentation
- Word/keyword completion
- Block commenting
- Works under GNU Emacs and XEmacs
http://www.emacs.org/hdl/vera-mode.html
- Reto Zimmermann
Synopsys Design Reuse Group
( ESNUG 346 Item 10 ) -------------------------------------------- [3/00]
Subject: How To Find The Number Of Instances Of Each Module Inside Of VCS
> Is there a tool with which I can find the total number of instances of a
> given module in my design? e.g. I may be instantiating DFF's or MUX cells
> everywhere, and I need to know how many of each type are instantiated
> across the entire hierarchy. At least VCS does not seem to print out this
> info.
>
> - Sudheendra Hangal
> Sun Microsystems Mountain View, CA
From: Paul Campbell <taniwha@taniwha.com>
You can do it through PLIs, but it's a pain. A much simpler, low tech way
to do it is:
1) add:
// repeat this for all objects you want to count
module top_count;
  integer df_count;
  initial begin
    df_count = 0;
    #2 $display("%d DF flops instantiated", df_count);
  end
endmodule
2) and to the DF module (or other objects you want to count) add:
initial #1 top_count.df_count=top_count.df_count+1;
As I said -- low tech but it works. If your DF module is intended to be
synthesised, you might want to surround the initial statement with a
"// synopsys translate_off" / "// synopsys translate_on" pair.
- Paul Campbell
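If you'd rather count from the source text than inside the simulator, the
same idea can be approximated with a small script. A crude Python sketch
(it assumes plain "modname instname ( ... );" structural style and knows
nothing about `ifdef or generate, so treat its numbers as a sanity check):

```python
import re
from collections import Counter

# Count "modname instname ( ... );" style instantiations per module name.
# Rough approximation only: no preprocessing, no hierarchy awareness.
INST = re.compile(r'^\s*([A-Za-z_]\w*)\s+[A-Za-z_]\w*\s*\(', re.MULTILINE)
KEYWORDS = {'module', 'macromodule', 'if', 'else', 'for', 'while', 'case',
            'initial', 'always', 'assign', 'function', 'task', 'repeat'}

def count_instances(verilog_src):
    names = INST.findall(verilog_src)
    return Counter(n for n in names if n not in KEYWORDS)
```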
---- ---- ---- ---- ---- ---- ----
From: "Per Edstrom" <pedstrom@home.com>
An easy way to do this in VCS is to specify -Xman=0x11 to put all source
files into tokens.v.
Then: grep "^<module name>" tokens.v | wc -l
- Per Edstrom
Axis
( ESNUG 346 Item 11 ) -------------------------------------------- [3/00]
Subject: ModelSim's Tcl/Tk vs. Cadence/Synopsys/Exemplar's Tcl Interfaces
> Does anyone have experience using Tcl/Tk with ModelSim? What is the
> benefit of using Tcl/Tk with this tool?
>
> - "Jeff" France
From: Volker Hetzer <volker.hetzer@abg1.siemens.de>
So far I've only seen Tcl/Tk with Cadence Affirma and Synopsys Design
Compiler. There, the benefit was that you've got a whole language to do
scripts in, including socket communication, procedures, file i/o and such.
However, my experience with Synopsys was that it's possible to create a
really messy, ugly Tcl-interface if one tries hard enough. The guys at
Synopsys certainly did.
What Tcl version does your version of ModelSim use? Try "info patchlevel"
to find out. Does it, perchance, offer Tk too? If yes, you've got a real
winner because then you can do your own graphical interface to ModelSim.
- Volker Hetzer
Siemens Germany
---- ---- ---- ---- ---- ---- ----
From: Evan Lavelle <eml@riverside-machines.com>
Tcl seems to be everywhere now - ModelSim is on Tcl 8.0, and Exemplar's
Spectrum rev. 7.6p2 has it, too. You can modify the GUI in ModelSim - I
don't know about Exemplar Spectrum.
- Evan Lavelle
Riverside Machines, Ltd. UK
---- ---- ---- ---- ---- ---- ----
From: mark.luscombe@lineone.net (Mark Luscombe)
I have been using Modelsim for a while now, and am starting to do some
larger and more complex simulation runs. Previously, I have just used a
linear sim.do file for providing input data patterns. Now I wanted a more
sophisticated test pattern, and wrote a nested set of sim.do files with
passed arguments using TCL for loops and bit manipulation etc. This works
really well. I was wondering what other designers' opinions are concerning
test benches implemented like this with Tcl scripts, versus ones that use
VHDL with its file I/O capabilities.
- Mark Luscombe
---- ---- ---- ---- ---- ---- ----
From: Richard Guerin <guerin@IEEE.org>
IMHO, VHDL testbenches are much more powerful and flexible than Tcl scripts. For
instance, you can include bus functional models (i.e. CPU, SRAM, Flash,
UART ....) to help test external interfaces, use assertion statements
to help automate testing, instantiate multiple versions of the design
(like RTL and Post-Route) and perform regression testing against outputs,
use TEXTIO packages to read in test vectors from ASCII file (can also write
out test results to ASCII file), etc.
Personally, I'll use a script file for simple design entities or modules.
I'll use a testbench for top level design or any entity that includes
complex interfaces, timing, or protocols. I've found that the extra time
required to generate and debug a test bench is worth it as it can save you
much more time down the road during system level checkout/integration.
- Richard Guerin
---- ---- ---- ---- ---- ---- ----
From: Ed Hepler <elh@vu-vlsi.ee.vill.edu>
I have used both Tcl based testbenches and VHDL based testbenches with
ModelSim. It has been a few months since I last used a Tcl testbench.
Note that there is a limit of 9 ($1 - $9) arguments from the command
line when passing arguments to a Tcl function via a "do".
I have also noticed a significant speed improvement when using a
VHDL-based testbench.
- Ed Hepler
Villanova University
( ESNUG 346 Item 12 ) -------------------------------------------- [3/00]
Subject: Cadence 4.3 DFII Includes PIPO For Their GDS-II In/Out Streaming
> How do you write a Cadence SKILL program to stream out a layout?
> Which function can do this?
>
> - "oCxu" Taiwan
From: "David Covell" <david.l.covell@intel.com>
You didn't mention which platform you use, but Cadence includes PIPO for
GDSII in/out translation. In Cadence 4.3 DFII the PIPO and stream functions
are loaded from transUI.cxt .
PIPO runs asynchronously, which suggests that it's a stand-alone
executable rather than a SKILL utility. Look for it under your Cadence
directory; you may be able to make a simple system call to it since it
supports template files for the numerous parameters.
- David Covell
Intel
( ESNUG 346 Item 13 ) -------------------------------------------- [3/00]
From: Andrew Pagones <pagones@labs.mot.com>
Subject: DC & Other SNPS Tools Can't Read The SDF File It Just Wrote Out !
John, please forward this to ESNUG. Thanks.
We are modeling flop output-to-output timing arcs in our .lib. That is, we
have CLK->QB and QB->Q arcs. DC handles these just fine until it comes time
to write an SDF. Then it decides to simply write CLK->QB and CLK->Q arcs.
The delay numbers are correct, but of course this file cannot be imported
into DC because of arc mismatches!
SolvNET revealed that each tool may or may not have its own SDF writer
code. (See Static_Timing-171.html and Static_Timing-191.html). In my
opinion their proposed workarounds are laughable.
Has anyone been through this before and found which tool's SDF writer
produces the most 'reasonable' (i.e., bug-free, actually USABLE) output?
- Andy Pagones
Motorola Labs
( ESNUG 346 Item 14 ) -------------------------------------------- [3/00]
Subject: Probable Mis-install Causes Cadence cdsSpice To Not Display HSPICE
> Why can't Cadence display an HSPICE waveform? I'm using Cadence 97a and
> I didn't have this problem when I selected the cdsSpice simulator to
> simulate my own model.
>
> - Chiranut Sa-Ngiamsak
> UMIST Manchester, UK
From: han@ice.el.utwente.nl (Han Speek)
Sounds as if the Cadence interface stuff (which should have come with the
version you're using) isn't properly installed, or possibly not installed
at all. Look for a file called cds.tar in the HSPICE installation directory;
that contains the stuff you need to get Cadence and HSPICE to understand
each other. If you've recently installed a newer version of HSPICE, don't
forget to update the Cadence interface as well - it "may" not work with an
older version.
- Han Speek
University of Twente Enschede, The Netherlands
---- ---- ---- ---- ---- ---- ----
From: Emmanuelle Laprise <emmanue@photonics.ece.mcgill.ca>
I have sometimes found that when I can't plot a Cadence waveform directly
after simulation, I can still plot it using the Cadence result browser.
Maybe that could solve your problem. This is mostly the case when I am
doing mixed-signal simulations. There is also a hot fix for Cadence HSPICE
support in one of their versions (98.2, I think).
- Emmanuelle Laprise
McGill University Montreal, Canada
---- ---- ---- ---- ---- ---- ----
From: Pedro Ventura <pventura@chipidea.com>
Is there any way to have Cadence display an HSPICE waveform without the
PSF features, like cdsaawaves and others, from Avanti? I can simulate
from Cadence, but I can't see the results...
- Pedro Ventura
Chipidea Microelectronics, Ltd. Porto Salvo, Portugal
---- ---- ---- ---- ---- ---- ----
From: han@ice.el.utwente.nl (Han Speek)
It depends on the type of simulation. With the Cadence stuff comes a viewer
called "simwave" (but it also responds to "wd"), which you'd normally use to
view the output of Verilog-XL simulations. It is, however, also capable of
displaying HSPICE results, but only transient, no AC stuff. (Don't know if
that's at all useful to you, but it's an option worth knowing.)
- Han Speek
University of Twente Enschede, The Netherlands
( ESNUG 346 Item 15 ) -------------------------------------------- [3/00]
Subject: 0-in Now, And The Ethical Problems Of Kurt Baty Owning 0-in Stock
> Anders then praised 0-in's register leak check. "It found one register
> leak bug that would have been very hard to debug in the lab. It would
> have really confused our software designers."
>
> And Anders' experience was with an older version of 0-in. "I know 0-in
> has evolved since I last used it. It's apparently much easier to use now.
> It's still an interesting complement to simulation that helps you find a
> class of bugs that are very difficult to find just by simulation,"
> concluded Anders.
>
> I wonder what the 0-in user experience is now. ( http://www.0-in.com )
>
> - from "A Partial 0-in Update"
From: John Andrews <john@tensilica.com>
John,
I read your description of 0-In. We're currently using a pre-release
version of the tool here at Tensilica (www.tensilica.com), so I can fill you
in on what the new tool does. The basics of their new methodology are
simple:
1) Insert 0-In checkers in your Verilog. These are assertions (in
comment form) about the correct behavior of your RTL. 0-In currently
has about 30 checkers and the library of checkers will continue to
grow. The checkers vary from extremely simple checkers (e.g. x == 3)
to complex checkers that span multiple cycle counts or that check
correct FIFO or arbiter behavior.
2) Use their CHECK tool to make sure that no checkers fire using your
simulations as stimulus. CHECK generates Verilog which you include
in your simulations along with 0-In PLI to detect if any checkers fire
during your simulation. The overhead of CHECK is pretty low. You would
typically run with CHECK during all or a large part of your regressions.
3) Use their SEARCH tool to look for stimulus different from your simulation
stimulus that causes checkers to fire. SEARCH is a formal tool that
uses your simulation input as a "seed" space to explore around. It
searches N clocks (where N is a relatively small number) around your
seed, trying to see if it can drive something other than what you drove
on the primary inputs to make a checker fire.
BTW, you can define some of your checkers as *constraints*; the rest
are SEARCH *targets*. Constraints are assertions that SEARCH will
not violate; targets are assertions that SEARCH *tries* to violate.
Typically constraints are statements about primary I/O, while targets
are internal to your design.
CHECK is just an assertion library (although I don't mind paying for a
good one, and I like theirs), so SEARCH is their main product. 0-In claims
that by using this hybrid of simulation with formal techniques, they can
deal with much larger designs than pure formal tools.
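To make the CHECK vs. SEARCH split concrete, here is a toy sketch of the idea
in Python (purely illustrative: 0-In operates on real Verilog with its own
checker pragmas, and its SEARCH explores a bounded neighborhood of the seed
rather than the brute-force enumeration used here). Replaying the simulation
stimulus misses a bug that a search over nearby stimulus finds:

```python
from itertools import product

# Toy "design": a counter that is supposed to saturate at 3,
# but the saturation compare is buggy (uses > instead of >=).
def step(count, inc):
    if inc and count > 3:      # BUG: should be count >= 3
        return count           # saturate
    return count + 1 if inc else count

# "Checker": an assertion about correct behavior (0-In checkers are
# Verilog comment pragmas; this is just the same idea in Python).
def checker_fires(count):
    return count > 3

# CHECK-style run: replay the simulation stimulus, flag any firing.
def check(stimulus):
    count = 0
    for inc in stimulus:
        count = step(count, inc)
        if checker_fires(count):
            return True
    return False

# SEARCH-style run: try every input sequence of the same length as the
# seed (feasible only because the sequence is tiny) looking for
# stimulus the simulation never drove.
def search(seed):
    for candidate in product([0, 1], repeat=len(seed)):
        if check(candidate):
            return list(candidate)
    return None

seed = [1, 0, 1, 0, 1, 0]   # simulation stimulus: counter peaks at 3
print(check(seed))           # False -- CHECK alone never sees the bug
print(search(seed))          # [0, 0, 1, 1, 1, 1] -- a firing sequence
```

The point of the hybrid: the seed trace keeps the search anchored in
reachable, realistic behavior, while the exploration covers input
combinations the testbench never happened to drive.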
I think this flow is much different (and easier) than the one I saw in your
Industry Gadfly ("A Partial 0-in Update"). I'm pretty happy with the tool
so far, although we are just getting started doing serious work with it.
- John Andrews
Tensilica
---- ---- ---- ---- ---- ---- ----
From: Rui DosSantos <rdossantos@argon.com>
John,
Ouch. We had Kurt Baty at my place of employment pushing this tool with
disastrous results, until new management decided that enough was enough
and dumped it.
Some of us could not really figure out why he kept pushing it. It turned
out that Kurt Baty was a primary investor in and technical advisor to 0-in,
and that's why we were using it in the first place. We continued to use it
despite continuous setbacks, even though our team made a great effort to
make our design 0-in friendly.
The party was over when this scam of his was finally revealed.
I don't know, nor do I care, if 0-in is better or not. We spent a lot
of time debugging the tool for them, or rather, safeguarding Mr. Baty's
personal investment.
You're a very smart guy. A lot of people take your advice. I hope you
are not inadvertently pumping something you're not sure about. Or did
Baty put in a request himself?
- Rui DosSantos
Argon Networks Littleton, MA
---- ---- ---- ---- ---- ---- ----
From: [ Vacation Boy ]
Hi John,
I just came back from a vacation watching Cactus League baseball and what do
I get? An update about 0-In ??? Yawn!!! The early employees there used
to joke that "0-in.com" said quickly sounded like "zero income" !! I am
guessing you just had a beer with Kurt Baty or something. Did you believe
http://www.techweb.com/directlink.cgi?EET19980601S0055 when it first
came out? Anon. Pls.
- [ Vacation Boy ]