Editor's Note: Aahhh, Summer! According to last year's police report,
we had 700 people at our Woodstock Revival '97 here on the Poor Farm. It
was a big bash that my landlord organizes and we tenant poorfarmers love to
be a part of. Last year, we had 5 bands & about 200 people set up tents
in the back pasture. At 2 AM, we got to watch my landlord schmooze one very
angry Assistant Chief of Police (plus his entourage of 11 other cops) into
*not* citing any of the open campfires (made without permits) nor any of
the under-age drinking so blatantly going on in front of everyone. Whoa!
And we're doing our 3rd Annual Mini-Woodstock Revival again this Saturday
-- except this time we have four rent-a-cops to keep Mr. Assistant Chief of
Police away. (It's funny how money solves so many problems, isn't it?)
On a more personal note, with the new girlfriend coming over to my place
for the first time, I'm under a MAJOR Bachelor Apartment Clean-Up Alert.
(Ugh!) We've sterilized the kitchen, got to work on my bathroom last
night, and I'm going to K-Mart tonight to buy new bed sheets, pillows, a
couch cover, and whatever else I see that'll make my place less, how shall
I say it,... "bacheloresque". I'll be happy once the cleaning is done
and the party starts. ( Women. Can't live with'em. And they can live
quite nicely *without* single-guy-messes-thank-you. :^( Clean! Clean! )
- John Cooley
the ESNUG guy
P.S. If you think you might be in the neighborhood this Saturday, send
me a reply e-mail ASAP & I'll reply w/ party directions, etc. :^)
( ESNUG 296 Item 1 ) ---------------------------------------------- [7/98]
Subject: ( ESNUG 295 #12 ) A User's Comparison Of VERA vs. SpecMan
> Systems Science and Verisity are two companies providing integrated VLSI
> verification environments that work seamlessly with most Verilog (& VHDL)
> simulators and even some emulation systems. Systems Science sells their
> VERA language and tools built on it, while Verisity sells a tool called
> SpecMan using their "e" language.
>
> Like me, I presume that many of you have looked at one or both (& possibly
> other offerings) in some detail, and I'd like to trade notes. Any takers?
> Can we do it here on ESNUG?
>
> - Paul Tobin
> Fusion MicroMedia
From: Rudolf Usselmann <rudi@logic1.com>
John,
I haven't used SpecMAN, nor do I have any inside knowledge of it. I have
used VERA for over two years now and found it very helpful in my work.
Both of my clients (very large companies) who use VERA did evaluate
SpecMAN and chose VERA instead. Both projects were in networking
applications and had a headend/master with multiple clients. My
responsibility both times was to verify a chipset by architecting an
environment from scratch and implementing it.
I am not associated in any way with Systems Science (the makers of
VERA), nor am I being compensated by them (or anyone else) in any way
for writing this. I just like VERA and hope it will continue to grow.
I am open to comparing specific features of SpecMAN and VERA if there's
a SpecMAN expert out there who is willing to have a friendly comparison.
At first, VERA appears a bit exotic, since it looks more like C/C++.
However, even without C/C++ knowledge, many of the team members were
able to pick up the language very quickly.
VERA integrates into Verilog as one black box (a single module) that
interfaces to the rest of the system. At first this might appear to be
a drawback, but later on everyone seemed to appreciate having one
central point for all exchanges between VERA and the rest of the
system. Besides this module-level interface, VERA also allows the user
to attach to any signal in the design by specifying the full Verilog
path to the signal.
One of the strong points of VERA is the way you specify the interface
to your design. Each interface has its own clock domain, and can
consist of module-level pins or any arbitrary signal within the Verilog
design and environment. Each signal in an interface is driven and
sampled with respect to the interface clock, unless it is declared as an
asynchronous signal, in which case it can be sampled and driven at any
time. Individual interfaces can be bound together to create larger
interfaces containing numerous clock domains. This is done by creating
port definitions (virtual interfaces).
Tasks that communicate with the design can be written without having a
specific interface in mind. They can be written to adhere to a port
definition, and the actual interface to which they are supposed to
attach can be specified when the task is called. A good example of this
is when you want to verify a multi-port networking medium and have a
task that, let's say, generates packets. When calling the task, you can
choose the interface (thus the port) on which to generate traffic. I
have found this a very powerful and useful feature, reducing code
complexity and making the architecture of your testbench a lot simpler.
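The idea of writing a task against a port definition and picking the concrete interface at call time can be sketched as a rough Python analogy (all class and signal names here are made up for illustration; this is not VERA code):

```python
# Rough analogy of VERA ports / virtual interfaces: the task is written
# against a generic "port", and the concrete interface is chosen when
# the task is called.
class Interface:
    """Stand-in for a VERA interface bound to real design signals."""
    def __init__(self, name):
        self.name = name
        self.sent = []          # record of driven packets

    def drive(self, packet):
        self.sent.append(packet)

def generate_packets(port, n):
    """Task written against any interface that provides drive()."""
    for i in range(n):
        port.drive(f"pkt{i}")

eth0 = Interface("eth0")
eth1 = Interface("eth1")
generate_packets(eth0, 2)       # choose the port at call time
generate_packets(eth1, 1)
print(eth0.sent, eth1.sent)
```

The same task body serves every port, which is exactly the code-complexity reduction described above.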
On each signal, a Value Change Alert (VCA) can be set up to monitor
unwanted changes.
VERA supports object-oriented architecture. Users can create classes
containing data structures and tasks that are unique to them. Classes
can be extended, and derived classes may be created. A 'new' method is
provided for initialization of classes. There is no destructor method,
as VERA does automatic garbage collection.
Except for the interface part, VERA's syntax looks like a mix between
Verilog and C++. The code structure is like C: no always or assign
statements. State machines or continuous assignments can be created
with VERA's very nice implementation of fork/join. The thing that makes
VERA's fork/join so effective, IMHO, is that you can specify when to
join: immediately, without waiting for any of the statements to finish;
when any of the statements finishes; or when all statements finish.
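Those three join flavors can be modeled very loosely with Python threads (this shows the control flow only -- VERA threads are simulation threads, not OS threads, and the syntax below is not VERA's):

```python
# Rough analogy of VERA's three fork/join flavors:
#   join all  -- wait for every forked block to finish
#   join any  -- continue as soon as one block finishes
#   join none -- continue immediately; blocks run in the background
import threading

def fork(*fns):
    threads = [threading.Thread(target=f) for f in fns]
    for t in threads:
        t.start()
    return threads

results = []
ts = fork(lambda: results.append("a"),
          lambda: results.append("b"))

# "join all": wait for every forked block before continuing
for t in ts:
    t.join()
print(sorted(results))

# "join none" would simply skip the joins; "join any" would wait only
# until the first thread's is_alive() goes False.
```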
In addition to all the standard language constructs, VERA has a very
powerful expect statement that allows a signal to be sampled within a
window and generates an error if the expected action did not occur
within the specified time frame. It takes only one line of code to do
the above expect!
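The window-check semantics behind that one-liner can be sketched in Python (illustrative only -- VERA's actual expect syntax is not shown here):

```python
# Loose model of an 'expect' window check: sample a signal over a
# bounded window of cycles and flag an error if the expected value
# never shows up in time.
def expect(samples, expected, window):
    """samples: signal value per cycle; report an error if 'expected'
    does not appear within the first 'window' cycles."""
    for t, v in enumerate(samples[:window]):
        if v == expected:
            return f"ok at cycle {t}"
    return f"ERROR: expected {expected} within {window} cycles"

print(expect([0, 0, 1, 0], 1, 4))   # ok at cycle 2
print(expect([0, 0, 0, 0], 1, 4))   # ERROR: expected 1 within 4 cycles
```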
All tasks in VERA are re-entrant, and as mentioned before, can be
written to communicate with the rest of the system by using specific or
virtual signals. When virtual signals are used, the task is bound to an
interface when it is called.
Another feature in VERA, added recently, is string manipulation. I
have read that VERA is capable of pretty powerful string operations,
but I have not used these features yet.
VERA supports a variety of synchronization primitives. They are all
very straightforward and easy to use. One of the most useful features
I found is mailboxes. Mailboxes allow different tasks in the system
to synchronize with each other and exchange information.
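The mailbox pattern maps closely onto a thread-safe queue; here is a small Python analogy (VERA's mailbox API differs in detail -- this shows only the usage pattern):

```python
# Mailboxes modeled with a thread-safe queue: one task produces
# packets, another consumes them, synchronizing through the mailbox.
import queue
import threading

mailbox = queue.Queue()

def producer():
    for i in range(3):
        mailbox.put(f"packet {i}")

def consumer(out):
    for _ in range(3):
        out.append(mailbox.get())   # blocks until a message arrives

received = []
c = threading.Thread(target=consumer, args=(received,))
p = threading.Thread(target=producer)
c.start()
p.start()
p.join()
c.join()
print(received)
```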
A new feature in the latest VERA release is the ability to communicate
between several independent simulations using VERA's socket interface.
This is essential for very large distributed simulations. One could
simulate one chip on system A and another chip on system B, and send
information back and forth between the two simulations at a much higher
speed than simulating both chips on one system, if that is feasible at
all. The sockets can also be used to communicate with C programs
running in parallel to your HDL simulation.
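The communication pattern itself is ordinary socket traffic; a toy Python model of two "simulations" exchanging messages (a local socketpair stands in for two machines, and VERA's actual socket API is not shown):

```python
# Toy model of socket-based co-simulation: two endpoints exchange
# messages through a local socket pair.
import socket

sim_a, sim_b = socket.socketpair()   # stand-ins for two systems
sim_a.sendall(b"chip A -> chip B: packet 7")
msg_at_b = sim_b.recv(64)
sim_b.sendall(b"chip B -> chip A: ack 7")
msg_at_a = sim_a.recv(64)
print(msg_at_b.decode())
print(msg_at_a.decode())
```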
Last but not least, there is the VERA debugger. Over time, Systems
Science has made the debugger into a very helpful tool. It looks
similar to some of the C debuggers I have seen on Suns. It has a source
code window where you can monitor the execution of the current context.
It allows you to view and modify variables and set additional
breakpoints. It still lacks the ability to view a class and all of its
members, or an array. But it is a helpful tool if you are stuck.
- Rudolf Usselmann, Consultant
Logic One, Inc.
( ESNUG 296 Item 2 ) ---------------------------------------------- [7/98]
Subject: Seeking Scripts For Multiple Voting D FF's in Rad-Hard Environments
> I am investigating the existence of any standard Synopsys (DC) scripts
> available for adding register redundancy into designs to improve the
> radiation tolerance of a design. I have been informed that such scripts
> exist, if anyone has any information on any such scripts could you please
> share them here on ESNUG?
>
> - J P Bertram
> UK Defence Evaluation & Research Agency
From: Alasdair MacLean <alasdair.maclean@gecm.com>
There's an application note at:
http://rk.gsfc.nasa.gov/richcontent/fpga_content/synopsis_actel.pdf
There are a few other links on the site that you might find interesting
too. Enjoy !
- Alasdair MacLean
GEC Marconi Electro-Optics Edinburgh, Scotland
---- ---- ---- ---- ---- ---- ----
From: ees1ht@ee.surrey.ac.uk (Hans)
You can find the app note on Actel's or on NASA's web site. The latest
ACTMAP synthesis tool (currently still in beta) will support TMR on
S-modules.
FYI, there is an upcoming conference on programmable logic for military
and avionics applications.
http://rk.gsfc.nasa.gov/richcontent/Ksymposium/kSymposium.htm
Lots of info on FPGAs for space applications, including Xilinx!
- "Hans"
University of Surrey, UK
( ESNUG 296 Item 3 ) ---------------------------------------------- [7/98]
Subject: Cadence Support Ain't Worth Shit For Verilog-XL Behavioral Profiler
> I've been unsuccessful trying to get the Behavioral Profiler option in
> Verilog-XL to work and was wondering if anyone out there may have had
> similar experiences. I've tried Cadence Customer support for the past
> 3 months (!), with no luck at all. They've had me install OS patches,
> upgrade to the latest rev, re-install from new CDs that they shipped to
> me, all with no luck. I'm really frustrated at this point and was
> hoping someone out there might be able to help.
>
> When I bring up a design with the "+profile" and issue a "$startprofile"
> command, I get the following message: "Signal 26 current ignored".
>
> Running for a few cycles after this and issuing a "$reportprofile"
> command gives a message that there are 0 samples in the report -- I end
> up getting no useful information at all.
>
> I had the same problem with my previous rev of XL (2.6.1 ?), which is
> why I upgraded to 2.6.8 (release 97B) in the first place. I'm running
> on a Sparc-10, running SunOS 4.1.3. I also had the same result with a
> sparc-20 running 4.1.4. I've also tried it with and without the "+turbo"
> and "+gui" options.
>
> Cadence has pretty much given up at this point, though they tell me that
> this specific problem was fixed when they went to 2.6.8. I don't
> understand why they can't give me at least a pointer as to what to look
> for, if they have seen this error before.
>
> I'm very disappointed with the support I've gotten from Cadence. Not
> only is my problem not solved after 3 months of going around in circles,
> but the support person didn't even know the proper install procedure
> when I was upgrading. He said that I had done it incorrectly (I followed
> the directions that came with the CDs) only to retract later and have
> to re-do it the way I had done originally.
>
> - Chip Natarajan
> Young Chang R&D
From: Achim Gratz <gratz@ite.inf.tu-dresden.de>
If that program runs at Cadence's support site, have them send you the
output of ldd for each executable involved, plus the size and xsum of
the libraries in question, and cross-check with yours. If you're
lucky, you are just picking up an uprev dynamic library that Cadence,
in its infinite wisdom, has also partly linked in statically. Make
sure your environment, and especially LD_LIBRARY_PATH, is set to the
absolute minimum, and create a wrapper script to keep it that way for
the Cadence commands you use.
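Such a wrapper can be sketched like this (the install path is a placeholder, not a real Cadence layout, and `env` stands in for the real tool so the sketch is runnable):

```shell
#!/bin/sh
# Sketch of a wrapper that pins the dynamic-linker environment for a
# Cadence command. The path below is a made-up placeholder -- point it
# at exactly the libraries your tool release was qualified against.
LD_LIBRARY_PATH=/usr/cadence/tools/lib
export LD_LIBRARY_PATH
# 'env' stands in for the real tool (e.g. verilog); it shows the
# environment the wrapped command would actually see.
WRAPPED="$(env | grep '^LD_LIBRARY_PATH=')"
echo "$WRAPPED"
```

Because the script sets the variable itself, stray uprev libraries elsewhere in a user's login environment can no longer be picked up at run time.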
- Achim Gratz
TU Dresden Dresden, Germany
( ESNUG 296 Item 4 ) ---------------------------------------------- [7/98]
Subject: (ESNUG 294 #1) Response To Charles Small's "Bad Software Explained"
> The solution to the pressing problem is as drastic as it is obvious once
> you have the facts about human intelligences: Technologists having high
> visual intelligence and possessing visually oriented tools must replace
> the current crop of textually oriented programmers and their antique
> text-based tools.
>
> The difference between the visual and textual methods is dramatic. A
> diagrammatic specification for a complex control program takes only a few
> pages. An engineer blessed with extraordinary visual intelligence can
> easily comprehend the structure of such a system. An equivalent textual
> version of the same program would span many pages grouped into many
> separate files -- literally incomprehensible even to those having
> extraordinary textual ability.
From: [ Wally Rhines, CEO of Mentor Graphics ]
John,
A really fascinating article from Charles Small!! Thanks for sharing it.
It stimulates a lot of great possibilities for solving a significant basic
problem with software tools.
- Wally Rhines
CEO of Mentor Graphics
---- ---- ---- ---- ---- ---- ----
From: [ A Synopsys Field Rep ]
Hi John,
It was an interesting article, but I think that it puts programmers in
a much worse light than they deserve. I am not a "programmer" per
se, but with HDL being the norm, I am not so sure anymore.
One topic that the author has not touched upon is "How did Einstein (one
of the famous dyslexics featured) communicate his ideas to others?" The
answer is that he used mathematical equations and formal proof to
establish what he could "visualize". In my eyes, this is nothing but
"poking little groups of characters" onto a page. To anybody outside
the stratified field of space-time continuum and relativity, it would
appear to be pure gibberish. And there have been many who never
directly communicated with Einstein, but could read the "little groups
of characters" and further his ideas.
Another thing that Small conveniently forgets is "What is programming?"
For him, it appears that programming is simply the mundane and
finger-breaking task of "poking little groups of characters into a text
file". Maybe you should refer the author to Niklaus Wirth's "Program
Development by Stepwise Refinement" (which can be downloaded from the
ACM [www.acm.org], Classics of the Month, December '95).
For me, the actual act of programming is not in "writing" the programs,
but in describing what the data structures are, how they interact with
each other when the program runs, and the algorithms to be used. The
actual typing of the program _is not programming_. Also, for every
software crash, you could point to 10 "other types of crashes", ranging
from sheer ignorance to oversight to bad design.
Also, the author ignores the large body of "standard practice" available
for software development. Any good book on programming practices will
list them (e.g. Bjarne Stroustrup, "The C++ Programming Language", 3rd
Edition -- read the advice at the end of each chapter. Just that is
sufficient to make a good programmer out of the reader!)
- [ A Synopsys Field Rep ]
---- ---- ---- ---- ---- ---- ----
From: simon@formasic.com (Simon Read)
John,
ESNUG 294 worries me. Software is bad because there is no economic
reward for making it good. The article contains some interesting, old
and thorny points, but also a lot of urban legends.
For a good argument about why software engineering is so hard, check out
"The Mythical Man-Month" (ISBN 0-201-83595-9) by Frederick P. Brooks, Jr.
Chapter 16 of the new edition (the famous "No Silver Bullet" paper) is
especially worthwhile. Brooks was a brilliant and famous hardware
architect for IBM. He then managed the OS/360 software development,
where, as he admits, he made a "multimillion dollar mistake". Through
his writing and research since, he has tried to guide others away from
the same mistake[s]. We continue to make those mistakes. I have made
them. I have seen them made. I find his arguments more compelling than
those of Charles Small, perhaps because they are based on more than 30
years of careful observation and argument.
I cannot recommend Brooks' book highly enough to people who are involved
in 'development in the large' in software, hardware or systems.
- Simon Read
Formal ASICs, Ltd. Annapolis, Maryland
---- ---- ---- ---- ---- ---- ----
From: Jim Cardwell <jim@sd.com>
John,
Thanks! After reading the article in the last ESNUG, now I know why those
hardcore RTL design engineers resist converting to graphical entry tools.
Here's my deduction...
* the article claims visual vs textual intelligence is gender linked:
"At the risk of being Politically Incorrect, it is worth noting that
over a half a standard deviation appears to exist between the average
visual intelligence of males versus females."
* If RTL designers converted to graphical entry tools, males might have
an unfair advantage.
* An unfair advantage for males may reduce the number of female designers.
* So, you guessed it -- RTL designs are coded in text for the women!
Also, since an RTL designer is really half-engineer, half-programmer, I
couldn't help but notice that the article, when duly whittled down, makes
for some great marketing copy for our own graphical entry tool! :^)
- Jim Cardwell, Account Manager
Summit Design Campbell, CA
---- ---- ---- ---- ---- ---- ----
From: [ Just Another EDA Developer ]
John,
(Keep me anonymous, as I work for an EDA company.) While I found the
article on buggy software interesting, I think that it misses the real
issue, which is easily understood by writers and users alike: economics.
What is a bug? Typical bugs come about when the software is required to
handle a situation the writer did not consider.
Inspection techniques (Fagan, etc.) can find many of these, at a cost.
Testing can find others, again at a cost. Quite simply, the cost of
finding and fixing these bugs is greater than the cost of users living
with and working around the bugs, or of software suppliers providing a
hot fix.
If you look at software for the Space Shuttle, I would be willing to bet
that the s/w costs at least 10x as much per line of code to write as EDA
s/w. Similarly, PC s/w such as Windoze and Office probably costs several
times as much per line to write as EDA s/w.
The question is: how many chips would be designed if your P&R s/w cost
10x the current price? How many copies of Windoze '98 would be sold if the
cost were, for example, $400?
Users choose what gives them best value, and best value means living
with bugs. Bug-free s/w would be too expensive in most situations. There
are special cases (such as the Space Shuttle) where bug-free s/w has
more value, and the user is then prepared to pay the cost. EDA
applications are not among them.
- [ Just Another EDA Developer ]
( ESNUG 296 Item 5 ) ---------------------------------------------- [7/98]
From: Yaron Kretchmer <yaron@lsil.com>
Subject: Multibit Register Inference Problems
John -
While inferring multi-bit-registers in 1998.02, I stumbled across the
following problem:
Synthesis of a multi-bit component (a register bank, in my case) takes
FOREVER -- for instance, a 128-bit flop bank is mapped to "normal" 1-bit
flops in less than 30 seconds, but can take several hours for multi-bits.
In the library there are a LOT of multi-bit cells (1-64 bits wide), which
seems to be the problem, but using the "undocumented magic variable"
< mbm_filtered_pack_per_cell_best = 1> does not help.
Any ideas?
- Yaron Kretchmer
LSI Logic Ramat Hasharon, Israel
( ESNUG 296 Item 6 ) ---------------------------------------------- [7/98]
From: Paul Fletcher <paulf@chdasic.sps.mot.com>
Subject: DesignWare, get_license, remove_license, and Infinite Looping Bugs
John,
I just couldn't resist writing about this after I read the DesignWare
Technical Bulletin. I wonder how many other users have had this problem?
Page 4 of the DesignWare Technical Bulletin (V.3 1.2, Q1 '98) shows a
script to get the DesignWare-Foundation license:
get_license DesignWare-Foundation
while (dc_shell_status == 0) {
sh "sleep 60"
get_license DesignWare-Foundation
}
It should be noted that this must be done before any pre-compiled db
files are read in that use any DesignWare components. If this is not
done, the loop will hang forever.
I have had this problem with the get_license function in the past.
I have suggested that Synopsys fix the get_license command. The fix would
be that if you already have a license AND execute the get_license command
trying to get the license you already have, then the get_license command
should return a "1" so that the while loop will not loop forever.
The get_license command is already smart enough to know that you already
have the license you are trying to get so it should be easy to fix it to
work correctly. Message from while loop if you already have the license:
Information: You already have a 'DesignWare-Foundation' license. (UI-31)
Information: You already have a 'DesignWare-Foundation' license. (UI-31)
For all of the other licenses it is possible to work around this BUG in the
get_license command by first removing the license before trying to get it.
This will also work for the DesignWare-Foundation license as long as it was
not already gotten by reading in a db that required it.
If this happens and you try to do a remove_license DesignWare-Foundation
you get a message stating that you cannot remove the license until you
remove the design.
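Putting the pieces together, the workaround looks like this as a dc_shell fragment (same style as the Technical Bulletin script above; purely illustrative, and it only works when the license has not already been pulled in by a db file):

```
remove_license DesignWare-Foundation
get_license DesignWare-Foundation
while (dc_shell_status == 0) {
   sh "sleep 60"
   get_license DesignWare-Foundation
}
```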
I will admit that it is possible to kludge up scripts to work with the
get_license command; it just seems like it would make lots of users'
lives easier if Synopsys fixed the command to work as one would expect.
- Paul Fletcher
Motorola SPS Chandler, AZ
( ESNUG 296 Item 7 ) ---------------------------------------------- [7/98]
From: Henry George Berkley <asic@netcom.com>
Subject: Chrono VCS Bug When Comparing Time & Integer Variables
Hi John,
When comparing time variables to negative integers, VCS gives erroneous
results. One of the busses on my ASIC has a negative hold requirement.
When writing a bus monitor to check this spec, I discovered the bug.
From bug1.v:
module bug1();
time Tvar;
initial begin
Tvar = 100;
if ( Tvar < -20 )
$display("We have a bug.");
end //initial
endmodule
From bug1.log:
Command: vcs -R bug1.v -l bug1.log
Chronologic Simulation VCS Release 4.1.1 Tue Jul 14 17:40:49 1998
Copyright Chronologic Simulation/Viewlogic '91-'96. All Rights Reserved.
This Licensed Software contains Confidential and proprietary information
which is the property of Chronologic Simulation
Compiling bug1.v
Top Level Modules:
bug1
1 unique modules to generate
1 of 1 modules done
Invoking loader...
Chronologic VCS simulator copyright 1991-1997
Contains Viewlogic proprietary information.
Compiler version 4.1.1; Runtime version 4.1.1; Jul 14 17:40 1998
We have a bug.
V C S S i m u l a t i o n R e p o r t
Time: 0
CPU Time: 0.067 seconds; Data structure size: 0.0Mb
Tue Jul 14 17:40:59 1998
My client is running VCS 4.1.1 on a Sparc running SunOS 4.1.4.
The workaround is to use real numbers.
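Not to second-guess the bug report, but the arithmetic can be modeled: Verilog's `time` type is a 64-bit *unsigned* quantity, so a negative integer operand wraps to a huge positive value before the compare -- which is also why the signed `real` workaround behaves as expected. A rough Python model of that wrap (the arithmetic only, not VCS internals):

```python
# Model the 64-bit unsigned wrap-around that a negative integer
# undergoes when compared against a Verilog 'time' variable.
def as_uint64(x):
    return x % (1 << 64)

tvar = 100
wrapped = as_uint64(-20)
print(wrapped)           # 18446744073709551596
print(tvar < wrapped)    # True -- the "We have a bug." branch is taken
```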
- Henry George Berkley
Electronic Consulting Santa Cruz, CA
( ESNUG 296 Item 8 ) ---------------------------------------------- [7/98]
Subject: (ESNUG 295 #4) Sun Workstation Crashes Once A Day!
> I've been having a Sun Sparc station stability problem for the last couple
> of months, and now it seems to crash about once a day (and for no apparent
> reason). I have a dual-CPU 200 MHz machine with 1 GByte of RAM and believe
> it to be running the latest kernel patches of the Solaris OS. The latest theory
> that we have is that it's some sort of multi-CPU problem, since the crashes
> seem to occur unpredictably. We have plenty of swap space allocated. Has
> anyone run into a problem similar to this?
>
> - Victor J. Duvanenko
> Truevision
From: [ A Synopsys AC ]
John -- please keep me anonymous, just sign me "a Synopsys AC".
Victor -- I don't have any specific details about your crashes so this could
be a lot of things, but I've seen bad RAM cause symptoms like this at one
site, and a bad motherboard (not sure if it was the CPU or something else)
at another site.
- [ A Synopsys AC ]
---- ---- ---- ---- ---- ---- ----
From: ryan@dogbert.fsd.com (Ken Ryan)
John,
I have in fact experienced this twice; once with a Sparc 10/ dual HyperSPARC
and again on my Ultra 2. The first instance turned out to be a CPU module
that went bad (after ~8 months of hard work). Ross replaced it and all was
well. The second instance was a bad memory SIMM. Again, it had been working
fine for around a year then my system started crashing.
To check your memory: shut down your system to the "ok" prompt (shutdown
-i0). Type "setenv selftest-#megs 1024" (sets system to test 1GB RAM on
reset) then "setenv auto-boot? false" to disable boot-on-reset. Now type
"reset"; this will make the system reinitialize and test all 1GB memory.
If you want to leave auto-boot true, you can interrupt the initialization
process with stop-A / L1-A and type "test-memory" at the "ok" prompt.
Note: Don't let the system cool down before the test in case the problem
is temperature dependent. Type "setenv selftest-#megs 1" to reset it to
minimum. Note also: testing 1GB memory takes for *ever* so be patient.
If a SIMM is bad it should come up in this test (it usually even tells you
which one). It is possible, however, for this test to miss it. If the
memory test comes up clean, I'd suggest pulling SIMMs until the machine
stays stable (I assume you don't have SIMMs to spare). I did a binary
search: pulled 1/2 the memory - it worked; replaced that memory with the
other 1/2 - it failed; replaced 1/2 the installed memory with the known-good
(first set), etc. (Think about the six-coins-one-fake problem). This is
how I actually found my problem. This technique effectively requires
reasonably repeatable crashes and tasks that need less than the full memory
(and aren't important!). Also, you DO keep backups, RIGHT? :)
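For what it's worth, the halving procedure Ken describes is a plain binary search over the installed modules; a toy Python model (where the callback stands in for "does the machine crash with this set installed"):

```python
# Toy model of the SIMM binary search: isolate the one bad module by
# repeatedly testing halves of the remaining candidates.
def find_bad(simms, is_bad_set):
    """is_bad_set(subset) -> True if the fault is in that subset."""
    lo, hi = 0, len(simms)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad_set(simms[lo:mid]):
            hi = mid            # fault is in the lower half
        else:
            lo = mid            # fault must be in the upper half
    return simms[lo]

simms = ["A", "B", "C", "D"]
print(find_bad(simms, lambda s: "C" in s))
```

Each round halves the suspect set, so even a big memory complement needs only a handful of swap-and-crash cycles.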
Double-check the hardware manual for your system for valid SIMM
configurations. The Ultra 2, for instance, needs SIMMs in identical
groups of 4; the Sparc 10 or 20 can use singles, but the first slot must
be filled.
Likewise, if you suspect a CPU, pull one of them (you have two) and see
if that makes the crashing go away. If not, try running with only the
other one. Again, check the documentation for which CPU slot needs to
be filled (I don't know offhand).
Another note: make SURE you are properly strapped for ESD - the SIMMs are
very sensitive, the CPU module much more so!
If all else fails, head for the comp.sys.sun.hardware newsgroup (don't
forget to check the DejaNews archives, too)!
Good hunting!
- Ken Ryan
Orbital Sciences/Fairchild Defense Germantown, Maryland
---- ---- ---- ---- ---- ---- ----
From: Stephen McInerney <stephenm@faraday.ucd.ie>
Hi John,
Is the machine exclusively running Synopsys? I once tracked down crashes
to a very weird memory conflict between Synopsys and Cadence.
- Stephen McInerney
University College Dublin Dublin, Ireland
---- ---- ---- ---- ---- ---- ----
From: Victor_Duvanenko@truevision.com
John,
Well, after struggling with what falsely appeared to be bad memory modules
and very few utilities to diagnose any kind of hardware problem, the Sun
guys came out and swapped out the CPUs. They mumbled something about a CPU
bug and a silent recall, and our workstation has been up and running
solidly for the past week. I have no idea if I'll be charged for these
CPUs yet, but go figure Sun's "simulate crashes instead of crashing" ad
campaign! I thought UNIX/Sun was immune to this kind of stuff and only
Intel/PCs were plagued with it. Gash darn, that complexity thing is at
it again, Vern!
- Victor J. Duvanenko
Truevision
( ESNUG 296 Item 9 ) ---------------------------------------------- [7/98]
Subject: (ESNUG 295 #3) Cadence's PB-OPT Trounces Synopsys reoptimize_design
> It seems like Avant! is playing a catch-up game on the In-Place
> Optimization front. Cadence's PB-Opt (Placement Based Optimization)
> product does post-layout in-place optimization and is well integrated
> into our flow.
>
> And BTW, this product has been out there for more than 3 years now...
>
> - [ One Of Those Evil Cadence Marketing Guys ]
From: Brian Arnold <briana@mail.fusionmm.com>
John,
With regard to Cadence's PB-OPT, I implemented this tool flow (in a former
life) and used it on very high performance designs. The results seemed to
match Cadence's claims, and this option enabled me to achieve a one-pass,
timing-convergent design flow from RTL->layout. This additional feature
allowed me to replace my previous methodology, which was to iterate
(sometimes > 15 times) through Design Compiler's reoptimize_design and
Cell3/Silicon Ensemble's ECO place and route.
- Brian Arnold
Fusion Networks Corp. Longmont, CO
---- ---- ---- ---- ---- ---- ----
From: [ Some HP Elves ]
Conversation I had yesterday with our local place & route expert:
"Why aren't we using Cadence's "well integrated" flow instead of this
cobbled-together patchwork of bugs they sent us?"
Basically, it was a cost-saving measure. We *could have* gone for the
"enable functionality" option, but it was an additional $200K. I guess
you can't believe everything you hear from Cadence marketing, huh?
Please keep us anonymous.
- [ Some HP Elves ]
( ESNUG 296 Item 10 ) --------------------------------------------- [7/98]
From: Robert Cooney <rcooney@net.com>
Subject: Cooley's Recommendation Of "HDL Chip Design" Ain't Worth Crap!
Dear John:
I remember reading your column in February in the EE Times. In this column
you gave a great many kudos to the book "HDL Chip Design". Recently, I have
been using this book in conjunction with others to learn VHDL. I must tell
you I am less than satisfied.
From a purely mechanical standpoint, the book does not appear well
constructed. I have only been using it for about 2 months and it is
already falling apart -- pages are coming out of the binding.
From an accuracy point of view, I also find it troublesome. On a number
of occasions I have taken VHDL code directly out of the book. The code
compiles correctly, but the intended action is incorrect. Specifically,
a good portion of the code for generating clock waveforms for test
harnesses does not generate waveforms.
You wouldn't have made any money by mentioning the book in your column,
would you?
- Robert Cooney
NET
[ Editor's Note: Sadly, Robert, I have to report that I made *no* money off
of that book review. Nada. Zero. Zippo. And to make it worse, I wasn't
even *offered* anything for it. No bribes, no kickbacks, nothing! I'm not
exactly sure what I'm doing wrong here $$$-wise, but, alas, it means I
stupidly wrote that column purely as my own opinion. Oh, well. - John ]
( ESNUG 296 Item 11 ) --------------------------------------------- [7/98]
Subject: (ESNUG 295 #6) 'Translate' W/ Buffering, dont_touch, & Incrementals
> We are attempting to translate a very large design from one technology to
> another similar one.
>
> Our first attempt was to load the big .db file in and let her rip. But the
> major sub-blocks have dont-touches, so it didn't penetrate the hierarchy.
> Our Synopsys rep told us that we would have to manually traverse the
> hierarchy removing these for it to work. He said that using a:
>
> remove_attribute find(design,"*",-hierarchy) dont_touch
>
> would not even work (no real explanation given).
>
> Rather than do that, we loaded the Verilog netlist of the entire design in
> and ran translate on it. This worked great since all constraints and
> attributes are not present. But, many of our buffer networks had been
> removed. Apparently, Synopsys does some optimizing in the translate step.
> Since we had no constraints, it decided that these buffers were not needed.
>
> So we loaded each sub-block .db file and ran translate on that. This does
> have constraint info. But, alas, our buffer network was still destroyed.
>
> I am now trying an incremental compile. But this is so time-consuming,
> I'm about to start over from VHDL.
>
> I totally understand that translate must use some smarts when figuring out
> how to map cells that don't exist in the new library. But a huge selection
> of buffers do exist and it sees fit to just take them out. Any ideas on
> why it does this? Or how to solve my original translation problem?
>
> - Chris Cope
> Pinpoint Solutions
From: Robert.Marshall@sinbad.nsc.com (Robert Marshall)
John,
My guess is it takes them out because "translate" is pretty much using the
same processes as "compile", and therefore the tool will try to optimise
out unnecessary cells.
I would have thought you could have put a dont_touch on the cells of the
buffer network to accomplish what you want, but I haven't tried this.
We've also been in the business of translating recently, although to have
the clock trees removed suited us fine (we actually did a proper compile
after translating).
Issues we had were:
1) Synopsys didn't really seem to understand scan flops too well -- it
replaced scan flops in library 1 with ordinary flops plus muxes in
library 2.
  2) Similarly, it didn't handle latches with resets very well -- instead
of mapping a latch with reset in library 1 into a latch with reset in
library 2, it replaced the latch with reset in library 1 with a latch
without reset plus gating around the D and EN inputs to come up with
the reset function, which we decided was not a smart way to go.
You might want to check these gotchas aren't happening in your case.
- Robert Marshall
National Semiconductor
---- ---- ---- ---- ---- ---- ----
From: Bob Wiegand <rwiegand@ensoniq.com>
John,
To remove the dont_touches, try this:
  foreach (des, find(design)) {
      current_design des
      remove_attribute current_design dont_touch
  }
current_design <top_level>
where des is just a loop variable; feel free to use whatever name you prefer.
Don't forget to change the current design back to the top level before
continuing. The same technique can be used to restore the dont_touches
after the translation.
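For what it's worth, the whole strip-translate-restore sequence might look
something like this in dcsh (untested sketch; TOP is a placeholder for your
top-level design, and note this blindly re-applies dont_touch to *every*
design -- if only some blocks carried it originally, record that list first):

  /* strip dont_touch everywhere, translate, then re-apply it */
  foreach (des, find(design, "*")) {
      current_design des
      remove_attribute current_design dont_touch
  }
  current_design TOP
  translate
  foreach (des, find(design, "*")) {
      current_design des
      set_dont_touch current_design
  }
  current_design TOP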
The buffers are a different story. If you need to preserve your existing
clock buffer tree, try using wrappers to instantiate the equivalent buffers
in the new technology. Create db files for these wrappers with dont_touches
on the instanced cell but not on the wrapper itself. Instead, do a
set_ungroup on the wrapper db to eliminate the extra level of hierarchy at
the next compile, or do an ungroup on the wrappers in the designs where they
are instanced.
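As a sketch of the wrapper idea (cell and module names here are made up --
substitute whatever buffer you actually pick from the new library):

  // BUFX4_NEW stands in for the chosen buffer in the new library.
  // The dont_touch goes on instance U1 in the wrapper's .db, *not* on
  // clkbuf_wrap itself, so a later ungroup can dissolve this level.
  module clkbuf_wrap (A, Y);
      input  A;
      output Y;
      BUFX4_NEW U1 (.A(A), .Y(Y));
  endmodule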
The ideal situation would be to have a pre-layout hierarchy where the
top level instantiates a pads block, a clocks block, and the core(s)
with their hierarchy below. This also assumes a clock tree insertion
tool in the back end. The pads block allows instantiation of the pad
cells, and the clocks block allows for instantiation of clock tree place
holder cells, as well as any clock dividing logic. The core(s) can then
be constructed with ideal clocks with the appropriate delay and
uncertainty, and no buffers (pre-layout, of course). The translation is
then performed on the core only, with far fewer headaches.
Hope this helps.
- Bob Wiegand
Ensoniq
---- ---- ---- ---- ---- ---- ----
From: "Russell Ray" <rray@msai.mea.com>
John,
I hear your woes on using the translate command. If you have a large number
of cells that are the same in the different libraries (named the same), why
not just read in the Verilog with your link_library and target_library set
to the new library?
If the cells are not named the same but you want to preserve pretty much the
same cells, then the best thing to do is write a script where you decide what
cells should be "translated".
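That approach might look something like this in dcsh (untested sketch;
chip.v, chip, and new_tech.db are placeholders for your netlist, top-level
design, and new library):

  /* re-read the netlist against the new library instead of translating */
  link_library   = {"*", "new_tech.db"}
  target_library = {"new_tech.db"}
  read -format verilog "chip.v"
  current_design chip
  link
  write -format db -hierarchy -output "chip_new.db"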
The translate command will remove buffer trees since it sees them as
unnecessary even if it is your clock tree. It does give you your design
in the new technology, but you have to run compiles to get your timing back
and sometimes your buffer trees. If there are many things like that, it
is almost easier to start from the RTL and re-run all your scripts.
Hope some of that helps,
- Russell Ray
Mitsubishi Semiconductor Durham, NC
---- ---- ---- ---- ---- ---- ----
From: "Chris Cope" <chrisc@ppsol.com>
John,
What we had to do was a compile -only_design_rule. We set the target
library to the new library to do this. Since we had dont_touches in
the hierarchy, it did not traverse. Even doing a remove_attribute on
all dont_touches in the design didn't help. So we had to do this
procedure bottom-up.
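A rough dcsh sketch of the bottom-up pass over the leaf blocks (the block
names and new_tech.db are placeholders; your mileage may vary):

  /* convert each leaf block on its own, then relink the parent */
  target_library = {"new_tech.db"}
  foreach (blk, {"blk_a", "blk_b", "blk_c"}) {
      read -format db blk + ".db"
      current_design blk
      remove_attribute current_design dont_touch
      compile -only_design_rule
      write -format db -output blk + "_new.db"
  }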
There was another complication once we had converted all bottom level
cells. On the next level of hierarchy, there was some glue logic.
So we translated that and saved it. Then we had to remove_design on
all blocks that it contained and read in the new blocks. Then we had
to rename design in some cases so that the .db file would link
together properly. Whew! What a pain.
- Chris Cope
Pinpoint Solutions
( ESNUG 296 Item 12 ) --------------------------------------------- [7/98]
From: "Celio Albuquerque" <cdlalbuq@matisse.inescn.pt>
Subject: Does FPGA Express Have Any Known Large State Machine Limitations?
Hi, John,
I'm currently working with FPGA Express from Synopsys.
I have a rather large state machine which I tried to implement. But TWO
DAYS after beginning the optimization, it still hadn't finished. Since
that module has a large state machine with plenty of I/O's, I would like to
know if there is a limitation in FPGA Express.
- Celio Albuquerque
INESCN