Editor's Note: I want to thank all the people who e-mailed me about my
friend, Sue, going in for sudden surgery for ovarian cancer two weeks
ago. Those letters really helped on that troubling Wednesday. Thanks.
The day after the surgery, I phoned the hospital where she was and asked
to be connected to her room (to arrange a time to visit). And, to my
surprise, Sue didn't answer the phone -- instead, an ex-girlfriend from
a few years ago, Kate, who works as a nurse, answered. "Oh. It's you."
were the first words I heard. And after we talked about how our lives had
changed since we last saw each other, Kate said: "Hey, Sue, it's no one
important. It's just Cooley on the phone." (Ain't ex-girlfriends great
in a time of crisis??! <grin>)
And since that moment, I've been dragged, squirming & retching into the
Secret & Mysterious & Icky world of "Feminine Problems".
Sue: "John, they took it all out. Total hysterectomy. I'm just
like Orange Cat now. I don't have any plumbing. I'm an 'it'
now." [ Sue seemed fairly non-chalant about it. ] "Yup. They
took my plumbing to some lab because it needed further testing.
I have an artist friend who keeps her plumbing in a jar of
formaldehyde on the mantle of her fireplace. She says she
might dry them out some day to use them in a sculpture. But no
jars for me. My stuff is in some lab now."
Me: "SUE! That was an absolutely disgusting thing to 'share' with
me!... I'm not totally comfortable with this topic. Eeeewwww!"
Sue: "Sheesh, John! Ron [ her boyfriend ] had the same reaction.
It's just a natural part of life. Get over it."
Kate: [ in the background ] "You tell him, Sue! It's part of life,
John! Deal with it!"
When I later told my girlfriend about this conversation, she
said: "Oh, yea, I have a friend who had a Bon Voyage party for part of
her breast she had to have removed. She had it in a jar of formaldehyde,
too." [ Eeeeeeeew! Don't women know anything???!! There is some
knowledge that *MAN* is not meant to know! Eeeeeeew! ] "Sue is right,
John. It is just a natural part of life."
Eeeeeew! :(
- John Cooley
the ESNUG guy
( ESNUG 315 Subjects ) -------------------------------------------- [3/99]
Item 1 : (ESNUG 314 #1) These CoreBuilder/CoreConsultant Tools Kinda Suck
Item 2 : Last Minute SNUG'99 Registration & The Sun "Compute Ranch" Tour
Item 3 : A Workaround Script For The Design Compiler check_design Bug
Item 4 : (ESNUG 313 #2) VCS Encryption Bug -- Did Someone Win $75,000 ?
Item 5 : Who Needs Platform's LSF When You Can Get The Same Stuff FREE?
Item 6 : Cheap Bastards Synopsys Now Charging For "set_compile_directives"
Item 7 : (ESNUG 313 #8) I Can't Confirm The So-called hdlin_use_cin "Bug"
Item 8 : OUCH! The 21 Things I Tripped Over When I Switched To PrimeTime
Item 9 : (ESNUG 313 #3) We Love Specman Before & After The "e" 3.0 Change
Item 10 : The New DC 99.05 Release Benchmarked Only Marginally Better
Networking Section : Cadence Wants An MBA-Type To Head Their User's Group
( ESNUG 315 Item 1 ) ---------------------------------------------- [3/99]
Subject: ( ESNUG 314 #1 ) These CoreBuilder/CoreConsultant Tools Kinda Suck
> I've just spent the past couple of days working with Synopsys' CoreBuilder
> and CoreConsultant tools. ... They basically work like:
>
> IP Maker IP Buyer
> ---------- ----------------
> (e.g. MIPS ) (e.g. Hewlett-Packard)
>
> CoreBuilder --> CoreKit files --> CoreConsultant
> captures IP into (encrypted in converts CoreKit files
> CoreKit files .kb format) into gates
From: "Clifford E. Cummings" <cliffc@sunburst-design.com>
John -
Starting with ESNUG 297, you had significant discussion on the "Reuse
Methodology Manual" and reusable code in general. Michael Keating (author
of the manual), Janick Bergeron from Qualis and Yatin Trivedi from SEVA
posted extensive comments on the topic.
Now, in ESNUG 314, you just published a scoop on a pair of IP-oriented
Synopsys tools.
What a waste of time and energy!
Design Reuse is a myth; it's this year's EDA hobby. It's a trendy industry
buzzword that'll be long forgotten 2 to 3 years from now.
Design reuse largely fails for one main reason: motivation. Engineers
(myself included) do not want to reuse somebody else's design. Engineers
want to "create" and for the most part, they do _not_ want to modify another
engineers code to do it. The prevailing attitude is: "I'm the design
engineer now, I'll do it my way!"
The design blocks that I do see being reused most often include:
(1) Block diagrams
(2) Some special core-logic cells
(3) Engineers reusing their own code -- not another engineer's code.
(Reusing your own code is "cool", reusing somebody else's code is
frustrating, boring, and a sign of technical weakness.)
Arguments will fly that a good coding style and good naming conventions and
good comments and good this and good that would make reuse easier, less
frustrating and less boring. As my teenage daughters would say, "no, duh!"
And as long as you can do all of this without impacting schedule, your
management team will support it.
But guess what, reusable designs take a lot longer to create!
Re-engineering a design is often easier than trying to understand an
existing design (for the most part, designs are not well commented). Even
minor modifications to an existing state machine design can require more
effort and cause more grief than just re-drawing the state diagram and
re-coding the new design.
Engineers enjoy the exhilaration that comes from creating new designs and
will often unintentionally sabotage a prior-existing design concept just to
find a reason to start over and do the entire design again ("Oops! One of
the I/O pins has changed, we have to start completely over from scratch!").
Even Ron Collett's research backs this engineering bias. In the Dec. 7, '98
issue of "EE Times" (pg. 49), Collett studied 200 IC & ASIC design projects
and found that, on average, designs were 70 percent new material and 30
percent reused IP. Of that 30 percent reused, 88 percent was recycled
in-house designs. Mathematically (12 percent of that 30 percent), this
works out to designs consisting of only about 3.6 percent outside-supplied
IP. (And I'll bet you that these were almost all RAM cores and PCI
interfaces, John!) When he asked the engineers to
project into the next 12 months, the amount of externally sourced IP grew
to 8.3 percent of their designs. Still, small potatoes!
But let's now talk testbenches. There, reuse is loved. I have seen design
teams go out of their way to reuse a testbench. I have seen examples almost
as absurd as: "well, we have this old testbench to test an 8-bit 8088 Intel
microprocessor. Let's see if we can adapt it to the Pentium-Pro!!"
Whenever I teach Verilog and Verilog synthesis classes, I always take a
short poll. The question: "How many engineers in this room prefer writing
testbenches over doing ASIC design work?" Of more than 600 engineers taught
in the last two years, fewer than five preferred writing testbenches.
At last year's IVC-VIUF conference, one of the hot topics was the fact that
as designs have become larger, the testing effort has become even more
complex. In the past, the design-verification effort was approximately
50%-50%. Last year I heard engineers claiming the design-verification
effort for million-gate ASICs to be more like 30%-70%. Design is not the
main problem; TESTING IS!!!!
A testbench typically tests at the I/O-pin-level of the design. A testbench
does not have to be exceptionally sophisticated (testbench code is not used
to decrease the number of gates, or increase the speed of an ASIC). A
testbench is used to apply stimulus and verify outputs. If a design team
already has a testbench that can be used to test a SCSI interface on an
existing design, why re-write the testbench for the second generation
component?
Engineers are very willing to reuse a test suite because they did not want
to write the testbench in the first place! Engineers always jump at the
opportunity to reuse a testbench. But engineers will stomp on anyone
suggesting they reuse somebody else's design!
Since we all know that the verification effort is growing much faster than
design complexity, why is the EDA industry focusing so much effort on the
_easy_ part of the job (reuse) and not on TESTING? New, fluffy, IP-oriented
EDA tools like Synopsys CoreBuilder and CoreConsultant are a waste of
everyone's time. What we engineers _really_ want are EDA tools that make
_testing_ more and more effortless!
- Cliff Cummings
Sunburst Design Beaverton, OR
[ Editor's Note: I just have one question, Cliff. I noticed you're citing
Ron Collett in your letter. Is this the same Ron Collett who predicted
the utter & grave demise of Verilog so many years ago? :) - John ]
---- ---- ---- ---- ---- ---- ----
From: Dave Brier <dbrier@ti.com>
Hi, John,
I'm not too happy about the new Synopsys IP delivery tools. The problem
I have with them is that they are tied to proprietary formats once again.
Yuck.
Rather than just sit around complaining, my engineering group discussed the
IP delivery problem and came up with the following solution. Our idea is to
use a true encryption engine, with _no_ proprietary anything, to create
secure source files for the exchange of IP via PGP. PGP is readily
available around the world and easily used. We are proposing that all
EDA tools be able to call the PGP algorithm when they read a file if
required. The flow looks something like this:
1.) Create Verilog / VHDL model of IP
2.) Encrypt model using PGP from within the EDA tool
3.) E-mail file.v.pgp to customer
4.) Design Compiler, VCS, or whatever "receiving" EDA tool
automatically decrypts the file.v.pgp
The basic steps and requirements in each step of the flow are:
Keys:
Either there could be a central IP key server or every provider of IP
would have a client key server. Whatever works.
Each customer would have a "public" and a separate "private" key.
Encryption:
Nothing more than running PGP on your files and applying the destination's
"public" PGP key. (The way PGP works, the only way file can be decrypted
is by using the matching "private" PGP key. This way the customer and
*only* the customer can read the encrypted file.) Our idea is that this
PGP encryption/decryption process be embedded *within* EDA tools. That
way, customers could receive and use IP without having direct access to
the source Verilog or VHDL.
Protection & Licensing:
What we're suggesting somewhat mimics what many Internet e-mail tools
(Eudora, Netscape, Outlook) already perform invisibly. They automatically
apply the proper PGP key to each mail depending on who you send it to.
This works even on a broadcast message, since each message gets its proper
PGP key applied.
Our suggestion is that a license server be added to the process.
To protect the encrypted source at all times, the "private" PGP key that
the destination has cannot be the complete key. There will have to be
some sort of server, similar to the LM License server that is capable of
only delivering the complete PGP key to a properly licensed tool. So, for
example, when you read an e-mailed file into DC, DC is capable of pulling
the complete PGP key and decrypting the file. You can't have anyone
being able to run PGP on the source file themselves.
Decryption Happens Only On Reading:
Our overall idea is that when you read the file into DC, Verilog-XL,
Modeltech, etc., and when these tools find a .pgp extension, they would
automatically poll the license server and pass the file (IN RAM) and the
key (IN RAM) to PGP and get back (IN RAM) the original decrypted source
to compile in DC, Verilog-XL, Modeltech, etc.
It seems simple enough. We have a dc_shell script that does this, albeit
in a crude manner (it dumps a temporary decrypted file, so it's not secure
at all), but it demonstrates the concept.
# Assume the existence of some pre-encrypted files:
# file1.v.pgp, file2.v.pgp, file3.v.pgp
files = { file1.v, file2.v, file3.v }
foreach ( filename, files ) {
# decrypt the file. PGP appends a .pgp extension to the
# filename by default as it searches for something to
# decrypt. Note that if the files were encrypted with
# different keys (say from different vendors), you'll need
# multiple pass phrases
sh pgp filename
# read in the verilog file
read -f verilog filename
# remove the verilog file. This will leave just the .pgp
# version
sh rm -f filename
}
Required tool limitations for PGP encrypted models (that we thought of) to
keep total security:
o Must suppress all hierarchical messages (including reg/latch reports,
  timing loops, etc.).
o Must suppress reporting of constraints.
o Restrict schematic generation (could be argued that this isn't a
big deal).
o Restrict descending into encrypted models (the kind of hierarchy
  descent you can do with Modeltech and VCS, etc.).
o Restrict tracing of signals internal to models that are encrypted.
You could also encrypt STA scripts, DC scripts and any other information
that might contain data that is sensitive. You could output log files that
could be encrypted for design support as well as many other useful items.
The most important issue here is that the flow be *non-vendor* specific.
You know, something of a standard, so that you don't have to support and
maintain dozens of different formats. You avoid all of the PLI/FLI
socketing issues that you get when you create C models, and there should
be fewer performance hits this way.
Ultimately, what is needed for a scheme such as this to work is the
cooperation of the EDA industry. I am certain that someone can shoot holes
in this method; anyone who tries hard enough can crack a model and steal
your IP, even if they have to do it polygon-by-polygon.
The concept here is to provide a reasonable amount of protection with a
minimal amount of heartache.
- Dave Brier
Texas Instruments Dallas, TX
( ESNUG 315 Item 2 ) ---------------------------------------------- [3/99]
Subject: Last Minute SNUG'99 Registration & The Sun "Compute Ranch" Tour
From: mujahid ul islam <mui20@eng.cam.ac.uk>
Hi John,
After reading the latest ESNUG posts about the conference I've suddenly got
interested! Could you send me the registration details (or point me in the
right direction), please? Sorry to be a pain -- I know you've emailed these
out before.
Thanks,
- Mujahid Ul Islam
University of Cambridge
[ Editor's Note: The best I can do, Mujahid, is tell you to show up at
the San Jose DoubleTree Inn at 8:30 AM on Monday, March 29th. SNUG'99
is 3 days, and you'll have to do an on-site registration. - John ]
---- ---- ---- ---- ---- ---- ----
From: Dennis Kelly <Dennis.Kelly@Eng.Sun.COM>
John,
Seeing those ESNUG posts regarding questions about using server "ranches"
versus desktop configuration methodology in EDA, I'd like to offer the
SNUG '99 attendees an opportunity to visit Sun Microelectronics on Thursday
morning (4/1) after SNUG'99.
For those interested in "how Sun does it", I'll be offering:
* Transportation from the Double Tree Hotel to/from Sun Microelectronics
in nearby Sunnyvale. Thursday morning in ~ 1/2 hr intervals.
* (upon arrival @Sun) Brief presentation "Managing Complexity" followed
by a Sun Microelectronics "Ranch" tour, an impressive compute resource
w/~2000 Sun CPUs used for running EDA jobs.
* Q & A period w/Sun Microelectronics EDA engineers, compute farm
managers and Synopsys tool users. This is NOT marketing fluff; these
guys/gals engineer, run and manage projects that tap the compute ranch
to ~ 1 million EDA jobs/month!
Tour times, Thursday morning 4/1: 8:30, 9:00, 9:30, 10:00, 10:30
Total duration/round trip w/Tour, ~ 2 hours
So, if any of your readers are interested, have them e-mail me at:
dennis.kelly@sun.com. We will have sign up sheets at SNUG as well, but
space will be limited.
- Dennis Kelly
Sun Microsystems Palo Alto, CA
( ESNUG 315 Item 3 ) ---------------------------------------------- [3/99]
From: david_johnson@paging.mot.com (David Johnson)
Subject: A Workaround Script For The Design Compiler check_design Bug
John,
I definitely get some useful information from ESNUG (and I appreciate the
humor too!) so I'd like to contribute something. I ran into a problem
with Design Compiler check_design. For hierarchical designs, check_design
does not warn about an output port not being connected _external_ to the
cell if, _internally_, the port is connected to both an output and an
input. See STAR 69047 (acknowledged as a BUG).
Now Verilint or other similar tools might catch this problem, but for
designers that rely only on Design Compiler for check_design, I have
a work-around script that might help them until Synopsys fixes the bug.
echo "exeternal pin connection report for " + _design >
_external_pin_report_file
echo "---------------------------" >> _external_pin_report_file
echo "" >> _external_pin_report_file
current_design _design
_hier_cells = {}
if (_recursive == 1) {
_hier_cells = filter(find(-hier,cell, "*"),"@is_hierarchical == true") >
/dev/null
} else {
_hier_cells = filter(find(cell, "*"),"@is_hierarchical == true") >
/dev/null
}
foreach(_cell, _hier_cells) {
echo "checking CELL " + _cell >> _external_pin_report_file
_pin_errors = 0
foreach(_pin, find(pin, _cell + "/" + "*")) {
all_connected(_pin)
if (dc_shell_status == {}) {
get_attribute _pin pin_direction
# one-trip foreach: copies the single value left in dc_shell_status
# into the variable _direction
foreach (_direction, dc_shell_status) { }
echo " (WARNING) no external connection for " + _direction + " port " +
_pin >> _external_pin_report_file
_pin_errors = 1
}
}
if (_pin_errors == 0) {
echo " all pins connected externally" >> _external_pin_report_file
}
echo "" >> _external_pin_report_file
}
User specified variables:
_design = top-level design name
_recursive = 1: run check on all cells recursively through the hierarchy
= 0: top level cells only
_external_pin_report_file = report file name
examples:
dc_shell> _design = cpu
dc_shell> _external_pin_report_file = ../report/cpu.external_pins.rpt
dc_shell> _recursive = 0
dc_shell> include "check_design_one_level.scr"
This works with Synopsys Design Compiler 1998.08 (and possibly earlier
versions as well).
- Dave Johnson
Motorola
( ESNUG 315 Item 4 ) ---------------------------------------------- [3/99]
Subject: ( ESNUG 313 #2 ) VCS Encryption Bug -- Did Someone Win $75,000 ?
> VCS Encryption Bug
>
> VCS 4.2.2 and VCS 5.0.1 contain a number of bug fixes, including a fix for
> a source encryption bug which could compromise the security of an
> encrypted model under certain unusual circumstances. This situation could
> only arise in the 4.2, 4.2.1 or 5.0 versions of the product.
> Corresponding versions of VCSi could also exhibit the same problem. ....
From: "Doss, Bill" <Bill.Doss@COMPAQ.com>
John,
Does this mean someone won the $75,000 Humvee Jeep that Synopsys was
offering last year? Refer to ESNUG 287 Item 5.
> Subject: The Crack VMC And Win A $75,000 Humvee Contest
>
> John, I noticed that you are one of the vmc contest officials. You know
> as well as I do that this is a sucker's bet. There is no way to extract
> the design from a VMC model and if anything I would be the only one that
> could even get close to extracting any sort of information that even
> resembled the design. So I guess synopsys will not be giving away that
> humvee. Darn.. and I wanted to get to my house in the santa cruz
> mountains!! Have a good one,
>
> - [ Chicken Man ]
Did [ Chicken Man ] crack the Synopsys encryption scheme?
- Bill Doss,
EDA Tools Manager,
Compaq Computer Corporation
( ESNUG 315 Item 5 ) ---------------------------------------------- [3/99]
From: Edward Arthur <eda@ultranet.com>
Subject: Who Needs Platform's LSF When You Can Get The Same Stuff For FREE ?
Hello John,
I remember the discussion about LSF a while back. I don't think anyone
mentioned any of the 'free' alternatives. I have not run any of these yet
but plan to start experimenting with a subset of GNQS, Condor and PBS. Who
wants to spend $$$ on a batch tool instead of simulator licenses and
workstations?
http://www.gnqs.org/
http://www.cs.wisc.edu/condor/
http://pbs.mrj.com/ (only available inside the USA)
Is anyone having any luck with one of these or any of the other free 'batch'
systems out there?
- Ed Arthur
Lucent Technologies Concord, MA
( ESNUG 315 Item 6 ) ---------------------------------------------- [3/99]
From: Rick Weiss <rickw@nablewest.com>
Subject: Cheap Bastards Synopsys Now Charging For "set_compile_directives"
Dear John,
A couple years ago, I asked on ESNUG if there was a way to have dc_shell
delete dangling gates without doing any other optimizations. The answer
came back as:
set_compile_directives DESIGN_NAME -delete true -constant true \
-local false -critical false
compile -incremental -only_design_rule
And it appeared to work. I needed it again today, and just put it in a
script. Now dc_shell says:
Error: DC Ultra license required to use 'set_compile_directives'. (UIO-65)
Turns out that I have a DC Expert License. It appears that Synopsys has
changed the licensing requirements for this command -- to extract more
$$$ from us. You didn't need to pay extra for this before! Boooooooo!
BTW, I did a "help set_compile_directives" in dc_shell, and it says that
I only need a DC Expert or DC Expert Plus license. (I'm using 98.02.)
Charging more for "set_compile_directives" -- Booooooo! Bad idea!
- Rick Weiss
N*ABLE Technologies Cupertino, CA
( ESNUG 315 Item 7 ) ---------------------------------------------- [3/99]
Subject: ( ESNUG 313 #8 ) I Can't Confirm The So-called hdlin_use_cin "Bug"
> Way back in ESNUG 264 we learned about a neat switch in Synopsys for
> optimizing logic. The switch was "hdlin_use_cin = true" which allows you
> to use the carry in input of an adder if you are doing an "a + b + 1"
> operation. (Why Synopsys doesn't do this automatically I have no idea....)
> This saves you an entire adder in your design, and can really speed up
> your arithmetic.
>
> However, recently a bug has surfaced. If you have any logic associated
> with the "a" and "b" inputs which moves bits in the buss around (like a
> simple shift to multiply by 2) then that logic gets "eaten" by the adder,
> and you wind up with something that doesn't add up. I've actually gone
> to the point of removing this variable from all of my .synopsys_dc.setup
> files because of this.
>
> - [ Kenny from South Park ]
From: "Clifford E. Cummings" <cliffc@sunburst-design.com>
John -
I read the post about using the switch hdlin_use_cin = true. Could you ask
[ Kenny from South Park ] to post an example design that shows a post-
synthesis failure?
I've been teaching people in the Synopsys Advanced Verilog classes for the
past 2 years to use this switch and to add it to their .synopsys_dc.setup
file, because of the remarkable improvement that this switch causes to the
size and speed of arithmetic logic designs. If this switch truly does cause
a problem, Synopsys should be notified and I owe a lot of former students
an update e-mail.
Looking at Kenny's description, I tried to recreate a failure but without
success. All of my pre-synthesis designs match my post-synthesis designs. I
am not trying to cast doubt or in any way embarrass Kenny for raising this
issue, but frequently when multiplying by a fixed number, and in particular
when multiplying by a power of two (or shifting as Kenny has suggested),
the compiled design often looks strange because no additional hardware is
generated, but the input wires have shifted to account for fixed
multiplication. I don't know if that is what Kenny means by "logic gets
eaten" or not.
Below I have copied the experiment files that I ran along with the Synopsys
synthesis script that I used to generate the gate-level files.
My DC script:
hdlin_use_cin = true
read -f verilog addert2_hdlin.v
current_design = addert2_hdlin
compile
ungroup -all -flatten
write -f verilog -hier -o addert2_hdlin.vg
hdlin_use_cin = false
read -f verilog addert2_nohdlin.v
current_design = addert2_nohdlin
compile
ungroup -all -flatten
write -f verilog -hier -o addert2_nohdlin.vg
NOTE: The next two files are identical except for the module names. This
made the synthesis scripting and the testbench easier to generate.
module addert2_hdlin (co1, sum1, co2, sum2, a, b, ci);
output [7:0] sum1, sum2;
output co1, co2;
input [7:0] a, b;
input ci;
assign {co1, sum1} = a + b + ci;
assign {co2, sum2} = (a*2) + (2*b) + ci;
endmodule
module addert2_nohdlin (co1, sum1, co2, sum2, a, b, ci);
output [7:0] sum1, sum2;
output co1, co2;
input [7:0] a, b;
input ci;
assign {co1, sum1} = a + b + ci;
assign {co2, sum2} = (a*2) + (2*b) + ci;
endmodule
I ran the RTL designs against gate-level models after synthesis with 10,000
random input vectors. Compared all RTL & gate-level outputs. No failures.
- Cliff Cummings
Sunburst Design Beaverton, OR
( ESNUG 315 Item 8 ) ---------------------------------------------- [3/99]
From: "Paul.Zimmer" <paul.zimmer@cerent.com>
Subject: OUCH! The 21 Things I Tripped Over When I Switched To PrimeTime
John,
I've been making the move from DesignTime (essentially, dc_shell) to
PrimeTime recently. I'm sure a lot of ESNUGers are/will be doing the same,
so I thought I'd throw out a list of "oddities" that I've tripped over.
Before I do, I want to preface this with the comment that, overall, I'm
fairly happy with PrimeTime. Tcl isn't my favorite language, and
PrimeTime does fatal occasionally, but it is much better than DesignTime.
Also, I've been impressed with the hotline team's knowledge and
responsiveness. Whoever is running the PrimeTime show over at Synopsys
seems to really care about getting it right.
I haven't run this list by Synopsys (although some of the issues have
been discussed with them), so there may well be solutions to these
problems that I haven't found. If so, I hope Synopsys will post back so
that we can all benefit!
Major Oddities
==============
1) Collections
This is by far the most confusing thing. To save memory and speed
processing (so I was told), PrimeTime returns "collection handles"
instead of normal tcl lists. These are essentially pointers to internal
data structures, although users can't really use them that way.
Anyway, when you do "get_cells *" (equivalent of find(cell)), it
returns something that looks like "_sel001". It won't do any
good to echo $_sel001. If you want to see what is in _sel001,
there are two ways:
1) If you're doing this interactively, just use "query" (short for
query_collection) where you would have used "list" in dc_shell.
You can't, however, access $_sel001 directly.
# this doesn't work:
pt_shell> get_cells *
_sel1
pt_shell> echo $_sel1
Error: can't read "_sel1": no such variable
# these work
query [get_cells *]
# or
set _cells [get_cells *]
query $_cells                 ;# works
2) If you want to echo what's in the collection (often just one
thing), use get_attribute ... full_name:
foreach_in_collection _thingie [get_cells *] {
echo [get_attribute $_thingie full_name]
}
This get_attribute ... full_name happens so often that I wrote a
procedure to do it:
proc fullname thingie {get_attribute $thingie full_name }
One annoyance I haven't found a good way around is adding up multiple
collections. There is a command "add_to_collection", but it only
allows ONE collection to be added. When you're trying to do stuff
like adding collections of D pins that match half a dozen different
names, this gets very clumsy.
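Until Synopsys fixes this, a little wrapper proc can fold them up. An
untested sketch (it assumes add_to_collection will accept the empty
collection {} as its starting base, per the empty-collection tip in
the Tips section below):
    proc collect_all args {
        # fold any number of collections into one, two at a time,
        # since add_to_collection only adds ONE collection per call
        set _all {}
        foreach _col $args {
            set _all [add_to_collection $_all $_col]
        }
        return $_all
    }
    # e.g. (hypothetical pin patterns):
    # set _d_pins [collect_all [get_pins blk1/*/D] [get_pins blk2/*/DIN]]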
2) The other one that always bites me is remembering to use
"foreach_in_collection" instead of "foreach". You have to get used to
thinking about this every time you write a loop.
3) PrimeTime, like DesignTime, still doesn't support output clocks in
any reasonable way. That's a pity, because almost every chip these
days uses them. I'm talking about interfaces where you source both
the clock and the data. A clock comes in (or is generated internally),
and is used to clock output flops, then is sent out along with the data.
This is a virtual requirement for any high-speed interface.
Anyway, what the designer wants to do is just declare the output clock
pin as a generated clock, and declare the input/output delays as
relative to this clock - straight from the data sheet.
Unfortunately, you can't do this. Instead, you have to somehow time
the path from the original source clock to the output clock pin, then
adjust the input/output delays by this amount. With DesignTime, this
meant doing a "report_timing" out to a file, then sed/awk/grep'ing
the result back in. PrimeTime can do this with "get_timing_paths".
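Roughly, the get_timing_paths version looks like this (an untested
sketch; the CLKOUT/DOUT port names are made up, and you should
double-check the "arrival" attribute on timing_path objects):
    # measure the delay from the source clock to the output clock pin
    set _clkpath [get_timing_paths -to [get_ports CLKOUT] -delay_type max]
    set _clkout_delay [get_attribute $_clkpath arrival]
    # then fold $_clkout_delay into the data sheet numbers, e.g. the
    # set_output_delay/set_input_delay values on DOUT relative to the
    # source clock (watch the min/max sign conventions)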
But still, this is tricky enough to get right in the simple case with
no inverted clocks, but when you start inverting the clocks, things
get REALLY interesting. And it is all so unnecessary. PrimeTime has
all the information, it just needs to know how to use it. And,
again, this is SO common. PrimeTime is almost there. It will let
you create a generated clock with a divide_by 1, but it doesn't time
the paths quite correctly. I hope they fix this soon.
4) all_clocks
all_clocks (synonym for get_clocks *) returns some of the clocks all of
the time, and all of the clocks some of the time, but doesn't return all
the clocks all the time.
Basically, generated clocks are NOT returned in all_clocks until AFTER
you've done a "check_timing" (or had it done implicitly by issuing
a report_timing or similar command).
This is unfortunate if you're trying to use all_clocks to set false
paths between the clocks BEFORE doing check_timing (since check_timing
will take forever if you have async clocks and haven't set false paths
between them). The workaround is to create your own version:
set _all_clks [add_to_collection [get_clocks *] [get_generated_clocks *]]
Unfortunately, this will create duplicates if check_timing has already
been run... I think there's a PrimeTime command for getting rid of
duplicates in a collection, but I haven't tried it.
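If that command doesn't pan out, plain Tcl can do the dedup by name.
Again, an untested sketch, leaning only on full_name and ordinary
list operations:
    set _names {}
    foreach_in_collection _clk $_all_clks {
        set _n [get_attribute $_clk full_name]
        if {[lsearch -exact $_names $_n] < 0} {
            lappend _names $_n
        }
    }
    # _names now holds each clock name exactly once; get_clocks
    # should take the name list back if you need a collection again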
Anyway, I think it's a bug. all_clocks should return the same thing
before and after check_timing.
5) Clock gating checks don't respect set_case_analysis.
The chip operating mode that you have defined using set_case_analysis
doesn't propagate for clock gating checks.
This basically makes clock gating checks useless as far as I'm concerned.
I have discussed this with Synopsys. I get the impression that they
intend to fix this in the future.
6) Can't easily save results
This is a pity. You run a script that sets all sorts of constraints
and timing exceptions, and you can't save away the result for quick
loading later (or as a backup in case of a crash). Fortunately
PrimeTime is fairly fast, but it still would be handy to be able
to write out a ".ptdb" file. My entire top-level constraint script
takes over 1.5 hours to run on a 300MHz Sparc - so waiting to load
it up to look at some detail interactively is a real pain.
Nits
====
1) No "source" attribute on GENERATED clocks. Very strange. Normal clocks
have an attribute called "source", but not generated clocks. Furthermore,
PrimeTime won't even let you create a user attribute on a generated clock
object, so you can't even do it yourself when you create it. This is
a pity, because (in scripts) you often want to know the source for
a generated clock. I do it by keeping an array called "gen2src" that I
build as I declare the clocks.
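For example (the pin and clock names here are made up, just to show
the bookkeeping):
    # remember the source at the moment the clock is declared
    set gen2src(div2clk) "pll_core/CLKOUT"
    create_generated_clock -name div2clk \
        -source [get_pins pll_core/CLKOUT] -divide_by 2 \
        [get_pins clkdiv_reg/Q]
    # later, any script can ask:
    echo "div2clk is generated from $gen2src(div2clk)"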
2) Annoying errors:
PrimeTime issues errors for things that aren't that serious, and I can't
find a workaround for some of them. This is a problem, because my
checking scripts don't expect the string "Error:" to be OK!
Strange things that generate errors:
-----------------------------------
1) Accessing a variable that isn't set.
(You can work around this by doing:
if {[info exists foo]} {...
2) Accessing an ARRAY variable that isn't set.
(I haven't found a way around this, because "info exists" doesn't
seem to work on arrays.)
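(For what it's worth, stock Tcl does have an array-aware existence
check; I haven't verified that pt_shell's built-in Tcl honors these,
so treat them as untested:
    if {[array exists foo]} { ... }       ;# the whole array
    if {[info exists foo(bar)]} { ... }   ;# one particular element
If pt_shell's Tcl is stock, one of those should do it.)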
3) Using a get_.. command that SHOULD just return an empty collection:
pt_shell> query [get_ports pllmn/ICLK]
Warning: No ports matched 'pllmn/ICLK' (SEL-004)
Error: Nothing matched for ports (SEL-005)
pt_shell>
There IS a way around this one. Use the "-quiet" switch on get_...
3) Tendency to crash, esp. if you re-define clocks
I do see more crashes with PrimeTime than I saw with DesignTime.
This said, I have yet to see a crash when I'm running in batch mode.
It's when I'm screwing around interactively with pt_shell, especially
when I'm re-defining clocks, that I get crashes.
Don't get me wrong. Overall, PrimeTime is reasonably stable. But
it could be better :-).
4) PrimeTime shares one really annoying habit with DesignTime - it isn't
too clever about strings vs objects (collections). For example,
when a command has an option "-clock clock", I should be able to
say "-clock $foo", and it shouldn't matter whether $foo is a string
name of the clock, or the clock object. Too many commands insist
on the string version. This is easier with PrimeTime using the
full_name attribute than with DesignTime, but it is still an irritation.
Tips
====
1) Arrays and Quotes
The statements:
set foo(bar) 1
set foo("bar") 1
create TWO DIFFERENT elements in array foo - one named 'bar' and one
named '"bar"'!
If you're coming from a perl background, that foo(bar) thing looks
funny - but get used to it!
2) Creating an empty collection. I didn't expect it to be so simple.
You just do "set foo {}".
3) PrimeTime's report_timing command has a really neat switch called
"-path_type full_clock". This will cause the timing report to show
you both clock paths right in the report - very handy for examining
hold time failures.
4) Get to know "get_timing_paths". This is the mechanism that allows you
to parse timing paths without resorting to doing report_timing to
a file and using unix commands to hack it up. It is really quite handy.
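For example, to dump the ten worst paths (a sketch; slack, startpoint
and endpoint are the timing_path attributes as I understand them, so
double-check the names):
    foreach_in_collection _path [get_timing_paths -max_paths 10] {
        set _sp [fullname [get_attribute $_path startpoint]]
        set _ep [fullname [get_attribute $_path endpoint]]
        echo "$_sp -> $_ep : slack [get_attribute $_path slack]"
    }
(This reuses the "fullname" proc from the Collections discussion above.)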
5) In Tcl, variables used in a proc (subroutine) are local by default.
So, you have to say "global foo" to get access to foo. This one
bites me all the time.
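For example:
    set chip_period 4.0
    proc half_period {} {
        # without the "global", $chip_period would be an unset local
        global chip_period
        return [expr $chip_period / 2.0]
    }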
6) Using the -regexp switch. Put the whole argument in curly braces:
[get_cell -hier {.*} -regexp \
-filter {full_name =~ .*LOCKUP && ref_name =~ ld.*}]
BTW, this points out another feature of PrimeTime. The -hier switch
is actually useful. You can do things -hier from the top of the chip
and get a result back in a reasonable amount of time!
7) PrimeTime has a nifty new help command that accepts (dumb) wildcards,
so you can do "help *coll*" and see all the collection commands.
Very nice.
That's all for now. Sorry this got so long, John, but I thought it might be
useful to other people learning PrimeTime.
- Paul Zimmer
Cerent Corporation
( ESNUG 315 Item 9 ) ---------------------------------------------- [3/99]
Subject: ( ESNUG 313 #3 ) We Love Specman Before & After The "e" 3.0 Change
> [ Editor's Note: Recently, a Synopsys/VERA salesman (VERA is Specman's
> rival) told me that Verisity has fundamentally changed the Specman "e"
> language such that "e" 3.0 is no longer backward compatable with its
> pre-3.0 versions. The VERA salesman bragged that he was getting a good
> number of angry Specman customers switching over to VERA because of this
> lack of backward compatibility. Can any users out there confirm or deny
> Verisity did this and, if yes, has it had as dramatic an impact as this
> obviously biased Synopsys/VERA salesman is claiming? - John ]
From: Dave Tokic <davet@verisity.com>
John,
It looks like you got snookered by that VERA salesperson into posting a
question about old history! Verisity's production release of Specman Elite
(Specman 3.0) was back in June '98 (9+ months ago), with pre-production
versions out at customers 3-4 months earlier. In fact, we've currently
released version 3.1.2. At this point, 95% of the customers who were using
the 2.x versions have been successfully transitioned to the enhanced "e"
code for quite some time now. The process has generally gone smoothly and
customers have been happy with the transition and the very powerful
capabilities provided by the enhancements. Specific details and benefits
can be seen at: http://www.verisity.com/html/default_specmanelite.html
- Dave Tokic
Verisity Design Moutain View, CA
---- ---- ---- ---- ---- ---- ----
From: "Greg Mokler" <greg.mokler@amd.com>
John,
I can throw in a few comments about the Specman 2.3 to 3.0 transition,
because we went through the process about 6 months ago.
It is true that the e code written for 2.3 is not completely compatible
with 3.0. For example, in order to make the language more object oriented,
3.0 imposed the requirement that all methods (Specman procedures) be owned
by a specific data structure. This made the converted code a bit ugly.
Verisity came in & used scripts they had written to convert our existing
2.3 code. There were problems related to the scripts as we were one of the
very first customers to make the transition, and the scripts were not very
robust, but in a matter of a week or so, the conversion was completed with
minimal impact to ongoing work.
This effort was inconsequential compared to the amount of time and work
we put into helping Verisity debug their new release; there were a number of
significant problems, and we probably lost a full month (very roughly) in
getting past them. The amount of time we spent on this was due largely to
the lack of local apps people to do the kind of handholding that was needed,
and the general lack of depth within their support department. These
problems are pretty typical for a small company, and, in my mind, have been
addressed quite adequately since that time: a local office now has 3 apps
people with good knowledge of the tool who have been very responsive.
I would like to stress that 3.0 was a major release and a significant
improvement over 2.3. From what I've seen over the past two years, Verisity
does not introduce backward compatibility issues as a matter of habit, and I
believe the improvements outweighed the effort involved. It added a number
of new features which have made the tool significantly more powerful: I
would not want to build another testbench without it!
The tool is quite stable now: the transition to 3.1 was uneventful,
despite the fact that a big chunk of the tool (the generator) was
completely rewritten. As things stand now, I would say that Verisity has
successfully put these problems behind them.
- Greg Mokler
AMD Communication Products Division Austin, TX
( ESNUG 315 Item 10 ) --------------------------------------------- [3/99]
From: Steve Hwang <hwang_cad@yahoo.com>
Subject: The New DC 99.05 Release Benchmarked Only Marginally Better
Hi, John
I got a new copy of the 99.05 Design Compiler software last week, and I
heard so many people at Synopsys recommending this release. Therefore, I
did a quick synthesis run on some of my customer's designs as a crude
benchmark, and here is the information about the run times, speed, and area
using the same scripts and library.
Because this design was very performance driven, John, we had to push hard
on the timing. It turns out not to be a good reference for Synopsys to
promote 99.05. However, I think we have to give credit to Synopsys for the
run time improvement in the 99.05 release.
I ran the design using a two-pass bottom-up compile, and here is the run
time/speed/area comparison between 99.05 and 98.08. In this benchmark, I
used a Sun Enterprise 450 server / 2 GB memory / 4 x 300 MHz CPUs, and each
run used the same script/library/constraints. Since swapping/paging was
not an issue here, I didn't keep track of memory usage for these two runs.
The run times are user time, not system time.
                 98.08         99.05        Percentage
    ==========================================================
    Run Time     23hr 21min    20hr 9min    13.42% faster
    Speed        109 MHz       112 MHz       3.00% faster
    Area         86012         92213         7.21% larger
Because this was an embedded processor design, this benchmark may not carry
over to other types of designs. Hopefully, other people have benchmark
results they can share with the rest of the designers here.
- Steve Hwang
( ESNUG 315 Networking Section ) ---------------------------------- [3/99]
San Jose, CA -- Cadence seeks manager (MBA a plus) w/ 2 to 4 years exp.
managing external technology-focused user groups. "mgabriel@cadence.com"