Editor's Note: I just wanted to send out a quick note to thank the 38
people (so far) who have either sent in their own SNUG'00 Trip Report
to me or filled out my survey. Keep them coming! I'm gunna have one
busy weekend collating them for the collective Trip Report. Cool.
- John Cooley
the ESNUG guy
( ESNUG 348 Subjects ) ------------------------------------------- [3/30/00]
Item 1 : WARNING: DC 99.10 -incr & Boundary Optimization Creates Bad Logic!
Item 2 : ( ESNUG 344 #6 ) Well, We Found Verisity's Specman Worth Learning
Item 3 : ( ESNUG 346 #1 ) Married Guy Busted; It Was PKS That Was Broke !
Item 4 : We Compared Mentor's MBistArchitect To LogicVision's memBIST-IC
Item 5 : The Odd `resetall & `uselib Differences Between VCS & Verilog-XL
Item 6 : Cadence's Silicon Ensemble Useless At 0.18um On Cross-Capacitance
Item 7 : Aart Says He Has No Timing Correlation Issues; We Found Otherwise
Item 8 : Problem Interfacing Synopsys DesignPower And Cadence NC-Verilog
Item 9 : ( ESNUG 346 #14 ) Use Signalscan To View Cadence cdsSpice HSPICE
Item 10 : Does Anyone Have A Good EMACS Set-up For Use With dc_shell-mode ?
Item 11 : ( ESNUG 346 #6 ) DC 99.10-04 report_power Memory Leak & DW Libs
Item 12 : ( ESNUG 346 #13 ) DC & PrimeTime Can't Read The SDF They Write
Item 13 : ( ESNUG 346 #11 ) ... And ModelSim Can't Read The Tcl It Writes!
Item 14 : RESIZE Function In numeric_std.vhd Messes Up IKOS, ModelTech, DC
Item 15 : ( ESNUG 344 #7 ) What About "set_dont_touch_network" On Resets?
Item 16 : ( ESNUG 345 #2 ) Well, SureFire's SureLint Impressed Us The Most
Item 17 : ( ESNUG 343 #12 ) Synopsys Response To "Translating DC To TCL"
Item 18 : ( ESNUG 346 #15 ) ...But Argon Broke 0-In, Broke EMACS, Broke DC
The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com
( ESNUG 348 Item 1 ) --------------------------------------------- [3/30/00]
From: [ The Great Gatsby ]
Subject: WARNING: DC 99.10 -incr & Boundary Optimization Creates Bad Logic!
Hi John, I have to be anonymous on this.
We are right in the middle of taping out a 1.3 million gate design, and we
discovered that incremental compiles in DC 99.10 are synthesizing bad
functionality in our design!  Yes, I am saying that incremental compiles
in DC 99.10 are creating broken netlists.  The problem involves something
messy with boundary optimization.  Synopsys R&D has agreed that this is a
bug and gave me this workaround:
remove_design -all
read Simpsons_Top_Before_Inc.db
/* remove all attributes */
/* without this foreach loop, Formality fails */
foreach (design_name, find(design, "*")) {
   current_design = design_name
   reset_design
}
current_design Simpsons_top
uniquify
set_dont_touch {Bart Lisa Maggie Marge Homer} true
/* apply constraints here before compile */
include default.wscr
compile -no_design_rule -map_effort low
write -f verilog -hier -o Simpsons_top_mapped_before_inc.v
current_design Simpsons_top
set_dont_touch {Bart Lisa Maggie Marge Homer} false
/* make sure to apply all the blocks */
set_boundary_optimization {Bart Lisa Maggie Marge Homer} false
compile_preserve_subdesign_interfaces = true
compile -no_design_rule -map_effort low -incremental_mapping
write -o Simpsons_top_mapped_after_inc.vgl -f verilog -hier
exit
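The set_dont_touch / set_boundary_optimization lists above only cover the
blocks we knew about.  If you don't know which blocks are affected, a
blunter variant of the same workaround (a sketch only -- we haven't run it
this way ourselves) is to shut boundary optimization off on every design
before the incremental pass:

    /* sketch: kill boundary optimization everywhere, then recompile */
    set_boundary_optimization find(design, "*") false
    compile -no_design_rule -map_effort low -incremental_mapping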
Since we didn't know whether this problem existed in other blocks, and our
functional vectors might not catch it, we have to re-synthesize all our
blocks again.  Synopsys says this should be fixed in DC 2000.05.
- [ The Great Gatsby ]
( ESNUG 348 Item 2 ) --------------------------------------------- [3/30/00]
Subject: ( ESNUG 344 #6 ) Well, We Found Verisity's Specman Worth Learning
> There is no question that the approach Vera and Specman pursue has quite
> a number of benefits depending on the application. However, testbench
> designers would have to undergo the painful process of learning a new
> language and mastering new semantics. Having managed this, one can't be
> sure of the alleged benefits for a reasonable amount of time. In the
> seemingly un-stoppable advent of high-level design and SoC, design
> verification methodology will very probably change quite soon causing the
> adoption effort not to pay off. Also, the semantics of those proprietary
> languages (Vera, Specman, QuickBench) are quite ambiguous w.r.t. timing
> compared to well understood HDLs like Verilog or VHDL.
>
> I'd be glad to hear in ESNUG from other designers their opinion about
> standard versus proprietary testbench languages. We don't think they're
> cost effective.
>
> - Ernst Bernard
> Siemens AG Munich, Germany
From: Onn Haran
Hi John,
I must disagree with Ernst.  I can share some of my experience using
Specman: I started using Specman a year ago, managing the verification
team for a complex SoC design.  Yes, I did fear a new environment.  But I
was surprised to see that even during the ramp-up period, with partial
knowledge, the test environment was built much faster than it would have
been with VHDL code.
What I discovered is that you very quickly reach a level that surpasses
the coverage of the VHDL testbench.  Now our effort is spent on system
testing.  The level of testing that I have reached with Specman is
incomparable to anything I've done in the past, and in much less time.
I must mention that during the lab verification tests before production,
only 1% of the bugs were found by VHDL simulation; the remaining 99% were
found by Specman.  After production, not a single bug was found.
I also used Specman for block level verification, and again the results were
amazing.
The benefit of Specman over VHDL is not the GUI.  It is the flexible
language that allows a high-level description of the behavior of the DUT.
Its generation and analysis are very fast and efficient, much more so than
anything you can write in VHDL.  You can do the comparison yourself: write
and debug 100 lines of C++ code, then write the same functionality in
VHDL.  See for yourself which is faster and more efficient.
On one thing I agree with Ernst: the engineer who writes Specman should
be much better than the one who writes VHDL.  With Specman, a high-level
system view is required, and not every engineer is capable of that.  But
if these engineers are available, then use them for Specman!
- Onn Haran
Texas Instruments Israel
---- ---- ---- ---- ---- ---- ----
From: Hans-Juergen Brand
Hi John,
Just some quick comments about the opinion Ernst raised in his email.  I
have been using Specman for different projects (for communications and
more PC-related applications) at AMD for around two years.
So far I have found it to be a big time saver, reducing the effort of
developing the RTL simulation environment and increasing the quality of
the design.  What are the reasons for that?
If you use HDL testbenches, you have to develop all the functions for data
generation, checking, and coverage yourself.  So basically every company
develops its own "Specman tool".  By using Specman you can really focus on
defining and coding the test scenarios rather than worrying about basic
verification functions, and let the verification engineers do what they
were hired for -- verifying the design, not developing yet another
proprietary verification tool.  That leaves them much more time for
understanding the system functionality and writing more comprehensive test
cases to improve the functional coverage of your simulation.
Besides that, the e language incorporates features you cannot find in C or
any of the HDLs (like handling of parallel threads and temporal expressions).
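To give non-e readers a flavor, here is a small sketch of the kind of
temporal check I mean (the signal paths and the 5-cycle bound are made-up
placeholders, not from a real design):

    <'
    struct handshake_checker {
        event clk is rise('top.clk') @sim;
        event req is rise('top.req') @clk;
        event gnt is rise('top.gnt') @clk;

        -- every req must be answered by a gnt within 1 to 5 clocks
        expect req_gets_gnt is @req => {[1..5]; @gnt} @clk
            else dut_error("req was not granted within 5 clocks");
    };
    '>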
Concerning the learning process for a new language/tool, I have not seen
any major issues so far -- as long as people take the training (and having
a C background is very helpful).  Within the last year, 6-7 people with
varied levels of experience (new-grad up to 10 years in industry) joined
our system-level verification team.  Almost all of them were working and
coding with Specman within 2-3 weeks, even before their first training.
Many of them had a VHDL background and we are working in Verilog; I would
expect the VHDL-to-Verilog transition to be more troubling for them.
In our current project we are moving to a common environment for block and
system level verification.  This has improved the communication between
the design and the verification teams.  After seeing the benefits of
Specman, most of the designers stopped writing their own Verilog
testbenches and used Specman-based environments instead.  Since Specman
sits "on top" of our conventional Verilog simulation environment and can
handle C models as well, it provides a framework for all verification
issues.  Specman incorporates Verilog know-how instead of replacing it.
Moreover, we found the Specman co-verification link (CVL) very useful, too.
- Hans-Juergen Brand
AMD Saxony Manufacturing GmbH Dresden, Germany
---- ---- ---- ---- ---- ---- ----
From: Lior Storfer
John,
Ernst's comments take me back in time around 2 years.  We had quite
similar debates internally on the best means to verify complex digital
designs.  After going through those debates we decided to bet on
Verisity's Specman tool.  We have not regretted that decision for a minute.
But I absolutely agree with Ernst on the switching costs.  It is not
trivial to switch.  It is somewhat like switching from design using
schematic capture to using an HDL.  The learning curve is there.  But is
there still anyone designing complex chips using schematic capture?
As to the pseudo-random approach of the Verisity tool, it has proven to be
a winning one for us.  Though Specman is random-based, you can control the
randomness as you wish.  By using the Generators, Coverage tools, and
Checkers, one can easily direct the simulations to hit the soft spots of
the design.
- Lior Storfer
Texas Instruments - Cable Broadband Communications
( ESNUG 348 Item 3 ) --------------------------------------------- [3/30/00]
Subject: ( ESNUG 346 #1 ) Married Guy Busted; It Was PKS That Was Broke !
> I've worked in layout automation and chip design for over 10 years. Of
> all the engineers I know working _physical_ design on ASICs & processors,
> most won't even look at PhysOpt because they don't want to correlate a
> placement engine that doesn't have its own router (making me wonder how
> PhysOpt even qualified in the GDS II category since it can't get to
> polygons under its own steam). It seems to me that lack of a router in
> PhysOpt leaves alot of heavy lifting to the Avanti/Cadence/IBM back-end
> flows (try notch-filling, antenna checking, and cross capacitance, for
> instance!)
>
> I'm not talking out of my ass here, John. We looked at PhysOpt. We
> couldn't even get PhysOpt to run -- but that may be more symptomatic of
> where we are on the Synopsys support/sales pecking order than the actual
> quality of the tool.
>
> - [ Tony, the Tiger ]
From: [ Tony, the Tiger ]
Hi John,
As most married men can tell you, to stay married I learned very quickly to
say the words "I'm sorry, Honey, it's all my fault" when I screw up. This
is one of those times. The Synopsys guys are gunna love this, I'm sure.
It's time for [ Tony, the Tiger ] to eat some sugar-frosted crow.
In my company, I'm the guy who's supposed to make sure our design flows
can hit timing on schedule. That's why I wrote you arguing that I'm not
seeing the kind of data in my evaluations showing that any of the next
generation physical synthesis solutions has a definitive advantage over our
tuned "DC with Avanti/Cadence timing-driven placement" flow. I'm as game
as the next guy to bring a new tool up if I can see where it makes a real
difference. So far what I'm seeing is a lot of people with crappy flows
flocking to this year's version of EDA Viagra. The kind of designs I see
falling flat in solid timing-driven flows wouldn't cut the mustard in _any_
flow because of fundamental design errors (usually timing constraints so
poorly authored they may as well have been done with spray paint). Every
design we've done with solid constraints up-front closed.  Those without
went down in flames.  Without solid constraints, no tool is going to do
squat for you on timing.  If you've got the right constraints, I can close
it with the DC + timing-driven placement flow I've got today.
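To be concrete about what I count as the bare minimum for "solid
constraints" (dc_shell flavor; the names and numbers are made-up
placeholders, not from any real design):

    create_clock -period 10 -name CLK find(port, "clk")
    set_clock_skew -uncertainty 0.5 CLK
    set_input_delay 3.0 -clock CLK {data_in}
    set_output_delay 3.0 -clock CLK {data_out}
    set_false_path -from find(port, "test_mode")

Plus the real exceptions (false paths, multicycles) written down by the
designer who knows the intent.  Miss these and no placement engine,
physical or otherwise, is going to save you.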
In my letter I wrote:
"We looked at PhysOpt. We couldn't even get PhysOpt to run -- but that
may be more symptomatic of where we are on the Synopsys support/sales
pecking order than the actual quality of the tool."
I mixed PhysOpt up with PKS, both of which were written off early in
our evaluation, but for vastly different reasons. The simple fact is that
neither I nor any of the design teams I support got our hands on PhysOpt.
We called Synopsys some months ago but nothing came of it. Unfortunately,
at that same time, I was talking to Cadence about using PKS on one of our
designs as a test case. It turned into what you called a taxi-cab eval,
John. Cadence took our test design to run through PKS and disappeared for
many moons. Repeated follow-up calls kept getting lame answers that I could
only interpret as "Gosh, Tony, our PKS results are too pathetic to show
you." In my mind, I wrote both of them off. So, I must humbly eat crow
and say that when I mistakenly said "PhysOpt didn't work," I was actually
thinking of my "PKS didn't work" experience. In my mind, I confused the
Cadence PKS no-show w/ our simultaneous Synopsys PhysOpt talks.
So, in the same tone I use with my wife when I mess up, I must say I humbly
apologize for my mistake here. Sorry for trashing PhysOpt when, in my
heart, I had meant to trash PKS. At least I don't have to buy you guys
flowers...
- [ Tony, the Tiger ]
P.S. But I still stand by my original argument: this physical synthesis
crap isn't showing drop-dead results against the tuned "DC w/ timing-
driven placement" flows I have today. Why bother with them???
( ESNUG 348 Item 4 ) --------------------------------------------- [3/30/00]
From: [ Gotta Be Anon ]
Subject: We Compared Mentor's MBistArchitect To LogicVision's memBIST-IC
Hi John,
Please keep me anonymous.
We're looking to purchase a memory BIST tool and have evaluated Mentor's
MBistArchitect and LogicVision's memBIST-IC. Both tools were able to
generate BIST circuitry for a complex asynchronous SRAM macro, so I'm
looking for ESNUG feedback on either tool to aid in our decision.
Here are some of our findings:
1. Both tools allow one controller to test multiple RAMs.  Both support
   asynchronous and synchronous SRAM, and ROM.
2. MBistArchitect (MBA) was extremely simple to use.  As with both tools,
   setting up the memory model was the most difficult part.
   memBIST-IC (MBIC) was more difficult to set up because it needs to
   parse and modify user Verilog.
3. MBIC inserts the mbist into the netlist.  This has both positive and
   negative implications.  Positive: Verilog ports do not need to be
   fixed up through the hierarchy (the BIST adds several new ports).
   Negative: the "golden" RTL code must be frozen and the mbist inserted
   to generate the "operating" RTL code for simulation and layout.
   With MBA, the engineer has to insert the mbist manually, but he
   controls the final (and single) netlist.
4. Both tools generate a testbench to verify the BISTed memory, at the
memory instance level.
5. MBIC can propagate the testbench to the top level, so mbist tests can
   be simulated at the chip level, like they would be on a tester.  This
   is a manual operation with MBA.  Positive: ease of use.  Negative: if
   the test procedure is complex (mbist tests are based on register
   addresses, etc.), MBIC may not be able to handle it.
6. MBIC can handle parallel or serial mbist with the number of comparators
specified by the user. MBA can only handle full parallel mbist.
7. Both MBA and MBIC can handle reduced data paths for mbist checking:
      MBA  - compressor (LFSR with signature?)
      MBIC - LFSR with signature
8. MBIC provides a report on tester cycles and tester time (based on a
reference clock). Reporting out of MBA is limited.
9. MBIC provides only the MarchC+ algorithm for mbist.
We did not synthesize the RTL for the controller logic from either tool,
so we could not gauge the silicon "cost" of mbist.  Both tools can operate
from a JTAG interface.
Having said all this, my question to ESNUG is: which mbist tool would you
choose, and what's your reasoning?  Also, could you share any experiences
you've had with these tools?
- [ Gotta Be Anon ]
( ESNUG 348 Item 5 ) --------------------------------------------- [3/30/00]
From: John Russo
Subject: The Odd `resetall & `uselib Differences Between VCS & Verilog-XL
Hi, John,
It appears that with VCS, a `resetall compiler directive causes the search
path for levels below the current hierarchy to be an empty set.  This
applies to all files except the current one.  Verilog-XL does not change
the search environment as a result of `resetall.
Verilog-XL and VCS behave differently in another way.  I've found that
with Verilog-XL, when a `uselib directive is used in a model that
instantiates parts, all searches at that level will honor that `uselib
statement.  In other words, if a module instantiates parts and uses a
`uselib statement to control the search path, then all searches at that
level will use that search path until another `uselib statement is
encountered.  For example, suppose I had a module in a `uselib environment
of liba (`uselib dir=liba libext=.v) and suppose I instantiate two parts:
a1 and b1.  If I put a `uselib statement inside the code for a1 to control
the search for sub-part a11, then all parts at the second hierarchical
level would use the second environment.  In other words, part b11 (which
is instantiated by part b1, which is in the top-level search environment)
will ONLY be searched for in the second-level environment that the `uselib
statement in a1 specified.
Interestingly enough, VCS 5.1 behaves identically, but VCS 5.0 maintains
the top-level environment for the second part.
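To make the hierarchy concrete, here's a minimal sketch (the file layout
and module bodies are made up for illustration):

    // top.v -- unresolved instances are searched in liba
    `uselib dir=liba libext=.v
    module top;
      a1 u_a1 ();    // resolved from liba/a1.v
      b1 u_b1 ();    // resolved from liba/b1.v
    endmodule

    // liba/a1.v -- switches the search environment for its sub-parts
    `uselib dir=libb libext=.v
    module a1;
      a11 u_a11 ();  // resolved from libb/a11.v
    endmodule

    // liba/b1.v -- no `uselib of its own; in Verilog-XL (and VCS 5.1) its
    // sub-part b11 is ALSO searched only in libb, the last `uselib seen
    module b1;
      b11 u_b11 ();
    endmodule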
- John Russo
Lucent Technologies Allentown, PA
( ESNUG 348 Item 6 ) --------------------------------------------- [3/30/00]
From: [ Born To Run ]
Subject: Cadence's Silicon Ensemble Useless At 0.18um On Cross-Capacitance
Hey John, very anon please.
I like how ESNUG is adding physical design issues and wanted to add to the
discussion. I wanted to give people designing 0.18um ASICs a heads up on
cross-capacitance. I'm seeing 2 problems here: ASIC suppliers seem to be
pretty clueless about it, and the advertised Cadence Silicon Ensemble flow
can't handle it for beans.
Here's the deal: I'm working on a 0.18um ASIC with a Japanese foundry.
Good guys. They're working their butts off and closing timing without
a problem. Not a word that we need to worry about cross-capacitance. Then
I get to talking to some of my friends working on processors, and I start
to get a bad feeling.  For one thing, cross-capacitance isn't proportional
to clock speed, it's proportional to metal pitch.  You can be running
83 MHz in 0.18um and still have cross-capacitance bite you in the butt.  Sure,
maybe you can leave some margin on the table to cover additional delay due
to cross-capacitance, but you can't margin noise. Get a glitch far enough
down your logic cone, clock it into a flop, and suddenly you get to debug
a frequency band where the part fails. Lovely.
So when I press my foundry, I get this cheesy answer that they're going to
insert buffers every couple of millimeters to prevent cross-talk.  Sounds
good, right?
Not once you run the simulations. For most nets, a 1-2 mm buffering
distance would be fine, but if you have a very high drive cell aggressing
on a very low drive cell, it can increase your delay 50% at less than
1/2 mm of adjacency, even on wide-pitched metal. 50%. That's no small
potatoes. I mean, how much margin do you have?
So warning #1: If somebody tells you you don't have a cross-capacitance
problem in 0.18um, or that they can handle it with blind buffering, check
to see if they've simulated it or if they're just pulling this methodology
out of their butts based on how they think cross-capacitance works.
But it gets worse from there...
So now we know we've got a cross-capacitance problem, and we know we can't
just blindly jam buffers in to fix it, so what do we do? We go check out
the signal integrity option on Cadence Silicon Ensemble.
The best thing I can say for SE is that it runs without crashing.
Here are the problems we've found:
1. The parasitics coming from HyperExtract are up to 200% off compared
   to a 3D field solution.  A chimpanzee throwing darts at a diagram
   of parasitics could do better.
2. It's flagging over 1000 noise violations in a few hundred K gates
   of logic.  Over 1K noise violations in < 9 mm^2 of randomly
   routed die?  Gimme a break!  Simulation showed that SE was
   over-estimating noise by > 100%.  It was using a slew rate
   less than half of the actual slew rate.  It's hard to say how many
   real noise violations are in there, but my simulations are saying
   my usual layout topologies ought to give me < 10 noise violations
   per 100K gates *usually*.  Not 1000!
3. The repair file isn't.  I'm still playing around with this to see
   if other PBopt options might get me a better repair rate, but so
   far I'm still left with a LOT of xcap timing and noise violations
   reported even after I run through the check/repair flow a couple
   of times.  Given that these results are based on the HyperExtract
   parasitics that I know to be bogus, I'm not really motivated to
   run this into the ground.  To me, the whole solution has the feel
   of something that isn't going to gel.
Lesson learned: just because your EDA or ASIC vendor shows you a pretty
drawing of a flow, don't buy it until you see the parasitic, noise, and
delay modeling correlation.  Cross-capacitance isn't like bringing a
router up, where the right values for metal pitch go a long way towards
getting you something reasonably DRC clean.  Cross-cap and noise really
require some tuning before you get back something clean.
Part of the trouble with all of this is the incredible lack of data in
the industry on what to expect in real live designs. Virtually all the
data I've come across is from contrived layout topologies on test chips
or from microprocessors with manual or heavily programmed routing. That
just isn't an option in ASIC design.  I haven't found anybody who can
knowledgeably tell me what kind of cross-capacitance issues to expect in
automatically routed logic.  It gets down to religion.  Some people say
that because this type of routing results in large numbers of extremely
small aggressors, the cross-capacitance can be neglected (the aggressors
will never all aggress at the same time). Sounds good. But I sit here
and look at the routing fanning in and out of some of my memories and
MUXes, and there are long stretches of massive adjacency. I think the
large-#-of-small-aggressors argument just doesn't hold up across the die.
Even in this random routing, there are instances of great regularity. The
second argument I've come across is that odds are against all these lines
having the appropriate phase relationship to clobber each other. But I
can't guarantee that for all signals entering and leaving memories and
muxes. What do I look like, someone who wants to run vectors for the
rest of my freakin' career?
All in all, I feel a lot of bad silicon coming on...
- [ Born To Run ]
( ESNUG 348 Item 7 ) --------------------------------------------- [3/30/00]
From: Chip Laub
Subject: Aart Says He Has No Timing Correlation Issues; We Found Otherwise
John,
I buttonholed you about the statement Aart de Geus (the CEO of Synopsys)
made in his keynote address to SNUG, claiming there were no complaints
about timing correlation between Synopsys tools. Please feel free to
publish my name, etc., on this. We have long complained about what has
been called the "Synopsys-Pathmill correlation issue". This is mainly a
problem with setup and hold time calculation at latch nodes by Pathmill.
If there are finite (i.e. slow) transition times on the clock signals
(transistor gates) controlling a latch node, the analysis is inaccurate.
Where Synopsys synthesis libraries are derived from SPICE characterizations,
any EPIC Pathmill runs on post-synthesis netlists have an ugly tendency
not to correlate well with reports from Design Compiler. We have been
pretty frustrated that EPIC (now called "Nanometer Analysis Technology"
inside of Synopsys) has not resolved this, since they agreed it was indeed a
problem. With both Pathmill and DC now in the same company, it is even
more of a question. After Aart's bold statement, I have to ask, is this an
issue at any other design houses?
- Chip Laub
Intel Santa Clara, CA
( ESNUG 348 Item 8 ) --------------------------------------------- [3/30/00]
From: Hans Miller Pedersen
Subject: Problem Interfacing Synopsys DesignPower And Cadence NC-Verilog
Hi John!
We are using Synopsys DesignPower and Cadence NC-Verilog in order to
perform power estimation of our ASICs, but we are having problems.
Perhaps some of your readers might be able to help us out.
The problem relates to the backward SAIF file written by NC-Verilog
through the $toggle_report PLI task supplied by Synopsys (the file contains
switching activity for all ports).  It seems that some input ports are
dropped, but not all.  When the file is read into dc_shell and report_power
is run, these "dropped inputs" get the default 50% switching activity (no
annotation), which is how I initially discovered the problem.
Has anybody discovered similar problems?
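For reference, the testbench side of our flow looks roughly like this (the
instance path and run time are simplified placeholders):

    // Verilog testbench fragment driving the Synopsys power PLI tasks
    initial begin
      $set_toggle_region(tb.dut);  // monitor only the DUT instance
      $toggle_start;               // begin recording switching activity
      #1000000;                    // ... the real stimulus runs here ...
      $toggle_stop;                // stop recording
      // write the backward SAIF (1 ns units) for dc_shell to read back
      $toggle_report("backward.saif", 1.0e-9, "tb.dut");
    end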
- Hans M. Pedersen
Oticon A/S Hellerup, Denmark
( ESNUG 348 Item 9 ) --------------------------------------------- [3/30/00]
Subject: ( ESNUG 346 #14 ) Use Signalscan To View Cadence cdsSpice HSPICE
> It depends on the type of simulation. With the Cadence stuff comes a
> viewer called "simwave" (but it also responds to "wd"), which you'd
> normally use to view the output of Verilog-XL simulations. It is,
> however, also capable of displaying HSPICE results, but only transient,
> no AC stuff. (Don't know if that's at all useful to you, but it's an
> option worth knowing.)
>
> - Han Speek
> University of Twente Enschede, Netherlands
From: Joachim Schmidt
Hi, John,
One could use the signalscan tool (which is also delivered with Cadence)
instead of "simwave" and select FORMAT -> ANALOG FORMAT in the waveform
window menu.
- Joachim Schmidt
Dialog Semiconductor Germany
( ESNUG 348 Item 10 ) -------------------------------------------- [3/30/00]
From: Scott Butler
Subject: Does Anyone Have A Good EMACS Set-up For Use With dc_shell-mode ?
Hi, John,
Does anyone have a good emacs setup for dc_shell scripts?
- Scott Butler
Intel
( ESNUG 348 Item 11 ) -------------------------------------------- [3/30/00]
Subject: ( ESNUG 346 #6 ) DC 99.10-04 report_power Memory Leak & DW Libs
> We are using DC 99.10-04 in the Tcl mode and the report_power command for
> power characterization of DW-components for a LSI library. The power
> characterization requires several calls of the report_power command. We
> have found that report_power does not de-allocate memory which leads to a
> program abortion for larger characterization runs.
>
> I have attached a simple DC test script to demonstrate the problem.
>
> # program to demonstrate memory leak produced by report_power
>
> # create module from designware-library
>
> elaborate DW02_mult -arch csa -lib DW02 -update \
> -param "A_width = 4, B_width = 4 "
> current_design
> compile -map_effort medium
> ungroup -all -flatten
> change_names -rules vhdl -hier
>
> # calculate power
> while { 1 == 1 } {
> set_switching_activity -period 10 -toggle_rate 0.7 [all_nets]
> report_power }
>
> One should start the script and observe the memory requirement to see
> that already for this small component each report_power command requires
> some more Mbytes of additional memory.
>
> - Gerd Jochens
> OFFIS Research Institute Oldenburg, Germany
From: Gerd Jochens
Dear John,
Thought I'd follow up on this.  The application manager of Synopsys Germany
responded after my letter ran in ESNUG.  He said that the settings of the
synthetic_library & link_library variables are responsible for this.
Our default setting for these variables includes all available DW
libraries (set link_library [list {*} {lcbg10p_wccomv.db} dw01.sldb
dw02.sldb dw03.sldb dw04.sldb dw05.sldb dw07.sldb]).  This setting forces
the memory leak to occur.
I have found that reducing the number of link libraries to only dw01 and
dw02 (for example) avoids the problem. Which library or combination of
libraries is responsible for the leak is still not clear.
So, my advice for the moment is to include only those libraries which are
really necessary for a certain design.
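In Tcl mode that boils down to something like this (a sketch -- keep only
what your design actually instantiates):

    # load only the DW libraries this design really needs
    set synthetic_library [list dw01.sldb dw02.sldb]
    set link_library      [list {*} {lcbg10p_wccomv.db} dw01.sldb dw02.sldb]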
- Gerd Jochens
OFFIS Research Institute Oldenburg, Germany
( ESNUG 348 Item 12 ) -------------------------------------------- [3/30/00]
Subject: ( ESNUG 346 #13 ) DC & PrimeTime Can't Read The SDF They Write
> We are modeling flop output-to-output timing arcs in our .lib. That is,
> we have CLK->QB and QB->Q arcs. DC handles these just fine until it
> comes time to write an SDF. Then it decides to simply write CLK->QB and
> CLK->Q arcs. The delay numbers are correct, but of course this file
> cannot be imported into DC because of arc mismatches!
>
> SolvNET revealed that each tool may or may not have its own SDF writer
> code. (See Static_Timing-171.html and Static_Timing-191.html). In my
> opinion their proposed workarounds are laughable.
>
> Has anyone been through this before and found which tool's SDF writer
> produces the most 'reasonable' (i.e., bug-free, actually USABLE) output?
>
> - Andy Pagones
> Motorola Labs
From: Tom David

Hi John,

I've run into a similar problem using the Synopsys SDF writers and readers
in DC and PrimeTime.  I've basically given up on the SDF writers in DC/PT
and mostly use Ultima/MDC.  They seem to generate the most reasonable SDF,
which back-annotates both to the .lib file and to my Verilog/VHDL models.
The timing check sections for these standard cell models (for both VITAL
and Verilog) are generated directly from the .lib via a Perl script that
I wrote.
- Tom David
Silogix
( ESNUG 348 Item 13 ) -------------------------------------------- [3/30/00]
Subject: ( ESNUG 346 #11 ) ... And ModelSim Can't Read The Tcl It Writes!
> Has anyone experience using tcl/tk with ModelSim? What is the interest
> of using tcl/tk with this tool?
>
> - "Jeff" France
From: [ The Cat In The Hat ]
Hi, John.
Anonymous, please. I've been using ModelSim (PE, V5.3a), and discovered one
interesting "feature". If you have a waveform display up on the screen, and
want to save the format, it writes a Tcl script for you to execute later.
So far so good.
However, if your display includes large virtual signals (e.g., buses that
you have built on the display by combining individual signals), the Tcl
script is not executable when reloaded.  It writes the bus description all
on one line, with no regard to buffer length (I've seen lines almost 5000
characters long)!  These, naturally, break the Tcl interpreter when read
back in.
I do feel it's rather poor for a tool to write a script that it itself
cannot read.
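One possible stopgap (sketched here with made-up signal names; I have not
tried it on anything large) is to hand-wrap the offending lines with Tcl
backslash continuations before reloading:

    # one ModelSim virtual-signal definition, manually re-wrapped
    virtual signal { /tb/dut/d31 & /tb/dut/d30 & \
                     /tb/dut/d29 & /tb/dut/d28 } big_bus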
- [ The Cat In The Hat ]
( ESNUG 348 Item 14 ) -------------------------------------------- [3/30/00]
From: Chuck Wilde
Subject: RESIZE Function In numeric_std.vhd Messes Up IKOS, ModelTech, DC
John,
We recently started using UNSIGNED instead of STD_LOGIC_VECTOR in our VHDL
code. This was done to conform to our internal coding standards. I was
using the IKOS FFX rtlcompiler tool on a design, and my VHDL code wouldn't
get through the analyze step. This same code had gone through Synopsys DC
and MTI's ModelSim.  After some investigation, I found that the problem was
caused by the R.2 RESIZE function in numeric_std.vhd. The function
declaration is this:
function RESIZE (ARG: UNSIGNED; NEW_SIZE: NATURAL) return UNSIGNED is
There is a test in the code which checks the size of the UNSIGNED vector
against the desired NEW_SIZE:

    if (RESULT'LENGTH < ARG'LENGTH) then
      ...
    else
      RESULT(RESULT'LEFT downto XARG'LEFT+1) := (others => '0');
      ...
    end if;
The test assumes that the size of ARG is never equal to NEW_SIZE.  If the
sizes are the same, the else branch gets evaluated as something like this
(for a vector of length 4):

    RESULT(3 downto 4) := (others => '0');

and rtlcompiler sees the null range "3 downto 4" as a Fatal Error.
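Any same-width resize triggers the bad branch; a minimal (hypothetical)
example:

    library ieee;
    use ieee.numeric_std.all;

    entity resize_trigger is
      port (a : in  unsigned(3 downto 0);
            b : out unsigned(3 downto 0));
    end entity;

    architecture rtl of resize_trigger is
    begin
      b <= resize(a, 4);  -- NEW_SIZE = ARG'LENGTH = 4: takes the else branch
    end architecture;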
My fix was to change the size test to

    if (RESULT'LENGTH <= ARG'LENGTH)

changing "<" to "<=".  That solved my problem.
I talked the problem over with some other members of my group, and we
decided that Synopsys DC and MTI must have built-in functions so they don't
actually use the RESIZE function found in numeric_std.vhd. I've looked at
source code from several places (IEEE, IKOS, MTI, Xilinx) and they all have
the same error. I fixed the code for my IKOS tool, but I don't know if
there is a common source supplied by IEEE which is then reused by all the
EDA vendors. I'm hoping your forum will get the word out.
- Chuck Wilde
Lucent Technologies
( ESNUG 348 Item 15 ) -------------------------------------------- [3/30/00]
Subject: ( ESNUG 344 #7 ) What About "set_dont_touch_network" On Resets?
> I want to thank Kayla for her observation on this issue.... I thought I
> was going crazy!!! I've been compiling some blocks for a customer for a
> few months now and just recently I've noticed that since 99.10 that every
> once in a while it would NOT correctly listen to my set_dont_touch_network
> on a clock in a structural design.... The structural had 2 clocks, named
> CLK and CLK32FC. The CLK32FC clock was correct while the CLK clock
> (redundant, I know...) was BUFFERED! BUT, when I compiled this
> structural manually, the issue went away.... huh?
>
> - Gzim Derti
> Intrinsix Corp. Rochester, NY
From: Mark Grabosky
John,
I am having problems with set_dont_touch_network not working on reset
nets in DC 99.10.  Kayla's ESNUG 344 #7 addressed this bug on clock nets.
Are there any workarounds for resets?
- Mark Grabosky
Mint Technologies
( ESNUG 348 Item 16 ) -------------------------------------------- [3/30/00]
Subject: ( ESNUG 345 #2 ) Well, SureFire's SureLint Impressed Us The Most
> The one tool that we found was called HDL-Lint, sold by Veritools. (There
> is apparently another tool called SureLint from SureFire Verification,
> recently acquired by Verisity, but it was not available at the time of our
> evaluation. I think it is still in alpha/beta stages at this point.)
>
> - Nathan Dohm
> StarBridge Technologies Marlboro, MA
From: [ Mr. Happy ]
John,
I am a customer of SureFire (Verisity) and am evaluating SureLint and some
other tools (and already own Verilint) and prefer to remain anonymous.
Very interesting reading, Nathan.  We have a Verilint license (with the
28-minute lockout) but never upgraded, and we are now evaluating other
linting tools.  Personally, I was not impressed with HDL-Lint.  It seemed
a step or two below Verilint, which hasn't even been upgraded in over a year.
I have been using the SureFire (now Verisity) tool for a few months now
and am very impressed with the features it provides.  The tool has some
unique features, such as FSM checks (it automatically extracts the FSMs
and draws bubble diagrams), and it detects static race conditions.
Normally, I use emacs to step through errors, too, but the SureLint GUI has
made me break that habit -- it is much easier to navigate by message type
rather than sequentially through each file.
The SureLint team is eager to improve the tool, be it by adding
command-line options, reporting new messages, special syntax handling, etc.
- [ Mr. Happy ]
( ESNUG 348 Item 17 ) -------------------------------------------- [3/30/00]
Subject: ( ESNUG 343 #12 ) Synopsys Response To "Translating DC To TCL"
> As I just ported most of my dc scripts from dc_shell to dc_shell -tcl, I
> thought I would share my experiences and ask my unanswered questions...
>
> Overall dc-transcript did an awful job of converting my dc-shell scripts
> to tcl. Very few of them worked first time, but it at least put me in
> the right ballpark. I was pretty frustrated for the first few files, but
> once I started to understand the dumb things it repeatedly did (thanks
> Gzim), it became relatively simple to fix its output.
>
> - Mark Andrews
> Electronics For Imaging, Inc. Foster City, CA
From: [ Tickle Me Elmo ]
Hi John,
I'd like to comment on Mark's post (ESNUG 343 #12) regarding DC-Tcl and
hopefully answer his questions. I work at Synopsys, but please sign me as
[ Tickle Me Elmo ] in this letter, OK?
I think Mark hit on a very important point regarding dc-transcript. It's a
good place to start, but it will by no means create a perfect translation of
your DC compile scripts.
I also wanted to reinforce a point I made during the "Tickle Me Elmo" SNUG
presentation last year. Once you become familiar with Tcl, you will write
much better scripts than what you would get out of a translator. This isn't
a problem that's specific to dc_shell and Tcl, but if you look at dc_shell
as a language you can easily see why translation is difficult.
> HELPFUL HINT 1
> --------------
>
> Say you have this in dc_shell, i.e. you are using a variable to test
> something in a filter command:
>
> test="true"
> filter( find( design, "*" ), "@is_mapped == test" )
>
> dc-transcript will convert it to:
>
> set test {true}
> filter [find design {*}] {@is_mapped == test}
>
> This always returns an empty collection, it should have been:
>
> set test {true}
> filter [find design {*}] "@is_mapped == $test"
I'd like to give you some insight into why this is difficult to translate.
In dc_shell, the syntax for variables and constants is identical. These
are identical in dc_shell:
test="true"
filter( find( design, "*" ), "@is_mapped == test" )
and
filter( find( design, "*" ), "@is_mapped == true" )
To translate this you have to know that the variable "test" was created to
be included in the subsequent statement. This requires you to keep track of
the "state" of all variable assignments, then make the appropriate
substitution.  This looks simple enough to translate, but when "we" do it
visually, we're using clues from the command: the fact that "test" is
defined, the fact that "is_mapped" takes a true/false value, and the fact
that "test" is assigned "true", which is what "is_mapped" is looking for.
dc-transcript doesn't try to "understand" your script. It works to
translate dc_shell constructs into DC-Tcl constructs. So to add to the
hint, these sorts of transformations are not performed by dc-transcript.
I highly recommend reviewing the output of dc-transcript and making sure
your script still makes sense.  As an aside, for the straightforward
translation of constraint files, dc-transcript has a much higher success
rate.
> QUESTION 1
> ----------
>
> Do we need the tcl equivalent of this in dc_shell -tcl?
>
> foreach(design_name,dc_shell_status){}
With DC-Tcl, "dc_shell_status" has gone away. It has been replaced by
'real' return values from commands. If you're using Tcl, you don't need
"dc_shell_status".
Example:
find(design, "*")
my_designs = dc_shell_status
should be rewritten (in Tcl)
set my_designs [find design {*}]
dc-transcript will attempt to do this for you, but it can't always bind
the command to the dc_shell_status it belongs to, in which case it doesn't
translate it.  Note that dc-transcript will create a Tcl variable called
"dc_shell_status" where it finds one in your dc_shell scripts.  However,
this variable in Tcl is no different from any other Tcl variable.  It does
not automatically get set to the return value of the previous command.
So in fact dc-transcript would turn the above dc_shell code into:
set dc_shell_status [ find design {*} ]
set my_designs $dc_shell_status
> QUESTION 2
> ----------
>
> Do we need these anymore?
>
> list_name = {}
> int_name = 0
> str_name = ""
>
> Dc-transcript dutifully converts them to tcl, but tcl doesn't know the
> difference between an integer and a string.  And we don't really want a
> list in tcl, we need a collection. I wasn't really sure what the correct
> thing to do was, so I just deleted them all, (everything still seems to
> work...).
In most cases, you don't need to initialize variables in Tcl, but there's no
harm in doing so.
Technically, "list_name = {}" can either be a null string or a null list.
Actually, in Tcl lists are just strings with a special format. This is one
of the reasons that they're horribly inefficient for storing large amounts
of things. And this is the reason we invented "collections".
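(A side note while we're on collections: once you have one, iterate it
with foreach_in_collection, not the plain Tcl foreach.  A quick sketch:

    set my_designs [get_designs]
    foreach_in_collection d $my_designs {
        echo [get_object_name $d]
    }

The plain foreach would just hand you the collection's handle string, not
its members.)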
You touched on this in HELPFUL HINT 1, but you need to keep straight the
meaning of "" vs {} in Tcl. They're both used for quoting strings. ""
allows you to perform variable substitution inside while {} keeps the stuff
inside intact:
dc_shell-t> set test "true"
dc_shell-t> puts "$test"
Prints:
true
while
dc_shell-t> puts {$test}
Prints:
$test
This is a very common thing to mess up when learning Tcl and I will admit
that I still screw this up sometimes.
> QUESTION 3
> ----------
>
> How do I create an empty collection?
The answer is, you don't need to.  I know this may not be obvious, and it
wasn't to me when I first started using DC-Tcl.  When you want to have a
collection of things, you grab objects using one of the "get_" functions.
If you REALLY want to create something that has null value to initialize a
variable, you can simply assign it a null string, for example:
dc_shell-t> set my_collection ""
...
dc_shell-t> set my_collection [get_designs]
I think the answer to the next question may help clear things up:
> QUESTION 4
> ----------
>
> Is a collection a regular tcl variable with a hidden Synopsys type? Or is
> it a tcl list with one entry and a hidden Synopsys type? By type I mean
> design, net, port etc.
Really, there's only one type of variable in Tcl: The string. All other
variables are just special types of strings. Numbers are strings that can
be evaluated by certain functions (hence why there are no operators in Tcl!)
and lists are strings with rules about delimiting. In DC-Tcl (and other
Synopsys shells that are Tcl-based) collections are fundamentally strings.
Things that are expecting "collections" know how to take the collection
handle string and turn it into an object.
Try this: (with a design in memory)
dc_shell-t> set foo [get_designs]
now
dc_shell-t> puts "$foo"
You get the string:
_sel123 (or whatever name DC creates it as)
But if you say
dc_shell-t> report_attribute $foo
You get an attribute report for all your designs. This is because
"report_attribute" was looking for an "object_list" as an argument. (To
verify this, type "report_attribute -help".) To see what I mean, try:
dc_shell-t> set goo "_sel666"
dc_shell-t> report_attribute $goo
You get:
Warning: Can't find object '_sel666' in design 'DESIGN'. (UID-95)
Warning: Nothing implicitly matched '_sel666' (SEL-003)
Error: Nothing matched for collection (SEL-005)
(assuming you had design "DESIGN" loaded into memory.)
DC actually tried to find the collection "_sel666" which is something that
you just made up. But Tcl didn't know that, it just thinks that "_sel666"
is a string. It wasn't until "report_attribute" tried to actually look up
"_sel666" that things got funny.
Back to the "empty collection" question, if you try to be clever and do:
dc_shell-t> set foo [get_designs ""]
you will get nothing, null string, NOT an empty collection
dc_shell-t> echo $foo
dc_shell-t>
I hope I have shed some light on DC-Tcl and Tcl in general. I want to thank
Mark for posting his message to ESNUG and sharing his experiences. As
always, we welcome feedback about our products (yes both good and bad). If
you encounter a problem or something doesn't work as you expect, please let
us know. We really DO listen to your feedback!
- [ Tickle Me Elmo ]
( ESNUG 348 Item 18 ) -------------------------------------------- [3/30/00]
Subject: ( ESNUG 346 #15 ) ...But Argon Broke 0-In, Broke EMACS, Broke DC
> Ouch. We had Kurt Baty at my place of employment pushing the 0-in tool
> with disastrous results, until new management decided that enough was
> enough and dumped it.
>
> Some of us could not really figure out why he kept pushing it. Turned
> out that Kurt Baty was a primary investor and technical advisor at 0-in
> and that's why we were using it in the first place. We continued to use
> it despite continuous setbacks even though there was a great effort by
> our team to make our design 0-in friendly.
>
> - Rui DosSantos
> Argon Networks Littleton, MA
From: Jerry Lampert
John,
Ouch, indeed.  I am responding to some comments made by a co-worker in
ESNUG 346.  I feel that I owe it to Kurt Baty and to 0-In to clarify a few
of the statements that were made that weren't quite on the mark.
When the two founders of Argon, both of whom have software backgrounds, were
starting the company, they called on Kurt to get his take on the feasibility
of the project and to try to recruit him. Kurt's reaction was that design
verification would be one of the project's biggest problems and that such an
ambitious schedule could not be met by traditional means. And so, the
result was that all three agreed that Argon would be betting that 0-In would
provide the vehicle to beat a traditional design flow. The co-founders knew
we'd be working with (pre-)alpha code, knew that at 0-In's stage of
development they'd be getting as much from the effort as Argon, and knew
completely about the level of Kurt's financial involvement/interest in 0-In.
Our two biggest chips are over an order of magnitude larger than the last
chip I did. Our "small" chip is only 3 times as large as my last chip.
Well guess what? We broke the 0-In tool. Guess what else? We broke
Synopsys, we broke emacs, and I don't even want to describe the carnage we
caused with our ASIC vendor. But you know what? We worked with all of
these tool vendors to get by these problems, and in the end, all of us
benefited. This seems to me to be the normal mode of operation when you're
designing an ASIC that is out on the edge.  0-In didn't bill Argon, and
Argon didn't pay 0-In, but then again, neither company walked away with
nothing.  We're further ahead of schedule than we would have been without
0-In, and 0-In got some very serious debugging.  Classic win-win (as much
as one can get with a chip design that breaks all conventional EDA tools).
- Jerry Lampert
Argon Networks, Inc.
============================================================================
Trying to figure out a Synopsys bug? Want to hear how 11,086 other users
dealt with it? Then join the E-Mail Synopsys Users Group (ESNUG)!
!!! "It's not a BUG, jcooley@world.std.com
/o o\ / it's a FEATURE!" (508) 429-4357
( > )
\ - / - John Cooley, EDA & ASIC Design Consultant in Synopsys,
_] [_ Verilog, VHDL and numerous Design Methodologies.
Holliston Poor Farm, P.O. Box 6222, Holliston, MA 01746-6222
Legal Disclaimer: "As always, anything said here is only opinion."