Editor's Note: Remember my friend Sue, who got ovarian cancer two months
ago right after her 50th birthday and she just had surgery for it? Well,
when it rains, it pours. My girlfriend and I went to Cape Cod with Sue
and her new fiance, Ron, and Sue's 77-year-old mother last weekend. (Yup,
having known Ron for less than 3 months, they decided to get married next
week and her mom flew in for the wedding.) With such a crowd, we'd normally
spend our nights on the Cape at some nightclub dancing, but, since Sue
just had her first chemotherapy treatment, she'd feel great one moment and
then have to lie down 3 minutes later. So we spent the nights happily
playing cards and kibitzing. On Sunday, my girlfriend and I went
out to get a quick brunch and to have 30 minutes to ourselves. When we
returned to the cottage, everyone was gone & there was a note saying Sue's
77-year-old mom had fallen and they were on the way to the hospital! My
girlfriend had to be back in Boston in 2 hours, so we had to pack and go.
Phoning later, we found out that mom's hip was broken and that Ron had one
of his front teeth fall out -- 6 days before next week's wedding!! Oh,
and Ron just got some sort of legal tax notice from the state probate
court messing with his trying to sell his home. (And, yes, they insist,
come Chemo/Mom/Teeth/Tax problems, Hell or high water, the wedding is
*still* going to happen as scheduled this Saturday!)
You won't believe the 'surprise' my girlfriend, I, and two other friends
have been working on for the wedding. (No, I'm not making this up.)
We've been practicing singing "What A Wonderful World". :^)
- John Cooley
the ESNUG guy
( ESNUG 318 Subjects ) ------------------------------------------- [5/99]
Item 1: ( ESNUG 316 #7 ) 'Borrowing' Temp FlexLM Licences For Y2K Testing
Item 2: Fooling Yourself By Validating Designs With Code Coverage Tools
Item 3: Mentor's CellBuilder On SPICE Libs To Make Synopsys Power Libs
Item 4: ( ESNUG 314 #1 315 #1 ) A Synopsys Rebuttal About Design Reuse
Item 5: ( ESNUG 316 #6 ) Gimmicky Synopsys Support "Fatal Hunt" Searches
Item 6: Grumpy Collett & Other Reader Response To The SNUG'99 Trip Report
Item 7: ( ESNUG 316 #1 ) Valid Paths Getting Defined As Bogus False Paths
Item 8: ( ESNUG 316 #15 ) Can't Pipe VCS Output; Stdout Is Not A Stdout!
Item 9: Avant! Introduces Web-Enabled Session-Based Licensing For EDA
Item 10: Can't Get DC To Remove Tied-off/'Dead' Flip-flops From My Design
Item 11: A Free User-Written Bi-lingual Perl/Tcl Dc_shell Shareware Tool
( ESNUG 318 Item 1 ) --------------------------------------------- [5/99]
Subject: ( ESNUG 316 #7 ) 'Borrowing' Temp FlexLM Licences For Y2K Testing
> I'd like to hear what other EDA customers are doing about Y2K testing.
>
> Here at IDT, we are being required to test every piece of software (both
> EDA and non-EDA) under three conditions: 1999, 2000 & 2001. The problem
> that I'm running into is that in order to test software under these
> conditions (i.e. resetting the system clock), I need temporary FlexLM
> license files that are valid into 2000 and 2001.
>
> That's not so bad, I just explain the problem to the vendor and ask for
> a temporary license. The problem is that the EDA vendors are trying
> to be helpful, but they claim that since no other customers are asking
> for this, they don't have a system in place to fulfill this request, and
> "we have to make a few calls", etc...
>
> What's going on? Is everyone else planning to trust that the vendor will
> take care of it, or are you waiting until later to do the testing?
>
> - Andy Frazer
> Integrated Device Technology Santa Clara, CA
From: [ Gozer, the Gozerian ]
John, sign me [ Gozer, the Gozerian ] again. Sigh.
After all the "oh, you're so bad for even thinking about such evil things"
responses that were posted, here is a perfect example of why people who run
license servers should understand them.
I assume that the license server you have is running a non-Y2K-compliant
operating system (Solaris 2.5.1 or earlier), and you would like to minimize
the downtime. I also assume that the license server is named "license".
To test for Y2K compliance:
1) Duplicate your license server (if it's a Sparc Ultra1, get another one
with the same configuration, disk, memory, etc.), *including the boot
PROM* and IP address, as discussed in ESNUG 311. Let's call this
machine license1.
2) Disconnect license1 from the production network, and bring the license
daemons up on it.
3) Connect license1 to a test network, along with a "user machine" (called
user1).
4) Run a basic set of tests on the test network to make sure that things
are working just like your production network, just to be sure that you
haven't messed anything up in the setup.
5) Upgrade the OS on license1. You may need to re-install the license
daemons, etc.
6) Perform the Y2K testing of the license server, daemons, and user
software on the test network with license1 and user1. Make sure all the
usual EDA software works as expected.
7) Shut down license1 (and user1), turn it off, and announce a downtime to
your users.
8) During the downtime, shut down license and power up license1. Change its
name to license. Connect it to the production network.
9) Duplicate the new server back onto the old license machine. You now have
a backup license server just in case the new one ever dies.
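As a quick sanity pass before step 6, you can scan the license file itself
for features that expire before your test dates. The sketch below is a
rough one: it assumes the common FLEXlm line layout (FEATURE name, daemon,
version, expiration date, count, key), ignores continuation backslashes,
and the file contents shown in the usage note are made up -- check your
own file's layout first.

```python
# Sketch: flag FLEXlm FEATURE/INCREMENT lines whose expiration date
# falls before a given Y2K test date. Assumes the common line layout
#
#   FEATURE <name> <daemon> <version> <exp-date> <count> <key> ...
#
# with dates like 01-jun-1999; by convention a year of 0 (1-jan-0)
# means the feature never expires. Continuation lines (trailing
# backslashes) are not handled here.
from datetime import date

MONTHS = {m: i + 1 for i, m in enumerate(
    ["jan", "feb", "mar", "apr", "may", "jun",
     "jul", "aug", "sep", "oct", "nov", "dec"])}

def parse_exp(field):
    """Return the expiration date, or None for permanent licenses."""
    day, mon, year = field.lower().split("-")
    if int(year) == 0:
        return None
    return date(int(year), MONTHS[mon], int(day))

def expiring_features(license_text, test_date):
    """(feature, exp-date) pairs that expire before test_date."""
    bad = []
    for line in license_text.splitlines():
        fields = line.split()
        if len(fields) >= 6 and fields[0] in ("FEATURE", "INCREMENT"):
            exp = parse_exp(fields[4])
            if exp is not None and exp < test_date:
                bad.append((fields[1], exp))
    return bad
```

Running expiring_features() on a copy of the daemon's license file with
test_date = date(2001, 1, 1), before you ever touch the system clock, tells
you exactly which temporary licenses you still need from each vendor.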
None of this would have been possible if we didn't know how the license
server worked. Does this give dishonest people a way to get twice as many
licenses? Sure. Does it give honest people a legitimate way to safeguard
their businesses? Yep. Knowledge is a weapon - when the guys in the black
hats have it, it's dangerous, but when the guys in the white hats have it,
it keeps us safe. It's only when the knowledge is freely shared that we
can benefit -- trying to keep knowledge hidden just makes it available to
the bad guys.
- [ Gozer, the Gozerian ]
( ESNUG 318 Item 2 ) --------------------------------------------- [5/99]
Subject: Fooling Yourself By Validating Designs With Code Coverage Tools
> I'm starting to evaluate code coverage tools for Verilog. Has anyone gone
> through this same effort recently? I'd be interested to compare info.
> Perhaps someone has seen an article posted or in a magazine with useful
> reviews and comparisons? I did a search but only found some typical
> isdmag "EDA tools are great" article with no tool tradeoffs.
>
> So far I've put together a script of questions for each tool so I can
> compare them in a similar manner. Please comment on what you think of the
> questions and if there's any stuff I should also ask.
> A list of questions to ask of a code coverage tool
>
> 1. What types of coverage are measured?
> - describe in terms of "standard" nomenclature
> (like software code coverage or terms in the RMM)
> - include details of extra or missing corner cases
> (e.g. expression coverage covers procedural but not
> continuous assignments)
>
> 2. Does any FSM tool support our coding standard?
> - we use a special one-hot encoding RTL coding method
> 2.a. What FSM features does it support?
> 2.b. Auto extraction or hand written comments?
>
> 3. Does it support multiple levels of coverage features to trade off
> between speed & metrics measured?
>
> 4. What's the overhead?
> 4.a. Sim time overhead (list for each metric, see 3)
> 4.b. Instrumentation "overhead" (hassle)
> 4.c. Extra files to handle, compressed, etc.?
>
> 5. What's the scripting language?
>
> 6. What kind of profiling/test selection features does it have?
>
> 7. Is statement coverage line based or true "statement" based?
> Is "do something; do the next thing;" 2 statements? (what
> a cruddy question, I don't write code this way!)
>
> 8. Non-cycle glitch suppression supported?
> (e.g. if a term oscillates through many values triggering
> code coverage, but returns to its initial value before
> results are clocked, is it marked as covered?)
>
> 9. Is it GUI and script driveable?
>
> 10. What are its serial and parallel test coverage merge options?
>
> And the Catch-all: what are the remaining neato features and warts
> of the tool? ("interesting" or "strange" things (e.g. instrumented
> design has extra Verilog registers in it and will therefore have
> a $dumpvars performance impact)).
>
> - Paul M Gerlach
> Tektronix Beaverton, OR
From: "David Murray" <dmurray@iol.ie>
I agree with you when you say that using coverage tools without a plan
can be a dangerous thing. As far as I'm concerned, code coverage tools
should be used to assess the test plan itself (its effectiveness, that is)
and not become an integral part of the main verification process.
Inexperienced designers can sometimes 'aim to cover' and not 'aim to test'
which can unfortunately mean that they are confident about their design but
not correct.
- David Murray Ireland
---- ---- ---- ---- ---- ---- ----
From: paulge@pwrtool.cse.tek.com (Paul Gerlach)
Could you expand on what you mean by this?
- Paul M Gerlach
Tektronix Beaverton, OR
---- ---- ---- ---- ---- ---- ----
From: "David Murray" <dmurray@iol.ie>
In order to clarify this point, I'm going to present a situation that can
happen to anyone, including experienced engineers. This highlights the main
perception flaw associated with code coverage tools. Please note that I use
'designer' to mean both a designer and a verifier.
Take regression testing, a perfect example. These tests are picked out on
the basis that if only a sub-set of all tests can be run (because of system
resources, time, etc.) you want the most comprehensive tests to run.
Now what is the most comprehensive test? - The burning question!
I have seen numerous methodologies that use coverage tools to pick out the
tests that produce the highest code coverage. This is where a perception
flaw can happen.
The most comprehensive tests are indeed the tests that produce the highest
coverage. This coverage, however, doesn't necessarily mean code coverage.
I take it to mean functional coverage.
A very simple example of this follows. Take State 'B' to be the wait state
in an arbitration sequence of a state machine. This is only a small part of
the design.
A->B
B->B OR C
C->..
In the above definition we see that B can transition to B or C. This is a
logical transition. However, the following functional scenarios can happen:
A->B->C : Fast Arbitration
A->B->B->C : Normal Arbitration
A->B->B->B->C : Slow Arbitration
We have two tests:
Test 1 : This has 95% code coverage but only covers 1 of the above
sequences.
Test 2 : This has 70% code coverage & covers all of the above sequences.
According to many code coverage methodologies we should pick Test 1; it has
the highest code coverage. However, what about Test 2? It would have more
functional coverage than Test 1. Isn't this what we want at the end of the day?
To test the high-level requirements?
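David's gap between the two metrics can be made concrete with a toy
calculator (a sketch in Python; the A/B/C machine, the three scenarios, and
the numbers are his hypothetical example, not output from any real coverage
tool):

```python
# Toy illustration of transition ("code") coverage vs. functional
# scenario coverage for the hypothetical A/B/C arbitration FSM above.

TRANSITIONS = {("A", "B"), ("B", "B"), ("B", "C")}

SCENARIOS = {
    "fast":   ("A", "B", "C"),             # Fast Arbitration
    "normal": ("A", "B", "B", "C"),        # Normal Arbitration
    "slow":   ("A", "B", "B", "B", "C"),   # Slow Arbitration
}

def transitions_hit(seq):
    """Set of state-to-state transitions a test sequence exercises."""
    return set(zip(seq, seq[1:]))

def transition_coverage(tests):
    """Fraction of FSM transitions hit by any test (code-style metric)."""
    hit = set()
    for t in tests:
        hit |= transitions_hit(t)
    return len(hit & TRANSITIONS) / len(TRANSITIONS)

def scenario_coverage(tests):
    """Fraction of functional scenarios walked end-to-end by some test."""
    walked = {tuple(t) for t in tests}
    return sum(seq in walked for seq in SCENARIOS.values()) / len(SCENARIOS)
```

A single A->B->B->C run scores 100% on transitions but only 33% on
scenarios -- exactly the gap between 'aim to cover' and 'aim to test'.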
Every design is different and I think that a blend of the two is what is
needed. We just need to be aware of the weakness of code coverage to use it
to its full potential.
Code coverage is linked to functional coverage. The link can be weak or
strong, depending on the type of design, but code coverage rarely equals
functional coverage.
This brings me back to what I wrote in the previous message. Inexperienced
designers can sometimes 'aim to cover' and not 'aim to test' which can
unfortunately mean that they are confident about their design but not
correct.
The same perception flaw (as discussed) can occur in this situation where
people link code coverage and functional coverage too strongly. Introducing
a code coverage tool without proper planning can sometimes be a distraction
to an inexperienced designer/verifier. They may think that full code
coverage is full functional coverage.
If a code coverage tool is used iteratively as a designer writes tests to
verify their block then they can very easily find themselves in a feedback
process whose focus tends toward optimal code coverage rather than optimal
functional coverage. So they can be confident that their code is fully
covered but wrongly confident that the functionality is covered!
Coverage tools are however evolving to provide a stronger link to this
functional coverage that I have been talking about. There are now state
machine coverage tools that bring a functional element to coverage and
provide a higher-level bridge to the low-level simulation.
In my opinion code coverage tools shouldn't be trusted but they should
definitely be used. Though it sounds like a contradiction, I think that
this is the best way to use them. A certain amount of distrust is healthy
(and makes for a good verification process). It ensures that they are used
wisely. If they are, they are of enormous benefit to the verification
process.
You can try the following links to find out what the vendors themselves say!
www.surefirev.com
www.covermeter.com
www.transeda.com
www.interhdl.com
www.designacc.com
Again, I welcome any comments.
- David Murray Ireland
( ESNUG 318 Item 3 ) --------------------------------------------- [5/99]
Subject: Mentor's CellBuilder On SPICE Libs To Make Synopsys Power Libs
> I am using an ASIC library (.35um) that does not currently contain any
> internal cell power data. I have SPICE netlists for each cell and would
> like to run HSPICE to get the power data. Has anyone done this that
> would be willing to share his/her scripts to automate the procedure. I
> am not totally clear on how Synopsys wants the data formatted and would
> really hate to write a parser myself.
>
> - Eugene Grayver
> UCLA
From: Magnus Soderberg <soderber@prinsen.quark.lu.se>
You may want to check with Mentor Graphics about their CellBuilder. It
probably does what you want. It does characterize for power, but exactly
into what formats I don't know; however, writing a small script to translate
from a to b when the simulation is done shouldn't be too difficult (at least
when compared to writing scripts to automate simulation over temperature,
voltage, process, load and slew-rate). Commercial pricing is probably a
bit, shall we say, prohibitive, but at least here in Scandinavia they have
very reasonable license fees for academic institutions and the same should
apply in the USA.
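On the format question: Synopsys takes cell power data through Liberty
(.lib) source, where each output pin carries internal_power() tables that
Library Compiler then compiles into a .db. A rough generator sketch
follows; the pin name, the "power_lut" template name, and all numbers are
made-up placeholders, and a real library also needs templates, units, and
timing arcs before Library Compiler will accept it:

```python
# Sketch: emit one Liberty internal_power() group from measured
# energy-per-transition numbers (e.g., out of HSPICE runs). The
# "power_lut" lu_table template is assumed to be defined elsewhere
# in the library; everything here is a hypothetical fragment.

def _fmt(vals):
    """Comma-separated Liberty number list."""
    return ", ".join(f"{v:g}" for v in vals)

def internal_power_group(related_pin, loads, rise_nw, fall_nw):
    """Format an internal_power() group as a Liberty fragment."""
    return "\n".join([
        "internal_power() {",
        f'  related_pin : "{related_pin}" ;',
        "  rise_power(power_lut) {",
        f'    index_1("{_fmt(loads)}") ;',
        f'    values("{_fmt(rise_nw)}") ;',
        "  }",
        "  fall_power(power_lut) {",
        f'    index_1("{_fmt(loads)}") ;',
        f'    values("{_fmt(fall_nw)}") ;',
        "  }",
        "}",
    ])
```

The point of the sketch is just that the translation "from a to b" Magnus
mentions is string formatting, not parsing -- the hard part remains
automating the HSPICE sweeps themselves.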
- Magnus Söderberg
Lunds Universitet Lund, Sweden
( ESNUG 318 Item 4 ) --------------------------------------------- [5/99]
Subject: ( ESNUG 314 #1 315 #1 ) A Synopsys Rebuttal About Design Reuse
> Design reuse largely fails for one main reason: motivation. Engineers
> (myself included) do not want to reuse somebody else's design. Engineers
> want to "create" and for the most part, they do _not_ want to modify
> another engineer's code to do it. The prevailing attitude is: "I'm the
> design engineer now, I'll do it my way!"
>
> Since we all know that the verification effort is growing much faster than
> design complexity, why is the EDA industry focusing so much effort on the
> _easy_ part of the job (reuse) and not on TESTING? New, fluffy,
> IP-oriented EDA tools like Synopsys CoreBuilder and CoreConsultant are a
> waste of everyone's time. What we engineers _really_ want are EDA tools
> that make _testing_ more and more effortless!
>
> - Cliff Cummings
> Sunburst Design Beaverton, OR
From: [ John Chilton, VP/GM Design Reuse at Synopsys ]
John,
I read with interest the postings from Cliff Cummings (Design Reuse is a
myth) and Dave Brier (coreBuilder uses proprietary encryption) in ESNUG 315.
We're especially pleased to see significant interest and discussion about
our reuse tools, both in ESNUG, and after our public announcement at IP'99.
I am the VP/GM responsible for Synopsys' reuse activities (including the
coreBuilder/coreConsultant tools that Mr. Cummings says "Kinda Suck"), so I
thought I should respond to some of his points as well as clarify Mr.
Brier's misconception about encryption in our reuse tools.
Mr. Cummings obviously is not a proponent of reuse, but he brings up a lot
of great points that are still shared by many engineers. He mentions the
Reuse Methodology Manual--I'm convinced that some people still think that
the RMM is some sort of an evil Synopsys conspiracy. Actually, we,
together with Mentor Graphics, published the RMM last year, because we
really thought that many people who were actively trying to institute reuse
practices kept spinning their wheels. There was a crying need for a common
understanding of the most basic principles. It's not the be-all/end-all of
design, but it covers a lot of good engineering practice. (Note that one of
the first observations of the RMM is that the first step toward
"design-for-reuse" is solid "design-for-use"!) Some engineers out there
must like it, because the RMM constantly sells out at Amazon.com (no, this
is not a big source of revenue for Synopsys).
Having talked about reuse with hundreds of engineers over the last two
years, I can assure everyone that reuse is not a myth. It's a practical
reality and just common sense. I agree with Mr. Cummings that engineers
don't want to reuse someone else's design if the alternative is to design
something interesting and it's easier to do that than to reuse an existing
solution. What I don't understand is why anyone would want to create a new
CPU if an existing one would do, or why they'd want to sit down with 600
pages of standards documentation and design the world's ten-thousandth PCI
core, USB, MPEG, vector-multiplier or 8051. This is especially true if you
are responsible for a million gate design to be delivered in one year. It
would take a very special engineer, with a very understanding significant
other, to want to write those 300,000 lines of RTL, scripts, testbench and
documentation from scratch.
What nobody wants to do is mess with someone else's design that either does
not work or is hard to figure out. That's where coreBuilder and
coreConsultant come in. These tools are not just fluff. They started out as
the vision of one of our best synthesis engineers about four years ago. He
put together a team of experienced DC engineers and developed a tool with
pretty deep technology. For example, one feature of the tool is that it
automates synthesis. This was necessary, because you can't require someone
to write and debug synthesis scripts for a piece of IP that they didn't
design (makes a lot of sense - that's the part that nobody wants to do).
They worked pretty hard to put into the tools strategies that an expert DC
user would employ, and it works. For example, we make an 8051 core, and we
thought that the scripts that we had written for it were pretty good.
It's either very good news (great tool) or very bad news (terrible scripts
to begin with), but when we ran the part through coreBuilder and
coreConsultant, it synthesized substantially smaller. We've seen similar
results with other cores with well structured designs. We've also seen no
improvement on designs that are poorly structured (those darn computers
aren't good at random tasks). coreBuilder guides the user to enter all the
information necessary for quality (and process-portable) synthesis; in each
case so far, it's detected problems with the existing core information
(e.g., missing or inappropriate constraints).
I'm not going to degenerate into a sales pitch here, but Mr. Cummings
mentioned that the main problem is testbenches. Here we have some good
news... One of the features of coreBuilder/coreConsultant is that it can
automatically configure the testbench along with the core. This is how we
ship our DWPCI macrocell in the Foundation library. You tell coreConsultant
(no licence needed for this tool--it's free like Adobe Acrobat reader) what
you want (64 bit, 64 MHz, xyz parameter, etc) and out spits an optimal
technology mapped netlist and a configured VERA testbench (which you can
run with VERACORE - again, no licence needed).
> I'm not too happy about the new Synopsys IP delivery tools. The problem
> I have with them is that they are tied to proprietary formats once again.
>
> Yuck.
>
> Rather than sit around complaining, my engineering group discussed the
> IP delivery problem and came up w/ the following solution. Our idea is to
> use a true encryption engine, with _no_ proprietary anything, to create
> secure source files for the exchange of IP via PGP. PGP is readily
> available around the world and easily used. We are proposing that all
> EDA tools be able to call the PGP algorithm when they read a file if
> required.
>
> - Dave Brier
> Texas Instruments Dallas, TX
Regarding Mr. Brier's observations on encryption for IP -- his comments are
interesting and worth investigation, but are, in fact, based on
misinformation about our tools. To set the record straight, encryption of a
coreKit and the files that coreConsultant produces (e.g., RTL) is
**strictly optional**. It's done only if the **provider** of the core
chooses this option when using coreBuilder to create the coreKit. We put
this feature into the product based on consistent customer demand.
Specifically, core designers want to preserve the black-box nature of the
reusable core; hence, they want to keep core end-users from modifying the
source. In addition, they asked for a level of *basic* (lightweight)
protection of their RTL to "keep honest people honest," and to satisfy
legal IP protection requirements. Synopsys' core competency isn't
encryption, so we welcome Mr. Brier's suggestions that we (and the EDA
industry) consider PGP (or perhaps other industrial encryption like RSA).
- [ John Chilton, VP/GM Design Reuse at Synopsys ]
( ESNUG 318 Item 5 ) --------------------------------------------- [5/99]
Subject: ( ESNUG 316 #6 ) Gimmicky Synopsys Support "Fatal Hunt" Searches
> When I'm using DC and it occasionally crashes, the program catches itself
> and prints out several lines of cryptic debugging info. It instructs
> me to mail this information into Synopsys Support Center, & a "fatal hunt
> search mechanism" will see if it recognizes the problem and can send me a
> solution.
>
> I've tried this maybe ten times. Every time, I get back a response saying
> "This is a new bug ... you need to send a testcase to support_center".
> Should I ever bother sending in the fatal search request unless I'm
> prepared to follow it up with a testcase? Or have I just had bad luck,
> and others get useful info back after sending in the fatal hunt data?
>
> - John Busco
> Toshiba
From: Ihab Mansour <ihab@xlnt.com>
Hello John,
In response to this item, I have a similar experience. Just this past
weekend, DC crashed on us a few times. We saved the error messages and
called the Hot Line (HL) on Monday. The first thing the person from the
HL said was: "you have to send me a test case." I argued with him and
asked whether these numbers meant anything to him. Answer: "they are
worthless, you need to send me a test case." I even called the local
support guy, and his answer to the same issue: "true, they are worthless
to you and me; we need a test case. It only makes sense to the R&D guy;
it could help him in some way, if he had a chance to look at them".
- Ihab Mansour
XLNT San Diego, CA
( ESNUG 318 Item 6 ) --------------------------------------------- [5/99]
Subject: Grumpy Collett & Other Reader Response To The SNUG'99 Trip Report
> Bad idea. It not only leaves the Synopsys "infrastructural problems"
> unfixable by customers, it also recklessly leaves Synopsys management
> vulnerable to their own soothsaying Marketeers, yes-men, and Rasputins.
> And many times these weaselly flatterers are more caught up in their own
> Palace Intrigues than presenting the Real Truths their customers are
> facing. Is a VSS (VHDL) Marketing Manager ever going to say "Hey, let's
> dump our internally developed VHDL simulator, buy Chronologic, and sell
> the world's fastest Verilog simulator instead!"? (And you tell me: would
> *you* bet *your* livelihood on a Collett Market Forecast???)
From: Ron Collett <ronc@collett.com>
John, is this your personal editorial?
It would appear so.
What I find so amusing about your attempts to criticize our firm is that 5
years ago when you and I met for the first and only time (in San Diego),
your lack of knowledge on design methodology astounded me. As I recall, you
couldn't understand why logical and physical design would ever have to be
integrated together. I tried to explain to you about the notion of
interconnect-delay dominating performance, but you resisted and were rather
dumbfounded by the concept. This is why I had no interest in sitting down
with you again after that meeting. I just didn't see any point to it. My
suspicions about your limited knowledge were confirmed when I saw you
moderate the CEO Executive Panel at a later DAC.
However, I'm a reasonable guy, and I'm willing to give you a second chance,
because perhaps you have gained lots of insight into what kind of design
methodologies will be needed in the future -- I hope so for your clients'
sake. What do you think the chip design methodology will look like
4 years from now? What are the issues? What will the solution space look
like? What does the design environment look like? Perhaps if you have some
time you should publish some of your thoughts on this. Please be specific.
Do you have the "professional courtesy" to publish this reply to your ESNUG
subscribers?
> "After my first experience hearing Ron Collett's VHDL presentation, I
> was always very skeptical of his market analysis. In 1996, it hit
> directly in my realm when he avoided the VHDL-versus-Verilog topic
> and began to harp on the demise of the UNIX workstation platform
> in favor of Windows NT."
>
> - Clay Degenhardt of Systems Science in ESNUG 286
John,
I don't mind a healthy debate, in fact I enjoy it, as long as it is grounded
in factual information -- debates surrounded by exaggerations, distortions,
statements captured out of context, and missing information are not worth my
time. I'm sure they're not worth yours either.
It seems this guy Clay chooses to leave out relevant information when
characterizing what I say. In short, with regard to the NT vs. Unix
debate -- one of the assumptions in the model which I have presented many,
many times was that Microsoft would need to execute, which meant for example
that NT would have to reach parity with Unix's functional, performance and
reliability capabilities, and Microsoft would have to significantly ease
the switching costs for customers, etc.; in the absence of this, Unix
would continue to prevail. Thus far from what I've seen, Microsoft has not
been at all serious about the EDA market -- it's not surprising, given its
size. This is in spite of the fact that Microsoft did make a lot of noise
several years ago about NT for the EDA market -- e.g. coming to DAC, holding
big press conferences, rounding up EDA vendors to endorse NT, etc. It has
not added up to a whole lot. So when Clay talks about what I said, he omits
quite a bit. Moreover, from what I understand, Clay was an MIS guy at
Zycad -- not at all involved in any strategy decision-making. He sat in
once or twice on a company-wide, general presentation that I gave. He was
not at all privy to the details of the analyses, etc. I don't fault him for
what he doesn't know, but he should be a bit more conscientious before
making statements that exaggerate and omit relevant facts.
- Ron Collett
Collett International Santa Clara, CA
[ Editor's Note: Ron, before I address your questions, it's very important
to understand that we live in two different worlds. My world is deep in
the details of chip design in the *present*. It's t-shirts & cubicles,
workstations, PCs, vending machine fare for lunch, inferring 3:1 MUXes,
Test Compiler won't take parallel buffered clock trees unless you set the
variable test_allow_clock_reconvergence = true, and wondering if I should
take Synopsys Chip Architect classes for my next project. Your world is
the lala-land of what *might* be in chip design. It's suits & ties, slick
powerpoint slides, offices with personal receptionists, 'power' lunches
with CEOs in swank restaurants and snooty waiters, and wondering what
the Wall Street Journal will write next. I went to a state university
and studied engineering. You went to Stanford and studied law. I watch
the TV show 'COPS' to see if any of my family is in trouble. You watch
the PBS 'Nightly Business Report' to see how your portfolio is doing. My
first traumatic life experience involved a sheriff's deputy leveling a
shotgun at my head on a dark winter night in Vermont. Your first
traumatic life experience involved maxing out daddy's credit card while
in Saks Fifth Avenue. Ron, our worlds are very, _VERY_ different.
Casting these sarcastic personal aspersions aside, we really do live in
very different worlds. You do financial 'stuff' and it's OK with your
Armani buddies to make wild predictions/projections that don't pan out.
I can remember when you wrote years ago that engineers were dying to
move to PCs so they could connect their EDA output to Excel spreadsheets.
Or that chip design was going to transform into just software coding with
cheap embedded processors mimicking chip functions. So what if these
guesses turned out to be completely wrong? The Armani crowd expects and
accepts wildly misleading prognostications. My world is firmly entrenched
in the gritty details of *here* and *now*. I'm paid to solve problems
*now*. Not in a year, not in six months, but *now*. And if I'm wrong, I
catch *loud* and *immediate* hell for it. Five years ago, my customers
and readers weren't talking about interconnect delay dominating anything,
hence I devoted zero personal effort to it. (In fact, I was busy learning
VHDL at the time, if I remember accurately, and many people were wondering
if they should switch to it -- hence my Verilog vs. VHDL design contest.)
It's only in the past year or so that I've had clients worried about a
chip's porosity or its gate-to-net ratio. So it's *now* I start paying
attention to P&R, Manhattan distances, & aspect ratios. I'm the ghost of
Christmas Present; you're the ghost of Possible Christmases Yet To Come.
While you sell market forecasts and tweak powerpoint slides, I sweat the
details on chips that *actually must work* when they come back from the
fab in eight weeks. For me, far off into the future stops right at the
beginning of my next project.
Concerning your comments about Clay, I can't discredit his view as lightly
as you do. He *knows* about UNIX and Windows because he's hip deep in the
stuff every day. I respect his opinions on this far more than any windbag
who can't diff between grep and sort -u. So Clay didn't sit in on the
bigwig meetings. BFD. And, yea, I've seen my share of bad predictions
made by too-close-to-the-details engineers, too. But it doesn't mean he's
anywhere near as clueless as you're portraying him, Ron. Sheesh! - John ]
---- ---- ---- ---- ---- ---- ----
From: Austin Franklin <austin@darkroom.com>
Hi John,
Thank you for your 'impressions' on SNUG'99.
Since I am not a 'registered' Synopsys user, I can't get at the papers.
Would you be so kind as to email them to me or let me know how I go about
acquiring them?
(MA2) Tutorial of FPGA Compiler II
(MC3) Large FPGAs, FPGA Express
"FPGA Express Coding Techniques" by David Nye of Xilinx.
Regards,
- Austin Franklin
Darkroom
---- ---- ---- ---- ---- ---- ----
From: zyang@avanticorp.com
Hi, John
I am impressed by your ESNUG report.
By the way, do you have numbers for the IP and IP tool markets today, and
projected out 3 to 5 years?
- Z Yang
Avant!
---- ---- ---- ---- ---- ---- ----
> "Wireload models are like the weather. Many people talk about them,
> but not many people *do* anything about them! ..."
>
> - the abstract to Steve Golson's 1st place SNUG'99 paper
> titled "Resistance is Futile! Building Better Wireload Models"
From: Don Reid <donr@hpcvcdo.cv.hp.com>
John,
As things are now, it is difficult to even experiment with anything
else. Until DC either supports some alternate model or provides hooks
to connect user-defined load calculation, we are stuck with wire load
models.
- Don Reid
Hewlett Packard Corvallis, OR
---- ---- ---- ---- ---- ---- ----
> "I think Synopsys is facing some real challenges next year. I've got
> Design Compiler blowing out on my current design. As a user and a
> stockholder I'm concerned. Design Compiler is using too much memory."
>
> - chip designer and consultant Kurt Baty at Aart's speech
>
> "I second Kurt's problems. We, at HP, are seeing this, too."
>
> - an anon voice in the crowd at Aart's speech
From: "Ann Steffora" <asteffora@cahners.com>
John,
Do you think Avant! has a strong chance of being successful with their
methodology, which, as they're not shy about saying, is specifically not
based on synthesis? (I'm still trying to figure out why Avant! would have
sent out mousetraps and cheese to customers.... that's just plain weird!)
- Ann Steffora
Electronic News
---- ---- ---- ---- ---- ---- ----
From: Adrian Dunn <adunn@domosys.com>
John,
Let me congratulate you on a report that was both very informative, as well
as quite entertaining. Our company partnered with Cadence's Design Services
group on a past ASIC project and, as such, we use many of Cadence's (and now
Ambit's) tools. Do you know of a similar conference for Cadence tool users,
as well as a user/guru who publishes such lucid summaries?
- Adrian Dunn, Software Analyst
Domosys Corporation Sainte-Foy (Quebec) Canada
---- ---- ---- ---- ---- ---- ----
From: Philip Freidin <fliptron@netcom.com>
I can't help but wonder how functional you are today, given that you must
be rolling around the floor laughing yourself silly.
I refer, of course, to the column by your favorite Nostradamus (Ron Collett)
on page 53 of the April 26th EETimes ("It's Finally Jack's House"), and the
front page story of the following week's EETimes where Jack Harding is
ousted as CEO of Cadence.
- Philip Freidin
Fliptronics
[ Editor's Note: Believe me, Philip, I wasn't the only one laughing at that
one! I got 6 gushing phone calls and 4 e-mails when it happened. This
might explain Ron's grumpy letters (see above) from that week. - John ]
---- ---- ---- ---- ---- ---- ----
> The foundries are also backing PrimeTime, with seven of them (IBM, LSI,
> TI, NEC, Fujitsu, Toshiba, and Samsung) publicly accepting design timing
> sign-off with PrimeTime.
From: reynoldk@us.ibm.com (Karla Reynolds)
John,
I need to correct this comment about PrimeTime you made in your SNUG'99
dissertation. Timing sign-off on ASIC designs manufactured here at IBM is
done ONLY with EinsTimer, our internally developed static timing tool.
PrimeTime, currently and for the foreseeable future, is not qualified for
timing sign-off.
- Karla Reynolds
Mgr. Timing and Synthesis Department
IBM Microelectronics Essex Junction, Vermont
[ Editor's Note: Karla, doing a web search, I found press releases from
each of those companies except IBM endorsing PrimeTime. As for IBM, the
page at http://www.chips.ibm.com/products/asics/methodology/design_flow.html
clearly shows PrimeTime being supported by IBM. My apologies if this
data is wrong; I assumed IBM's own web site was accurate. - John ]
( ESNUG 318 Item 7 ) --------------------------------------------- [5/99]
Subject: ( ESNUG 316 #1 ) Valid Paths Getting Defined As Bogus False Paths
> I would like to raise a question for discussion relating to Static Timing
> analysis and Synthesis.
>
> Is anyone doing anything to "Verify" that their "False_Path" defines are
> correct? We are concerned that paths may get defined as "False" during
> timing analysis and synthesis when they are, in fact, valid paths.
>
> Proposed solution:
>
> For each defined false path, the designer could create a "Path Check"
> block for use during netlist simulation. These "Path Check" blocks could
> be contained in a single module that is at the testbench level.
>
> Rather than show the code, I'll describe the function of "Path Check"
>
>
> ff1 ff2
> ----- invalid path --------- -----
> |D Q|______________| |____________|D Q|
> | | | | | |
> CK __| | | MISC | CK __| |
> ----- | LOGIC | -----
> | |
> Valid Path ____| |
> | |
> ---------
>
> Verify that when ff1/Q changes state, that ff2/Q does NOT change
> state on the following clock. Also verify that ff1/Q has toggled
> to show that the test excited the source of the false path.
>
> I am also curious whether some companies simply add gates (synchronizers,
> staging registers, etc.) to remove the violations from these paths (they
> use the simple rule of "No false paths - anywhere").
>
> - Dan Pinvidic
> Qlogic Corp.
From: Chuck Benz <cbenz@nexabit.com>
John,
Actually, I think Dan might be referring to a multicycle path here, not
a false path (because he mentions the addition of staging registers and
also refers only to "the following clock"). And indeed, this really is a
concern for both multicycle and false paths.
To some extent, this is checked (albeit poorly/weakly at best) by
back-annotated gate simulation (you thought you'd avoided that by using
Static Timing Analysis?).
I am aware of at least one IP vendor that will insert flops to break up
what could have been a multicycle path. IP vendors don't want the burden
of supporting lots of extra timing assertions. In general, that's a pretty
good approach -- in most cases the time/effort saved is worth the gates.
I believe that many of us have hopes that the model-checking side of
formal verification will eventually explore this concern. Today, it's a
matter of manual verification (combined inspection and simulation along
the lines you suggest), and trust between designers that it has been done
adequately.
- Chuck Benz
Nexabit Networks Marlboro, MA
---- ---- ---- ---- ---- ---- ----
From: Ken Rose <kenr@cisco.com>
John,
Maybe I use a different definition of false paths than Dan does, but I
don't see how his solution will help. I often use false_path to exclude
from timing optimization paths that I know are stable during normal system
operation; for example, I may have a threshold count that is settable by
the CPU in an n-bit register, but is typically set only during system
initialization and never changed. I don't want Synopsys to waste time and
gates optimizing the timing from this register to the outputs, so I declare
it as a false path.
However Dan's proposed solution (a toggle-checker approach) does not seem
to allow this usage of false_path and would flag a violation.
And for those paths that are incorrectly marked as false when they should
not have been, Dan's solution relies on achieving high toggle coverage to
detect the fault. Creating such simulation vectors is a problem, which is
exactly why static timing analysis has gained popularity.
- Ken Rose
Cisco Systems
( ESNUG 318 Item 8 ) --------------------------------------------- [5/99]
Subject: ( ESNUG 316 #15 ) Can't Pipe VCS Output; Stdout Is Not A Stdout!
> I run vcs sim. Pipe it to tee. What do I get? Nothing output in the
> shell! Run any-other-program-in-the-history-of-unix. Pipe it to tee.
> You get? Programs std output in shell. How can I fix/work around this?
>
> - Todd
> Electronics For Imaging, Inc. Foster City, CA
From: Peter Trajmar <trajmar@teralogic-inc.com>
John,
I'm not sure if this is the answer to this person's problem, but some
tools may send some of their output to stdout and some to stderr.
If you use a simple pipe, "|", you will get only the stdout information.
If you want both stdout and stderr piped, you need to use "|&" (csh/tcsh).
- Peter Trajmar
TeraLogic, Inc.
---- ---- ---- ---- ---- ---- ----
From: Michael McNamara <mac@surefirev.com>
John,
Three comments:
1) Most unix commands write errors to stderr, and regular messages to
stdout. Using '|' redirects just stdout. If you'd like to redirect
both stderr and stdout, and you are using csh or tcsh, type:
% simv |& tee logfile
If you are using sh or bash, type:
$ simv 2>&1 | tee logfile
2) vcs and simv will write everything they send to the screen to a log
file as well. So why use tee? Instead, just specify a log file:
% simv -l logfile
or
% vcs -l logfile foo.v ...
3) It's not my fault!
- Michael McNamara
SureFire Verification, Inc.
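[ Editor's Note: For anyone who wants to see the stdout/stderr split in
action, here's a minimal sh sketch. The `emit` function is a hypothetical
stand-in for simv, and the log file names are just illustrative. - John ]

```shell
#!/bin/sh
# A hypothetical stand-in for simv: one line to stdout, one to stderr.
emit() {
  echo "normal message"        # regular output -> stdout
  echo "error message" >&2     # diagnostics    -> stderr
}

# Plain pipe: tee sees only stdout; "error message" bypasses the pipe
# and lands on the terminal instead of in the log.
emit | tee out_only.log

# sh/bash form: merge stderr into stdout *before* the pipe, so tee
# captures both streams. (csh/tcsh users would write: emit |& tee ...)
emit 2>&1 | tee both.log
```

Note the ordering: `emit | tee logfile 2>&1` would redirect tee's own
stderr, not the simulator's, which is why the `2>&1` has to come before
the `|`.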
( ESNUG 318 Item 9 ) --------------------------------------------- [5/99]
From: Berger@pixelmagic.com (Elliott Berger)
Subject: Avant! Introduces Web-Enabled Session-Based Licensing For EDA Tools
Hey John,
Check out the article "Avant! Introduces Web-Enabled Session-Based Licensing
To Give Customers a Competitive Edge in Fast-Track Chip Design" in
Electronics Journal (Avanti sales rag) in May's "Integrated System Design".
I have been talking about Internet-distributed per-session licensing for
months. Avanti is signed up to do it! Check it out -- this is neat stuff!
- Elliott Berger
Pixel Magic
( ESNUG 318 Item 10 ) -------------------------------------------- [5/99]
From: John Patty <jrpatty@rtp.ericsson.se>
Subject: Can't Get DC To Remove Tied-off/'Dead' Flip-flops From My Design
Dear John,
I am having difficulty getting Design Compiler to remove flip-flops that
are tied off to a useless state. The specific example I am looking at is
a D-type flip-flop with an asynchronous clear. Its D input is tied to
'0'. The CLR input comes from other logic. So only 0's can be clocked
into this flop, and clearing it also sets it to '0'. There is no way to
set it to '1'.
---------
'0' ----| D Q |---- Output
| |
CLK ----|> |
Reset -----------| RST |
---------
It seems to me that Design Compiler should optimize this dead flip-flop
away and replace it with a tie-off to '0', which should then allow the
logic on its output side to be further optimized away.
However, in my experience, this is not the case. The commands compile,
compile -boundary, and compile -in_place will not do what I want them to
do. This configuration is not exactly the same as a tie-off, since the
flop could possibly power up in the '1' state, but I don't think this is
a good enough reason to disallow the above optimization. Has anyone had
any success with this, or does anyone know a better reason why this should
not be optimized away?
- John Patty
Ericsson Research Triangle Park, NC
( ESNUG 318 Item 11 ) -------------------------------------------- [5/99]
From: Jeff Solomon <jsolomon@stanford.edu>
Subject: A Free User-Written Bi-lingual Perl/Tcl Dc_shell Shareware Tool
John,
I'm pleased to announce the release of a free shareware tool that combines
the power of Synopsys and the versatility of Perl. It's called Synopsys
Plus Perl (SPP). SPP is a Perl module that wraps around Synopsys' shell
programs. SPP is inspired by the original dc_perl written by Steve Golson,
but it's an entirely new implementation. Why is it called SPP and not
dc_perl? Well, SPP was written to wrap around any of Synopsys' shells. This
includes:
bc_shell
budget_shell
dc_shell
dp_shell
dt_shell
fpga_shell
lc_shell
pt_shell
ra_shell
SPP is a Perl module, not an application. It can be used to fully embed a
Synopsys script inside of Perl. SPP was written in an object-oriented way
so that each object totally encloses a Synopsys shell process.
The first example of an application using SPP is called synopsys_fe, a
frontend replacement for any of the Synopsys shells listed above.
synopsys_fe sports a snazzy GNU Readline interface with all of your
favorite terminal capabilities (command completion, up/down history, etc),
a convenient Perl interface, and other Perl niceties that you might expect.
But wait, there's more! Invoking the Synopsys shell in Tcl mode (SPP
supports both Tcl and the default dc_shell mode) enables an auxiliary
module, Synopsys::Collection. This module maps the functionality provided
by Synopsys' collection idiom into Perl.
Why did I create this shareware tool?
People get into heated arguments over which is better, Tcl or Perl.
I'm no Perl bigot, although I must admit that I prefer Perl and I've
written many more lines of Perl than Tcl. I can certainly understand
why Synopsys chose Tcl over Perl. Tcl's interface to C is more
straightforward, and it's more easily embeddable and more self-contained.
The problem is that chip designers love Perl! A completely unscientific
survey of chip designs (including ones that I've worked on) shows that most
chips are held together by vi, Perl, string and paperclips. Chip designers
love Perl because they have to run 18 different CAD tools each of which
outputs its results in an arbitrary text format and Perl is the only
language suitable for gluing them all together. Associative arrays are
king. Perl is write-only? You bet!! Why does a script need to be
readable if it's going to be less than 50 lines and run only once -- by its
author no less!? Who in their right mind would write a 50 line throwaway
Tcl script unless they didn't know Perl? No one I know.
Getting back to the question at hand, I wrote SPP to make my life
easier. I wanted to characterize all the cells in the chip I've been
working on. I noticed that Synopsys had all the information there, but
it wasn't very manageable. I remembered hearing about Steve Golson's
dc_perl and I downloaded it. From there, I quickly added the
Term::ReadLine::Gnu interface and I was off. After about three total
re-writes (the original version, like Steve's dc_perl, was based on
dc_shell syntax and only wrapped around dc_shell), SPP reached its
current form. Along the way, I used SPP to extract the clocktree from
our chip. Using the speedy pt_shell, I was able to turn an hour-long
job into a 90-second script.
Sound interesting?
http://www.stanford.edu/~jsolomon/SPP
SPP is free software. Anyone can redistribute it and/or modify it under
the same terms as Perl itself.
- Jeff Solomon
Stanford University