"At present, we don't know how we'll be defining EDA in 10 years, or
even if we'll still be using the term. Meanwhile, those who resist
the siren song of the dot-coms will get to make some very crucial
decisions about a very crucial industry."
- Richard Goering
EE Times EDA Editor
( ESNUG 340 Subjects ) ------------------------------------------- [1/00]
Item 1: ( ESNUG 339 #2 ) Mismatched Timing Engines Stall Cadence PKS
Item 2: ( ESNUG 339 #1 ) We Also Script Homebrew IPO Buffer Resizers
Item 3: How Do You Create "Fake" DC Library Elements From Sub-Designs?
Item 4: Testing Embedded Macros; Two Users Review Mentor's MacroTest Tool
Item 5: ( ESNUG 338 #8 339 #3 ) How We Search For Un-Initialized FF's
Item 6: ( ESNUG 335 #9 338 #3 ) Janick Critiques Co-Design's Superlog
Item 7: ( ESNUG 338 #4 ) Smart Flat P&R Designs Are Faster & Lower Power
Item 8: ( ESNUG 337 #8 ) Help! Cliff Did A "Playboy" Review Of My Book!
Item 9: ( ESNUG 339 #10 339 #11 ) Nine Mostly Positive VERA User Letters
The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com
( ESNUG 340 Item 1 ) --------------------------------------------- [1/00]
Subject: ( ESNUG 339 #2 ) Mismatched Timing Engines Stall Cadence PKS
> A final note: in its current state PKS uses Ambit at the front end with
> the Ambit Static Timing Analyzer, and Qplace at the back end with Pearl
> as the Static Timing Analyzer, something we thought was a mishmash and
> certain to cause timing correlation problems.
>
> - Jon Stahl, Principal Engineer
> Avici Systems N. Billerica, MA
From: Jay Vleeschhouwer <Jay_Vleeschhouwer@ml.com>
To: Jon Stahl <jstahl@avici.com>
Hi, Jon
I read your recent postings on ESNUG concerning Chip Architect. At your
convenience, perhaps you wouldn't mind answering some follow-up questions:
1. You refer to the current state of Cadence PKS as a "mishmash". Is
that a fundamental design flaw or would you still be interested in
evaluating it if Cadence's sales and support were more responsive?
2. What do you think the ratio of Chip Architect licenses will be to
Avanti licenses?
3. Do you plan to evaluate Physical Compiler (PhysOpt)?
Thanks.
- Jay Vleeschhouwer, Analyst
Merrill Lynch New York, NY
---- ---- ---- ---- ---- ---- ----
From: Jon Stahl <jstahl@avici.com>
To: Jay Vleeschhouwer <Jay_Vleeschhouwer@ml.com>
Hi Jay,
My answers to your questions 1, 2, 3:
1. My "mishmash" comment refered to the fact that Cadence PKS currently
uses a combination of two timing engines -- always a bad situation.
They claimed to be fixing this and going to a common timing engine,
but didn't give us a timeframe.
2. That's a very involved question. Which Avanti licenses? A flow in my
mind would be getting placed timing closure with Chip Architect, and
then using Avanti tools for clock insertion (is this coming in Chip
Architect?), routing, parasitic extraction, and nvl/drc. These are
all separate licenses! A quick answer is that you iterate on
placement (i.e. Chip Architect) more often than the other things
mentioned on your way to tape out, so you would almost definitely have
more Chip Architect licenses than anything else.
3. Yes.
Hope that helps. (BTW, Jay, I own Cadence stock - could you please upgrade
your evaluation of them, OK? :^)
- Jon Stahl, Principal Engineer
Avici Systems N. Billerica, MA
( ESNUG 340 Item 2 ) --------------------------------------------- [1/00]
Subject: ( ESNUG 339 #1 ) We Also Script Homebrew IPO Buffer Resizers
> One of the biggest roadblocks was that we were unable to run DC IPO to
> our satisfaction. It took many days to run (on 360MHz UltraSparcs),
> was repeatably crash-prone, and didn't produce good results. (We tried
> both 98.08-1 "normal" IPO, and 99.05-2 Floorplan Manager IPO). ...
> I wrote a program that reads the Primetime output and upsizes gates
> whose individual delays exceed user-specified limits. That is, it
> looks for and speeds up all the slow gates in the failing paths. ...
>
> - Jeff Winston
> Maker Communications, Framingham, MA
From: Stefan Thiede <Stefan.Thiede@sv.sc.philips.com>
Hi John,
Since the days of JTAG we've been using Pearl/Primetime w/ some "scripting
glue" to fix everything that can be fixed by buffer insertion and cell size
changes. Thank Jeff for this and I'll have a look to see how he does it.
- Stefan Thiede
Philips Semiconductors Sunnyvale, CA
---- ---- ---- ---- ---- ---- ----
From: Mike Naum <Michael.Naum@East.Sun.COM>
Hi John,
Interesting that Jeff posted this. We've been using this very same approach
here at Sun for 4 years. Actually it started back in the Motive days and
we migrated it to Primetime. Our program was written in Perl and is
optimized for LSI and Lucent.
We have applied it to multi-million gate, very high frequency ASICs and
it's worked every time. We also wrote this nifty Perl program to handle the
hierarchical netlist changes.
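If you want to roll your own, the PrimeTime-parsing half of such a script
boils down to something like the sketch below. (A simplification, not our
production code -- the report line format, the delay limit, and the upsize
map are all made up for illustration and are library-specific.)

    #!/usr/bin/perl -w
    # Sketch: scan a PrimeTime path report for slow cells and pick
    # upsizes.  ASSUMES data-path lines that look like:
    #     u_core/U123/Z (NAND2X1)      0.42      3.17 f
    # and a simple footprint-compatible "next drive strength" map.
    use strict;

    my $limit  = 0.30;                     # max incremental delay (ns)
    my %upsize = ( NAND2X1 => 'NAND2X2',
                   NAND2X2 => 'NAND2X4',
                   INVX1   => 'INVX2' );

    my %swap;                              # instance => new ref name
    while (<>) {
        next unless m{^\s*(\S+)/\w+\s+\((\w+)\)\s+(\d+\.\d+)};
        my ($inst, $ref, $incr) = ($1, $2, $3);
        $swap{$inst} = $upsize{$ref}
            if $incr > $limit and exists $upsize{$ref};
    }

    # Print the swap list; a second script patches the netlist.
    print "$_ -> $swap{$_}\n" for sort keys %swap;

Feed it report_timing output, take the swap list, and edit the netlist.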
In our flow we can either run the vendor's delay calculator or use Synopsys
set_load's. With set_load's we have a faster flow that we can iterate until
we meet timing, automatically writing out new netlists and re-timing them
along the way. When we are finished we verify with an SDF. We've even
taken our scripts to the next level by integrating them deeply with Avanti
P&R. Our ECOs flow through Avanti with placement and routing held the same;
we're basically swapping in the new cells and routing to the altered
footprints.
This process works very well for us. So much for Synopsys solving all of our
problems! Sometimes the simple solutions are best.
- Mike Naum
Sun Microsystems Burlington, MA
( ESNUG 340 Item 3 ) --------------------------------------------- [1/00]
From: Tom Cruz <tomcruz@us.ibm.com>
Subject: How Do You Create "Fake" DC Library Elements From Sub-Designs?
Hi John
Is there a way to create a Synopsys library element from a design so that
Synopsys treats the subdesign as a library component?
Say, for instance, I have a large design that is made up of many subdesigns;
I'd like to be able to replace one or more of the designs with a Synopsys
library component to cut down on the memory usage to make things run faster,
etc. (And for the moment, let's not worry about loading ...)
We have done this in the past by creating a script to parse VHDL, add some
technology header stuff, and create a .lib file. Then we just do a read_lib
and write_lib of that file and we're done. The problem with this method is
when designs use different types or records in the port statements instead
of std_logic or std_logic_vector. Then we're stuck looking through files
for the type declarations to do some manual cleanup. And of course, if a
port changes, you get to do it all over again.
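For the curious, the guts of our kludge look roughly like the sketch below.
It's heavily simplified: it only handles plain std_logic and
std_logic_vector ports, bit-blasts vectors into individual pins, and the
.lib header and numbers are placeholders -- which is exactly why records
and custom port types break it.

    #!/usr/bin/perl -w
    # Sketch: parse a VHDL entity and emit a stub .lib cell so DC can
    # link against a library component instead of the full sub-design.
    use strict;

    my %libdir = ( in => 'input', out => 'output' );
    my ($entity, @pins);
    while (<>) {
        $entity = $1 if /^\s*entity\s+(\w+)\s+is/i;
        # e.g.  dout : out std_logic_vector(7 downto 0);
        if (/^\s*(\w+)\s*:\s*(in|out)\s+std_logic
             (?:_vector\s*\(\s*(\d+)\s+downto\s+(\d+)\s*\))?/xi) {
            my ($name, $dir, $hi, $lo) = ($1, lc $2, $3, $4);
            if (defined $hi) {
                push @pins, [ "$name\[$_\]", $dir ] for $lo .. $hi;
            } else {
                push @pins, [ $name, $dir ];
            }
        }
    }
    die "no entity found\n" unless $entity;

    print "library (fake_${entity}_lib) {\n  cell ($entity) {\n";
    print "    area : 0;\n";               # plug in a real estimate
    for (@pins) {
        my ($name, $dir) = @$_;
        print "    pin (\"$name\") {\n";
        print "      direction : $libdir{$dir};\n";
        print "      capacitance : 0;\n" if $dir eq 'in';
        print "    }\n";
    }
    print "  }\n}\n";

We read_lib and write_lib the result and the component links -- but with
zero area and loading, hence my wish for something automatic.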
At a minimum, we'd be happy with a component that has the proper port map
and links properly, but of course a more ideal solution would be to have
the library component reflect the area number and loading of what it
represents -- almost like a technology library book.
It seems like Synopsys should have some way of doing this automatically,
but I've looked through the manuals and so far I haven't seen anything
to do this. Any ideas?
- Tom Cruz
IBM Microelectronics Division
( ESNUG 340 Item 4 ) --------------------------------------------- [1/00]
From: Tim Wood <tim.wood@amd.com>
Subject: Testing Embedded Macros; Two Users Review Mentor's MacroTest Tool
Hi, John,
We've been using MacroTest, a product from Mentor Graphics, and knowing
how you love "real life" user reviews of EDA tools, we thought we'd tell
you about it. How we found MacroTest was purely by accident. We
happened to stumble across it in some Mentor documentation about 2 years
ago and it seemed to be just the thing we needed to test some very
difficult-to-test small embedded arrays.
MacroTest is a (separately licensed) feature of Mentor's FastScan ATPG
product. The embedded arrays we were trying to test were implemented as
both full custom blocks and as non-scan standard cell arrays, built out
of the equivalent of non-scan flip-flops (these were the only non-scan
flip-flops in the design). These arrays were too small to justify adding
BIST hardware and they couldn't be fully tested by traditional ATPG. We
also couldn't afford to provide access to each of these arrays through
the normal I/O's.
With small arrays like these, it's usually easy to come up with a set of
patterns if you can apply them to the array directly. It's very difficult,
if not impossible, to do this once the array becomes embedded and you're
restricted to using the I/O's of the part that contains the embedded device.
We were able to write patterns for the embedded arrays using their I/O's,
and used MacroTest to turn these patterns into full-chip scan patterns.
The patterns are written in a tabular format. Each row of a MacroTest
pattern file is converted to a scan test (scan chain load, apply primary
inputs, measure outputs, pulse the clock, scan chain unload). You must
provide both the input stimulus as well as the expected responses for
outputs. MacroTest takes the input stimulus and justifies it back to a
primary input or scannable element. The expected responses are
propagated forward to a primary output or scannable element.
So "what's required to use it?", you ask.
All that we needed to invoke MacroTest was the pattern file and the name of
the instance to apply the patterns to. The setup for MacroTest is the same
as for running normal ATPG patterns using Mentor's FastScan. Instead of
using the "run" command you just invoke MacroTest:
macrotest <instance_name> <pattern_name> [options]
The definition of a macro for MacroTest is a top-level model in the ATPG
library. This model defines the pin direction and the pin order in the
pattern file. The <instance_name> must be an instance of this model as
defined in the library.
The pattern format is the biggest weakness of MacroTest. Everything must
be specified in binary. The patterns quickly become unreadable in this
format when you have lots of I/O or wide busses. To get around this we
normally write patterns in Perl and output them in the format that MacroTest
expects. Alternately, you could set up a block level simulation of your
array in your favorite simulator and sample the stimulus and expected
responses to create the input for MacroTest. Its pattern file does support
comments and a means of specifying the ordering of pins.
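As an example, here's the shape of a generator for a toy 4-word by 8-bit
array. The column order and don't-care convention below are illustrative
only; the real ordering comes from your ATPG library model and the
MacroTest documentation.

    #!/usr/bin/perl -w
    # Sketch: emit tabular write-then-read patterns for a tiny RAM.
    # Assumed column order:  we  adr[1:0]  din[7:0]  dout[7:0]
    use strict;

    sub bits { my ($v, $w) = @_; sprintf "%0${w}b", $v }

    for my $adr (0 .. 3) {
        my $data = 0xA5 ^ $adr;                 # unique data per word
        printf "1 %s %s %s\n", bits($adr,2), bits($data,8), 'X' x 8;
        printf "0 %s %s %s\n", bits($adr,2), 'X' x 8, bits($data,8);
    }

The first row of each pair writes the word (outputs don't-care); the
second reads it back and gives MacroTest the expected response.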
Debugging
---------
Sometimes the tests you can write at a block level can't be applied once the
array becomes embedded. If you know these restrictions up front, you can
save yourself some debugging effort. If you don't know much about the
environment the array is embedded in, you'll need to rely on the debug
information that MacroTest provides. This is an area that has seen a lot of
improvement since we first started working with the tool. We're using
version v8.6_4.8, which was just released in December. The default for the
tool is to report all warning and information messages. It's important to
leave this "on" when trying a macro for the first time.
MacroTest starts off trying to justify the input stimulus. If it can't, it
goes through a process of trying to determine what couldn't be satisfied and
why. From what we can tell from error messages we've seen, MacroTest will
let you know if it can successfully justify each input pin individually. If
it can't, it lets you know which pin(s) it's having trouble with. If it can
justify the input stimulus individually, it will then try to determine what
combination of inputs is causing the problem. Since MacroTest will
generate patterns that adhere to any ATPG constraints you applied, it will
also try to tell you if the problem is related to the constraints or not.
We've found that MacroTest does a pretty good job of reporting/identifying
problems with the input stimulus. When multiple input pins are in conflict,
you'll occasionally get extraneous pins listed as being part of the problem,
but we don't see this too often. To further debug problems you can use the
"report test stimulus" command.
Once MacroTest can justify all the inputs, it then works on propagating the
outputs such that they can be observed at primary outputs or scan elements.
This is another area that has seen dramatic improvement since we began using
the tool. There are two modes/algorithms for propagating the outputs: one
random, the other deterministic. The random algorithm uses heuristics to
guide the selection of the observe point and is invoked by a switch on the
MacroTest command line. There are some additional switches to control how
many attempts are made at observation before it gives up. If the random
algorithm was not able to observe an output, MacroTest issues a warning that
the output can't be observed and it continues working on other outputs that
can be observed. The deterministic algorithm pre-selects observation points
and uses them for the entire pattern.
The majority of the arrays we test with MacroTest are successful with the
random algorithm. The remaining arrays require the use of another switch
for random mode, which forces it to exhaustively examine the search space
for a solution. The arrays that seem to require the exhaustive search
typically have many observe points and/or have more complex logic to
propagate through.
Testing Multiple Arrays At Once
-------------------------------
MacroTest allows you to apply tests to multiple arrays/instances at once.
You will probably find that some arrays can't be tested in parallel due to
pattern conflicts; we end up doing a couple of MacroTest runs for exactly
that reason. Being able to test multiple arrays in parallel is a big test
time advantage.
Mentor MacroTest Summary
------------------------
Plusses:
- The diagnostic messages, while imperfect, are fairly sophisticated.
- The scan cells chosen for the application of the pattern can
either be static or can vary dynamically throughout the pattern.
- Honors existing ATPG constraints while generating patterns.
- Allows merging of multiple macro tests.
- Runtime performance is good.
- Can force MacroTest to use specific observation sites if desired.
- Fault simulation capability provided.
Minuses:
- You can't have any scan elements inside the macro itself (ouch!).
- The pattern format is primitive.
- If a transparent latch feeds a black box macro that you're trying
to test, MacroTest won't be able to justify the input back through
the transparent latch. A work-around exists for this situation.
- Doesn't handle bi-directional pins (not an issue for us).
MacroTest is still a first generation tool and there are many improvements
that could be made, but overall we're happy with the results we're getting.
We've been fortunate to be able to work directly with the developer for any
problems we've faced. The response time for bug fixes has been great.
- Tim Wood & Grady Giles
Advanced Micro Devices Austin, TX
( ESNUG 340 Item 5 ) --------------------------------------------- [1/00]
Subject: ( ESNUG 338 #8 339 #3 ) How We Search For Un-Initialized FF's
> The main point I was trying to make is that spending time crafting X's in
> the RTL is non-productive and error prone. In addition, as Cummings
> previously stated, it violates faithful semantics between the RTL and
> gate level simulation.
>
> - Harry Foster
> Hewlett-Packard Computer Technology Lab
From: Greg Brookshire <GBrookshire@raleighttech.com>
John,
Our approach to finding un-initialized flops on our last design was to use
several scripts to find flops without resets. Our methodology required that
all flops on the system clock be reset by a global reset signal that was
synchronized to the system clock. We drilled this into the heads of our
coders (mostly co-ops with Verilog experience) so we didn't have much of a
problem with flops that were not reset, but to be sure we used the following
two filters to find flops without a reset.
For our first non-reset-flop filter we used a script that reads a Verilog
file into DC then uses Perl to parse the register inference results. The
Perl script reports statistics such as the number of flops and latches with
and without resets. This was a quick way for our coders to make sure their
code was synthesizable and to make sure they inferred the correct flops,
with reset, and no latches.
For a final filter we then used DC to generate two lists of flops in the
design. The first was the list of all flops on the system clock. The
second was the list of all flops on the system reset. By comparing the two
lists in Perl, we found flops that were not reset. When we later ran gate
level sims we had very few problems with un-initialized flops.
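The list-compare step itself is trivial; conceptually it's just this (the
file names are made up, one flop instance name per line as dumped from DC):

    #!/usr/bin/perl -w
    # Sketch: report clocked flops that never see the global reset.
    use strict;

    sub slurp {
        my ($file) = @_;
        open my $fh, '<', $file or die "can't open $file: $!";
        my %set;
        while (<$fh>) { chomp; $set{$_} = 1 if length }
        return \%set;
    }

    my $clocked = slurp('flops_on_sysclk.list');
    my $reset   = slurp('flops_on_reset.list');

    my @naked = grep { !$reset->{$_} } sort keys %$clocked;
    print "no reset: $_\n" for @naked;
    printf "%d of %d clocked flops are never reset\n",
           scalar @naked, scalar keys %$clocked;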
- Greg Brookshire
Raleigh Technology Corp Cary, NC
( ESNUG 340 Item 6 ) --------------------------------------------- [1/00]
Subject: ( ESNUG 335 #9 338 #3 ) Janick Critiques Co-Design's Superlog
> So far we are getting positive feedback from the people we talk to, but
> are always interested in other opinions. Here is a chunk of Superlog
> code to provide a feel of how it looks: ...
>
> function ref node treeFind(string str, ref node parent);
> if (parent == null) return null;
> visited++;
> if (str == parent->s) return parent; // string compare
> if (str < parent->s) return treeFind(str, parent->left);
> else return treeFind(str, parent->right); // recursion
> endfunction
> ...
> Do you have any feedback on the approach we are taking? Do you think your
> readers are interested in this approach?
>
> - Dave Kelf
> Co-Design Automation, Inc. Melrose, MA
From: "Janick Bergeron" <janick@qualis.com>
John,
The Superlog state machine description in ESNUG 338 #3 looks a lot like the
FML2 description used in Nortel's early in-house synthesis tool, back in
1989. I'm sure other languages designed for synthesis have similar
features. Cool.
> state {S0, S1, S2} cstate; // state variable with enumeration
Can these enumerals be used in any expressions (like VHDL's) such as
"if (cstate == S0) ...", or is it limited to transition statements (I'd
prefer the former)? Can they be overloaded?
> always @(posedge reset)
> transition (cstate) default: ->> S0; endtransition
> ...
> always @(posedge clk iff !reset)
> transition (cstate)
> S0:if (inp == 0) ->>S2; // change state
> S2:if (inp == 1) ->> S1; else ->> S0;
> S1: ->> S0 n = treeFind("shergar", root);
> endtransition
I object to this style! Why did you have to use two parallel constructs for
two operations that are clearly mutually exclusive? Sequential code is
perfectly acceptable and should be preferred to parallel code. What if
those two blocks were separated by several other "always" and "initial"
blocks? It would be difficult to figure out the functionality of the
state machine.
If you're creating a state machine statement, why not go all-out? e.g.:
fsm
async (reset == 1'b1): -->> S0;
transition @ (posedge clk): cstate
...
endtransition
endfsm
Also, what if the user forgets your "iff !reset"? Do you get a race
condition? What if the reset is generated using a block like this:
begin
...
@ (posedge clk);
rst = 1'b1;
....
end
Is it possible for the "posedge clk iff !reset" to be interpreted
differently because of the execution order?
> S1: ->> S0 n = treeFind("shergar", root);
^
Aren't you missing a semi-colon here? Or is it a compound statement like
the @, # and wait statements in Verilog? (which I would object to;
sequentiality should be described using separate statements).
> function ref node treeFind(string str, ref node parent);
Hurray! High-level data types on Verilog interfaces! I hope you do proper
type checking in expressions and interfaces as in ANSI-C, not K&R C....
Are Superlog tasks re-entrant, too?
I was going to ask for functions and tasks inside structs with inheritance,
virtual subprograms, etc... but then I'd be asking Superlog to become a
mix of e and VERA. :-)
- Janick Bergeron
Qualis Design Corporation Somewhere, Oregon
( ESNUG 340 Item 7 ) --------------------------------------------- [1/00]
Subject: ( ESNUG 338 #4 ) Smart Flat P&R Designs Are Faster & Lower Power
> I know it's very common for some companies to do layout as a process on
> one big flat design. We considered flat, but these five "hells" came up:
>
> - big flat designs are run-time hell
From: [ Intel Inside ]
John, A few comments (please keep me anon on this)...
Yep, big flat designs are run-time hell. This is the biggest challenge,
and ultimately may be the showstopper for some people...
> - big flat designs are extraction hell
Naw. If you run 2.5D P&R extraction, runtimes are overnight. There are
plenty of industry tools available where a multi-threaded approach can be
used to hit these thru-put requirements on 500+K instance designs. The
major limitations we usually encounter relate to memory issues on 32 bit
operating systems. Once 64 bit code becomes more available, it should be
less of an issue.
> - big flat designs are back-annotation hell
If you have to live with what is available in the marketplace, this may be
true. It doesn't take a rocket scientist to write code to take "big flat"
delay calculation results and back annotate individual unit results back to
each unit owner (providing synthesis is still being done hierarchically).
This allows a hierarchical synthesis loop with flat P&R.
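As a sketch of how little code this takes, assume one concrete flavor of it:
a flat post-layout SDF whose instance names begin with the owning unit
(e.g. unitA/U123) and whose CELL blocks are pretty-printed one per line.
Then you can bucket the blocks into per-unit fragments like this (a real
version must also replicate the DELAYFILE header into each fragment):

    #!/usr/bin/perl -w
    # Sketch: split a flat post-layout SDF into per-unit fragments by
    # the first level of hierarchy in each INSTANCE name.
    use strict;

    my (%fh, $unit, @cell);
    while (<>) {
        push @cell, $_ if /^\s*\(CELL/ .. /^\s*\)\s*$/;  # crude capture
        $unit = $1 if /\(INSTANCE\s+([^\/\s)]+)/;
        if (/^\s*\)\s*$/ and @cell) {                    # block done
            if (defined $unit) {
                open $fh{$unit}, '>>', "$unit.sdf.frag" or die "$unit: $!"
                    unless $fh{$unit};
                print { $fh{$unit} } @cell;
            }
            @cell = ();
            undef $unit;
        }
    }

Each unit owner then back-annotates his own fragment against his own
hierarchical netlist.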
> - big flat designs are clock tree hell
It is more difficult to handle clocks flat, but in the long run, the global
clock network is smaller and hence consumes less power. I have not
encountered any industry tools where you can get away with pushing a button,
and getting results anywhere near where you need them; however, with
creative solutions and manual tweaks, 50 psec clock skews can be achieved
on 400K instance flat designs on a quarter micron process with the "help"
of industry tools and a day or two of effort. Whoever is responsible for
clock treeing needs to understand the architecture of the design...
> - big flat designs are timing closure hell
Huh? All designs are timing closure hell - I think it all boils down to
when you solve your unit to unit timings. I believe design teams focus more
on top level timing issues much earlier in the design flow on hierarchical
P&R designs than they do on flat P&R designs; hence, flat designs can be
timing closure hell if you focus on unit level timing first and save top
level timing for last.
> In practical terms, with engineers here running around tweaking & pumping
> netlists out of Design Compiler every day, some way to compartmentalize
> their work is a MUST. So, we chose the hierarchical approach.
If the same people who do the synthesis also do the P&R, the controversial
flat vs. hierarchical argument is usually a non-issue because the
trade-offs are understood. Most large companies have one cluster of people
do synthesis and a different crew do the P&R.
That's when the decision process gets muddled. (My personal observation is
that the less influence the P&R team has over design methodology, the more
likely it is to be hierarchical, because that is what design engineers
understand most.)
I don't like the real estate penalty you pay with hierarchical designs.
Also, a hierarchical design is only as good as the planning and partitioning
that is done up front - the further you get down the design cycle, the more
difficult it becomes to make any changes to partitions.
To summarize on the flat vs. hierarchy argument, I think methodology and
discipline are the biggest factors that should determine which direction a
design team should go...
- [ Intel Inside ]
( ESNUG 340 Item 8 ) --------------------------------------------- [1/00]
From: "Zain Navabi" <navabi@ece.neu.edu>
Subject: ( ESNUG 337 #8 ) Help! Cliff Did A "Playboy" Review Of My Book!
Dear John,
In this email I would like to take a few minutes to react to the review of
my book "Verilog Digital System Design" that appeared in ESNUG Post 337.
Essentially, Cliff Cummings did a Playboy review of my book; he only looked
at the pictures and didn't read anything.
It appears that Cliff merely skimmed the Verilog code in the first part
of the book. His comments are based on Verilog code samples taken completely
out of context from the text written around them. And, as
you know, John, the design process includes strategy, documentation,
simulation and synthesis. Verilog is used not only for synthesis, but also
for simulation, test bench generation, and hardware modeling. But before
one can use Verilog, he or she must know what simulation is, how simulation
handles timing and concurrency, what to expect from synthesis, how a design
is partitioned, what constructs of the language to use for various parts of
a design, and, above all, must know hardware. This is why I write
text (that Cliff ignored) that explains all these concepts to newbies.
Let me address Cliff's concerns point by point:
> It is also common courtesy to correctly spell the names of companies...
The first printing of the first edition of any new book has a few typos. We
try to avoid them as best we can, and those that are not caught before
the first print are usually fixed in the second printing.
> The first two code examples in the book are not Verilog. What's the
> point?
It is not the first two code examples; it is the first three code examples
that are not Verilog. One is ISPS, one is VHDL and one is AHPL. These are
in Chapter 1 where a general overview of HDLs is given. Readers will see
lots of Verilog code in the rest of the book; it is good for a reader to
see some other old and new languages as well. "What's the point?" The
point is specifically discussed in the text part of the book and would have
become clear if Cliff had read the text of the book in this chapter. A
figure's caption is not the place to say why a figure is being presented!
> Figure 1.5 shows the book's first "Verilog description". Unfortunately
> the example is not Verilog, it is VHDL.
The text in the section where this figure appears presents VHDL as a means
of describing hardware at the structural level. I suggest that Cliff read
the paragraph before that figure.
> Indeed, the book appears to be a VHDL book that has been translated to
> Verilog (poorly). There is a definite VHDL flavor to the book.
I agree that there is a definite hardware flavor to the book. VHDL and
Verilog are both ways of describing hardware, so a correspondence is only
natural. After all, both languages are intended to describe the same
hardware, so why wouldn't one be the translation of the other?
> * The pseudo code description at the top of page 62 uses VHDL
> variable assignments.
Another statement out of context, one that clearly indicates Cliff has not
read the book and has only glanced at what appears to him to be code. As a
matter of fact, that snippet happens to be partial Pascal code. The line
above these statements mentions software languages. I was showing how
things are done in a software language; therefore software language
notations are used.
> * The state machine code on page 55 is even a poor coding style
> for VHDL.
That happened to be an example for continuous assignments with right hand
side conditions. A case statement construct would not be a good example
for showing syntax and semantics of the condition statement construct.
Cliff's comment here does not consider why the code is being presented and
where in the book the code appears.
> * Figure 10.16 (page 298) VHDL-like verbose assignments
>
> ac <= 8'b00000000;
> mar <= 12'b000000000000;
>
> The more Verilog-like way to make these assignments is:
>
> ac <= 8'h0;
> mar <= 12'h0;
I am being very explicit. Instead of using the implicit zero padding, all
bits are specified. Remember, this is written for students new to HDLs.
This format clearly shows how to assign any bit pattern to the left
hand side. An "ac <= 8'h0;" and "mar <= 12'h0;" wouldn't show this to a
new Verilog student.
> The first actual Verilog example of the book shows up on page 34. Far
> too many silly diagrams and pseudo-code examples occupy the first 33
> pages of the book.
Diagrams are there to show a top-down design process; they only look "silly"
if one does not understand why they are being presented. Actually reading
the text helps! The title of this chapter is "Design Methodology Based
on Verilog"; it is not meant to be a Verilog tutorial. Design
methodology is the focus here.
> The use of nonblocking assignments in the book is inconsistent and does
> not follow common recommendations.
Nonblocking statements are used in examples only after the concept of
nonblocking has been discussed. All examples use correct semantics of the
language. I suggest that Cliff write a paper about a new language and
present his "common recommendations" without explaining the basic syntax
of his new language. Only then might he understand the importance of
teaching basic syntax first and giving recommendations afterward.
> Many of the early Verilog examples emphasize Boolean equations...
That is because the early part of the book is where Boolean expressions
and the structure of components are being presented. They are part of
the language.
> The reader is developing the wrong habits early in the book.
Easier material is presented first. Readers are not expected to stop at
the first part, as Cliff did, but to go beyond it. This is not a
reference book; it is a book with flow and a specific mission. Examples
are built as constructs of the language are presented. Examples in the
later parts of the book are built upon constructs presented in the earlier
parts of the book.
> The UDP flip-flop in figure 5.15 on page 97 is wrong. Table entries...
That simple example was to serve the purpose of showing UDP syntax and
semantics of sequential tables. The necessary declarations were shown in
this figure, a partial syntax tree is shown in the next figure, and edge
specification formats are shown in the figure after that.
> A full Boolean description for a bit comparator in Figure 5.24. Why?
> If one is trying to design everything using gate primitives, this might
> be a good reference book.
Once again, it is important to read the book and see for yourself why an
example is being presented. Without reading the book, this information is
hard to extract. Figure 5.24 is not a Boolean representation, it is a
structural representation based on language primitives. The section that
this example appears in is "5.2 Wiring of Primitives". This series of
examples is used to illustrate the use of primitives, how wired logic is
formed, ways that timing parameters are specified, and ways that primitive
strengths affect outputs. I can only use examples that are based on
primitives to illustrate issues related to language primitives.
> In Figure 6.8 (page 137), using a 'include to include a task is silly.
You include modular code so that the code does not have to be repeated in
every description where it is used. This way if you decide to modify
the code you only have to do it in one place. This is a simple coding issue
taught in many beginning programming courses.
> A casez instruction would have greatly simplified the example in figure
> 8.9 on page 188.
This section is on continuous assignment and condition statements. Figure
8.4 shows the syntax tree of continuous assignments and Figure 8.7 shows
that of a condition expression. This figure illustrates the use of right
hand side condition expression is a continuous assignment. Why would I use
a casez statement if this is not what is being discussed? Treatment of case
statements including their syntax details appears in Chapter 9.
> The treatment of state machines in section 8.3 is pathetic. Ignore the
> coding style shown in this section. The state machine code described
> at the top of page 247 and shown in Figures 9.32 and 9.34 includes an
> initial block to handle the reset. It's both bad & non-synthesizable.
> Initial blocks are not synthesizable and should generally only be used
> inside of a testbench.
The three categories of a synthesis subset are "Supported", "Not Supported",
and "Ignored". Synthesis tools ignore initial blocks, and it is not wrong
to use these statements in synthesizable code. Note that these examples are
not presented as synthesizable descriptions. Initial blocks are useful for
pre-synthesis simulation of state machines that do not have any hardware
resetting mechanisms. This section presents a basic state machine and adds
to it in the later sections. Without a resetting mechanism, the only way
to simulate is to use an initial block. With a resetting mechanism, it is
still a good practice to use initial blocks. Page 252 shows a state machine
with an asynchronous reset.
> RTL coding styles are almost absent from this book.
Chapter 8 lays out RTL style of coding and Chapter 9 shows more ways of
describing RT level components. Chapter 10 shows a complete CPU designed
at the RT level.
I think it's very important to understand that this is not a cookbook nor
was it ever intended to be used like one. If you do read it that way,
I'm sure you'll get into all sorts of trouble like my poor reviewer
Cliff Cummings did here. I don't mind my book being judged; I just
hope that whoever judges it actually reads it.
- Zain Navabi
Northeastern University Boston, MA
( ESNUG 340 Item 9 ) --------------------------------------------- [1/00]
Subject: ( ESNUG 339 #10 339 #11 ) Nine Mostly Positive VERA User Letters
> VERA's biggest rival, a language called "e" from an in-your-face Israeli
> start-up named "Verisity", is eating Synopsys' lunch in that market.
> According to Dataquest, in 1997, Verisity had 84% and VERA had 16% of that
> $7.5 million market. In 1998, after the world wide Synopsys marketing
> army had ownership of VERA, VERA grew to 19 percent of that now $13.5
> million market. It's been 18 months now. We won't have the 1999
> Dataquest numbers for another 9 months, but as the ESNUG moderator I know
> should have been seeing all sorts of customer e-mails about VERA by now.
From: "Faisal Haque" <faisal@growthnetworks.com>
John,
Your EE Times Vera Fiasco article is a classic example of why engineers
should leave market analysis to market analysts. I disagree with your
statement about Vera. We have been using Vera at GNI, and all but one
networking company that I know of is using Vera.
I have been in the verification business for a few years. In fact at Bay
Networks/Nortel we originally chose to go with Verisity because it was the
only VHDL tool. I have been told that Bay and Nortel have decided to use
Vera instead of Verisity. This does not take anything away from Verisity.
They have a good product and it should do quite well. In fact their
biggest hindrance is their sales/marketing management IMHO (and I am
speaking from experience, rather than from conjecture.)
- Faisal Haque
Growth Networks, Inc. Mountain View, CA
---- ---- ---- ---- ---- ---- ----
From: "Larry Melling" <larry@ikos.com>
John,
I was very surprised by your article about Vera and its failing market
position. Our experience at IKOS has been quite the opposite of what you are
seeing. In fact we are currently working with Synopsys to integrate Vera
with both our accelerators and emulators, because of customer demand from
leading electronics companies like Infineon and Compaq to name two.
- Larry Melling
Vice President of Business Development
IKOS Systems Cupertino, CA
---- ---- ---- ---- ---- ---- ----
From: "Jay Salinger" <salinger@ironbridgenetworks.com>
Hey John,
I haven't seen you since the Mandela thing in Boston about 10 years ago. I
was forwarded a copy of your article about VERA so I thought I'd reply.
I work at IronBridge Networks and we use VERA. We like it. It was helpful
in allowing us to put together a decent DV platform quickly. It has its
limitations, but if you don't have a good DV to Designer ratio and need
something in a hurry, it sure beats rolling your own. Its combination of
Verilog constructs (like always @) and C++ (classes, objects, etc.) gives
you Verilog flexibility with C++ power.
- Jay Salinger
IronBridge Networks
---- ---- ---- ---- ---- ---- ----
> "Brilliant" was the word that Dataquest analyst Gary Smith used. It was 18
> months ago and he was reacting to the news that Synopsys had just bought
> System Science for $26 million.
From: [ A Little Bird ]
John, keep my name out of anything you may wish to print.
The price for VERA was over $50 million, not the $26 million EE Times
reported. I actually heard a figure of $54 million. What a deal!
This is from the Synopsys 10K.
In fiscal 1998, the Company acquired SSI and two small privately held
companies in the EDA industry. The acquisitions were accounted for as
purchases with the Company exchanging a combination of cash of $26.0 million
and notes of $12.0 million. In addition, the Company reserved approximately
318,000 shares of its common stock for issuance under SSI's stock option
plan, which the Company assumed in the acquisition. The total purchase
price of $51.3 million was allocated to the acquired assets and liabilities
based on their estimated fair values as of the date of the acquisition.
Approximately $33.1 million was allocated to in-process research and
development and other costs.
2,000 (or more) seats of Vera are at Sun. Sun started the VERA development,
then jettisoned the technology; Daniel Chapiro (and the other guy) got it
and ran (stumbled) with it. Other non-Synopsys (Cadence, Mentor) field
contacts I know say they run into Verisity in sales engagements on a
frequent basis, but not VERA. My take: Synopsys bought the cheaper company,
Vera, not the better tool, Verisity.
- [ A Little Bird ]
---- ---- ---- ---- ---- ---- ----
> In the 1999 Synopsys Customer Education schedule, there were 45 VHDL and
> 53 Verilog oriented classes pre-scheduled for 1999. Although it does
> mention a VERA class, zero VERA classes were pre-scheduled in 1999. Huh?
From: "Yatin Trivedi" <trivedi@seva.com>
Hi John,
You should look at the Synopsys 2000 Customer Education schedule. It has
40 Vera classes scheduled. Concerning the lack of Vera training you saw
in the 1999 Synopsys Customer Education schedule, since April 1999, I have
conducted 9 Vera training classes at the request of the Synopsys Educational
Division. I have also declined 4 classes because I do training only between
projects when I have some time on hand. Also, I am NOT one of the three
primary instructors employed by Synopsys. As far as I can tell, I get
requests only after the three primary instructors are fully booked. Even if
each instructor does only 1 training a month, Synopsys must have done,
conservatively speaking, in excess of 30 trainings in 1999.
Considering my classes have at least 15 students, I have had interactions
with at least 125 engineers. Even if half were attending for evaluation
purposes, I must have taught more than 50 engineers from current customers.
I don't get the information about who is a paid attendee and who is a comp, so
I can't possibly tell you that information. However, I can tell you that
when I see 3 or 4 engineers from the same company, it is beyond reasonable
doubt for me that their company is a customer. Of course, their questions
indicate that a few more of their colleagues have already started using
Vera in-house and that they have read those colleagues' code. From my
classes, I can tell you that I have seen at least 40 different companies,
15 of which fall in more-than-3-students category. Of my 9 classes, I did
4 at company sites.
One other quantitative point - as part of Seva, we were asked 4 times in the
first 6 months of 1999 by Vera customers for additional resources. We could
not provide resources in 3 cases because of prior commitments. In the
latter half of 1999, as part of Intrinsix, we did 3 projects that involved
Vera and had to walk away from 2 more. That's just in our Silicon Valley
Design Center. I don't keep track of the other 17 Design Centers of
Intrinsix around the country.
- Yatin Trivedi
Seva / Intrinsix Fremont, CA
---- ---- ---- ---- ---- ---- ----
From: "Chandresh Patel" <cpatel@caesium.com>
Dear John,
You wrote: "And when I read in that press release that Synopsys was claiming
that they had 5,000 VERA users, my bullshit detectors went off."
If this is 5,000 Vera *licenses*, it would probably be OK. There is a
difference between claiming a Vera user and a Vera license. My colleague and
I here at CAESIUM have used Vera on a previous project; we are among 4
verification engineers. For regression purposes, the Vera license-to-user
ratio is not 1:1. It can easily range from 2:1 to 4:1 licenses per user.
Currently, my colleague is again working on another project where Vera is
being used.
More and more, C++ is becoming the choice for system level simulation
on some of the projects we have recently worked on.
- Chandresh Patel
Caesium, Inc. Santa Clara, CA
---- ---- ---- ---- ---- ---- ----
From: Jon Stahl <jstahl@avici.com>
John,
It was interesting to see your latest article on Vera. We use it, and the
compelling reason was an ~5x price difference with respect to Verisity
(X 40 seats) -- although a secondary reason was our head of verification
came over from Sun (a big Vera house). I have also wondered at times why
there wasn't more discourse on ESNUG concerning Vera.
- Jon Stahl
Avici Systems N. Billerica, MA
Editor's Note: For the record, Compaq, MIPS, and ATI also wrote similar
letters supporting Vera. Also, for the record, I'm NOT disputing the
usefulness of Vera nor the fact that there are a number of companies
curious about it -- I'm disputing that there are *5,000* engineers
*using* Vera as Synopsys claimed in its Dec. 23rd press release. With
the www.deja.com data from ESNUG 339 #11 establishing a 9.4 percent to
27.1 percent Letter Rate for EDA tools, plus knowing there were 32 Vera
letters in 1999, I calculate there are from 118 to 340 Vera *users*.
( If .094X = 32, then X = 340, etc.) On Monday, I searched the large
jobs database on www.monster.com and found:
                     Number of     Min. Number
     EDA Keyword    Jobs Offered    of Users      Job Rate
    --------------  ------------   -----------   -------------
     "Verilog"           520          19,207       .02707  OK...
     "VHDL"              477          24,374       .01957  OK...
     "Synopsys"          247          16,000       .01543  OK...
     "VERA"               12          "5,000"      .0024   Huh???
Job Rate = ( # of Jobs Offered / # of Users )
Again, Vera's Job Rate of .0024 is an order of magnitude down for a
tool that supposedly has 5,000 *users*. Using the high Job Rate of
.02707 and the low Job Rate of .01543 for known EDA tools and doing the
algebra ( If .01543X = 12, then X = 778, etc.), I calculate there are
from 443 to 778 Vera users estimated by this technique.
Using Yatin's letter (i.e. he reports 3 full time Vera instructors) and
generously assuming each instructor teaches a new class of 15 users every
3 weeks plus the 125 Yatin taught, I get 3*18*15 + 125 = 935 Vera *users*
trained -- which is in the same order of magnitude as the 118, 340, 443,
and 778 users estimated by my prior methods. So, average the whole mess
and you get approximately 523 engineers *using* Vera -- not the 5,000
*users* Synopsys claimed in its Dec. 23rd 1999 press release.
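For anyone who wants to audit that arithmetic, here it is as a few lines
of Perl:

    #!/usr/bin/perl -w
    # The five user estimates above, then their average.
    my @est = ( 32/.271,   32/.094,      # letter rates:  118, 340
                12/.02707, 12/.01543,    # job rates:     443, 778
                3*18*15 + 125 );         # training:      935
    my $avg = 0;
    $avg += $_ for @est;
    printf "%.0f Vera users\n", $avg / @est;    # prints 523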
Anyway, I'd like to move beyond this issue and encourage more detailed
*technical* user letters about Vera and/or Specman -- you know, bugs,
tips, gotchas, workarounds, scripts, etc...
- John Cooley
the ESNUG guy