In ESNUG 297, John Cooley wrote:

 > On a more personal note, my whirlwind de-Bachelorization_Clean-Up_Of_My_
 > Living_Space_Because_My_New_Girlfriend_Is_Coming_Over_For_The_First_Time
 > went splendidly.  I'd like to thank Tim Davis for his tip:
 >
 >     From: Tim Davis <timdavis@tdcon.com>
 >
 >     John,
 >
 >     I understand that Cadence Spectrum services has a special branch to
 >     handle the "cleanup" problems you are experiencing.  They will come
 >     in and tell you exactly what needs to be cleaned up, draw up a plan,
 >     statement of work, etc.   All that for only $300/hr.  (You of course
 >     have to do the actual cleanup work yourself.  Their simulation of
 >     the cleanup process guarantees that your girlfriend will be
 >     completely happy with the result however.)
 >
 >     Tim Davis
 >     Timothy Davis Consulting                          Broomfield, CO


  From: "Tommy Hunter" <tommy.hunter@lmco.com>

  John,

  Tim Davis omitted a significant feature of the Cadence cleanup guarantee.
  The guarantee is that if the girl isn't happy, Cadence will dismiss your
  local support people (with no guarantee they'll be replaced).

    - Tommy Hunter
      Manager, Computer Architecture
      Lockheed Martin Vought Systems                       Dallas, TX


( ESNUG 299 Item 1 ) ---------------------------------------------- [9/98]

Subject: ( ESNUG 297 #5 ) PowerMill "spFactor" & L-U Matrix Factorization

> spFactor is an L-U matrix factorization function in a sparse matrix
> package -- evidently PowerMill uses "sparse" from Berkeley.  Looking at
> the version of "sparse" that I have, error=3 corresponds to a singular
> matrix -- but the PowerMill authors could have changed the codes.
>
> So, do you get any topology error messages (loops of inductors/voltage
> sources, floating nodes) if you run the netlist in HSPICE?  This would
> be an easy way to generate a singular matrix.
>
> PowerMill should give a bit more diagnostic help here... 
>
>   - Steve Hamm
>     Motorola                                    Austin, Texas


From: "Anil Gundurao" <akg@cypress.com>

John,

We had similar problems with TimeMill 5.1 when we had inductors in our
circuit.  We got around the error by using the following config command (as
advised by Synopsys/EPIC): set_sim_imna.  (It applies Modified Nodal
Analysis with ideal voltage sources & inductors.)

  - Anil Gundurao
    Cypress


( ESNUG 299 Item 2 ) ---------------------------------------------- [9/98]

From: [ "FedEx, They Ain't." ]
Subject: Crappy Chip Express Documentation Cost Us An Additional 2 Days Spin

John,

My project is just like so many other projects you probably read about in
your ESNUG incoming e-mail box.  Like everyone else, we're under a very
tight deadline to get it completed.  Missed weeks (or even days) mean missed
cash from our bottom line.  That's why we chose Chip Express.  Turns out,
Chip Express is, IMHO, composed of rank amateurs who completely
mis-documented their vector format.  Now we need to spin another 2 days'
worth of effort to fix our test bench and re-generate the vectors in a
completely different mode than what they originally told us.  Arrgh.
You probably don't want to post this with my name on it, but I figured you'd
be interested.  I will make damn sure that my company never uses them again.
Way too many problems.

  - [ "FedEx, They Ain't." ]


( ESNUG 299 Item 3 ) ---------------------------------------------- [9/98]

From: Richard Schwarz <aps@associatedpro.com>
Subject: A Good EDN article About FPGA Synthesis

Brian Dipert just did a good article featuring the MINC, Accolade & Synopsys
tools (Foundation uses Synopsys).  The article is available online at
http://www.ednmag.com.  It shows what a great buy both of these tools are,
especially the Accolade tool in its MULTIVENDOR version.  Also, when you
include the simulator with the PeakVHDL suite, you have to conclude
that the Peak suite is the best overall multivendor buy.

  - Richard Schwarz, President
    Associated Professional Systems Inc.                  Abingdon, MD


( ESNUG 299 Item 4 ) ---------------------------------------------- [9/98]

Subject: Going From Synopsys Synthesis To Cadence Layout

> How can I get the synthesized code from Synopsys to Cadence to generate
> the layout ?  I've read about the EDIF format, but it is incompatible.
>
>   - Marek Ponca
>     Technische Universitaet Ilmenau


From: Peter Sandberg <peters@cadence.com>

Hi Marek,

There are several alternatives available:

  1. Have Design Compiler generate Verilog and import the Verilog directly
     into Silicon Ensemble to do P&R.  (A minimal dc_shell sketch of the
     netlist-writing step follows below.)

  2. Have DC generate Verilog, import it into Physical Design Planner, do
     floorplanning, timing analysis, congestion analysis, WLM model
     generation, etc., and then run SE from PDP.  This will also give you
     the option to use IC Craftsman for chip assembly from PDP.

  3. Have DC generate Verilog, import the netlist into DFII using VerilogIn,
     and then do floorplanning through Preview and run SE for P&R.

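For option 1, the Design Compiler side is just a netlist write.  Something
along these lines usually does it (the design and file names below are only
placeholders):

     /* write a gate-level Verilog netlist that Silicon Ensemble can read */
     read -format db "my_block.db"
     current_design = my_block

     /* make instance and net names Verilog-legal, then write the netlist */
     change_names -rules verilog -hierarchy
     write -format verilog -hierarchy -output "my_block_gates.v"
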
  - Peter Sandberg
    Cadence


         ----    ----    ----    ----    ----    ----   ----

From: "Matthias Brucke" <Matthias.Brucke@Informatik.Uni-Oldenburg.DE>

We used to use EDIF and now use Verilog (which works).  Perhaps take a look
at "http://tech-www.informatik.uni-hamburg.de/Dokumentation/DokuHomePage.html#Cadence"
-- that should be OK for you, since these documents are in German.

  - Matthias Brucke
    Universitaet Oldenburg


( ESNUG 299 Item 5 ) ---------------------------------------------- [9/98]

From: [ Stuck Up Ganges Creek Without A Paddle ]
Subject: Design Compiler (98.08 - 3.4b) Double "AND" Gates W/ Resets Problem

Hello John,

Please keep me anonymous.

We are seeing that Design Compiler adds double "AND" gates when it is not
necessary.  The generated gate-level netlist has two AND2 gates at the end
of the path, and both AND2s have reset as one of their inputs.


   +----+ sel_d1
   |FLOP|-----------------+
   +----+                 |
                          |
   +----+ vishnu_d1[0]  |\|
   |FLOP|---------------| \
   +----+               |M |      +----+       
                        |U |------|    |     +----+
   +----+ vishnu_d2[0]  |X |      |AND2|-----|    |     +----+
   |FLOP|---------------| /    +--|    |     |AND2|-----|FLOP|
   +----+               |/     |  +----+   +-|    |     +----+
                               |           | +----+
   +----+ soft_reset_10        |           |
   |FLOP|----------------------+-----------+
   +----+    


Here's our source code that creates this logic:

  module krishna ( shiva_ptr, sel, vishnu, reset, clk );

  output [5:0] shiva_ptr;
  input        sel;
  input [5:0]  vishnu;
  input        reset;
  input        clk;

  reg          soft_reset_10;
  always @(posedge clk) soft_reset_10 <= reset;

  // synopsys sync_set_reset "soft_reset_10"

  reg [5:0]    vishnu_d1, vishnu_d2;
  reg [5:0]    shiva_ptr;
  reg          sel_d1;

  always @(posedge clk) begin

    if (soft_reset_10) begin
        sel_d1 <= 0;
    end else begin
        sel_d1 <= sel;
    end

    vishnu_d1[5:0] <= vishnu[5:0];
    vishnu_d2[5:0] <= vishnu[5:0];
  end

  always @(posedge clk) begin

    if (soft_reset_10) 
       shiva_ptr[5:0] <= 0;
    else if (sel_d1) 
       shiva_ptr[5:0] <= vishnu_d1[5:0];
    else 
       shiva_ptr[5:0] <= vishnu_d2[5:0];
  end

  endmodule


The apparent cause is using sync_set_reset on a synchronous-reset flop with
an enable, where the enable* comes from a flip-flop that is itself reset by
the same reset signal.  (*- experimentation has shown that having a reset
on any of vishnu_d1, vishnu_d2, OR sel_d1 causes this same problem.)

We have restructured the RTL of the above example several times (as a CASE
statement, as if-else-if-else, and with the "?" operator); all results were
identical.

While the circuit above is functionally correct, it is of course a
sub-optimal result.  The same result was obtained using DC 98.08, 98.02,
and 3.4b.

  - [ Stuck Up Ganges Creek Without A Paddle ]


( ESNUG 299 Item 6 ) ---------------------------------------------- [9/98]

Subject: ( ESNUG 298 #1 )  The Synopsys/Mentor "Reuse Methodology Manual"

> Overall Conclusion: Excellent book, wrong title.  A more appropriate title
> would have been "Best-In Class ASIC Design Practices". 
>
>   - Janick Bergeron
>     Qualis Design Corporation


From: Yatin Trivedi <trivedi@seva.com>

John,

First the good news - 

The book is actually readable.  It is written for engineers, not English
majors.  Each chapter starts with a good overview of the topic at hand and
provides a good "glossary"-type overview of terms.

There are a lot of flow diagrams in the book.  In most places, the diagrams
make reading and understanding the surrounding text a lot easier.  An
exception is Figure 2-3, the system-on-a-chip design flow, which could use
at least another page's worth of explanation: how those vertical partitions
interact with the horizontal ones, and which problems possibly get addressed
(causing the inward spiral) at those points.

If you are willing to forget words like "reuse", "IP", etc. and just
remember ASIC, HDL-based design methodology, functional verification,
structured design practices, etc., you will get more out of the book. 
Otherwise, you will spend more time looking for the relevant topics and 
getting to those few pages than actually reading the text.

In other words, if you have taken a course such as "Intro to VLSI Design"
or "CMOS Design 101", read this book to get the "Big Picture".  It's a good
book to review the night before an interview, especially if you are
interviewing with a large company.  Chances are good that the design,
synthesis, and verification tasks there are carried out very close to what
the book says, so your interviewer is likely to find you "ready to jump in,
feet first".

If you can recall the Methodology Notes that came out in the early '90s,
most of the guidelines were already known to most readers.  If you have
been designing with DC for some time, you have read these guidelines, used
them yourself, seen them used, recommended them, and felt the unbearable
pain when you or someone on your team didn't follow them properly.  Some of
us also had/have the privilege of teaching them, discussing them, and
taking sides with (or against) them.

If I were a Synopsys salesperson, I would give 100 free copies of this book
to each engineer so that his/her bookshelf would have no room left for any
other books containing a design flow that is feasible with other
(superior??) commercial tools.

Ah, so I must have started drifting towards the criticism ....

Let's see, how many of Mentor's IP acquisitions (excuse me, Inventra) have
rewritten their models (or plan to) based on these recommended guidelines?
Practicing what you preach may be a good start.

I don't think there was any mention of PLI in the book (OK, maybe a passing
reference), yet the bulk of Logic Modeling components are PLI-based (for
Verilog users).  Should we count them out as "Unsafe Sex"?

I would also be curious to know how many of the leading IP (soft and/or
hard) vendors actually carry out the guidelines and methods suggested or
recommended in the book.  I (and others at Seva) have had the (sometimes
unpleasant) pleasure of being at the receiving end of the IP spectrum:
integrating and verifying.  The book, with fewer than 10 pages devoted to
this end, certainly falls short.  After using 5 or 6 different IPs from
different vendors on different projects, I can't even tell you that I know
the golden rules.  All I can tell you is that the woodpecker proverb means
a lot to me ("If we built society the way we build software, the first
woodpecker to come along would destroy the whole civilization.")

Yes, the macros work standalone and conceptually we the buyers believe that
we can integrate and verify the whole system and be happy ever after.  But
Toto, we are not in Kansas anymore.  Integrating the macros into our
systems takes a lot more than connecting the ports and passing/overriding
parameters on an instance.  All sorts of issues ranging from initialization and
register/signal accessibility to testbenches and debug process become severe
bottlenecks.  Integration of multiple IP blocks, possibly not from the same
vendor, is no trivial matter.  Sometimes, you have to run three different
license daemons just so that you can integrate these models ... who needs
lawyers?

OK, so I digress into a soapbox talk.

Most of Chapter 5, RTL Coding Guidelines, can be found in the HDL Compiler
manual or other HDL books.  The Basic Coding Practices section does a good
job, however.  We have written custom "checker" tools for these sorts of
things, so seeing a compilation in one place makes me very happy.  In
5.2.11, port maps and generic maps, the book recommends explicit mapping
for ports and generics (parameters for Verilog).  The Verilog part of
example 5-7, unfortunately, must use unnamed parameter passing (for the
purists, module instance parameter value assignment) because HDL Compiler
does not accept the named association (defparam) construct.  The example
shows port connection by name and silently skips the parameter association.
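
To make the point concrete, here is a made-up instance (not example 5-7
itself; the module and parameter names are invented) in 1364-1995 Verilog:

  // Ports are associated by name, but the parameter value can only be
  // passed by position, since HDL Compiler rejects the named (defparam)
  // form:
  //
  //   defparam u_fifo.WIDTH = 8;    // not accepted by HDL Compiler
  //
  fifo #(8) u_fifo                   // unnamed parameter value assignment
    ( .clk  (clk),
      .din  (data_in),
      .dout (data_out) );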

Section 5.5.5, blocking and non-blocking assignment, missed one of the
important guidelines: the two types of assignments should not be mixed when
assigning to the same register (i.e. a = b + 1; and a <= c - 1;).
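
A minimal, made-up illustration of the mix the guideline should warn about:

  // Both statements write "a": the blocking write happens immediately, the
  // non-blocking write at the end of the time step, so the non-blocking
  // value silently wins -- and synthesis tools typically reject the mix.
  always @(posedge clk) begin
    a = b + 1;     // blocking
    a <= c - 1;    // non-blocking
  end

  // Preferred: one style per register (non-blocking for sequential logic).
  always @(posedge clk)
    a <= c - 1;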

Later on, section 5.5.8, coding state machines, has a guideline about
creating an enumerated type (in VHDL) for the state vector, and using
`define in Verilog.  Example 5-36 then turns around and uses parameter
[1:0] ...  Although the example shows the *real* preferred method, the use
of a bit range in a parameter declaration is illegal syntax (see IEEE
1364-1995, section 3.10, page 25).
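
In other words (the state names below are invented, not those of example
5-36):

  // Not legal 1364-1995 syntax, though many tools accept it:
  //
  //   parameter [1:0] IDLE = 2'b00, RUN = 2'b01;
  //
  // Legal alternatives:
  parameter IDLE = 2'b00, RUN = 2'b01;    // plain parameter declaration

  `define S_IDLE 2'b00                    // or `define, as the guideline
  `define S_RUN  2'b01                    // itself recommends for Verilog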

The testbench chapter (Macro verification) is interesting, but weak.  More
talk than real substance. Chapters 10-13 are nothing more than a cursory
overview, a rather disappointing search for the real substance.

A question for Kluwer Publishers: did you get this book reviewed "by peers"
who were not employed by Synopsys and Mentor?  And did no one object to the
blatant marketing/sales push for the tools?  Or maybe the standards and
considerations have changed and I missed my calling ...  There *IS* a
difference between a tool user's guide and a technical reference.

The authors state on page 1 of chapter 1 that they expect to update this
document on a regular basis.  I sincerely hope that the book goes beyond a
marketing push for tools and really sticks to methodology, guidelines,
and practices.  Tools are an essential part of the solution, but several
good tools exist in the marketplace and several new ones will appear.  Due
"respect" should be given to several tools, including Mentor's system-level
and co-sim tools.
                  
  - Yatin Trivedi
    SEVA Technologies, Inc.                             Fremont, CA

         ----    ----    ----    ----    ----    ----   ----

From: [ Michael Keating of Synopsys & Author of the "RMM" Manual ]

John - in response to Janick's review:

Rant and Rave Section: Companies don't write books, individuals do.  It is a
book written by an engineer (me) based on a bunch of internal manuals and
papers (all written by ASIC designers, not EDA/CAD guys).  The original
intent was for internal consumption - a guide for our engineers, and
contractors working for us, to make sure the designs we got were reusable.
We're the group that owns DW Foundation (and drove the bug rate down to
zero), developed the 8051 (we think a showcase for ease-of-integration) and
the new PCI (we think a showcase of how to design and deliver a highly
parameterized core).  We are also the group that did the synthesizable ARM
7TDMI. The book really captures the technical side of what we had to do
when we (principally myself and Warren Savage) came in and had to clean up
the mess of the original PCI.  Everything we talk about in the book is based
on real problems we have seen in real IP, both Synopsys-developed and third
party IP. 

Those of us who wrote and reviewed this book do design for a living - just
like Janick.  Our over-riding goal was to create a methodology that works,
not to hype Synopsys or impress academics.


Now for my calmer, more reasoned response:

John - Janick clearly read the book carefully and critically, and with
great passion for the subject.  No author can ask for more.  A lot of his
points are valid, and no doubt many of his suggestions will find their way
into the next edition of the book.  But it's probably worthwhile to address
some of the points he raises, particularly where he may have missed the
intent of what I was trying to do.

Janick's concerns fall into two categories: sins of commission (errors in
the book) and sins of omission.  Of these, his greatest concern appears to
be the sins of omission.  So let me address these first. 

The biggest problem a technical author faces is that engineers don't read.
I was taught this in a tech writing class many years ago and certainly
re-learned it with the RMM.  The first version (the Synopsys Design Reuse
Style Guide), which we put together early in 1997, was twice the size of
the RMM,
with a great deal more material, especially in design for test and
chip-level implementation.  The trouble was, I couldn't get anyone to read
it!  Five hundred pages was just too thick; people weren't even cracking it
and reading the first page. 

So the next year was spent taking much of the material out of the book, and
refining the remaining sections to hit what we thought were the most
critical issues.

The original style guide was, as Janick observes of the RMM, focused on IP
creation much more than IP integration.  This is based on our experience
creating and integrating IP.  If you try to integrate crappy blocks into
your chip design, no methodology is going to solve your problem.  On the
other hand, good IP can be integrated effectively into chip designs using
many different flows and methodologies.  Our conclusion is a basic rule of
reuse:

   The reuse battle is won or lost based on the quality of the IP.

That's why we focus on IP creation almost to the exclusion of integration.
Of the dozens of 8051 customers, only one had any problem at all
integrating the 8051 into their design (and that one, I hate to say, was
staffed by really, really junior engineers).  On the other hand, I have
recently been working with a customer trying to integrate a processor where
they were given the transistor netlist and told - go ahead, good luck!  They
ended up doing hardware-software co-simulation using SPICE, for god's sake. 

The real problems that real chip teams are facing using the IP are
incredibly basic.  And for that reason, we felt we needed to start by
establishing a baseline of good engineering practices that are the real key
to success in reuse.  That means defining the deliverables for IP (the
critical point at which the IP creator and IP integrator meet), and showing
how to create deliverables that can be used.  Janick seems to feel we have,
by and large, met this goal, and this is gratifying feedback.

Janick states that the book sins in making all the guidelines very
DC-centric.  Here he has perhaps missed some points.  First of all, the basic
design guidelines (fully synchronous design, etc) are just good design
practices -- I was doing this long before I even heard of DC.  He may be
talking about the coding guidelines.  But (my second point) synthesis is a
cornerstone of IP.  Even hard IP must have a synthesizable RTL version as a
reference implementation description.  Direct hard IP porting is just too
slow to get critical blocks migrated to new technologies fast enough.
(Thirdly), certainly the guidelines, where synthesis-specific, tend to be
couched in DC syntax.  But a couple of my contacts elsewhere in the industry
have commented that the guidelines closely match the requirements for all
synthesis tools.

This might be a good time to discuss Janick's apparent concern about some
of the specific tools mentioned.  Verilint, he claims, is given inadequate
importance.  It's hard for me to argue with his perception; if that's what
he got out of the book, then that's what he got out of the book.  But
certainly the intent was to highlight linting tools, such as Verilint, as a
key component of the flow.  It appears in all the flow diagrams right there
beside synthesis and functional verification.  We tried to reflect in the
book, as in all the presentations I make to customers and conferences, that
linting tools and code coverage tools are critically important, and that
they have made a dramatic contribution to the design process.  To the dismay
of some Synopsys sales guys, after my customer presentations on reuse, the
tools most often discussed are lint and code coverage, perhaps because many
customers are unfamiliar with them.

The intent of the book was to talk about the tools that are actually used
in practice by engineers to solve their problems, not to promote any one
company's tools.  When Janick points out that Chrysalis is not mentioned,
he is absolutely right to call this an oversight.  I actually thought I had
fixed this, but to err is human.  Chrysalis has made a critical
contribution to the industry.  There was always some formal verification
available in DC, but until Formality, Chrysalis was the only practical
formal verification tool, and it saved the bacon for a number of design
teams.  This will be fixed (along with many other shortcomings) in the
second version of the RMM.

Janick states that we put too little emphasis on behavioral models and
testbenches, and again it's hard for me to argue with what's in the eye of
the beholder.  So let me clarify the intent, and let your readers decide if
this is reflected in the book.  Behavioral models and testbenches are
critical (and show up at the top of the flow diagrams) for a large class of
designs, especially those with algorithmic complexity.  For something like
a processor or an MPEG design, simulating at a high abstraction level is the
only way to get the simulation performance you need early in the design
cycle.  Developing a testbench at this level also allows you to verify the
RTL design against the behavioral model.  On the other hand, behavioral
models are not useful for another class of design (like a bus interface)
where the cycle-by-cycle behavior is the critical design factor. 

Again, where he says Verilog simulators are as troublesome for portable
code as VHDL, this has simply not been our experience.  We test our code on
VSS, Vantage, MTI, Leapfrog, XL and VCS. Consistently our greatest problems
are with the VHDL simulators, though we have had some problems with the
Verilog ones.  With Verilog, the key is to stick to fully synchronous
design and follow the blocking/non-blocking rules; then you'll run into few
problems - except for real bugs in the simulators.  Maybe we should have
pointed out that Verilog-XL, with certain optimizations turned on, has
bugs. I ran into this problem years ago; at that time it looked like they
were doing port collapsing wrong.  Some of the problems have been fixed and
others have not.  It just didn't occur to me to talk about coding around
these bugs.  It just seemed like info that would be obsolete (hopefully) by
the time the book got out.  Maybe it is worth revisiting this decision.

As for the specific complaints on blocking/non-blocking and the example in
5.5.6 -- come on Janick, give us a break!  It's a toy example, for God's
sake, meant to show the coding style, not to show exactly how getting the
order wrong can screw up your code.  That is covered in the good VHDL and
Verilog books.  We're really just trying to get people to think about these
issues and re-check their VHDL/Verilog books; we don't want to replace
those books.  (On the other hand, there is a very embarrassing typo in
another example, where we use a blocking assignment inappropriately.  I can
only say Warren and I must have been dozing when we proofread that example.)


In general, as Janick points out areas where we could have devoted more
discussion, he is right.  We will be updating the RMM, hopefully with a new
version out at next DAC.  His input, as well as all the input we get from
readers, will be factored into the updates.  Again, we will have to make
tradeoffs in order to keep the size of the book under control, but
unquestionably the book will be improved by the feedback we have gotten.

Finally, let me address Janick's final remarks:

1) Lack of new material: In our experience, the problems that real creators
of IP have are in not following the basic good design practices described
in the book and in not providing the deliverables their customers need.  The
intent of the book is to address the most common, not the most advanced,
issues in design reuse.  We wanted to establish a baseline of practices so
that all IP is created in a way that allows integrators to build their
chips.  This, we feel, is the most important contribution we can make in an
initial book. Future books and articles will address some more advanced
issues.  The two most important ones are probably how to verify IP and how
to design parameterizable IP. Both of these are, in my opinion, still
research topics with no definitive solutions.

2) Why does reuse fail when the technical problems are overcome: This is a
fascinating question, and one that again is a research issue with no
definitive solution.  I address some of this in an article in an upcoming
issue of Computer Design.  In a nutshell, based on my observations, the
problems come down to this: the incentives within a company often do not
reflect the incentives (i.e., market pressures) the company receives from
the outside world.  Engineers are rewarded for tricky designs instead of
designs that contribute to the bottom line, and managers are rewarded for
short-term goals at the expense of long-term goals.  Solving this problem is
complex; we are working with some customers to try to solve this today. 

But I have to question Janick's assumption that there are some design
groups that have solved all the technical problems in reuse.  There are at
least two unsolved technical problems: the problem of verifying a design to
100% levels of confidence, and the high cost of design for reuse.  There is
a clear need for IP that has no bugs; why reuse code you end up having to
debug?  But there is no way, today, to produce a significant design with
(provably) no bugs.  And the cost of designing for reuse often exceeds 2x
the cost of developing a design for a single use.  This is a serious
liability when engineering teams decide what to make reusable. 

In summary, there are solved problems in design reuse; we have tried to
include the most important ones in the RMM, and we will expand this work
based on feedback from readers like Janick.  There are unsolved technical
and non-technical problems, and we will probably address these in separate
papers as the technology evolves and as we help companies migrate to
reuse-based design. 

Anyway, thanks to Janick for the thoughtful review, and I would certainly
invite him to be one of the reviewers of the next version of the RMM before
it goes to publication.

  - Michael Keating
    Author of the "RMM" Manual
    Synopsys


( ESNUG 299 Item 7 ) ---------------------------------------------- [9/98]

Subject: ( ESNUG 297 #4 ) More On The EDA-Should-Support-Linux Debate

> hello john,
>
> veriwell used to support a verilog simulator for the linux platform.  i
> received the following message from veriwell customer support (indicating
> that veriwell not longer supports linux) ...


From: Elliot Mednick <elliot@wellspring.com>

John,

Since I'm the founder of Wellspring, which sells VeriWell, it looks like I
can't stay out of this much longer.  I can argue both sides of the fence,
as both a (former) Linux vendor and a (yes, former) Linux user.  Here goes.

Taking a user perspective, I share most of the favorable opinions of Linux.
It's decent, cheap, fairly stable, and now supported by a commercial entity
(Red Hat, etc.).  I used to use it regularly, but I soon ran into its big
weakness: lack of specific hardware support.  I have two PCs, one a Dell
notebook and the other a generic clone, each with some piece of hardware
that is not supported -- at least not by Red Hat.  So, I had to give it up.
(FYI, I ran OS/2 for a long time, so I'm not averse to running non-Windows
operating systems.  But now I have to.)

From a vendor perspective, the cruel reality is that Linux users are
inherently cheapskates.  That is, the culture around Linux is that it
should be free (freely available) and that almost all of the tools running
under Linux should also be free/cheap.  That's why Linux is so popular.
(If you disagree with that assertion, consider that before Linux you could
buy "Coherent", a full Unix OS for the PC, for $100.  Linux drove them
away.)  There seems to be a natural resistance to having relatively
expensive tools running under Linux, especially outside of the EDA industry.

During the time Wellspring supported a Linux version of VeriWell, we had
lots of downloads of the free version, but very few sales (which, I
suppose, could mean something else...).  So, sales of the Windows version
were supporting the Linux effort.  We could not maintain this drain.  We
had problems keeping up with the kernel, as well.  So, as altruistic as we
would like to be, we could not continue to sink resources into something
that, albeit popular, was not generating revenue.

Additional ports of software are Really Hard to support.  At one time,
Wellspring supported 6 ports of VeriWell.  For each port, there was the
initial porting effort, and then the ongoing support, release, regression,
etc. issues as well as maintenance of the respective platforms and
operating environments.

Other companies have attempted Linux versions of their products and have
given up.  As Richard Goering pointed out, maybe there are lots of
individual engineers who use Linux, but the CAD buyers aren't interested.
It is a lot of effort to support another OS.  And the critical mass of
products has to be there (simulators, synthesis, etc.).  And, for a short
time, there almost was a critical mass at the lower end (Exemplar,
Wellspring, Fintronic, etc.).  But this is a chicken-and-egg problem,
isn't it?

So, here is what I did: I punted on Linux.  I run Win95 on my notebook.  I
downloaded the GNU-WIN32 package from http://www.cygnus.com, which contains
a bash shell and almost all of the Unix utilities you would ever need, and
they run under Win95/NT.  I run GNU Emacs and Perl 5.004 on my notebook
(XEmacs is also available).  I have the advantage of being able to run the
Windows tools AND run Unix scripts/commands/etc. as well as the
command-line versions of the CAD tools.  X11 is also available.

There.  The best of both worlds.  Of course, someone will point out
something that GNU-WIN32 won't do, but it works for me.

  - Elliot Mednick
    Wellspring Solutions, Inc.                          Salem, NH


( ESNUG 299 Item 8 ) ---------------------------------------------- [9/98]

Subject: (ESNUG 296 #9 297 #10) Cadence's PB-OPT Trounces reoptimize_design

> With regard to Cadence's PB-OPT, I implemented this tool flow (in a former
> life) and used it on very high performance designs.  The results seemed to
> match Cadence's claims and this option enabled me to achieve a one-pass,
> timing-convergent, design flow from RTL->layout.  This additional feature
> allowed me to replace my  previous methodology, which was to iterate
> (sometimes > 15 times) through Design Compiler's reoptimize_design and
> Cell3/Silicon Ensemble's ECO place and route.
>
>   - Brian Arnold
>     Fusion Networks Corp.                       Longmont, CO


From: Chakki Kavoori [chakki@Aureal.com]
To: Brian Arnold <briana@SiTera.com>

Hi Brian

	I read your response in the ESNUG post regarding generating GCF files
from a Synopsys constraints file.  I was just curious whether you used the
SDF constraints file, or was it some other report that Synopsys generated?
(Could you be more specific about the command, too?)

	We're just starting to migrate to the Cadence-recommended Timing Driven
Flow (including PB-OPT).  One of the things I find missing from PB-OPT is
an equivalent of the Synopsys "dont_touch" command -- where I know better
than to resize some of the elements because I have them "spiced".  Did you
face any such issues/problems?

	Were you using Silicon Ensemble (SE) in flat mode or did you use any
hierarchy -- and if you used hierarchy, did you use Design Planner (DP) as
the floorplanner?  I'm interested in hearing about your experiences.

  - Chakki Kavoori
    Sr. Design Engineer,
    Aureal Semiconductor                              Fremont, CA

         ----    ----    ----    ----    ----    ----   ----


 [ Editor's Note: Before Brian answers these questions, I thought I'd
   provide a set of definitions for the Synopsys users to better understand
   the Cadence tools being discussed.

      DP - Design Planner.  DP typically refers to LDP.

      LDP - Logical Design Planner.  Reads in Verilog and produces groups
      and a pin placement that assists QP in doing a better job.  It also
      generates custom wire load models for you.  This package is priced
      in the "arm or leg" realm.

      PDP - Physical Design Planner.  Incorporates the functionality of LDP
      and provides an interface for you into SE (Silicon Ensemble), the CCT
      router (Cooper & Chyan Technology) as well as Cadence's back-end
      analysis tools (I think).  This package is priced in the "arm & leg
      plus your first born child" realm.

      SE - Silicon Ensemble.  It is the replacement for Cell3 and includes
      QP (Quadratic Place) and Wroute (Warp Router).  PBOPT (Placement Based
      Optimization) is an option to SE as is CTGEN (Clock Tree Generation).
      This package is priced in the "arm & leg plus your first born child
      plus the Devil gets your soul" realm.

   Hope this makes the following reply more understandable now.  - John  ]


         ----    ----    ----    ----    ----    ----   ----

From: Brian Arnold <briana@SiTera.com>
To: Chakki Kavoori [chakki@Aureal.com]

Chakki,

The great thing about using the GCF format is that you do not need an SDF
file.  The GCF format accomplishes the same result as the Synopsys
constraint file: GCF defines clocks, input and output delays, input
slopes and output loading, and then the internal timing engine (Pearl in
the case of PBOPT) performs the appropriate calculations between all
timing begin/end points, which is comparable to Design Compiler
calculating timing arcs during a compile.  However, Pearl/PBOPT is also
able to analyze the routing of the wires, get a very good idea of the
load on each wire, and appropriately size/split/move cells on each net.
The default mode for PBOPT is to generate an RSPF file from a global
route; however, I found this not to be as accurate as I needed, so I
modified the PBOPT flow to use an RSPF file from a first-pass
timing-driven P&R run.  That way, the layout data seen by PBOPT was as
realistic as possible.  The entire tool flow from HDL->GDS does become
rather complex when you have to maintain consistency between the HDL,
the Synopsys db files, the schematics, and the artwork layout, but it is
manageable as long as the PBOPT Verilog writer produces the results you
need.

In order to reduce the amount of complexity seen by the users, I wrapped
this tool flow (from HDL->Synthesis->floorplanning->P&R->GDS) into a GUI
that allowed a designer to "easily" navigate their block through all the
steps.  I didn't think of this idea (kudos to Steve Rich), I just
implemented my own version (thanks for the toolkit, Dave Clark).  When you
have a lot of people using a complex tool flow, this is a great path to
take.  The up-front work really pays off in the amount of day-to-day
support you have to provide and in the increase in productivity seen by
the designers.

Getting back to your SDF question, I do not personally like to use SDF
as it has three main problems:

   1) The files take a long time to create/read
   2) The file size is huge for large designs
   3) SDF misses paths and does not provide complete coverage, which leads
      to problems for timing driven placement

I know most of the Synopsys constraint commands have an equivalent in the
GCF realm, but I don't know about dont_touch.  I'm sure it does, but I
don't have my GCF spec handy -- call your local PBOPT AE and ask him for a
GCF spec.  I don't have an example of a GCF file for you or I would send
it.  Since I just changed jobs, I couldn't take any of this with me, bummer.

No, I didn't use DP as the floorplanner; I wrote my own, which helped
designers create a custom floorplan on a block-by-block basis.  My wrapper
around the Silicon Ensemble (SE) flow extracted the floorplan information
and fed it into SE during a floorplanning step.  This allowed a designer
much tighter control over what their block looked like.  With this
capability they could very easily insert things into the floorplan like
pre-placed wires, pre-placed cells, blockages, groups and regions.  The
composed blocks were then connected together at a higher level and 
eventually routed with the full chip.  LDP didn't help me out at all, 
mostly because of the type of designs that I ran through SE.  However, 
for very large designs I can see how LDP would be beneficial as it has 
capabilities to automatically determine pin placements, create groups 
and run through QP, and I was told that PDP is a good interface if you 
continue on and use CCT for your top level route.

The designs I fed into SE were flat, but in general the designs were
very hierarchical.  The process I used to generate DEF flattens the
hierarchy to one level.  However, the blocks that did utilize PBOPT were
flattened in synthesis (which may have alleviated a lot of problems for
me, I don't know).  Therefore, I'm not sure how good a job the PBOPT
Verilog writer does at generating hierarchical Verilog, as I never needed
to test it.

  - Brian Arnold
    SiTera Incorporated                               Longmont, CO


( ESNUG 299 Item 9 ) ---------------------------------------------- [9/98]

From: Gregg Lahti <glahti@sedona.intel.com>
Subject: ( ESNUG 297 #8 ) IPO -- Messy Equivalents When I Wanted Buffering!

> Anyone know how to get Synopsys *not* to change the cell-type during an 
> -in_place optimziation compile (98.02-2)?  Specifically, it will change 
> functional equivalents like the following:
>
>   from  Z = !(A*B!)
>
>     to  Z = (A + B!)  ( with nets to A & B swapped )
>
> (Yes, it's convoluted but they ARE functional equivalents if you think it
> out -- but this really isn't what I had in mind.)  I was intending to
> freeze the netlist & layout and just add buffering or swap out cells with
> higher/lower drive to meet the min/hold design rules.  I had everything
> defaulted except "compile_ok_to_buffer_during_inplace_opt = true".  In the
> case above, DC decided that the funky NAND (for lack of a better term) was
> slower than the other and swapped the cell rather than add a buffer.  Size
> of the cells were identical, but the input net connections got reversed in
> the process and not reported in the change log.
>
> Of course, this blows the LVS checks.
>
> Also, anyone else not really happy with the change log contents?
>
>   - Gregg D. Lahti
>     Intel Corporation


From: Gregg Lahti <glahti@sedona.ch.intel.com>

John,

I figured out the problem once my AE sent a quick blurb on the two variables
that cause cell swapping.  However, this still brings up the change log
inadequacies.  Here's our thread so far...

  - Gregg Lahti
    Intel


My AE at Synopsys said:

  Can you double check the settings of the following vars just before
  you hit "compile -in_place" ?

              compile_ignore_area_during_inplace_opt
              compile_ignore_footprint_during_inplace_opt

  By default, these vars are false and should force all swapped cells to
  match footprint, pin names, pin count, cell area and logic function.
  If they are set false and all the above criteria are met, then the cell
  swap can take place.

  If however, these vars get set to true, it will enable DC to swap
  cells based on function alone (or so the documentation says).  Let me
  know what you find - if at least one of these vars is not set to TRUE
  and the above criteria are not all met, then this sounds like a bug.

  You probably already know - but just in case - There are many variables
  that control the IPO opto within compile and reoptimize_design.  You can
  get a list of these using "list -var links_to_layout" within dc_shell.

  let me know what you find,

         ----    ----    ----    ----    ----    ----   ----

I let him know with the following:

  Thanks for the call.  I checked these variables:

               compile_ignore_area_during_inplace_opt
               compile_ignore_footprint_during_inplace_opt

  Both of these variables were set to true by default when I start up
  dc_shell (and when I load in our environment setup).  I think I found
  the culprit that allowed it -- our dso_synopsys_dc_setup.scr was
  setting them.  I read the man pages on them, and assumed (wrongly) that
  they defaulted to false, but I never checked.

  Our script has been changed to protect us from more evil transgressions.
  ;^)

  However, that doesn't solve the change log generation problem -- dc_shell
  just doesn't provide the needed info when it does the logic function swap
  (i.e. that the input pin nets were swapped).  We could have handled the
  cell swap much more easily if the change log generated by the IPO had
  stated that the input nets were swapped, as well as provided info on the
  buffer tree insertion points.  It seems to me that dc_shell was poorly
  equipped to handle the reporting of the IPO.

    - Gregg 


         ----    ----    ----    ----    ----    ----   ----

And then he returned with:

  Gregg, glad you found the culprit.  You're right, I didn't mention the
  change_list file pblm.  This list will include any new cells or new nets.
  It probably gave no indication of net swap because a new net was not
  created, but nonetheless I can see how this could cause potential pblms
  with netcompare.

  Sounds like a good time for an enhancement request.  I don't suppose you
  can prune a small testcase for this - can you?  Perhaps I can help -
  being that you're pretty busy right now.

  I'd like to touch base on the clock tree insertion issue you raised as
  well.  Let's touch base Monday.

  TGIF

And we're still working on the clock tree issue now.

  - Gregg Lahti
    Intel
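
 [ Editor's Note: for those who want to lock this down in their own setup
   scripts, the variables discussed above boil down to roughly the following
   dc_shell fragment.  It's a minimal sketch, not Gregg's actual script, so
   double-check the defaults in your own release.

      /* keep IPO cell swaps restricted to same-footprint, same-area cells */
      compile_ignore_area_during_inplace_opt      = false
      compile_ignore_footprint_during_inplace_opt = false
      compile_ok_to_buffer_during_inplace_opt     = true

      /* list all the layout-linked variables that affect IPO */
      list -var links_to_layout

      compile -in_place

                                                              - John  ]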


( ESNUG 299 Item 10 ) --------------------------------------------- [9/98]

Subject: ( ESNUG 297 #6 ) Need Datapath Compiler For Fast Multiplier & Adder

> I need a fast 6 by 10 multiplier and an 18 input adder.  Does anyone know a
> data path compiler for a very popular .35u process which can just give me
> layouts for these things?  Synthesis is not fast enough for me even though
> I have tried booth multipliers and wallace tree adders.  I guess I need
> physical level design output.
>
>   - Muzo
>     Kal Consulting


From: Stephen McInerney <stephenm@faraday.ucd.ie>

Hi John,

In reply to Muzo's posting, could he be more specific about speed?  My
initial impressions of Module Compiler are positive, can other people share
experiences?  (It also seems to achieve library independence.)

For IP block selection, you might want to publicise this excellent and
comprehensive site:  http://www.design-reuse.com

  - Stephen McInerney
    University College, Dublin

         ----    ----    ----    ----    ----    ----   ----

From: Michael Solka [michael@sgroup.com]

Hello John,

I saw Muzo's ESNUG post and can help.  My company does contract design work
and specializes in high performance full custom CMOS.  We have done several
execution units which use different size multipliers and adders.  We have a
multiplier which we can license.  We can provide physical layout as well as
full schematics, simulation models, and timing views.  Please see our web
site (www.sgroup.com) and send me some e-mail if you think that we can help
you out.  I can also be reached at 512-329-5295 (x102).

  - Michael Solka
    The Silicon Group, Inc.

         ----    ----    ----    ----    ----    ----   ----

From: [ A Little Bird ]

The "Epoch" tool from Duet/Cascade Design Automation should be able to do
this.  It's an obscure but pretty powerful tool.  Disclaimer: I worked there
two years ago. :-)  You might also want to check out Arcadia's Mustang.

  - [ A Little Bird ]


( ESNUG 299 Item 11 ) --------------------------------------------- [9/98]

Subject: (ESNUG 296 #1 297 #1) VERA vs. Specman: VERA Is A Subset Of Specman

> I've been using Specman for almost four years (since the erlier version).
> Similar to this user, I didn't use the other tool (VERA in my case) and
> feel 'Specman expert' enough to make the comparison you proposed.  However
> I think that this comparison could not be done, since VERA is only a
> _subset_ of Specman.  ...  Specman is able to GENERATE tests using its
> built-in constraints solver.  This is the core of the tool, and the reason
> we use it.
>
> (Theoretically you could use Specman to just generate the tests, then run
> the tests using VERA -- or any other tool -- but, there's probably no
> reason to do it that way.)
>
>   - Boaz Tabachnik
>     National Semiconductor                     Tel-Aviv, Israel


John,

I find it interesting that Boaz started by saying he doesn't know enough
about Vera to do a comparison, and in the same sentence informs us that
Vera is "only a subset" (which is not the case).

Just to set the record straight, Vera *does* have powerful automatic stimulus
generation capabilities.  Let's keep comparisons objective.

  - Daniel Chapiro
    CEO of Systems Science (which is now a division of Synopsys)


( ESNUG 299 Item 12 ) --------------------------------------------- [9/98]

From: Kim Flowers <kimf@translogic.com>
Subject: Problems Exporting From Synopsys dc_shell To EPIC PowerMill

I am using EPIC's PowerMill to estimate power usage for netlists which I
have created through Synopsys dc_shell.  I would like to export the
pre-layout parasitic values being calculated for each net from Synopsys's
wire load models so that I can use them in the PowerMill run.

I haven't been able to find a direct way of doing this, however.  The
closest "approved" method I've seen of getting the parasitic values out
from the dc_shell is using the "write_parasitics" command, which generates
an SPEF-format file.  Unfortunately, the only format for a parasitics file
which PowerMill accepts is DSPF, which is apparently incompatible.

Is there an easy translation available for SPEF to DSPF, or is there a
freeware/shareware/commercial product available which can do this?  (Or any
other solution to this problem?)

  - Kim Flowers
    TransLogic Technology, Inc.




   !!!     "It's not a BUG,
  /o o\  /  it's a FEATURE!"
 (  >  )
  \ - / 
  _] [_     (jcooley 1991)