Editor's Note:  Well, the world didn't end with <yawn> the dreaded Y2K
  Bug that was supposed to wipe out human civilization and I'm really
  excited about Jeff Winston's freeware IPO buffer resizing program,
  IPOfix, in Item 1 below.  (Way to go, Jeff!)  And, just to be completely
  sure that everything's back to normal, you'll find one of my righteous
  rants in Item 11 below, too.  Enjoy!  <grin>  Welcome back, all!

                                             - John Cooley
                                               the ESNUG guy

( ESNUG 339 Subjects ) ------------------------------------------- [1/00]

 Item  1: ( ESNUG 335 #1 )  IPOfix - A Freeware IPO Buffer Resizing Program
 Item  2: ( ESNUG 335 #1 338 #1 )  Wall Street Curious About PKS & PhysOpt
 Item  3: ( ESNUG 337 #3 338 #8 )  X's, Optimism, Resets, & Verilog Case
 Item  4: ( ESNUG 335 #2 338 #6 )  DC/PT 3-D Load-Dependent Lookup Tables
 Item  5: Seeking User Input On Verilog Pretty Printers & Obfuscation Tools
 Item  6: ( ESNUG 338 #2 )  Yay!  We're Not The Only Ones Using Silos III !
 Item  7: ( ESNUG 337 #9 )  Hidden DC/BC Switch To Force Usage Of Xilinx DW
 Item  8: ( ESNUG 321 #3 )  Well, We're Quite Happy With Veritools' HDLlint
 Item  9: A Freeware User-Friendly Synopsys License Fetching TCL Procedure
 Item 10: Users Say Why Verisity's Specman Is Eating Synopsys VERA's Lunch
 Item 11: Cooley Provides More Data On The Synopsys VERA Marketing Fiasco

 The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com


( ESNUG 339 Item 1 ) --------------------------------------------- [1/00]

Subject: ( ESNUG 335 #1 )  IPOfix - A Freeware IPO Buffer Resizing Program

> Using this flow, layout partitions typically took 6 to 10 passes to
> achieve timing.  Each pass could take 2 to 3 days.  Our main headache was
> that Reoptimize Design would make timing by disturbing a large percentage
> of the netlist.  Then we'd get caught up in a chicken & egg loop where the
> incremental P&R required to fix the reoptimized design would cause enough
> P&R disturbance to require another major pass in DC reoptimize design.
>
> We often discovered that going through a large number of reoptimize design
> passes would result in an unroutable layout.  Reoptimize design, by
> running outside of the layout environment, just did not have enough
> information to make good IPO decisions.
>
>     - Bob Prevett, Design Engineer
>       NVIDIA                                       Santa Clara, CA


From: Jeff Winston <jwinston@maker.com>

Hi John.

We recently finished taking a fairly large, complicated, fast chip thru
to tape out.  One of the biggest roadblocks was that we were unable to run
Design Compiler IPO to our satisfaction.  It took many days to run (on
360MHz UltraSparcs), crashed repeatedly, and didn't produce good
results.  (We tried both 1998.08-1 "normal" IPO, and 1999.05-2 Floorplan
Manager IPO).  While we watched the tool slowly crunch our design, I did
a little C programming and developed what may be a better way...

I wrote a program that reads the Primetime output and upsizes gates
whose individual delays exceed user-specified limits.   That is, it
looks for and speeds up all the slow gates in the failing paths.   It
applies these changes to the netlist, and adds a suffix to the names of
the changed gates to force Primetime to re-calculate the delays (the
suffix can be easily removed later).  It is very careful to change only
the characters it needs to in the netlist, allowing before/after
comparisons using UNIX diff.   For our design, an entire IPO iteration
took about 5 minutes to run the program, and then 2 hours to re-run
PT.  This admittedly brute-force approach allowed us to completely IPO
our chip very quickly, adding only 0.3% to the gatecount (well within
Avant!'s ECO capabilities).  Before we were done I had enhanced the
program to read the PT transition-violation output and upsize as needed,
and to read the PT min-delay violation output and add hold buffers as
needed (and generate an estimated .SDF file for the newly-created
hold-buffer nodes).  Though not perfect, and possibly a little wasteful,
this program solved nearly all of our timing-closure problems (the rest
required placement changes), and ran in minutes rather than days.  We
intend to use this methodology on future designs here.
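[ For readers curious how such a post-PrimeTime pass hangs together, here's a
  minimal, hypothetical Python sketch of the idea Jeff describes -- NOT his
  actual code.  The report format, cell names, drive-strength ladder, and the
  "_ipo" suffix are all invented for illustration.  - John ]

```python
# Hypothetical sketch of an IPOfix-style pass.  Everything here (the
# one-line-per-gate report format, the cell names, the suffix) is invented.
import re

# Toy drive-strength ladder: each cell upsizes to the next entry.
UPSIZE = {"NAND2X1": "NAND2X2", "NAND2X2": "NAND2X4", "INVX1": "INVX2"}

def upsize_slow_gates(report_lines, netlist, max_delay_ns=0.5, suffix="_ipo"):
    """Scan timing-report lines like 'U123 NAND2X1 0.82' and, for any gate
    whose delay exceeds max_delay_ns, swap in the next drive strength and
    tag the instance name so the timer re-computes its delay."""
    changed = {}
    for line in report_lines:
        m = re.match(r"(\w+)\s+(\w+)\s+([\d.]+)", line)
        if not m:
            continue
        inst, cell, delay = m.group(1), m.group(2), float(m.group(3))
        if delay > max_delay_ns and cell in UPSIZE:
            changed[inst] = (cell, UPSIZE[cell])
    for inst, (old, new) in changed.items():
        # Touch only the characters that must change, so a UNIX diff of
        # the before/after netlists shows exactly the resized gates.
        netlist = netlist.replace(f"{old} {inst} ", f"{new} {inst}{suffix} ")
    return netlist, changed
```

The real program also handles transition and min-delay (hold) violations, as
described above; this sketch only shows the max-delay upsizing loop.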

I'm happy to make my source code available to anyone who wants to use
it, though with no promise of support (I'm by no means a polished
software engineer but the code is commented and fairly readable).  The
program is optimized for use with VLSI Technology's libraries, but could
be changed to work with gates from another vendor.  The only request I
make is that I have free access to any enhanced versions of the code.

    - Jeff Winston
      Maker Communications,                       Framingham, MA

[ Editor's Note: To get IPOfix, go to http://www.DeepChip.com and you'll
  be able to easily download it from there.  Good job, Jeff!  - John ]


( ESNUG 339 Item 2 ) --------------------------------------------- [1/00]

Subject: ( ESNUG 335 #1 338 #1 )  Wall Street Curious About PKS & PhysOpt

> Anyway, I hope this info helps someone else who is out trying to make tool
> decisions.  Despite my gripes, I really like the tool and plan on using it
> on all new projects.  Although we haven't taped out anything with Chip
> Architect yet, "Curly" is on the fast track to go and is definitely moving
> faster than it would be without something like Chip Architect.
>
>     - Jon Stahl, Principal Engineer
>       Avici Systems                             N. Billerica, MA


From: Garo Toomajanian <gtoomajanian@dainrauscher.com>
To: Jon Stahl <jstahl@avici.com>

Jon,

I just read your ESNUG posting.  I heard that you guys have started to use
Physical Compiler (PhysOpt) there at Avici in addition to Chip Architect (I
talked about this a little bit with Synopsys).  Can you tell me what other
tools you looked at?  Avant! Saturn?  Cadence PKS?  Any info regarding how
you arrived at your decision would be of interest to me.  

    - Garo Toomajanian, Analyst
      Dain Rauscher Wessels

         ----    ----    ----    ----    ----    ----   ----

From: Jon Stahl <jstahl@avici.com>
To: Garo Toomajanian <gtoomajanian@dainrauscher.com>

Hi Garo,

Our decision was kind of involved.

We took a look at Cadence PKS, out at DAC and at customer sites who were
friends of ours, and we were interested.  However, three things ruled
Cadence off our list.

    1.) We had a difficult time interacting with the Cadence salespeople
        and pre-sales support engineers -- they had their own idea of
        how the sales/eval process should go and we had ours.

    2.) Cadence layout software is in flux, and doesn't currently have an
        API (they got rid of SKILL and have something new coming out), or
        a common database -- both of which we consider very important.

    3.) It seemed that unless you already owned the Cadence layout tools it
        was impossible to get on the short list to look at PKS.

A final note: in its current state PKS uses Ambit at the front end with the
Ambit Static Timing Analyzer, and Qplace at the back end with Pearl as the
Static Timing Analyzer, something we thought was a mishmash and certain to
cause timing correlation problems.

We purchased Avanti basically after eval'ing it on a few of our designs.
The Avanti routers seem to be especially efficient, but I can't tell you
much more since we haven't used it in depth yet.

I am sorry if my posting was a little confusing, but we haven't looked
at Physical Compiler yet -- only Chip Architect.  We decided to purchase
Chip Architect in addition to Avanti for the reasons stated in the posting,
and due to the fact that it is easier for my designers to use than the
Avanti tools. It has more of the look and feel of DC.  Avanti has a STEEP
learning curve.

I have been happy with Chip Architect except for -- as I talked about in
the posting -- the way it deals with physical/logical hierarchy (and its
inability to write a logical netlist).  I have run into more and more
problems because of it, and although I am banging on Synopsys to fix it,
I don't see a resolution in the short term.

    - Jon Stahl, Principal Engineer
      Avici Systems                             N. Billerica, MA


( ESNUG 339 Item 3 ) --------------------------------------------- [1/00]

Subject: ( ESNUG 337 #3 338 #8 )  X's, Optimism, Resets, & Verilog Case

> Here are two coding examples that attempt to overcome Verilog optimism w/
> respect to propagation of X's through case statements in simulation.  We
> were burned by not finding a reset problem until late in the schedule
> when our gate level simulation propagated the X correctly.
> 
> These case statements are easier to read and use than Harry's examples in
> ESNUG 337 #3.
> 
>         reg [1:0] d, e;
>         ...
>         begin
>          case (d)
>            2'b00: e = 2'b01;
>            2'b01: e = 2'b11;
>            2'b10: e = 2'b10;
>            default: 
>              begin
>    // synopsys translate_off
>                if ((|d[1:0]) === 1'bx)
>                  e = 2'bxx;
>                else
>    // synopsys translate_on
>                  e = 2'b00;
>              end
>          endcase
>
> Our design team is currently using this methodology to find more
> initialization problems earlier.
>
>     - Lauren Carlson
>       StarBridge Technologies, Inc.                Marlboro, MA


From: "Harry Foster" <foster@rsn.hp.com>

Hi John,

Concerning Lauren Carlson's reply to my letter, there's a problem with her
example as coded, which illustrates my original point on how difficult it
is to correctly craft X-state support into the RTL.

If d takes on either the value 2'b1x or 2'bx1, then the Reduction OR
operator (|d[1:0]) will result in a 1'b1 (not a 1'bx).  Hence, the example
as coded will not propagate the X correctly.  This example can be improved
by using the Reduction XOR operator (^d[1:0]), which will work correctly.
However, the problem still comes down to coverage "controllability vs.
observability" in the test suite.  There's no guarantee that the
initialization error would be observable in later stages of logic for a
given stimulus.  The main point I was trying to make is that spending time
crafting X's in the RTL is non-productive and error prone.  In addition, as
Cummings previously stated, it violates faithful semantics between the RTL
and gate level simulation.
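To see the difference concretely, here's a tiny 4-state model sketched in
Python (illustrative only, not simulator code): reduction OR hides an X
whenever any other bit is 1, while reduction XOR lets the X through.

```python
# A small 4-state model (values: '0', '1', 'x') illustrating the point.

def or4(a, b):
    # Verilog OR: 1 dominates, so 1|x == 1 and the X is lost.
    if a == "1" or b == "1":
        return "1"
    if a == "x" or b == "x":
        return "x"
    return "0"

def xor4(a, b):
    # Verilog XOR: any X input makes the result X, so the X propagates.
    if a == "x" or b == "x":
        return "x"
    return "1" if a != b else "0"

def reduce_bits(op, bits):
    acc = bits[0]
    for b in bits[1:]:
        acc = op(acc, b)
    return acc

# d = 2'b1x: |d is optimistically 1, ^d is pessimistically x.
assert reduce_bits(or4,  ["1", "x"]) == "1"
assert reduce_bits(xor4, ["1", "x"]) == "x"
```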

I believe the verification process is better served by:

  (a) Eliminate X assignments in the RTL.
  (b) Use faster RTL 2-state simulation with random 1 or 0 initialization
      (as opposed to X-state initialization.)
  (c) Use assertions to trap and halt simulation errors instead of
      propagating the problem.

Hope this helps.

    - Harry Foster
      Hewlett-Packard Computer Technology Lab

         ----    ----    ----    ----    ----    ----   ----

From: "Glenn Poole" <gpoole@home.com>

John,

Lauren's example above is incorrect.  If d is 2'b1x, (|d[1:0]) will give
a result of 1.   You need to use the XOR operator "^" to make this work.

    - Glenn Poole
      Poole Design


( ESNUG 339 Item 4 ) --------------------------------------------- [1/00]

Subject: ( ESNUG 335 #2 338 #6 )  DC/PT 3-D Load-Dependent Lookup Tables

> Synopsys version 1999.10 supports 3-dimensional lookup tables for delay
> modeling.  I hope to find out what other tools (e.g., Avanti, Cadence,
> etc.) already have or plan to have compatible timing modeling.  What have
> you heard from your vendors?  Are there characterization tools out there
> that can write .lib's with this syntax?
>
>     - Andy Pagones
>       Motorola Labs


From: Stefan Scharfenberg <Stefan.Scharfenberg@motorola.com>

Hi John,

Please allow me to comment on the 3-D Lookup Table discussion.  I think Andy
made a good point, and the Synopsys Library Compiler CAE's reply answers it
to a certain extent.  What I always find difficult when creating libraries
is finding out how certain ways of modeling the library, or certain
constructs, impact other tools.  The Synopsys .lib has become a very
popular format which can be parsed by many tools (from Synopsys and other
vendors).  Along with this development, additional commands and constructs
like the 3-D tables were added.  These are very well described in the
Library Compiler manuals; what is missing is a list or table of which tools
can read this information and actually use it in a meaningful way.  Let me
give an example.

  /* Library defaults */
     slew_lower_threshold_pct_rise : 20.00 ;
     slew_upper_threshold_pct_rise : 80.00 ;
     slew_derate_from_library      :  1.00 ;
     input_threshold_pct_fall      : 50.00 ;
     output_threshold_pct_fall     : 50.00 ;
     input_threshold_pct_rise      : 50.00 ;
     output_threshold_pct_rise     : 50.00 ;
     slew_lower_threshold_pct_fall : 20.00 ;
     slew_upper_threshold_pct_fall : 80.00 ;

The above lines are only meaningful to PrimeTime.  There is no hint of this
in any documentation that I looked at, so a user may have the impression
that all Synopsys tools understand them.

    - Stefan Scharfenberg
      Motorola SoCDT                                Munich, Germany

         ----    ----    ----    ----    ----    ----   ----

> I'm the Library Compiler CAE.  Since Andy brought this up, I would like to
> give a brief history of 3-D timing modeling plus some recommendations.
> We've had a lot of experience at Synopsys modeling unbuffered outputs.
> 
> In the early days of DC, we modeled one output's delay as dependent on the
> loading of another output, but no output-to-output timing arcs.  Many people
> use this today, though fewer & fewer libraries heavily leverage unbuffered
> outputs.
>
> A few years ago we added setup & hold constraint 3-D tables.  Some vendors
> requested them and 3-D constraint tables are generally used when we have
> load-dependent constraints.  Setup/Hold depends on loads on both Q & Qbar.
>                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>     - [ The Synopsys Library Compiler CAE ]


From: [ Made In Taiwan ]

Hi, John.  Anonymous, please.

I am wondering how this can be true: "Setup/Hold depends on loads on both Q
and Qbar." from the Synopsys CAE's statement.

For DFFs, we see a master/slave architecture.  Otherwise, it is a LATCH.
So, the setup time of a DFF should not be affected by LOADING, since the
loading is blocked and affects only the slave stage.

Can anybody tell me what I am missing ??

    - [ Made In Taiwan ]


( ESNUG 339 Item 5 ) --------------------------------------------- [1/00]

From: Tomoo Taguchi <tomoo@sdd.hp.com>
Subject: Seeking User Input On Verilog Pretty Printers & Obfuscation Tools

Hi, John,

I'm looking for two things:

  1.) Pretty Printers.  I'd like to have a pretty printer for Verilog (and
      other things like TCL, dc shell, perl) which highlights keywords, etc.
      In poking around I ran across a2ps, Enscript, and Trueprint.  I've
      tried out Enscript and it pretty well does everything I want, but I
      was wondering if anyone out there has tried all three or another that
      I haven't heard of and has an opinion on which is best (or are they all
      pretty much the same)?

  2.) Obfuscation Utilities.  I want to send test cases to vendors, but I
      want to obfuscate the code/netlist (change reference names like wire,
      port, instance, module names) so that the design intent is not obvious,
      but the design is still simulatable/synthesizable.  Back in Feb 1997,
      you mentioned a program called KRYPTON from a French company called
      LEDA that does what I want.  I poked around the web to see if I could
      find them and couldn't.  Also in an old ESNUG, someone mentioned that
      Verilint does the same function.  Those are the only two that I've
      heard of, but I was wondering if there are others out there.

Although I haven't tried out KRYPTON, I see some limitations.  The main one
is that back in '97, it only claimed to do VHDL.  I need Verilog.  I hope
that if LEDA and KRYPTON are still around that they offer Verilog as well.
Also, the renamed references seemed to be very confusing mixtures of 1, 0,
o, and i.  Although this makes the code very unreadable, I'm looking more
for just renaming things a, b, c, d, etc.  I'd like to be able to
discuss the test case in a sane way over the phone with the vendor.  Being
able to say look at register "a_reg" is a lot better than saying look at
register "I-0-1-o-1-1-i_reg".

I've tried out the obfuscation feature in Verilint, and although it does
some of the things I want, there doesn't seem to be much controllability to
the feature.  It obfuscates the library cells as well as the source
code/netlist, which might be a nice feature, but my vendor already has the
library that we are using, so obfuscating the library cells would be a pain
in my case.  Also, I couldn't figure out any way to print out the cross
reference between the original references and the obfuscated ones, which I
think is a must for any software that does this function.

So, what I'm looking for is a utility that:

  1.  Obfuscates references (but with some controllability so
      it doesn't mess with libraries).
  2.  Has some controllability to the way it renames things.
  3.  Can generate a cross reference table.
  4.  Can obfuscate RTL and netlist with the same cross reference
      table.  (So "reg a;" in RTL will be "reg a_reg" in the
      netlist).

Anything like this out there?
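[ As a sketch of what such a utility might look like, here's a toy Python
  renamer -- purely hypothetical, with invented names, and far from
  production-ready (it only handles 26 names and doesn't really parse
  Verilog).  It renames identifiers to a, b, c, ..., skips keywords and
  library cells, and keeps one cross-reference table so RTL and netlist get
  consistent renames (e.g. "state" -> "a", "state_reg" -> "a_reg").  - John ]

```python
# Toy identifier obfuscator -- hypothetical sketch, not a real product.
import re
import string

VERILOG_KEYWORDS = {"module", "endmodule", "input", "output", "wire",
                    "reg", "assign", "always", "begin", "end"}

def obfuscate(text, library_cells, xref=None):
    """Rename identifiers in `text`; identifiers already in `xref` reuse
    their earlier rename, so a second call on the netlist stays consistent."""
    if xref is None:
        xref = {}
    names = iter(string.ascii_lowercase)  # only 26 names in this toy version

    def rename(m):
        ident = m.group(0)
        if ident in VERILOG_KEYWORDS or ident in library_cells:
            return ident
        # A "_reg" suffix follows its base name's rename.
        base, sep, _ = ident.partition("_reg")
        if sep and base in xref:
            return xref[base] + "_reg"
        if ident not in xref:
            xref[ident] = next(names)
        return xref[ident]

    return re.sub(r"[A-Za-z_]\w*", rename, text), xref
```

Running it on the RTL first and then on the netlist with the same xref dict
gives matching names in both, and the xref dict itself is the cross-reference
table to print out.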

    - Tomoo Taguchi, ASIC Whipping Boy
      Hewlett-Packard                              San Diego, CA


( ESNUG 339 Item 6 ) --------------------------------------------- [1/00]

Subject: ( ESNUG 338 #2 )  Yay!  We're Not The Only Ones Using Silos III !

> After reading the install notes, I figured it would be worthwhile to
> update my Silos III licence on my laptop to the new version 99.1 of the
> tool.  And after wasting a good hour chasing this, e-mailing that,
> reading this, I discovered during the stage where they have you edit the
> licence file to put in:
>
>    FEATURE Sse simucad 99.101 permanent uncounted 12A8B1B56A42 \
>                    HOSTID=SIMUCAD=601037 ck=97
>
> It's REALLY important you type in "99.101" even though all over the Silos
> documentation they talk about version "99.1".  Yes, it's minor, but I
> figure if this saves 1,000 engineers an hour's hassle, it's worth
> publishing.
>
>     - John Cooley
>       the ESNUG guy


From: Lynn Reed <Lynn.Reed@tekmos.com>

John,

We did not have a problem with our installation.  The Silos license file
(fax) stated 99.101, and we just copied it in.

We were also pleased to find out that someone else in the world uses Silos.
We thought that we were the only ones.

    - Lynn Reed
      Tekmos, Inc.                                      Austin, TX


( ESNUG 339 Item 7 ) --------------------------------------------- [1/00]

From: Dieter Peer <peer@iis.fhg.de>
Subject: ( ESNUG 337 #9 )  Hidden DC/BC Switch To Force Usage Of Xilinx DW

John,

One of my colleagues has found an "undocumented switch" in the DC/BC
software.  We now have the following statement in our .synopsys_dc.setup
file:
            synlib_preferred_library = {"xdw_virtex"}

This seems to force the compiler to prefer the Xilinx DesignWare (xdw) lib
over the standard DW designware modules, even if the results in the timing
report look worse concerning "design for speed".

As the Xilinx backend place&route (par) tools seem to recognize these xdw
modules in the EDIF netlist and take advantage of special interconnect
schemes inside the FPGA, the overall design performance on Virtex is now
about 20% faster.

Unfortunately "xdw_virtex" does not (yet?) contain multipliers, so BC/DC
uses standard designware multipliers, whose Xilinx par routing results look
disadvantageous ("rat's nest style") in the floorplanner, compared to the
clearly structured xdw adder/subtractor blocks.

We are aware of the ESNUG 311 #14 post, which warns against using "hidden,
undocumented features", but in our case it has at least partially improved
our results.

    - Dieter Peer
      Fraunhofer-Gesellschaft                    Erlangen, Germany


( ESNUG 339 Item 8 ) --------------------------------------------- [1/00]

Subject: ( ESNUG 321 #3 )  Well, We're Quite Happy With Veritools' HDLlint

> [ Editor's Note: This price gouging of charging $47K for interHDL's/
>   Avant!'s VeriLint may spur a real search for alternatives.  The tool
>   that comes to mind for me is 'HDL LINT' from Veritools.  'HDL LINT'
>   sells for $3K a copy (and gets cheaper if you buy more copies) and is
>   said to run 700-800 lint checks.  It has *no* cheesy usage timers
>   involved.  Plus it has a Perl interface so users can customize it and
>   do style checking.  (See http://www.veritools-web.com )  My question
>   is "has anyone used 'HDL LINT'?  Is it a viable alternative?"  - John ]


From: "Gareth O'Loughlin" <goloughlin@extremepacket.com>

Hi John,

Love your show ... 

We evaluated Veritools' (http://www.veritools.com) HDLlint tool a few months
back.  We were impressed and purchased a seat.  The tool is a command line
program that lints Verilog code.  They claim to do 100% of all the checks
Verilint does (overpriced tool from Avanti, originally from interHDL).  
Veritools is offering their lint tool at a reasonable price compared with
other alternatives.  Despite being a new tool, HDLlint is very usable.  The
tool can even provide output that is parsable by Emacs.  This allows you to
do an interactive debug -- you can click on a line error reported by HDLlint
in an Emacs buffer and the source file will pop up in another buffer with 
the cursor at the offending line.  For now we also have a very simple
wrapper to make the output from their tool look a little nicer for Emacs.
We expect that Veritools will have some more improvements moving forward.
So far we have found them to be very prompt in responding to any suggestions
we have made.

    - Gareth O'Loughlin, Design Engineer
      Extreme Packet Devices                 Kanata, Ontario, Canada


( ESNUG 339 Item 9 ) --------------------------------------------- [1/00]

From: Gregg Lahti <glahti@sedona.ch.intel.com>
Subject: A Freeware User-Friendly Synopsys License Fetching TCL Procedure

Hi, John,

In the midst of converting our DC scripts to TCL scripts for use in 99.05,
I noticed a few things with DC that seem broken:

  1) If you already have a license (say HDL-Compiler) and you request 
     another one, DC returns an error status with no nice warning.
     Not very useful.

  2) The list_licenses command in dc_shell-t (TCL) mode is broken.  One
     cannot do a "set licenses [list_licenses]" and expect the output
     to be in the $licenses variable.  Instead, one must stuff the
     output into a file and then parse the file to get the actual results.
     How ugly & rude!

Hence, I wrote a modular TCL procedure to fetch a license & provide correct
output status accordingly.  The procedure is set up to retry over a period
of time (it's user-settable, but I just hardcoded the variable in the
procedure.)  We use it to be "license usage friendly".

  #   usage:  P_get_lic {[license]}
  # example:  P_get_lic {HDL-Compiler} 
  #
  # procedure to get a license.  Note that DC gives error status
  # if we already have the license.  Hence, must determine if we
  # do have the license first before we actually get it.
  # Also re-check for the license in 60 second intervals for
  # 1 hour before we exit with an error if we can't get a license.

  proc P_get_lic {ln} {

   set fetch_license 1
   set sleep_val 60;          # sleep in seconds we wait for a license
   set max_timeout 60;        # number of 60-second waits until we die

   # First must determine which licenses we have.  To do this
   # we get the output of list_licenses into a TCL variable.
   #
   # Hack! DC is broken & can't set list_licenses output to a
   # variable, so must stuff it to a tmp file & read it back in.
   # Use the process id (pid) in the filename as a safer alternative;
   # touch & rm -rf it first to be extra safe

   exec touch [eval pid].tmp
   exec rm -rf [eval pid].tmp
   list_licenses > [eval pid].tmp
   set licenses [exec cat [eval pid].tmp]
   exec rm -rf [eval pid].tmp

   # now that we have a TCL variable with the list_license output,
   # strip out the crap that DC uselessly puts in and parse the list

   regsub {Licenses in use:} $licenses {} licenses
   foreach checkedout_license $licenses {
      if [string match $checkedout_license $ln] {
         set fetch_license 0
      }
   }
   if {$fetch_license == 1} {
      set sleep_time 0
      set dc_status [get_license $ln];
      while { ($dc_status == 0) && ($sleep_time < $max_timeout) } {
         if {$sleep_time == 0} {
            set current_time [exec date]
            echo "Info: waiting for license $ln @ $current_time\n";
         }
         sh sleep $sleep_val;
         set sleep_time [expr $sleep_time + $sleep_val]
         set dc_status [get_license $ln]
      }
      if {$dc_status == 0} {
         echo "Error: Cannot get license $ln after $sleep_time seconds, dying!\n";
         return 0
      } else {
         echo "Info: checked out license $ln\n";
         return 1
      }
   } else {
      echo "Info: already have license, continuing along\n";
      return 1
   }
  }
  ## end P_get_lic


For those engineers who are always on the cheap-as-possible budget, you
might find this TCL script useful.  ;^)

    - Gregg Lahti
      Intel Corp                                    Chandler, AZ


( ESNUG 339 Item 10 ) -------------------------------------------- [1/00]

Subject: Users Say Why Verisity's Specman Is Eating Synopsys VERA's Lunch

> "Brilliant" was the word that Dataquest analyst Gary Smith used.  It was 18
> months ago and he was reacting to the news that Synopsys had just bought
> System Science for $26 million.  ...
>
> VERA's biggest rival, a language called "e" from an in-your-face Israeli
> start-up named "Verisity", is eating Synopsys' lunch in that market.
> According to Dataquest, in 1997, Verisity had 84 percent and VERA had 16
> percent of that $7.5 million market.  In 1998, after the world wide Synopsys
> marketing army had ownership of VERA, VERA grew to 19 percent of that now
> $13.5 million market.  It's been 18 months now.  We won't have the 1999
> Dataquest numbers for another 9 months, but as the ESNUG moderator I know
> should have been seeing all sorts of customer e-mails about VERA by now.
> Remember, we're talking 2 to 4 verification engineers for every chip
> designer.  And I know there are quite a few verification engineers on ESNUG
> (you get a *lot* of different types when you have 10,000 subscribers) yet
> there's *no* discussion of VERA whatsoever???  What up here???  Yup, it's
> probably some world class Synopsys marketing incompetence at work here.


From: Miroslav Pokorni <mpokorni@camintonn.com>

Dear John: 

What are you trying to do, embarrass Gary Smith?  These analyst clowns
get tested for their reaction to being held accountable for past 'analyses';
if any sensitivity is found, a frontal lobotomy is performed to make the
individual a suitable employee of a market analysis company.

Way back in the early 80s, when I began to get involved in ATE PCB testing,
which is arguably the contemporary equivalent of today's ASICs, automatic
test generation had already been a hot topic for several years.  Everyone
was talking about doing it, but no one delivered.  At one of the ATE
conferences, a well-respected consultant in the ATE field, whose name
escapes me now, asked why it is that the ATE industry is the last one to
believe in Automatic Program Generation.  If you are old enough, you must
remember that in the 70s and 80s there was a lot of talk about
computer-generated code; nothing came of it, except for Microsoft's code,
and we do not want to dwell on that disaster.

    - Miroslav Pokorni
      Camintonn
 
         ----    ----    ----    ----    ----    ----   ----

From: Evyatar Hadar <ehadar@lucent.com>

Dear John,

My name is Evyatar Hadar.  I work at Lucent Israel as an ASIC Verification
team leader.  Our main tool is Verisity's Specman, and we've been working
with it since December 1995.

After seeing your "VERA Fiasco" article, I thought I'd forward you a letter
I sent ~2 years ago regarding the Verisity/Specman tool.  The guy who sent
me the detailed questionnaire worked at Lucent.  He sent the questions
before Lucent acquired us.  Since then, all issues mentioned just got
better.

    - Evyatar Hadar
      ASIC Verification Team Leader
      Lucent Technologies                          Tel-Aviv, Israel


  From: [ Evyatar Hadar ]
  To: [ Shrikant Nilakhe ]

  Dear Shrikant Nilakhe,

  Following are my responses to your inquiry regarding "Specman" and
  "Verisity".  All of it is my own private opinion.  I've been working with
  Specman for more than 2 years, so I've managed to gather quite a lot of
  experience as our Specman focal point.
		 
  > 1. What's your general opinion of SPECMAN?

  SPECMAN is a great tool and platform to work with.  Its capabilities and
  its flexibility make it a powerful engine, with which you can successfully
  take your verification environment through a steep step function,
  dramatically enhancing its performance and coverage.

  > 2. Are you a VHDL or Verilog shop?

  We're Verilog users.

  > 3. How big are the device's you normally do?

  Around 300 KGates (excluding SRAMs).

  > 4. Did SPECMAN cut down on the time verification took?

  SPECMAN cut verification time down significantly.  The savings can be
  measured in various aspects:

      - the time it took to ramp up a working verification environment in
        terms of headcount and work weeks.  It includes planning,
        implementing and integrating of generators, drivers and verifiers.
      - the time it took to debug it.
      - the time it took to enhance it on the fly.
      - the time it took to maintain it.
      - the time it took to generate extraordinary test cases.
      - the time it took to find coverage holes and cover them.

  All the above should be compared to other alternatives.  Compared to an
  implementation in Verilog and C code with PLI between them, the time it
  took was reduced by roughly 40-60%.

  > 5. Did it improve the quality of your verification?

  Absolutely, since it enabled us to generate extraordinary test cases
  relatively easily and quickly.  We managed to perform many more
  verification checks in the given time, covering more "design corners"
  than we might have otherwise.

  > 6. Did it do BOTH?

  As I wrote above - it did both.

  > 7. Are you happy with the support you've gotten?

  I was very happy with the support I got in the past.  It was a pure "red
  carpet" treatment.  These days I have to share Verisity's experts time
  with many other customers (some of them are stronger, bigger and more
  eager for support than me), so I lost my unique position.  The level of
  support we get now is sufficient.
   
  > 8. What's your opinion on the learning curve?
  > 
  >     i. experienced engineers?

  It should be a fast steep learning curve.

  >    ii. not-so experienced?

  It might be a slow learning curve due to the complexity of the platform.

  >   iii. pure hardware folks vs engineers w/ alot of software experience.

  Engineers with a lot of software experience would probably have a faster
  and steeper learning curve than pure hardware folks.  Yet the "pure
  hardware folks" (like myself) would probably create more hardware-oriented
  code than the SW folks, thereby fitting it better to the hardware
  requirements and making it simpler and easier to debug and to port.

  >    iv. are the engineers getting into the object orientation?

  It takes a while until the object-oriented way of thinking is well
  understood, and then it goes pretty fast.

  > 9. How much of a problem was it for everyone on the team to embrace it?

  At first the whole idea sounded like a huge overhead.  Then, after getting
  some "live shows" with examples and templates to adopt and modify, it was
  embraced pretty fast.

  How much of the team should embrace it depends on your model of work.
  There are work models in which the level of control over this tool and
  mastery of its complex features might vary a lot between team members.
  In those cases you need a few experts, while all the rest can be
  low-level Specman users.

  > 10. After a month of use, did engineers' perception change?  How?

  The engineers' perception changed a lot, since they saw firsthand the
  impressive performance, capabilities, and results of using the tool.  As
  they used it more and more, it became easier to control and they got more
  out of it.
		      
  > 11. How many SPECMAN bugs have you found?

  We were a Beta site for SPECMAN (we started using it more than 2 years
  ago).  We found many bugs.  All of them were fixed pretty fast.  The
  platform you see now is much cleaner than the one we saw and used.

  > 12. What's the biggest advantage?  What's your favorite feature?

  The flexibility and the modular, dynamic build-up of the verification
  environment is the biggest advantage I can see.

  My favorite feature is the capability to create Verilog internal full
  path names using variables as parts of the strings.  Verilog itself has
  no such feature, and its absence costs a lot in time, code length, and
  readability, as well as debug time.

  SPECMAN's people focus on the RANDOM GENERATOR capabilities of the tool,
  yet I personally focus on deterministic tests.  In most cases it depends
  on the design style and the type of application in which it has to work.
  So, in other cases random generation might be regarded as the biggest
  advantage.  In my case I see other advantages.

  > 13. What's its biggest disadvantage?  What would you change if you could?

  The biggest disadvantage I can see is the complexity of the "e" language.
  It takes time to master.  Yet if it weren't this way, I wouldn't use it
  at all, since it then wouldn't be rich enough in capabilities.  Paying
  this price for the benefits of the tool is quite reasonable.

  In our case I would like to have more bit and byte manipulations.

  > 14. How did you evaluate it?  (How long did you try it before buying?)

  Due to a manpower shortage we allocated only 2 people for a few weeks.
  We managed to finish the task within 3-4 weeks, replacing existing Verilog
  generator and verifier models with SPECMAN code and making sure we got
  the same results.

  > 15. Did you slowly phase the tool in or flash-cut it?  Are all your
  >     engineers using it, or is there a mix?

  We slowly phased the tool in.  It depends on the maturity of the current
  projects within the company, since some of them might be at a point where
  it's too late to adopt it.
		  
  Not all our engineers are using the tool.  There's a mix and there are
  many levels of expertise.

  > 16. Is it used at all the different levels? (block, chip, multichip)

  Absolutely !

  > Any other things we should know about Verisity?

  The people there are highly motivated.  I really enjoy working with them!

         ----    ----    ----    ----    ----    ----   ----

From: [ STMicroelectronics ]

John,

We have a purchasing agreement with Synopsys for a package of tools which
happens to include Vera; however, in the last six months the license has
been checked out a total of 12 times.  Even though we have access to the
tool, the "marketing" of the product within our own company is nil.  <anon>

    - [ STMicroelectronics ]

         ----    ----    ----    ----    ----    ----   ----

From: "Ihab Mansour" <ihab.mansour@intel.com>

John, FYI,

We have been using Vera here for more than 2 years now.  At the time, we
also evaluated Verisity's "e" and decided to go with Vera.  Vera is doing
a great job for us, with minor to negligible problems.

    - Ihab Mansour
      Intel                                   San Diego, CA

         ----    ----    ----    ----    ----    ----   ----

> Functional testing is ugly, mind-numbing, labor-intensive work. At OVI'98,
> Jack Harding (then the CEO of Cadence) said in his keynote: "Our customers
> tell us 25 to 75 percent of their time is spent in verification."  Another
> design engineer later e-mailed me: "Glen Dearth of Sun Microsystems said
> in his OVI paper that they have a 4:1 ratio of verifiers to designers.  I
> heard another speaker claim a 2:1 ratio. A friend at Compaq said they were
> probably 1:1 but growing."  Four months after that, Al Sibert of Nortel
> said at DAC'98: "Staffing is 2:1 of verification engineers to designers."


From: Jim Mott <James.Mott@Eng.Sun.COM>

Hi John,

You mis-spelled Glenn Dearth's first name.  (It has two n's).

Our west coast ASIC group has a little over 3 to 1 verification vs. design
engineers, but we'd like to have more verification.  It's daunting to
realize, after 18 months of 24/7 simulation on dozens of machines, that
the actual device has (potentially) run more cases within the first few
minutes after its initial power-on than verification ever did.
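
Mott's point can be sanity-checked with back-of-envelope arithmetic (a
sketch; the simulator throughput, machine count, and device clock rate
below are my own illustrative assumptions, not figures from his letter):

```python
# Back-of-envelope: cumulative simulated cycles vs. silicon throughput.
# Every figure here is an assumed, illustrative value.
sim_cycles_per_sec = 100      # assumed RTL simulation speed per machine
machines = 50                 # "dozens of machines"
months = 18

seconds_simulated = months * 30 * 24 * 3600
total_sim_cycles = sim_cycles_per_sec * machines * seconds_simulated

device_hz = 300e6             # assumed device clock rate
catch_up_seconds = total_sim_cycles / device_hz
print(f"Silicon re-runs {months} months of simulation "
      f"in ~{catch_up_seconds / 60:.0f} minutes")
```

Under these assumptions the physical device overtakes the entire 18-month
simulation farm in roughly a quarter of an hour.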

    - Jim Mott
      Sun Microsystems

         ----    ----    ----    ----    ----    ----   ----

From: Glenn Dearth <gdearth@simplicity.East.Sun.COM>

Hi John,

I would like to make a small correction to your article.  You have someone
quoting me as saying: "Glen Dearth of Sun Microsystems said in his OVI paper
that they have a 4:1 ratio of verifiers to designers..."  This is not
correct.  What I said was that there is perhaps a 3:1, maybe even as high
as 4:1 ratio of verification code to design code.  This ratio takes into
account all environment code, test code and process scripts.  Although, at
times I wish there was a 4:1 ratio of verifiers to designers I have not
seen this ratio in Sun.  Thanks.

    - Glenn Dearth
      Sun Microsystems

         ----    ----    ----    ----    ----    ----   ----

From: Allan Silburt <asilburt@cisco.com>

For the record, it's Allan Silburt.  Formerly of Nortel.  I left for a
startup last year, and we were later acquired by Cisco.  So now I'm at
Cisco.

And... we've used Vera on two projects in a row over the past 2 years at 3
companies.  Good product.  But if I had my choice, which I never seem to,
I'd still like to see a good industrial strength 'C' based environment.  I
don't like proprietary languages.

    - Allan Silburt
      Cisco Systems                          Kanata, Ontario, Canada

         ----    ----    ----    ----    ----    ----   ----

From: Sean Ogle <sean@pixelcam.com>

John,

What are you talking about, "Design Engineers" vs. "Verification
Engineers"?  I've never heard of a company where design engineers don't
also do verification.  Most companies don't have the luxury of having
purely Verification people.  System emulation is the only answer to
verification.  Don't believe the fantasy that some software tool will
magically take away all the hard work.

    - Sean Ogle
      PixelCam                                Campbell, CA

         ----    ----    ----    ----    ----    ----   ----

From: [ Shaken, Not Stirred ]

John, Keep my name out of this...

I think it is simply that Synopsys does not understand verification as a
business.  They may have a few switched-on dudes in eng/apps, but at the
senior level they have no idea.  Take, for example, the Arkos fiasco.
When they dumped that technology they also dumped a number of
verification-savvy people, either by choice or by accident.

    - [ Shaken, Not Stirred ]

         ----    ----    ----    ----    ----    ----   ----

From: [ Luck Favors The Ready ]

John, 

Please keep me anonymous.

Man did you ever hit the nail on the head with that story.  We contacted
Synopsys several times over the last year about working with Vera.  We are
a large Synopsys customer and we get good-to-great support for most of
their tools, but they basically blew us off when it came to Vera.  Guess
what tool we're ramping up on now?

    - [ Luck Favors The Ready ]


( ESNUG 339 Item 11 ) -------------------------------------------- [1/00]

From: John Cooley <jcooley@world.std.com>
Subject: Cooley Provides More Data On The Synopsys VERA Marketing Fiasco

What caused me to write that Industry Gadfly column publicly questioning
VERA's supposed success was the Synopsys press announcement where they
promoted Gulam Nurie to a VP.  Check it out.  One of my "jobs" as an EDA
consumer advocate is to debunk the really outrageous claims some of these
EDA companies try to pull on users.  And when I read in that press release
that Synopsys was claiming that they had 5,000 VERA users, my bullshit
detectors went off.  It's like me claiming that I've been crowned the King
of the Elves and putting out a press release saying that plus the fact that
I had 5,000 elves living in my basement.  Of course I can't show you the
elves because they're too shy.  Honest.  They're there.  Just trust me, OK?

OK, given the "you can't go in my basement" restriction, a clever engineer
could at least test the veracity of the "5,000 elves living in my basement"
claim by checking all the indirect evidence.  How so?  Well, 5,000 elves
would eat food, drink water, (presumably) need showers & toilets, and have
occasional accidental interactions with humans.  That is, they'd have a
serious impact on my water, electricity, sewer, and heating bills plus
somehow the local grocery stores would be feeding these 5,000.  On trash
day, I'd see a mountain of trash set out on the curb from these 5,000 elves.
Also, there would be random police reports concerning accidental elf/human
interactions: sightings, unexpected car-hits-elf incidents, thefts,
altercations, etc.  You simply can't have 5,000 elves without an impact.

You also can't have 5,000 VERA users without an impact.

Do the quantitative comparisons and you'll see what I mean.  I searched in
the ESNUG archive and found only 5 VERA letters for the 1999 calendar year.
Pretty low.  For Verilog, I found 110 user letters.  OK.  When I went to
the newsgroup archives at www.deja.com, for the 1999 calendar year in the
comp.* newsgroups I found:
                            Number of    Min. Number    Annual User
       EDA Keyword         Deja letters    of Users     Letter Rate
      --------------       ------------  -----------   -------------
       "Verilog"              4,300         19,207         22.4%  OK...
         "VHDL"               6,600         24,374         27.1%  OK...
  "DC" or "Design Compiler"   1,500         16,000          9.4%  OK...
       "Synplicity"             328          3,500          9.4%  OK...
         "VERA"                  32         "5,000"         0.6%  Huh???

 Annual User Letter Rate = ( # of 1999 Deja Letters / # of Users ) x 100%
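
The table's arithmetic can be reproduced in a few lines (a sketch; the
letter counts and user counts are simply the figures quoted above, from
Deja, Dataquest, and Synplicity):

```python
# Annual User Letter Rate = (# of 1999 Deja letters / # of users) x 100%
# Counts are the ones quoted in the table above.
counts = {
    "Verilog":    (4300, 19207),
    "VHDL":       (6600, 24374),
    "DC":         (1500, 16000),
    "Synplicity": ( 328,  3500),
    "VERA":       (  32,  5000),
}

for tool, (letters, users) in counts.items():
    rate = letters / users * 100
    print(f"{tool:12s} {rate:5.1f}%")

# At the ~10% (easy/mature tool) to ~25% (complex tool) baseline rates,
# 5,000 users should generate between 500 and 1,250 letters a year:
print(5000 * 0.10, 5000 * 0.25)   # 500.0 1250.0
```

The VERA row comes out to 0.6%, more than an order of magnitude below
the ~10% baseline.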

The "Min. Number of Users" are numbers I got from Dataquest or, in one case,
from my contacts at Synplicity.  (I researched Synplicity because it's a
VERY simple tool to use -- so it should probably have a lower-than-normal
User Letter Rate -- and it's a unique word -- easy to find in the
archives.)  And here's where the VERA marketing claims fall apart.  From
this data, it appears that a baseline Letter Rate of ~10% is to be expected
for really easy to use EDA tools like Synplicity or for mature tools like
DC.  Complicated, involved-to-use software like Verilog or VHDL has a Letter
Rate of roughly 25%.  Makes sense.  They're more complicated, so users
should be writing about them.  And I'll wager that complicated functional
verification software like VERA is more akin to Verilog than Synplicity, so
it should have a Letter Rate of roughly 25%.  Instead it had a lame 0.6%
Letter Rate!!!  If VERA really did have these 5,000 users, there should have
been at least 500 (~10%) to at most 1,250 (~25%) VERA letters swimming around
on the Internet instead of those paltry 32!  (And 10 of those VERA letters
were within the last 2 months of 1999!)

In the 1999 Synopsys Customer Education schedule, there were 45 VHDL and
53 Verilog oriented classes pre-scheduled for 1999.  Although it does
mention a VERA class, zero VERA classes were pre-scheduled for 1999.  Huh?

So, either VERA has 5,000 abnormally quiet and ultra-satisfied users who
didn't need to be trained on how to use it - OR - Synopsys VERA marketing
is trying to pull a fast one on us.  And until I start seeing more actual
evidence of the supposed 5,000 VERA elves alleged to be living in Gulam
Nurie's basement, I'll keep publicly yarping about fast ones.

    - John Cooley
      King of the Elves




   !!!     "It's not a BUG,
  /o o\  /  it's a FEATURE!"
 (  >  )
  \ - / 
  _] [_     (jcooley 1991)