> "EDA is the only industry I know of that refuses to serve its customers.
  >  For every dollar spent on EDA, something like two dollars is spent by
  >  the user after he spends that dollar.  That tells me the EDA industry
  >  is not providing the service customers need and is leaving a huge
  >  amount of money on the table."
  > 
  >      - Steve Domenik, general partner with venture capital firm
  >        Sevin Rosen Funds (EE Times 9/6/99)


  From: Alain Hanover <hanover@incert.com>

  Hi, John,

  I continue to enjoy reading your ESNUG newsletter, but have to tell you
  that Domenik's quote is inaccurate.  I'm involved in a new industry with
  new players and new acronyms.  Instead of EDA and HDL, it's now ERP and
SAP.  ERP is Enterprise Resource Planning.  It's a huge industry of $30B
  in annual software revenues, and SAP, Oracle, Baan, and PeopleSoft are
  the leading players.  SAP is the leader; its quarterly sales are $2.3B,
  almost the entire EDA annual total.  Anyway, ERP industry surveys show
  that for every dollar spent on SAP software, $7 is spent by the customer
  on implementation and consultants.  So EDA's $2 isn't that bad.

      - Alain Hanover, 
        former CEO of ViewLogic
        current CEO of InCert Software                Cambridge, MA


( ESNUG 335 Subjects ) -------------------------------------------- [11/3/99]

 Item  1: A Design Engineer's Impressions Of The New Synopsys "PhysOpt" Tool
 Item  2: Does Anyone Actually Use 3-Dimensional (Load-Dependent) Timing?
 Item  3: Bad Code In Armstrong's "Structured Logic Design with VHDL" Book
 Item  4: SNUG'00 Europe -- 'Abstract' Deadline Extended Until November 30
 Item  5: ( ESNUG 300 #3 309 #8 )  The Politics Of Testing & Test Engineers
 Item  6: Making Latches Transparent For ATPG Generation In Test Compiler
 Item  7: ( ESNUG 334 #9 )  My "Translating Dc_shell To TCL" Horror Story
 Item  8: Where Can I Purchase A Hard Copy Of The SNUG 1999 Proceedings?
 Item  9: ( ESNUG 334 #8 )  Ten More Letters Doubting C-Based HW Design
 Item 10: ( ESNUG 334 #1 )  Faster Verplex Also Does Transistor Comparisons

 The complete, searchable ESNUG Archive Site is at <http://www.DeepChip.com>


( ESNUG 335 Item 1 ) ---------------------------------------------- [11/3/99]

From: Bob Prevett <prevett@nvidia.com>
Subject: A Design Engineer's Impressions Of The New Synopsys "PhysOpt" Tool

Hi, John,

I know you like user reviews of Synopsys products, so I thought I'd send you
a review of PhysOpt, their new physical synthesis tool.  At my company,
NVIDIA, we must create large, high-speed designs as soon as possible.  Time
to market is *everything* in the graphics business, so shrinking the time it
takes to place and route a design while achieving timing convergence is a
critical business interest for us.  For us, time really is money.

Our Old DC Reoptimize Design Flow
---------------------------------

To understand PhysOpt, you first have to understand what we used to do
before PhysOpt.  Here's our old design flow.

    1. Write, simulate, synthesize Verilog into gates using DC.

    2. Partition the gate-level netlist into 250K- to 300K-gate blocks
       for P&R in Avanti.  Partitioning to smaller blocks is a
       file management headache; partitioning to larger blocks
       causes Avanti Apollo to choke.  Around 300K is optimal.

    3. Floorplan each partition with Apollo and some internal tools.

    4. Placement in Avanti Apollo.

    5. Routing in Avanti.

    6. Extraction of loading and RC data in Apollo.

    7. Generate annotated .db file in Design Compiler.  Use PrimeTime
       to explore the timing of the annotated .db file.

    8. Use -reoptimize_design in DC to tweak the design.

    9. Generate new netlist and incremental PDEF.

   10. Go back to step 4 until everything converges.

Using this flow, layout partitions typically took 6 to 10 passes to achieve
timing.  Each pass could take 2 to 3 days.  Our main headache was that
Reoptimize Design would make timing by disturbing a large percentage of
the netlist.  Then we'd get caught up in a chicken & egg loop where the
incremental P&R required to fix the reoptimized design would cause enough
P&R disturbance to require another major pass through DC reoptimize design.

We often discovered that going through a large number of reoptimize design
passes would result in an unroutable layout.  Reoptimize design, by running
outside of the layout environment, just did not have enough information to
make good IPO decisions.
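
For reference, the reoptimize step itself was little more than reading the
annotated db back into dc_shell and letting it rework the netlist.  From
memory (so the exact options may be off), and with hypothetical file names:

 dc_shell> read -format db block_annotated.db
 dc_shell> reoptimize_design
 dc_shell> write -format verilog -hierarchy -output block_new.v
 dc_shell> write -format db -output block_new.db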

Our New PhysOpt Design Flow
---------------------------

PhysOpt accepts 2 basic types of input: RTL or gates.  When we started with
PhysOpt, we already had a working netlist, so we naturally used PhysOpt at
the gate level.  In this new PhysOpt flow, steps 1 through 3 (above)
remained the same.  What changed for us was at step 4.

    4. Synthesis/Placement using PhysOpt

    5. Routing in Avanti.

    6. Extraction of loading and RC data using Avanti Apollo.

    7. Generate annotated .db file.  Use PrimeTime to explore the timing
       of the annotated .db file.

    8. Go back to PhysOpt in step 4 using annotated db from step 7; repeat
       this loop until everything converges and is routable.

Using our PhysOpt flow, 300K-gate layout partitions typically took 2 to 3
passes to achieve both timing and routing convergence.  Each iteration
(doing steps 4 through 8 above) took 2 days for the first pass and 1 day
for each incremental pass.  Since PhysOpt tweaks placement for timing
fixes while simultaneously assessing routing congestion, we found it made
better optimizations.  We also found first pass placement quality from
PhysOpt was better in both timing and routeability than first pass
placements from the Avanti Apollo timing-driven placer.

We were able to knock off about 3 to 4 weeks in our layout process by using
the new PhysOpt flow.  In addition, our flow became more streamlined.  That
is, we already had a flow in place using Design Compiler and Primetime to
specify timing constraints and to run back annotation.  Our annotated db
could then be directly fed to PhysOpt without having to translate everything
back into the Avanti database for each iteration like we did with our old
design flow.

Using PhysOpt was very similar to using Design Compiler with TCL.  A sample
PhysOpt script that compiles a mythical block called "george" looks like:

 psyn_shell> set physical_library lsi25.pdb
 psyn_shell> set target_library lsi25.db
 psyn_shell> read_db george.db
 psyn_shell> read_pdef george.pdef
 psyn_shell> set_ideal_net scan_enable
 psyn_shell> set compile_delete_unloaded_sequential_cells false
 psyn_shell> set_dont_touch_network [ list [ all_clocks ]]
 psyn_shell> physopt -effort medium -congestion -congestion_effort medium
 psyn_shell> write_pdef -v3.0 -output george_out.pdef
 psyn_shell> write -f db -o george_out.db
 psyn_shell> report_timing -nets -input_pins -physical > george_report
 psyn_shell> report_qor >> george_report

Overall it was a very easy tool to use.  We were an alpha code site for
PhysOpt.  They've improved its run time considerably.  Initially, they had
PhysOpt (first pass) compiles that took 72 hours -- but Synopsys R&D quickly
got that down to 20 hours.  The incremental compiles are now down to 4
hours.  Also, when we first used PhysOpt, about 1 in 4 of our design blocks
we fed it compiled to unroutable designs.  Now, all our design blocks are
fully routable coming out of PhysOpt.  ( Using "-congestion_effort high"
helped a lot with this. )  It's production quality code now.
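
An incremental pass (step 8) reuses the same commands as the sample script
above.  Roughly, with hypothetical file names:

 psyn_shell> read_db george_annotated.db
 psyn_shell> read_pdef george_out.pdef
 psyn_shell> physopt -effort medium -congestion -congestion_effort high
 psyn_shell> write_pdef -v3.0 -output george_out2.pdef
 psyn_shell> write -f db -o george_out2.db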

While PhysOpt wasn't that Holy Grail of synthesis tools, it represented for
us an important step forward towards that goal.  It knocked about 4 weeks
off our design schedule.  In the PC graphics chip world, this has a very
significant impact on our bottom line.

    - Bob Prevett, Design Engineer
      NVIDIA                                       Santa Clara, CA


( ESNUG 335 Item 2 ) ---------------------------------------------- [11/3/99]

From: Andrew Pagones <Andy_Pagones-ACIC22@email.mot.com>
Subject: Does Anyone Actually Use 3-Dimensional (Load-Dependent) Timing?

Hi, John,

I'd like to rehash an old issue now that it's possible to actually use it.

Synopsys version 1999.10 supports 3-dimensional lookup tables for delay
modeling.  I hope to find out what other tools (e.g., Avanti, Cadence, etc.)
already have or plan to have compatible timing modeling.  What have you
heard from your vendors?  Are there characterization tools out there that
can write .lib's with this syntax?

Also, the Synopsys docs note that 3D modeling is useful for cells with
load-dependent outputs, such as unbuffered flops or adders.  My local
Synopsys rep, however, related a customer complaint that even buffered
outputs exhibit a slight degree of dependence on the other output's
loading, and that the inaccuracies increase at finer process geometries.
I'd like to ask the ESNUG readers to please share their experiences with
this issue and enlighten us.

    - Andy Pagones
      Motorola Labs


( ESNUG 335 Item 3 ) ---------------------------------------------- [11/3/99]

From: wayne.a.miller@smsc.com ( Wayne Miller )
Subject: Bad Code In Armstrong's "Structured Logic Design with VHDL" Book

Hi John,

I'm in a bind here.  I need to be able to explain (with some credibility)
why the following code doesn't work as I would expect it to.  I'm
presenting this material to new VHDL users from the text "Structured Logic
Design with VHDL", by Armstrong & Gray.  They use lots of non-synthesizable
constructs in their delivery (even their D flip flop example is not
synthesizable!), and I can't explain the following behavior.

This is just one example.  It's a simple oscillator, or so I thought.
(Code example converted from BIT types to std_logic.)

   library IEEE;
   use IEEE.std_logic_1164.all;
   entity COSC is
     port ( run : in std_logic; clock : out std_logic);
   end cosc;
   architecture ALG of COSC is
     begin
       process
       begin
         wait until RUN='1';
         while RUN='1' loop
           CLOCK <= '1';
           wait for 100 ns;
           CLOCK <= '0';
           wait for 100 ns;
         end loop;
       end process;
   end ALG;
   configuration cfg_cosc of cosc is
     for alg
     end for;
   end cfg_cosc;  

Yet when I try it in Synopsys VSS, I get:

   %vhdlsim cfg_cosc

    # cd cosc
    # assign '1' run
    # run 200
    200 NS
    # eval *'signal
    RUN             '1'
    CLOCK           'U'
    #

Unless I set RUN to '0' for a few simulation cycles, and then to '1', the
output does not toggle.  Why isn't the initial assignment of RUN from 'U' to
'1' considered an event that would trigger "wait until RUN='1';"?
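
For reference, the workaround session looks like this (same VSS commands as
above; the length of the '0' hold is arbitrary):

   %vhdlsim cfg_cosc

    # cd cosc
    # assign '0' run
    # run 10
    # assign '1' run
    # run 200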

I appreciate everybody's help.

    - Wayne Miller
      Standard Microsystems Corporation        Long Island, NY


( ESNUG 335 Item 4 ) ---------------------------------------------- [11/3/99]

From: Ronald Niederhagen <ronald@synopsys.com>
Subject: SNUG'00 Europe -- 'Abstract' Deadline Extended Until November 30

Hi, John,

Due to some scheduling issues, we've extended the deadline to submit
abstracts for SNUG'00 Europe to November 30th, 1999.

If your European readers have information on high-level or physical
design methodology, or experiences with Synopsys tools that would be of
interest to other users, please encourage them to present in one of the
sessions listed below.

  -  Design Compiler, Low Power, Datapath Design

  -  SoC Design Flow

  -  Verification of large systems, including Hardware-Software
     co-verification

  -  Static Timing Analysis and Formal Verification

  -  Physical/Transistor Level Design and Verification

To submit an Abstract, send a summary paragraph to "SNUGEurope@synopsys.com"
and we'll take it from there.  If your readers need help writing their
Abstract, tell them to feel free to contact their local AC.  They'll help.
For more information on SNUG'00 Europe, visit http://www.snug-universal.org/

    - Ronald Niederhagen
      EuroSNUG'00 Technical Chair                    Munich, Germany


( ESNUG 335 Item 5 ) ---------------------------------------------- [11/3/99]

Subject: ( ESNUG 300 #3 309 #8 )  The Politics Of Testing & Test Engineers

> With all the above good reasons, it is no surprise that test automation
> tools have always been undersold, under-utilized and have been considered
> unnecessary by most companies except the leading-edge, quality conscious
> companies, and companies that have been in areas that I have mentioned
> above where the liabilities can be substantial.
>
> So how has the EDA industry responded to such a low level of interest from
> customers?  Like most insurance policies, manufacturing test tools have
> sold very well after disaster strikes.  Without giving specific instances,
> it should suffice to say that most companies have used EDA tools for test
> only after they were desperately arm-twisted by their potential or current
> customers for lack of quality, lack of reliability, terrible defect rates,
> etc...
>
>     - Shankar Hemmady
>       Guru Technologies                        Cupertino, CA


From: Duncan Walker <walker@cs.tamu.edu>

John,

A few minor quibbles with Shankar's ESNUG post:

 1. Everyone eventually gets burned, thus everyone eventually uses a better
    test methodology.  Even if the company burns up, the engineers will
    still take that memory to the next company.

 2. Those with tight schedules are the ones most likely to just say: use
    full scan, run ATPG, and take what we get.  Even a no-brainer solution
    can get much higher coverage than functional vectors, and take much
    less time to generate (assuming you don't get functional vectors from
    the designer).

 3. Test can play an important role in getting product to customers.  If
    some QA devices fail, the customer does not want that lot until the
    fails are explained.  That means you need to do diagnosis pronto.
    Without DFT/DFdebug, this is impossible on a large chip.  Then the
    marketing guys must sing and dance and convince the customer those
    fails are really random, and the customer should accept the product.

 4. Design for debug.  This is where the designers get burned and demand
    more debug features on the next chip.  Debug != test, but anything that
    increases observability is good for test.

 5. The metrics are good, fast, and cheap.  Test is really about goodness.
    Obviously this is lower priority than fast and cheap in many
    applications, but this has always been so.  But there is an increasing
    fraction of electronics used in safety or money-critical applications,
    which in turn drives testing.

 6. One thing you didn't mention for ASICs is that often the chip test can
    be so-so, and then the customer can do the real test at board level.
    Board-level repair costs much more, but if the defect level is low
    enough and it cuts the time to volume, it may be worth it.  So the
    chip is getting tested, just not at chip test.

 7. Some designs are simple enough with high enough volume that brute-force
    functional test and customer feedback is enough.  Do you know that MIPS
    just hired its first DFT engineers last year?  (A former student of
    mine is one of them.)  Many controllers are relatively simple, cheap,
    high-volume, and high quality.  This is so because they are mature, and
    the functional test sequence was iterated enough that the defect level
    is acceptable.

 8. If your yield is 100%, you don't need to test.  I saw an SSI/MSI
    assembly and test factory where the test was just a pins shorted/open
    test.  The chip manufacturing yields were high enough that no chip test
    was necessary.

 9. If you weight by dollar value/volume, the vast majority of chips have
    a real test methodology applied.  In many ways your discussion was from
    the EDA seat-count point of view.

I think a lot of it is just ignorance.  Look at the discussion of memory
testing in the very same ESNUG 309 #4 -- some participants seemed unaware
of the huge body of work on this topic.  The distribution of DFT expertise
among companies is mirrored in many other ways.  Too many companies are run
using a series of "get it out the door any way I can" hacks without ever
developing true expertise in the relevant technology areas.  I suppose this
is why consultants can make a living, bailing out those who run into a
situation that hacks cannot overcome.

    - Duncan Walker
      Texas A&M University                      College Station, TX


( ESNUG 335 Item 6 ) ---------------------------------------------- [11/3/99]

Subject: Making Latches Transparent For ATPG Generation In Test Compiler

> I am using the Synopsys Test Compiler for ATPG Generation in a design with
> latches.  I tried to make the latches (which are not in the scanchains)
> transparent with the 
>
>              set_scan_transparent true -existing
>
> command.  I assume that the ATPG generator tries to make the latches
> transparent by generating appropriate patterns which force the enable
> signal of the latch to '1', (i.e. the latch is transparent during parallel
> capture.)  If Test Compiler fails to compute such a pattern, i.e. the
> latch is in holdmode during parallel capture, the output of the latch has
> to be assumed 'X'. 
>
> O.K. now Test Compiler has produced testpatterns but during pattern
> resimulation the latches are NOT transparent BUT the input at the latch
> is assumed to be at the latch output in the following. 
>
> Has anybody used the set_scan_transparent feature of the Synopsys 
> Test Compiler successfully? 
>
> I also asked the Synopsys Hotline and they said that Test Compiler ASSUMES
> the latches to be transparent although in the documentation it says that
> the ATPG generator will provide the patterns. 
>
>     - Friedrich Beckmann
>       Infineon Technologies AG                   Muenchen, Germany


From: Frank Emnett <frank@aiec.com>

What the Synopsys person told you agrees with my experience.  The -existing
option (which is _required_ ...  go figure) tells Synopsys that you've
already added logic to make these transparent during scan.  I manually added
logic to my circuit to force the latches transparent whenever a TEST_MODE
signal was asserted.  Then I did a

                 set_test_hold TEST_MODE 1
                 set_scan_transparent true -existing {latches}

Syntax may not be quite right, I'm not at work.

The downside is that you don't get good coverage on the enable logic or
latch gate pins, but in my case, they're easily testable through functional
vectors.  Scan vectors produced this way have run on the tester just fine.

    - Frank Emnett
      Automotive Integrated Electronics Corp.


( ESNUG 335 Item 7 ) ---------------------------------------------- [11/3/99]

Subject: ( ESNUG 334 #9 )  My "Translating Dc_shell To TCL" Horror Story

> Hope this is helpful for someone out there.  It took me a LOT of hair
> pulling to even get ONE of my scripts running.  I've pretty much decided
> that I have many too many personal man-years invested into my scripts,
> and if Synopsys ever forces me to go pure TCL I'll jump off a building
> (or quit engineering, whichever would make my wife happier at the time -)
>
>     - Gzim Derti
>       Intrinsix Corp.                          Rochester, NY


From: Rodney Ramsay <rramsay1@ford.com>

Hi, John,

Gzim forgot to add Headache Number 8:

Synopsys has a tough *%&^%^% attitude.  As far as they are concerned, their
TCL translating tool, Transcript, works fine.  I wish they would just give
us the darned source code for Transcript so we could fix it for them.

    - Rodney Ramsay
      Ford Microelectronics

         ----    ----    ----    ----    ----    ----   ----

From: [ Tickle Me Elmo ]

Hi John,

As a "TCL advocate" I thought I'd write a few lines that just might make
Gzim's TCL a bit less painful.  Forgive me if you already know this!

Normally you don't need all those calls to "format %s%s"; use "" instead:

        write -f db [get_object_name $module] \
              -o [format "%s%s" [get_object_name $module] {.db.elab}]

could become

        write -f db [get_object_name $module] \
              -o  "[get_object_name $module].db.elab"

or perhaps

        set module_name [get_object_name $module]

becomes

        write -f db $module_name -o "${module_name}.db.elab"

(Note the {} around module_name inside the "" -- this is optional and is
useful in ambiguous cases like "$x_y", which could mean "${x}_y" or
"${x_y}".)  Sometimes concat or join might be a better fit.  You should
only need "format" when you're trying to output numbers in hex or enforce
particular field widths, something like that.
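
For instance, in plain TCL (nothing Synopsys-specific; the numbers here
are made up):

        format "0x%08x" 48879                  ;# -> 0x0000beef
        format "%-12s %8.3f" "slack:" 0.25     ;# padded, aligned columns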


> In my dc_shell scripts, I leave places for the user to create a
> <blockname>.dscr (designer script) so that they can add some special
> stuff to blocks such as set_implementation, set_loads, etc.  In dc_shell,
> if these files DON'T exist, then dc_shell notes the problem in the log
> file, but continues on with the rest of the compile like nothing major
> happened...
>
>       include module + ".dscr"
>
> is all I had to have for things to run along without a problem.
> BUT, in tshell I had to use the following to get the TCL parser to get
> past the problem:
>
>  if { [file exists [format "%s%s" [get_object_name $module] {.dscr}]] }
>    {
>    source -echo -verbose [format "%s%s" [get_object_name $module] {.dscr}]
>    }
>
> As if THAT's intuitively obvious...


Here's how I'd do your "user include file" thing:

   proc source_if_exists {fn} {
     if {[file exists $fn]} {
       # list-quote so a filename with spaces survives the uplevel
       uplevel 1 [list source $fn]
     }
   }

(You need the "uplevel" as otherwise variables in the file become local
variables in the proc, not globals.)  Then invoke it as

   source_if_exists "[get_object_name $module].dscr"

(I like that I can define functions like this and then use them in lots of
different tools -- ModelSim and Leonardo at the moment.)


> I looked for this in the big black TCL book by Welch and didn't find it,
> but if you are trying to do a compare of a boolean to true or false, as
> in an if statement, here's the syntax...
>
>     if {[is_true <boolvar>]} { do stuff here }
>     if {[is_false <boolvar>]} { do other stuff here }

I'm not sure why you need this -- it must be some sort of Synopsys magic.
In "normal" TCL, booleans store 0 and 1 for false and true respectively,
and you just write

    if {$var} {....}
    if {!$var} {....}
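
My guess is that the Synopsys shell stores the strings "true"/"false"
rather than 0 and 1, which would explain is_true.  If that's the case, a
plain-TCL equivalent (needs TCL 8.1 or later for [string is]) would be:

    if {[string is true -strict $var]}  { do stuff here }
    if {[string is false -strict $var]} { do other stuff here }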

Regards,

    - [ Tickle Me Elmo ]

         ----    ----    ----    ----    ----    ----   ----

From: Gzim Derti <gderti@intrinsix.com>

Good Morning "Elmo",

Thanks for the reply to my post!!  Just so you know, I have used TCL before
to do some little stuff, but mostly I was playing with TCL/TK once I found
out about it, and I've written front ends that make some of my scripts
basically push-button...

The results that you saw were basically direct dumps of what the

                  dc-transcript -source_for_include

command gave me.  You're right; I could have sworn that string concatenation
was a LOT easier when I did it before than the result that Synopsys sent out.
I don't know if there's a reason that Synopsys translates things the way
it does (using format).

Thanks for the procedure.  It makes sense to do it like that, but my only
issue is adding MORE things to source in order to run my scripts.  I already
use a lot of files, and this is just more overhead.  My original gripe was
the fact that DC_shell acts differently than TCL_shell.  Just sort of a
heads-up.

And as far as the boolean check goes, I could have sworn that I tried that
originally and found that TCL_shell basically blew through the if statement
when I used the boolean the way you show it.  I'll have to check again, but
I seem to remember that the only way I could get those checks to work was
to use the "is_true" and "is_false" checks.

In the end, it took me the better part of a day to get ONE of my scripts
even partially working in TCL_shell.  While I'm all for some sort of
standardization, I just feel that after fighting with DC_shell for nearly a
decade and learning the quirks of one methodology, it's REALLY hard to
change gears...

PLUS, while TCL is all well and good, there are probably a lot of reasons
why the original designers of the environment did things the way they did,
maybe to make life as easy as possible for the tool in question?  I'm not
against using TCL; it's just that DC_shell makes so much MORE sense when
you're actually trying to root around within a design.

ANYWAY, again, thanks for the notes...  If I get back into this thing I'll
try and work some other things out with what you've given me.

    - Gzim Derti
      Intrinsix                               Rochester, NY


( ESNUG 335 Item 8 ) ---------------------------------------------- [11/3/99]

From: Petter Gustad <pegu@dolphinICS.no>
Subject: Where Can I Purchase A Hard Copy Of The SNUG 1999 Proceedings?

Hi, John,

Where can I purchase/download a hard copy of the SNUG 1999 Proceedings?

    - Petter Gustad
      Dolphin                                         Norway


( ESNUG 335 Item 9 ) ---------------------------------------------- [11/3/99]

Subject: ( ESNUG 334 #8 )  Ten More Letters Doubting C-Based HW Design

> I still think it's a dumb idea...  I was in a conference call yesterday
> with the folks from Synopsys talking about this "SystemC" stuff.  Some
> parts I like, but others are just too much trouble -- you'd might as well
> just learn VHDL or Verilog (so my theory is confirmed, once again :).
>
>     - John Reynolds
>       Intel


From: GRIFFIN@slxcg01.csw.L-3com.com

If I use C/C++ to design my ASIC, how do I simulate it with the same
capability I get in ModelSim, where I can trace signals, etc.?

    - Griffin
      L-3 Communications

         ----    ----    ----    ----    ----    ----   ----

From: [ Been There, Done That ]

Hi John -

I have to concur with your comments on C++ as a next generation HDL.  I've
talked with several of the EDA vendors that you mentioned regarding the
C->HDL->gates flow.  And each time I've come to the same conclusion: "the
emperor has no clothes!"

What I really wanted to tell you about, though, was that I and many
colleagues were doing C-based synthesis at Bell Labs in the late 80's.  The
product was an internally developed tool called "cones", developed by Bell
Labs folks in Murray Hill, NJ.  Cones was quite a capable tool; in fact, the
project I was on produced 5 ASICs in the then state-of-the-art 0.9u process
using it.  There were other versions of the tools as well that were used to
produce many ASICs (Spruce was another similar version).  There was a
simulator too (named ATTSIM) that could perform co-simulation with Cones
(restricted) C, true C, and Verilog/VHDL.  Bell Labs tried for several
years to market these products, and eventually sold most of its assets in
this area to Cadence in 1997 (Cadence bought the Bell Labs Design Automation
group, you may recall).  I've heard similar stories from folks at IBM, but
apparently IBM is still supporting its internally developed tools.

So basically, the EDA vendors are going full circle because they don't have
anything new to sell with the traditional tools.  Wouldn't it be nice if
someone with the source code for a product like "cones" could get it into
the hands of a group like the MIT GNU folks?  With that scenario, C would
stand a better chance.

Please keep me anonymous.  

    - [ Been There, Done That ]

         ----    ----    ----    ----    ----    ----   ----

From: Nick Okasinski <nicko@mti.mti.sgi.com>

Hi, John,

I shook my head in disbelief when I first read of the recent push to
make C++ the basis for high-level synthesis tools.  C++ is totally
unsuitable for the job.

As a software developer who has designed commercial HDLs (remember the
SILOS Behavioral Language? I thought not.  It was Verilog roadkill a few
months after it was released in the mid 80's) I think I have a unique
perspective on this.

C++ is a cumbersome, complicated language with myriad subtle pitfalls.
The language does so little to protect people from writing patently
invalid programs that I can't imagine it's going to help people stay
within the "synthesizable subset".   Large-scale structure is difficult
to express and can be casually circumvented.

I haven't been following the current debate closely, so I can't speak to
how the goals of the C++ camp might be better achieved.

But I can say this much:  I would implore the hardware community to
learn from the experiences of the software community.  C++ is
ubiquitous, but it's also clearly late in its lifecycle.  Superior
languages have not only emerged, they're already in mainstream use.

If you're a Verilog type who wants to sketch some ideas and watch them
run, you'll be ecstatic about an HDL based on Python.  If VHDL is your
style, you'll feel right at home with the iron-clad interfaces and
abstractions of Java.

There's much to be said for basing an HDL on an existing, popular
language.  I learned that the hard way when my language got trounced in
the market by the very C-like Verilog.

Designing a 100M transistor chip will be complicated enough.  We can't
afford to have tools that are unnecessarily complicated, or that don't
adequately support and enforce the designer's intent.

    - Nick Okasinski, CAD Software Designer
      SGI High Performance Microprocessor Group

         ----    ----    ----    ----    ----    ----   ----

From: "Duncan Walker" <walker@cs.tamu.edu>

I think you do a disservice to academia when you mention corporate CAD
groups, but fail to mention the probably much larger body of work on IC
design using programming languages done in universities.  The most obvious
examples would be the chip designs done in Scheme at MIT and Simula at
Caltech (where I was) in the late 1970s and early 1980s, as part of their
general work on "chip compilers".  There were many specialized layout
languages developed.  I wrote one called GAP in C at CMU, using concepts
from an earlier GAP language by Gary Tarolli at Digital.  Based on
that experience I even wrote an internal CMU memo titled "IC Design is NOT
Like Programming", essentially arguing that for many things schematic
capture is better, a layout editor is better for cell design, etc.  There
were a number of commercial products in the early 1980s, such as the FLEX
cell system (among the earliest module generators); Silicon Compilers, a
Caltech spinoff; etc.  In my opinion they died mostly because they were
before their time, and their inefficiency was not affordable given silicon
costs of the day.

My belief is that trying to embed everything in one language is just part
of the eternal quest for the silver bullet.  Fred Brooks wrote an IEEE
Computer article on silver bullets (with a cool werewolf picture) many
years ago.
One obvious issue that is not mentioned is that the software and hardware
developers are rarely the same person.  We don't expect the circuit and
layout designers to use the same tools.  What is needed are tools
appropriate to each problem domain, but which enable integration, co-design,
co-simulation, co-verification, etc.  One language for everything is another
way of saying lowest common denominator.

    - Duncan M. (Hank) Walker
      Texas A&M University                  College Station, TX

         ----    ----    ----    ----    ----    ----   ----

From: Mark Natale <mnatale@ati.com>

Hey John,

What is up with Petri Nets for such design?  I thought they were supposed
to be the way to future design languages and the Holy Grail of formal
design.

    - Mark Natale
      ATI

         ----    ----    ----    ----    ----    ----   ----

From: "Scott Sandler" <scott@novas.com>

Hi John,

Regarding your comments in the recent Industry Gadfly, going "back to the
future" might not be such a bad thing if it means free simulators for
everyone!  Even if you can't do anything with C or C++ that you can't do
with Verilog or VHDL, wouldn't having simulation capacity limited only by
the number of workstations be a benefit?

Of course, as an EDA vendor I like the idea that budget might be freed up
for tools other than simulation...

    - Scott Sandler, Marketdroid
      Novas Software

         ----    ----    ----    ----    ----    ----   ----

From: Lois DuBois <ldubois@batnet.com>

John,

Here's another alternative.

  Sunnyvale, Calif. USA; Meylan, FRANCE & Malmo, SWEDEN -- October 18, 1999
  Arexsys S.A., the leader in architecture exploration systems, and
  Telelogic AB (Stockholm Stock Exchange:TLOG), the leading supplier of
  visual tools for system design and test, as well as software development,
  today announced the development of an SDL to VHDL interface that enables
  electronic system designers to use Arexsys' ArchiMate to perform
  architecture exploration of system designs developed with the Telelogic
  Tau toolset.

SDL is the International Telecommunication Union's specification and
description language. It's a formal graphical notation that is expected to
be integrated into the system-level design language (SLDL) which the
electronics industry is currently developing.

    - Lois DuBois, PR droid
      Cayenne Communication                    Portola Valley, CA

         ----    ----    ----    ----    ----    ----   ----

From: [ Life Under The Big Top ]

I have seen one project that did a decent job of the C to logic conversion.
The process was to:

     1. compile to remove global variables
     2. convert all code from procedural to functional form
     3. perform a laziness scan and infer laziness where it'll help
     4. convert functional elements to a data-driven model
     5. compact and infer equivalent and/or shared elements
     6. map the data flow into computational elements

Steps 1, 2, and 3 came out of functional language research, as I remember.
Step 4 was the first half of an optimizing functional language compiler.
Step 5 was a student hack, and essentially a creative use of diff!  Step 6
was an existing, and not very good, EDA component.

In a general sense, the difficulty with the system wasn't the conversion
or even the efficiency of the result.  When you parallelize the design
after stage 4 above, you essentially have lots of little independent
execution engines.  You can either keep them separate and use a huge
amount of silicon to achieve blindingly fast parallel performance,
or you can start to mux data into a shared data engine and demux the output.
This saves most of a data engine's space, but there is going to be an
execution speed penalty unless you can prove low engine utilization.

Evaluating the data engine's utilization is essentially the halting problem.
For a project that implements a repetitive task, the problem is solvable
in an automated way, simply by running the code and making timing notes.
In this way, it did a spectacular job on filters, FFTs, Viterbi, etc.

Something that was more free-form (such as a parser) made for ugly logic.
The real problem came when you didn't like the output.  There were so many
intermediate languages along the way that reviewing what was happening was
almost a research topic in its own right.

    - [ Life Under The Big Top ]

         ----    ----    ----    ----    ----    ----   ----

From: Anna Ekstrandh <annae@extremepacket.com>

John,

A comment on your latest Gadfly:

You mentioned a future Verilog++.  Well, isn't that what the guys at
Co-Design are thinking about with their Superlog language?  It's definitely
a HW development language, but with some of the features of C/C++.

I don't have any personal experience with the language.  But I've listened
to the sales pitches of all the companies mentioned in your article, as well
as Co-Design's.  Co-Design seems to have thought a little bit more closely
about HW design issues, such as synthesis, etc.  Right now you can
synthesize Superlog only through a translation into Verilog, and I have no
idea how well that works quality-wise.

    - Anna Ekstrandh
      Extreme Packet Devices Inc.                 Ottawa, Canada


( ESNUG 335 Item 10 ) --------------------------------------------- [11/3/99]

Subject: ( ESNUG 334 #1 )  Faster Verplex Also Does Transistor Comparisons

> Anon please.  I like seeing letters promoting Verplex in ESNUG.  They
> encourage Synopsys to improve Formality's capacity and speed, and Avanti
> to do the same with Chrysalis.
>
> Verplex is winning yesterday's war by doing only RTL and gate-level
> comparisons.  They don't discuss it, but Synopsys has transistor-level
> equivalence checking in Formality called Transit.  It lets you compare
> your source RTL with your post-layout transistor netlists (in CDL
> format.)
>
>     - [ Ground Control To Major Tom ]


From: [ Big Brother Is Watching ]

John,

Verplex already has an RTL-to-schematic (actually a SPICE netlist)
comparison plug-in called LTX.  Theirs is shipping, in use, and publicized.
I don't have a need for it, but the people I have talked with love it.
Guess that's why Synopsys is working on their version (as is Cadence in the
Affirma area).  Again, anon.

    - [ Big Brother Is Watching ]

         ----    ----    ----    ----    ----    ----   ----

From: "Duncan Walker" <walker@cs.tamu.edu>

Hi, John,

The IBM Verity tool (see an old IBM Journal of R&D for a description) has
provided transistor-level equivalence checking for several years.  If you
use a standard cell library, you don't need more than gate-level checking,
because it is trivial to extract from layout back to gates.  End-to-end
checking is achieved by checking the cell implementations against their
logic descriptions, which are usually small enough that brute force will do.
The main benefits of transistor-level checking are avoiding this separate
cell checking procedure in designs using custom cells, and permitting the
end-user to check whether their vendor's library is correct.

    - Duncan Walker
      Texas A&M University                 College Station, TX

         ----    ----    ----    ----    ----    ----   ----

From: Scott Evans <scott@sonicsinc.com>

I used to be a Verplex customer and the comparison to transistor-level was
something they had in their system early on.  I don't know if they advertise
it or have abandoned it. We had plans to verify this capability but never
got around to it.  We did use the gate-to-gate and RTL-to-gate capability
for several chips and had good experiences similar to those expressed by
[ Big Brother ].

    - Scott Evans
      Sonics, Inc.                            Mountain View, CA

         ----    ----    ----    ----    ----    ----   ----

From: "Sean W. Smith" <sean_smith@mindspring.com>

John,

Verplex also makes a product called Tuxedo-LTX that goes to the transistor
level.  Tuxedo-LEC is the RTL-to-gates tool.  They realize that many people
don't need the transistor capability, so they sell it as an add-on rather
than force you to buy it.  The bottom line at the moment is that they have
superior technology and the best equivalence checker on the market.

    - Sean Smith




   !!!     "It's not a BUG,
  /o o\  /  it's a FEATURE!"
 (  >  )
  \ - / 
  _] [_     (jcooley 1991)