Editor's Note:  Heads up!  The registration fee for HDL Con 2000 goes up by
  $50 if you don't register by today (Friday, Feb. 25th).  HDL Con is the
  conference resulting from the shotgun wedding of the two olde archrival
  OVI (Verilog) and VIUF (VHDL) conferences.  And even though it's said
  that OVI is the "man" in the HDL Con "family" (because of how the Verilog
  wars went), it's very much a bilingual conference.  ( www.hdlcon.org )

  Also, SNUG's on-line registration ( www.snug-universal.org ) closes on
  Monday, Feb. 28th, too.  I've heard that SNUG'00 already has 408
  designers pre-registered, so it will definitely have the critical mass
  to be very interesting.

  Both conferences are happening at the San Jose Double Tree Inn (the old
  Red Lion Inn off of 101.)
                                             - John Cooley
                                               the ESNUG guy

( ESNUG 344 Subjects ) ------------------------------------------- [2/00]

 Item  1: ( ESNUG 343 #11 )  Vera Really Needs Multi-Dimensional Arrays
 Item  2: ( ESNUG 343 #2 )  Texas Instruments' 5-Way Physical Synth Shootout
 Item  3: Anyone Know Of An LVS Tool That Can Generate DEF Output Files?
 Item  4: Upcoming Verisity/Specman/"e" User Group Meeting On April 12 - 13
 Item  5: ( ESNUG 335 #1 )  The PhysOpt-With-Cadence-Backend Matrox Tapeout
 Item  6: ( ESNUG 343 #9 )  Vera & Specman Are A Waste Of User Time & Money
 Item  7: ( ESNUG 342 #2 343 #1 )  More On The "set_dont_touch_network" Bug
 Item  8: ( ESNUG 335 #9 )  Hey!  C++ Is The HW Design Language Of Choice!
 Item  9: Ouch!  We Got Burned 'Cause Design Analyzer Doesn't Support TCL!
 Item 10: Synopsys Doesn't Support SDF COND (Even Though They Claim They Do)
 Item 11: Wally Accidentally "Outs" Mentor Into The Physical Synthesis Race
 Item 12: 2-D VHDL Arrays Don't Make It When DC 99.10 Writes Out Verilog
 Item 13: Overloading Verilog System Tasks (Like $finish & $monitor) In VCS

 The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com


( ESNUG 344 Item 1 ) --------------------------------------------- [2/00]

Subject: ( ESNUG 343 #11 )  Vera Really Needs Multi-Dimensional Arrays

> I'm a happy Vera user but I have 3 big complaints: ... The Vera language
> doesn't support multi-dimensional arrays.  This has huge implications on
> how one can code a testbench.
>
>     - [ Not Another Elf ]


From: Tom Symons <tsymons@level1.com>

Hi, John,

True multi-dimensional arrays in Vera would be nice, but you can get the
same result pretty easily by using an array of objects, as in:

    class second_dimension {
       bit [31:0] two[];
    }

    second_dimension one[size];

    // initialize the objects
    for (i=0; i<size; i++)
      one[i] = new;

    // ready to roll
    one[j].two[k] = value;

Note the use of the associative array in the second_dimension object.  That
way you can reuse the same object for multiple 2-D arrays of different
sizes.  Also note that you can extend the same concept for arrays of any
dimension.

Not quite as tidy as a built-in 2-D array, but hardly a huge problem to
work with.  Besides, when you are using a 2-D array, it's very common
that you want more than one field associated with each element anyway,
which would have to be coded as above.

    - Tom Symons
      Level One Communications                    Sacramento, CA


( ESNUG 344 Item 2 ) --------------------------------------------- [2/00]

Subject: ( ESNUG 343 #2 )  Texas Instruments' 5-Way Physical Synth Shootout

> I am absolutely positive that when some benchmarks have completed, we
> will definitely hear about the results from the winning vendor.  Since
> these benchmark boasts have not materialized yet, we can only conclude
> that no-one is in boasting position, yet.  The moment you find one, John,
> could you please make sure to publish it in ESNUG?
>
>     - [ One Of The EDA Boys ]


From: Anthony Hill <hillam@dal.asp.ti.com>

Hi, John,

I know your ESNUG readers have been clamoring for details.  Here's what
we've seen so far.  I work for one of the DSP design groups at Texas
Instruments, and we've been evaluating Physical Compiler (PhysOpt) for the
past couple of months.  Generally when I see these sorts of reviews, I am
left with an underlying scepticism about the actual test case and whether
the design complexity was sufficiently difficult to challenge the tool.
So we're going to give your readers the details of our evaluation and let
them decide.

The testcase we use to stress new tools/flows is a cache + controller block
consisting of about 30k nets and 8 large RAMs placed in a long rectangular
floorplan.  There are placement regions for the standard cells between some
of the RAMs and along most of the width of the block.  Total RAM area is 
~70% of the block area, and the RAMs are essentially blockages for all but
the top-level of metal.  The target clock frequency for this block is
200MHz in a 0.25um 5-metal layer process.

This testcase has historically been our most difficult for getting timing
convergence.  Wire-load models are essentially useless for any routes which
must go over the RAMs and global/detail routers have often found themselves
lost when trying to route over/around the RAMs.  As the RAMs have ports on
two opposing sides, you can imagine the routing headaches.  The last time
we taped-out this block (2 months ago) we went through 13 ECOs and a
number of manual tweaks to get the performance where we needed it.

We handed off our gate-level netlist to the local Synopsys AEs right before
Christmas.  They came back with a placed netlist on the first week of
January.  It had routability problems along a very thin standard cell
region between a RAM and the block boundary.  (It has a 256-bit bus which
has to do a 90-degree turn over the top of it.)  We had known about that
problem a priori and had to add a blockage in our baseline Avanti P&R
environment as well.  We thought PhysOpt might be smart enough to avoid
placement in that region, but it was not.  We added the blockage for the
next run.

The second netlist which the local AEs handed off was routable and timing
looked good.  There is a known architectural speed path in this design.
The critical path in the placed netlist generated by PhysOpt was that long
architectural path -- you couldn't do any better.  PhysOpt also fixed all
timing problems for paths within the critical_range of that architectural
path.  One impressive thing about the PhysOpt netlist was its routability.
It was easier to Avanti route the PhysOpt-placed netlist in Apollo II than
it was to Avanti route the Avanti-placed netlist. (!)

Synopsys has since handed off the tool to us and we've been running
PhysOpt in-house.  One feature we were particularly interested in was the
RTL->placed gates flow.  We've been able to run small testcases, but since
we do some rather exotic stuff using some funky DesignWare and clock
gating, we couldn't use it from the RTL level.  So, until Synopsys
integrates clock tree synthesis into PhysOpt and allows gated clocks at
the RTL level, we'll stay in gates->placed-gates mode for now.

Another problem we think most design teams will have with the flow is scan
testing.  The handling of tieoff cells using a 'compile -scan' flow is a
problem as the tieoffs 'go away' after placement.  (PhysOpt does not have
a concept of an 'ideal_cell' -- yet.)

So the bottom line is this: PhysOpt promises to save us a couple X off our
design cycle time.  (I'll let the Synopsys marketers argue about the
magnitude of X.)  The timing convergence for this testcase was quite
impressive.  It also fixed nearly all of our design rule constraints in the
first go (which helps hot-carrier reliability, noise, etc.)  We were also
quite impressed with the correlation between Synopsys' global route
estimates and our back-end detailed route results.  (Historically, we've
generally seen pretty good correlation between groute and droute with our
Avanti backend.)

Some PhysOpt drawbacks: the routing tech file is not very robust and still
shows some discrepancies (against Simplex) when routes go over routing
channels or through low routing-density regions.  Also, PhysOpt does not
currently handle non-minimum width metals (which probably doesn't affect
most customers, but does impact the wire spacing/sizing crowd and those who
have to do a lot of wire sizing for electromigration).

We've also sent this testcase to other physical synthesis vendors.  Politics
won't let me name names, but here are our non-specific experiences.

One of them punted, claiming that they only wanted to do our design flat and
that it was unroutable.  Another had some pretty impressive results, but
their clock skew numbers were out in left field, and their power consumption
was quite high.  That vendor is working to enhance their tool to resolve
those issues.  Time will tell how competitive they will be to PhysOpt.  A
third vendor (whose offering really amounts to a tool which they've had in
the market for some time) claimed good results, but upon analysis it turned
out their timing engine was spitting out bad results everywhere.  The fourth
large player in this area didn't even warrant an evaluation.  (We've
evaluated their 'synthesis' tools in the past and found such poor results,
it wasn't worth our time.)

> Why is there no mention of the different timing engines with the Synopsys
> flow?  For instance, Design Compiler uses DesignTime, yet PrimeTime is the
> sign-off engine, and I'm not sure what the timing engine is in PhysOpt or
> Chip Architect.
>
>     - Donna Rigali
>       Cadence

Another great feature of using PhysOpt (maybe I'll get a free shirt for this
one, John) is that you have essentially the same timing engine throughout
your design flow.  We've never had any major timing consistency issues
between PrimeTime and DesignTime (although PT seems to handle tieoff
propagation differently than DT in some cases).  This solves a lot of
problems we've had in the past (and are still having) getting agreement
between 'other' timing engines and PrimeTime.

   - Anthony Hill
     Texas Instruments                            Dallas, TX


( ESNUG 344 Item 3 ) --------------------------------------------- [2/00]

From: Ori Chalak <ori.chalak@spd.analog.com>
Subject: Anyone Know Of An LVS Tool That Can Generate DEF Output Files?

Does anyone know of an LVS tool that can generate DEF ?

    - Ori Chalak
      Analog Devices Israel                      Herzlia, Israel


( ESNUG 344 Item 4 ) --------------------------------------------- [2/00]

From: Hans-Juergen Brand <hans-juergen.brand@amd.com>
Subject: Upcoming Verisity/Specman/"e" User Group Meeting On April 12 - 13

Hi John,

Since there's been so much discussion recently on ESNUG about verification,
I thought your readers might like to know that Verisity has its first Users
Group meeting scheduled in California for April 12 and 13.  It's supposed
to be very technical and most of the presentations are from users.  You can
register from their website, http://www.verisity.com/

    - Hans-Juergen Brand
      AMD Saxony Manufacturing GmbH               Dresden, Germany


( ESNUG 344 Item 5 ) --------------------------------------------- [2/00]

Subject: ( ESNUG 335 #1 )  The PhysOpt-With-Cadence-Backend Matrox Tapeout

> We were able to knock off about 3 to 4 weeks in our layout process by using
> the new PhysOpt flow.  In addition, our flow became more streamlined. 
> That is, we already had a flow in place using Design Compiler and
> Primetime to specify timing constraints and to run back annotation.  Our
> annotated db could then be directly fed to PhysOpt without having to
> translate everything back into the Avanti database for each iteration
> like we did with our old design flow.
>
>     - Bob Prevett, Design Engineer
>       NVIDIA                                       Santa Clara, CA


From: David Romanauskas <dromanau@matrox.com>

Hi, John,

I liked Bob Prevett's review in ESNUG 335 #1 of PhysOpt.  I see he used it
in a predominantly Avanti backend tool flow.  I'm about to leave Matrox and
join a new start-up, so I thought I'd send you a review of what it was like
to use PhysOpt in a mostly Cadence backend tool flow.

First off, I used PhysOpt from the very early alpha code days, so I got to
watch a lot of the specific commands change dramatically over time.
Our goal with PhysOpt was to save design cycle time without sacrificing
performance (area/timing).

Our designs were being fabbed in a 0.18u process using a COT flow for a
multi-million gate chip.  The chip involved multiple clocks with speeds up
to 360 MHz and datapaths up to 256 bits wide.

Our Old DC Design Flow
======================

Here's the generic design flow we had prior to PhysOpt:

   Synthesis (Design Compiler)
          |
   Floorplanning (Cadence LDP)
          |
   Detailed Placement (Cadence Qplace)
          |
   Routing (Cadence WarpRoute)
          |
   RC extraction (Simplex QX)
          |
   Static Timing Analysis (PrimeTime)
          |
   Back-Annotate timing into DC
          |
   Return to synthesis (using custom wireload models)

We would usually iterate 5 to 15 times through this flow before we got
timing closure to our specs.  Our final synthesis pass would include:

      - Clock tree insertion (Cadence CTGen)
      - Scan Insertion (Sunrise)
      - Routing (Warproute)
      - RC extraction (Simplex QX)
      - timing analysis (PrimeTime)

Once we had a fully routed netlist with everything included we would begin
an iterative ECO process to insert repeaters (buffers) for the paths that
went between the blocks.  This was to take care of long, heavily loaded
nets that were not necessarily having timing problems.  We had adopted a
method to budget interconnect times between blocks that helped avoid
inter-block timing problems, so the repeaters were mainly added to ensure
that the signals had good transitions.


Our New PhysOpt Design Flow
===========================

Our new design flow with PhysOpt was almost the same except with less work
involved.  PhysOpt just dropped into our existing flow replacing the
detailed cell placement.  Little else changed, except for the dramatic drop
in the number of iterations required to converge on timing.

Floorplanning was significantly easier in the PhysOpt flow, too.  Normally
we would floorplan regions down to very small modules.  This time we only
required a top level floorplan and the placement of the hard macros such
as RAM blocks.

Synthesis at this stage became a standard 2-step compile.  For example:

      physopt -effort medium -congestion
      physopt -effort high -incremental

Routing was done with Warproute and RC extraction with Simplex QX.  For our
final synthesis we added clock trees and scan chains at the module level,
then performed an incremental PhysOpt run to fix any small timing problems
that may have appeared.  We still performed an IPO cycle to fix the paths
between modules (and the new scan chains, for hold fixing), but there was
no need to buffer the internal module paths since PhysOpt had taken care
of this.

The chip was then assembled and we performed repeater insertion between
blocks as in our old flow.


What We Liked
=============

The two flows may appear similar but there were some important differences.
The new PhysOpt flow only required ONE pass through P&R to achieve timing.
In our old flow, we would iterate 5 to 15 times to get timing closure and
it involved moving data through 8 different tools.  With the new PhysOpt
flow we completely avoided this type of running back and forth between
back-end and front-end trying to converge.  This easily saved us 4 weeks.

Design data was easier to exchange.  I can't stress that enough.  We
reduced the number of times we had to pass different files and formats
between tools, since it only occurs when going to clock tree insertion
and scan reordering now.  We liked staying in a DC based (TCL) environment
with just a few extra commands.  It made PhysOpt easy to learn and easy to
use for us.  Because of this, I estimate it would take a designer familiar
with back-end tools no more than an afternoon to get up and running with
PhysOpt.

The other thing we liked was that we no longer had to spend a lot of time
generating custom wire-load models.  We used to generate a unique model for
each and every module within the design, but now we only took the time to
generate one for the whole chip.  We found that the initial
synthesis results with this wire load model provided a good enough starting
point for PhysOpt to complete the job and close timing. Since PhysOpt knows
the exact placement of each instance, it uses a quick global route to
estimate the wire delay.


Gotchas
=======

As expected for early alpha code, there were some glitches.  Synopsys
addressed all of those issues to our satisfaction.

We would have preferred to do clock-tree synthesis and scan-chain
re-ordering within PhysOpt.  Synopsys agreed these were necessary
features.

Doing synthesis and placement takes a lot of CPU time.  Most of our blocks
ran in 2-24 hours, but we had one large nasty block which took over 80
hours to complete because we didn't take a true hierarchical approach with
it.  (This block was over 1/4 of the entire chip in our multi-million gate
design.)

PhysOpt is exceptional, and really helps close timing quickly when all the
block information is represented in the context of the chip.  However, be
VERY careful you don't throw out the results you get hierarchically.

To understand this clearly, John, you need to understand the approach we
took.  From a placement view, we developed all our blocks hierarchically
in the top level of the chip.  We then converged each block with local
routing within the block itself, then later stripped off all the routing
and then rerouted the entire chip flat.  Ouch.  What we saw with this flat
routing approach was that the global routes over modules sometimes
interfered with the local routes of those modules, throwing off the
estimates that PhysOpt made on those nets (sometimes by more than 1mm!).

This, of course, created some very painful timing closure difficulties for
us since for some nets the loads were greater than expected.

Our lesson learned was that a fully clean hierarchical methodology is the
only way to go if you want to use PhysOpt successfully.  This involves
providing visibility of the top-level routes to PhysOpt during its modular
run by pushing any nets that pass over the module into it to act as routing
obstructions.  When the module is routed, Warproute will respect these
obstructions, and the chip can then later be assembled keeping all routing
intact and avoiding the necessity to flatten and reroute from scratch.

The upside, once we learned this, was that we met our performance target
after the first pass.  That big 1/4 chip block only needed some minor
timing adjustments after applying the flat routing.  This final result
emphasized to us, John, that true hierarchy is the only way to go.

The other important caveat with PhysOpt is that you must be very careful
about how you assemble the chip.  After using PhysOpt to place each of the main
modules we let Warproute try to route the entire design.  This created many
headaches with the paths between blocks!!!  Warproute is not a Top Level
router, nor has it claimed to be one, so don't misuse it this way with
PhysOpt output or you'll regret it.

Next time, my plan would be to use PhysOpt in a completely hierarchical
fashion.  First synthesize and place, and then route all of my modules.
I'll then assemble it into a full chip using a real top level router.

When we began the project in early 1999 the RTL->placed-gates feature in
PhysOpt wasn't ready.  The group here at Matrox hopes to use this feature
in the future on upcoming projects.

    - David Romanauskas, Design Engineer
      Matrox                                   Montreal, Canada


( ESNUG 344 Item 6 ) --------------------------------------------- [2/00]

Subject: ( ESNUG 343 #9 )  Vera & Specman Are A Waste Of User Time & Money

> What are people's experiences with using the Linux version of Vera 4.1.3
> along with VCS 5.1?  Do they yield the same results (and performance)
> as their Unix versions?  We'll be doing our own evaluation, still it
> would be nice to know ahead of time how successful we'll be.
>
>     - Sherri Al-Ashari
>       Corvia Networks                          Sunnyvale, CA


From: Ernst Bernard <Ernst.Bernard@icn.siemens.de>

Hi John,

Sherri may be wasting her time.

I've followed your discussion about testbench environments like Vera and
Specman with much interest.  Quite a number of technical details concerning
these tools was provided and other approaches mentioned.  Good reading.

There is no question that the approach Vera and Specman pursue has quite a
number of benefits depending on the application.  However, testbench
designers would have to undergo the painful process of learning a new
language and mastering new semantics.  Having managed this, one can't be
sure of the alleged benefits for a reasonable amount of time.  In the
seemingly un-stoppable advent of high-level design and SoC, design
verification methodology will very probably change quite soon causing the
adoption effort not to pay off.  Also, the semantics of those proprietary
languages (Vera, Specman, QuickBench) are quite ambiguous w.r.t. timing
compared to well understood HDLs like Verilog or VHDL.

In our ASIC design center we continue therefore to stick to real HDLs (for
us, it's VHDL), though we took a quite close look at Vera and Specman.  For
complex chip-level testbenches, we use an in-house VHDL-based environment.
It may lack the convenience of a comfortable GUI but offers a set of very
useful synchronization mechanisms which can't be found in any other
approach.  Furthermore, the features of our generators and analyzers can't
be matched by commercial offerings.  And I dare say that in the field of
telecom, directed testing is still more productive than pseudo-random.

Besides this high-level approach, a lot of verification work is done at
block level where the main issue is quickly creating a testbench following
a common methodology.  There one doesn't want to bother with the subtleties
of any language, but one wants to create rapidly effective tests.  For this
we make use of a tool called BestBench by Diagonal Systems (Switzerland)
which enables a designer to rapidly generate analyzers and generators in a
hierarchical and systematic way but frees him/her from fumbling heavily with
VHDL.  The benefits of this tool for block level testbenches are surprising
and we didn't find them in any other tool (e.g. QuickBench).

Both approaches enable designers to quickly generate powerful testbenches
and allow them to work in their usual environment.

I'd be glad to hear other designers' opinions in ESNUG about standard
versus proprietary testbench languages.  We don't think they're
cost effective.

    - Ernst Bernard
      Siemens AG                               Munich, Germany


( ESNUG 344 Item 7 ) --------------------------------------------- [2/00]

Subject: ( ESNUG 342 #2 343 #1 )  More On The "set_dont_touch_network" Bug

> I want to thank Kayla for her observation on this issue.... I thought I
> was going crazy!!!  I've been compiling some blocks for a customer for a
> few months now and just recently I've noticed that since 99.10 that every
> once in a while it would NOT correctly listen to my set_dont_touch_network
> on a clock in a structural design....  The structural had 2 clocks, named
> CLK and CLK32FC.  The CLK32FC clock was correct while the CLK clock
> (redundant, I know...) was BUFFERED!  BUT, when I compiled this
> structural manually, the issue went away....  huh?
>
>     - Gzim Derti
>       Intrinsix Corp.                          Rochester, NY


From: Kayla R Klingman <kayla.r.klingman@tek.com>

Hi, John,

The following is directly from the Synopsys Support Center on logid 100418:

   The problem with set_dont_touch_network not working correctly in 99.10
   has been recently reported.  The STAR number for your reference is
   95299.  Here are the 2 possible Workarounds for the problem.

   Workaround #1

   Do NOT specify set_dont_touch_network at all.  Starting with DC 99.05,
   DC will infer the clock net as ideal.  By default, Design Compiler
   treats all clock networks as ideal nets.  Ideal nets are networks of
   nets that are free from max_capacitance and max_fanout design rule 
   constraints.  (For more info see command set_ideal_net)

   No buffers are added on the clock network. 

As a side note: the ideal_net attribute by default does not propagate
through gating logic.  So if you had a gated clock, you would have to
explicitly set the ideal_net attribute at the output of the gate.
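
To make that concrete, here's a dc_shell sketch of Workaround #1 (the port
and net names are made up; set_ideal_net itself is per the Support Center
note above):

```
/* Workaround #1 sketch -- hypothetical names */
create_clock -period 5.0 -name CLK [get_ports CLK]

/* DC 99.05+ already treats the clock network as ideal, but the
   ideal_net attribute stops at the clock-gating cell, so set it
   explicitly on the gate's output net: */
set_ideal_net [get_nets gated_clk_net]
```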

   Workaround #2

   There is another workaround provided by the R&D team.  You can keep the
   set_dont_touch_network in your script and set:

                     compile_map_for_delay = true

   This is a hidden variable, so use it with caution.  

I chose Workaround #1, since I am phobic of hidden variables that have no
documentation in SOLD.  I noticed a new Solvit article that says to use
Workaround #1, but doesn't mention the hidden variable.

I'm told they have fixed this bug in 1999.10-5 plus Star 95299 and 93201.
I have not been able to confirm this since our DesignWare libraries go toes
up with this release.

    - Kayla Klingman
      Tektronix, Inc.                           Somewhere, Oregon


( ESNUG 344 Item 8 ) --------------------------------------------- [2/00]

Subject: ( ESNUG 335 #9 )  Hey!  C++ Is The HW Design Language Of Choice!

> I shook my head in disbelief when I first read of the recent push to
> make C++ the basis for high-level synthesis tools.  C++ is totally
> unsuitable to this.  ...  C++ is a cumbersome, complicated language with
> myriad subtle pitfalls.  The language does so little to protect people
> from writing patently invalid programs that I can't image it's going to
> help people stay within the "synthesizable subset".   Large-scale
> structure is difficult to express and can be casually circumvented.
>
>     - Nick Okasinski, CAD Software Designer
>       SGI High Performance Microprocessor Group


From: Bill Steinmetz <bills@cisco.com>

I'm dumbfounded by some of the recent editorials in ESNUG regarding "C++
as an HDL language sucks".  This statement could not be further from the
truth.  In fact, reading such neanderthalic rhetoric has incensed me to
write to ESNUG.  Let me first refute some of the comments made and then
highlight points that make "C++ the HDL language of choice".  I've also
added a C++ up-down counter example to provide support to some of
my claims.  Let me further clarify that my comments regarding C++ as an
HDL are derived from my experience with the CynApps library (Cynlib).

 
> "I agree with Geir," John Reynolds of Intel quickly replied, "Evaluate
> some of the tools and you will quickly see how restricted you are and the
> things you cannot model.  The observation that one engineer had around
> here when discussing the C/C++ vs. HDL argument was that the more you
> restrict C++ by using class templates, etc., the more you shave the
> language so that you can synthesize it, the more funky crap you add into
> the language to simulate concurrency already found in HDLs, the more your
> 'language' approaches a Verilog or VHDL!"

What is wrong with this?  CynApps provides a library of macros that
makes the language look very much like Verilog.  Designers, familiar with
Verilog, can learn to use the tool quickly.  New designers can leverage
old Verilog constructs as well as take advantage of powerful C++
constructs.  Designers can have the best of both worlds.


> I still think it's a dumb idea...  I was in a conference call yesterday
> with the folks from Synopsys talking about this "SystemC" stuff.  Some
> parts I like, but others are just too much trouble -- you'd might as well
> just learn VHDL or Verilog (so my theory is confirmed, once again :).
>
>     - John Reynolds
>       Intel

The problem with remarks like the one above is that they strictly focus
on the design effort.  Does anyone read white papers on ASIC design?
Today's ASIC design projects are bottlenecked by verification.  C++ is
the perfect language for verification.  In fact, many groups already use
C++/Verilog co-simulation environments.  The problem is that this process
is slow because C++ communicates with Verilog through slow PLI and socket
calls.


> They are creating a new language, Superlog, which is essentially an
> evolution of Verilog combined with C.
>
>     - Anders Nordstrom
>       Nortel Networks 

The problem with Superlog is that it's a new language.  Most designers
know C today.  Why not take advantage of a universal language like C++
and use it for both its ability to look like Verilog as well as its 
verification and design extensibility capabilities?

Here's a simple Cyn++ up-down counter example, used in a design contest
sponsored by you, John, in 1995, to show how C++ can resemble Verilog.

   Module up_down(In<1> clk,  In<1> up,  In<1> down, In<9> data_in,
                  Out<1> parity_out, Out<1> carry_out,
                  Out<1> borrow_out, Out<9> count_out)

      Uint<10> cnt_up, cnt_dn;
      Uint<9>  count_nxt;
      Uint<1>  load;

      Always( Posedge(clk) ) {
         cnt_dn = count_out - 5;
         cnt_up = count_out + 3;
         load = 1;

         switch( (up,down) ) {      // (up,down) concatenates the two bits
            case(0):                // 00: synchronous load
                count_nxt = data_in;
                break;

            case(1):                // 01: count down by 5
                count_nxt = cnt_dn;
                break;

            case(2):                // 10: count up by 3
                count_nxt = cnt_up;
                break;

            case(3):                // 11: hold
                load = 0;
                break;
         }

         if( load ) {
            parity_out <<= CynRedXor(count_nxt);   // reduction XOR
            carry_out  <<= up & cnt_up(9);
            borrow_out <<= down & cnt_dn(9);
            count_out  <<= count_nxt;
         }
      }
   EndModule


Now a summary of reasons why to use C++ as a HW design language:

  1) No new language to learn.  Most engineers already know how to
     program in C/C++.

  2) With the proper macro libraries, the language looks just like
     Verilog, making the transition from Verilog to C++ easy.

  3) One language for both design and verification.

  4) Faster simulation by eliminating the PLI bottleneck.

  5) Free simulations!!

  6) Simply, a well structured, powerful, mature language.

The only reasons that come to mind against C++ are the maturity level of
the synthesis tools and the integration of C++ with other EDA tools.  The
first is already being solved.  The second will evolve over time.

    - Bill Steinmetz
      Cisco Systems


( ESNUG 344 Item 9 ) --------------------------------------------- [2/00]

From: Gregg Lahti <gregg.d.lahti@intel.com>
Subject: Ouch!  We Got Burned 'Cause Design Analyzer Doesn't Support TCL!

Hi, John.

Oh Great!  We get our entire synthesis environment setup for TCL-based
scripting (DC, PT, etc) and find out that Design Analyzer doesn't support
TCL.  This really blows chunks, considering there is no elegant way to
automagically determine if you would like TCL or DC-flavored usage in your
~/.synopsys_dc.setup file.  Hence any DA usage must now have a separate
kludge to get the library paths, environment setups, etc to work!  Yuck!

Rumours say that Synopsys is re-writing the GUI interface of Design Analyzer
to be TCL-friendly, but that isn't available now when I need it.  Now why
can't Synopsys release tools (DA & DC) that at least work together in a nice
fashion?

    - Gregg Lahti
      Intel Corp                                    Chandler, AZ


( ESNUG 344 Item 10 ) -------------------------------------------- [2/00]

From: [ Norman de Plume ]
Subject: Synopsys Doesn't Support SDF COND (Even Though They Claim They Do)

Hi John, (please keep me anon)

I'm developing a Synopsys .lib model for a complex RAM cell that doesn't
have all the traditional built-in latch functions.  Because of this, the
.lib has several SDF COND branches, which I haven't been able to simulate
properly.  I don't know if many of your other readers use SDF COND branches,
John, but DC 99.10 was the first release to support SDF COND TIMINGCHECKs
(sort of).  This is from Synopsys:

  This is regarding "conditional timing checks cannot be read into the
  simulation tool (Verilog-XL)".

  I have duplicated your problem, and I have filed a bug report on your
  behalf.  The bug report number is STAR xxxxx.  You are absolutely
  correct.  The conditional timing checks written out by the tool do
  not follow the SDF v2.1 specification.

  The workaround is to modify the resulting SDF manually, or to use
  PrimeTime to generate the file.
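For readers who haven't used them, a conditional timing check in an SDF
v2.1 file looks something like this.  The cell, instance, signal names, and
delay values below are made up purely for illustration:

```
(CELL
  (CELLTYPE "ram_cell")
  (INSTANCE top.u_ram)
  (TIMINGCHECK
    (SETUPHOLD (COND write_en (posedge data)) (posedge clk) (0.42) (0.31))
  )
)
```

The COND wrapper is what makes the setup/hold check apply only when
write_en is true; it's exactly this construct that DC 99.10 writes out in a
form the spec doesn't allow.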

Just thought I'd pass this along to your readers.

    - [ Norman de Plume ]


( ESNUG 344 Item 11 ) -------------------------------------------- [2/00]

From: John Cooley <jcooley@world.std.com>
Subject: Wally Accidentally "Outs" Mentor Into The Physical Synthesis Race

My jaw dropped when I saw this buried on page 10 of the Feb. 14th issue of
EE Times in what was supposed to be a Mentor "Stream View" announcement:

   Mentor president and chief executive Wally Rhines said Tuesday (Feb. 8)
   that the company is going to jump into the physical design market
   with its own timing-driven placement tool called TeraPlace.  Mentor
   quietly purchased TeraPlace two years ago from CLK CAD and since then
   has been busy making the tool ready for prime time.

   TeraPlace, developed by Chung-Kuan Cheng, professor of computer science
   and engineering at the University of California at San Diego, was used
   as the placement engine of SVR's SonIC 3.0.

   Rhines, who tipped the tool direction at last year's Design Automation
   and Test Europe conference in Munich, declined to give technical details
   of the revamped TeraPlace but said the tool incorporates fast synthesis
   technology, embedded SST Velocity static timing analysis integrated with
   the CLK CAD placement technology.  ...

   Rhines said the tool, which is running at beta customer sites, will fit
   into the current physical design flow and will run with logic synthesis
   and perform timing driven placement, leaving the routing to traditional
   P&R tools from Cadence and Avanti.

Wally had accidentally mentioned TeraPlace at the "Stream View" press event
and quick-on-the-uptake Mike Santarini of EE Times caught Wally's error.
(How I know this: when I phoned Wally about TeraPlace, he said, "I can't
tell you any details about it because I wasn't supposed to announce it then.
My TeraPlace marketing people are sort of mad at me right now.")  I later
confirmed the slip with the TeraPlace marketing guys.

My congrats on catching a great scoop, Mike!  Cool.

    - John Cooley
      the ESNUG guy


( ESNUG 344 Item 12 ) -------------------------------------------- [2/00]

From: Robert Wood <rwood@spacebridge.com>
Subject: 2-D VHDL Arrays Don't Make It When DC 99.10 Writes Out Verilog

One of our designers has reported a problem with version 99.10 of Design
Compiler.  The problem occurs when writing out Verilog from VHDL source,
and I believe it involves 2-dimensional array signals.  Has anyone else hit
this problem?  Given that it's a weird construct to start with, I'm a) not
surprised, and b) wondering if I'm all alone.
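For the curious, the sort of construct I mean looks roughly like this (a
made-up example, not our actual code):

```vhdl
type word_array is array (0 to 3) of std_logic_vector(7 downto 0);
signal regbank : word_array;   -- an array of vectors: effectively 2-D
```

Verilog-1995 has no direct equivalent for such a signal on a port (memories
can't be ports), which may be part of why the Verilog writer chokes on it.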

    - Robert Wood
      Spacebridge Networks


( ESNUG 344 Item 13 ) -------------------------------------------- [2/00]

From: [ A Synopsys VCS CAE ]
Subject: Overloading Verilog System Tasks (Like $finish & $monitor) In VCS

Hi, John,

I'm a VCS CAE and I thought I'd share a trick for overloading Verilog
system tasks with your readers.

As you may be aware, the Verilog language has many predefined system tasks;
some of which are non-standard or part of legacy code.  Not all simulators
support all such tasks, but in VCS there is an easy way to get rid of such
tasks, or to enhance standard system tasks (like $display, $finish,
$monitor) by overloading them.

Take, for example, $disable_warning.  This is a non-standard task (there are
many like it: $scope, $showvars, $showscopes, etc.).  To run a design that
contains this task, create a .tab (table) file with the following line:

    disable.tab:
    ==

    $disable_warning

    ==

Then run VCS with the -P disable.tab option as follows:

    vcs -P disable.tab <other options>


Have you ever run into a problem with an old testbench where you just can't
figure out which $finish is terminating your simulation?  Wouldn't it be
nice if each $finish could output a custom message to make debugging
easier?  There is an easy way to do just that: overload the $finish task so
you can see which condition triggered it (or which $finish, among the
hundreds in your design, actually executed).  Here's how:

    finish.v
    ==

    // Comment and uncomment alternately one of the two $finish calls for this demo

    module finish;

    lower l1();

    //initial #10 $finish;

    endmodule

    module lower;

    initial #10 $finish;

    endmodule

Here's where I create my_finish:

    finish.c
    ==
    #include <stdio.h>
    #include "acc_user.h"
    #include "vcsuser.h"

    int my_finish()
    {
        s_location s_loc;
        p_location loc_p = &s_loc;
        handle h1;

        h1 = acc_handle_tfinst();       /* handle to this task instance */

        acc_fetch_location(loc_p, h1);  /* get the filename and line_no */
        if (!acc_error_flag)            /* on success */
            io_printf("Object located in file %s on line %d\n",
                      loc_p->filename, loc_p->line_no);

        tf_dostop();  /* you can also do a tf_dofinish() here, or
                         call your own routine */
        return 0;
    }

Here's the finish table itself:

    finish.tab

    ==

    $finish call=my_finish acc+=r:*


Compile as follows:

    vcs -P finish.tab finish.c finish.v -R

The output of the above should look something like:

    Chronologic VCS simulator copyright 1991-1999
    Contains Synopsys proprietary information.
    Compiler version 5.1_Beta3; Runtime version 5.1_Beta3; Jan 10 17:19 2000

    Object located in file finish.v on line 11 
    $stop at time 10 Scope: finish.l1 File: finish.v Line: 11
    cli_0 > 

You can use this type of overloading on many other Verilog system tasks.
If your simulation testbench outputs too many $monitor messages, here is
an easy way of suppressing them: overload the $monitor system task with a
VCS .tab file.

Create a monitor.tab file (or add the following line to an existing .tab
file) with one statement:

    monitor.tab

    ==

    $monitor


Then link this file with the rest of your VCS compile options as follows:

    vcs -P monitor.tab <rest of the old options>

and run the simulation.  Those pesky monitor messages are gone.  This
method is a bit better than the "simv -l simv.log > /dev/null" method,
because VCS will actually run faster: it doesn't have to do any work when
it hits a $monitor.

In summary, using a .tab file to overload (or replace) Verilog system
tasks is very easy, and its uses are limitless.

    - [ A Synopsys VCS CAE ]



"Relax. This is a discussion. Anything said here is just one engineer's opinion. Email in your dissenting letter and it'll be published, too."
This Web Site Is Modified Every 2-3 Days
Copyright 1991-2024 John Cooley.  All Rights Reserved.
| Contact John Cooley | Webmaster | Legal | Feedback Form |

   !!!     "It's not a BUG,
  /o o\  /  it's a FEATURE!"
 (  >  )
  \ - / 
  _] [_     (jcooley 1991)