Editor's Note: "Movies I hate that EVERYONE loves?"  That's easy.
  "Titanic" -- a movie about a terribly self-absorbed woman ("Rose") who was
  out to take advantage of anyone around her & disguised her selfishness as
  "emotional angst" and "being trapped by circumstance".  Rose loves seeing
  herself as a "victim" to justify duping some rich guy into proposing
  marriage to her & giving her a luxury cruise across the Atlantic.  When she
  tires of him, she takes up with some guy on the boat who catches her fancy
  and takes him for a ride, playing him off against her fiance.  In the end,
  even the new guy dies for Rose's shenanigans.  I know many women loved this
  film & thought it was "romantic" -- I guess that's the heart of romance for
  many women -- seeing what they can get out of men.  For me, it was one
  lame movie -- just like that "Bridges of Madison County" where wifey gets
  _bored_ with loyal hubby so she flings with wandering photographer and that
  "English Patient" movie where wifey flings with an English playboy
  because she's _bored_ with her loyal hubby.  And all of these sleazy,
  cheesy movies get nominated for Oscars & are big box-office
  hits?!!  Geez!  (I dunno...  I guess it's just a guy thing...)

                                           - John Cooley
                                             the ESNUG guy

( ESNUG 300 Item 1 ) ---------------------------------------------- [10/7/98]

Subject: ( ESNUG 299 #2 )  Crappy Chip Express Documentation Cost Us

> Chip Express is, IMHO, composed of rank amateurs and completely
> mis-documented their vector format.  Now we need to spin another 2 days
> worth of effort to fix our test bench to re-generate the vectors in a
> completely different mode than what they originally told us.  Arrgh.
> Probably don't want to post this w/ my name on it, but I figured you'd be
> interested.  I will make damn sure that my company never uses them again.
>
>   - [ "FedEx, They Ain't." ]


From: "Keith Klingler" <keith_klingler@truevision.com>

John,

I recently did a design with Chip Express for which the design handoff was
very smooth. I had no problem with the test vectors.  Perhaps ["FedEx, They
Ain't"] could give a few details about what was mis-documented.  This
information may prove useful to those who, like me, plan to use Chip Express
the future, notwithstanding the humble opinion of ["FedEx, They Ain't"].

  - Keith Klingler
    Truevision, Inc.


( ESNUG 300 Item 2 ) ---------------------------------------------- [10/7/98]

From: Gary Cook <gc@adv.sonybpe.com>
Subject: What's The Technique To Handle Hierarchical Capacitance Problems?

John,

I was just wondering if you could answer a quick question for me.  I'm having
problems with violating capacitance constraints on nets in my hierarchical
design.  Performing 

     report_constraint -max_capacitance -all_violators

gives a number of violators, some of which I have resolved by doing set_load
on those ports ... others just won't budge whatever I do.  I've tried
characterizing and re-reporting, characterizing and re-synthing but nothing
works.  Can you tell me what the recommended procedure is to resolve these
types of violations?

  - Gary Cook
    Sony Broadcast


( ESNUG 300 Item 3 ) ---------------------------------------------- [10/7/98]

From: "Shankar Hemmady" <hemmady@gurutech.com>
Subject: Some Honest Comments On The Current State Of The Test World

John,

I have been a software developer (non-EDA), an EDA tool developer as well
as a chip verification/test engineer.  It is very hard for me to take sides
without seeing all three areas, which now get amalgamated in high-level,
IP-based, deep submicron design with an emphasis on time-to-market,
time-to-volume etc.  (Did I get all the buzzwords in one sentence?)

Here's my take on the current state of the test world:

Fault Simulation 
----------------

In my opinion, stand-alone fault simulators will not last long in
the current chip development process.  There are several companies that
use fault simulation as their only manufacturing test tool.  For today's
million gate chips, getting a decent (90%+) stuck-at-fault coverage with
functional vectors alone is a rather tedious and clumsy process.

It is almost impossible to think of a future with several embedded cores
(either third party, or legacy blocks within a vertically integrated company)
and tons of memory, FIFOs, control logic, long data paths, mixed-signal
blocks, etc., where functional test vectors will contribute over 60%
coverage.  Also, it should be noted that easy-to-test core logic suddenly
becomes difficult to test when it is embedded within a larger system-chip.

My recommendation for fault simulation tools is to use them frugally.  Use them
only when you have no other choices -- where your cost, speed or other
constraints force you to have a non-scan, non-BIST, non-testable design.  Or
you have legacy blocks which you do not want to touch for pragmatic reasons
such as not messing them up and missing the market window.  Or you have
simple chips where it is easier to get a good grip on the manufacturing
coverage without using any DFT techniques.  (Other than large memory chips
and certain programmable logic, I can't think of many such chips.)  Even in
such cases, try to see if you can make some use of other DFT tools, as
little as you may want to use them, to alleviate some of the major test
bottlenecks you will face with functional test alone.

Which tools should you use for Fault Simulation?  Use the ones that are
easiest to use with your current functional test development environment.
If you use Verilog, try to go for Verilog-based tools.  If you use VHDL in
a particular flavor, see if your simulation vendor also has a fault
simulation solution.  The speeds of fault simulators vary significantly
depending on the algorithms they use.  But more often than not, setting up
the FS is a harder problem than its raw speed.  Therefore, it is important to
ensure that you can fault simulate your vectors as easily as you can create
them.  I do
not worry about "full-timing" fault simulation and other esoteric
capabilities.  In my experience, it does not contribute a whole lot except
good press.  Most of the folks I know really use them for stuck-at fault
testing, and generally use them on synchronous or almost-fully-synchronous
logic.  Even those that buy full-timing, mixed-level fault simulators end
up using them at the gate-level (possibly with RTL or testbench wrappers)
in the functional test mode.

I won't recommend any vendor names for FS tools since companies that make
money on these tools seldom last long, tending to drop out of the business
over time.  If you intend to use a substantial amount of scan
or other DFT logic, you may want to consider using the Fault Simulators in
the DFT environment.  You may have to work exclusively in a gate-level,
non-timing environment.  But it may be worth the effort if your scan logic
is substantial.


Scan-Based Test & DFT Tools 
---------------------------

These are currently the most popular type of manufacturing test tools.  One
of the earliest successful sets of "Test Synthesis" and ATPG tools was sold
by Synopsys -- Test Compiler.  Although much complained about, IMHO the
Test Compiler tools did a reasonably good job with full scan, even for an
uninterested design team and an equally uninterested test/product
engineering team.  They have had their problems in more complex chip test
schemes, such as designs using low levels of scan.

Historically, Sunrise was the first in this area to market the idea of
"partial scan and 100% coverage for dummies".  After a while, the story was
prudently changed to "almost full scan that works really well".
well".  Having been a part of the engineering team and applications team at
Sunrise in my past life, I am quite happy to say that the tool has worked
quite well in most chip design/test configurations and has withstood the
most esoteric demands of design engineers.

But it has never been an easy-to-use tool.  In many cases, I found that the
Synopsys Test Compiler was easier to use if all that I wanted was full scan
in a "testable" design environment (little asynchronous logic, plenty of
controllability and observability hooks, etc).  But like a good old pickup,
Sunrise has a lot of raw power, which if harnessed properly could deliver
good coverage.  The first time around with Sunrise has usually been support
intensive for most customers.  Personally, I have used the Sunrise tools the
most, and being conversant with the idiosyncrasies of partial scan and the
tricks you can use with Sunrise helps a lot.


Mentor's FastScan and FlexTest tools, in my experience, seem to need less
support than the Sunrise tools.  Online documentation and tool maturity with
some of the larger chips have made these tools quite a bit more robust and
easy to use.  My experience with Mentor's tools is limited, but I certainly
like using them and they do a good job.


I have never participated in a competitive benchmark of any of the above
test tools.  To a large extent, it surprises me to see some customers worry
about the raw speed of running the tools and getting a high coverage with
minimal test logic. There are several competing criteria that a customer
*should* worry about:

  * What are your test goals? High stuck-at-fault coverage? Some IddQ
    coverage? Any speed testing requirements? What kind of tests do your
    customers want/require? What can your semiconductor vendor do for you?
    Can you get support from a third-party test vendor or a test expert?

  * What are your design and market constraints? Time to market?
    Manufacturing cost? Quality, reliability, fault-tolerance? Leveraging
    existing designs to create larger more complex chips? Is this just
    a sample production run or is it a high-volume production run? Would
    your customers rather have an expensive/slower but reliable chip,
    or are they simply looking for "cheaper, better, faster" chips? Who is
    paying for bad chips? You or your customer ;-)

  * What are your test constraints? What kind of testers do you want to use?
    Will they have scan features? Who will do the testing -- you or your
    semiconductor vendor?  What kind of board testing will be employed on
    this chip?

  * What kind of test methodology do you think is suitable for your chips?
    Can you afford to use scan in most of your blocks? How much test
    overhead can you live with? Do you intend to use BIST for your regular
    structures? Have you looked at BIST for any embedded legacy blocks? Have
    you thought about how you will take care of your mixed signal/analog
    blocks?  Do you have a good handle on some of your logic simply by
    leveraging your functional vectors? Do you have the time to use your
    functional vectors well? Do you intend to use test logic for easing
    board test issues -- such as boundary scan?

  * How test savvy are you? Do you have the time, the money and the interest
    in learning all about the test problems you currently face and are
    likely to face in the future? How much support can you expect from your
    EDA vendor, semiconductor vendor? How is your current test methodology
    going to affect any future spins of the chip? Are your current chips
    likely to shrink to cores or reusable blocks of embedded logic in a
    future design?

The reason I pose these questions is that, more often than not, the speed of
the tool is secondary to the design/test environment you are faced with.
You must give more importance to these points and ask your EDA vendor,
semiconductor vendor, test/tester vendor relevant questions before jumping
to conclusions based on some numerical benchmarks.  You *must* perform your
own evaluations and/or look for other chip design/test teams that use
methodologies akin to yours (most often other groups within your own
company, or other companies in your segment of the market).


BIST tools
----------

This area is still maturing.  Many large customers seem to have begun using
memory BIST for large embedded memories.  Most semi-vendors seem to have a
BIST solution for memories as well.  Worries over BIST overhead should
diminish as the sizes of the memories, and their use in difficult-to-test
areas within larger system chips, increase.

Logic BIST and BIST for analog blocks seem to be in their very early stages
of real usage.  Although I have heard of success stories from a couple of
vendors, I have yet to hear anything independently from design/test groups.

IMHO, BIST will become very important over time. It is another story whether
you will need to buy expensive tools to do it. Many of the tools insert BIST
logic at the RTL level (unlike scan tools which usually operate at the
gate-level or during the synthesis process). It is very likely that hard and
soft core vendors will provide testable logic over time using some amount
of scan and some BIST logic. At this time, much of the IP/core business
has many other problems to tackle such as legal, business and verification/
design methodology issues. It is my feeling that manufacturing test issues
will pop up over time as one of the impediments to the proliferation of
leverageable blocks of logic, therefore making it essential to add some
high-level testability early on.

Among BIST vendors, Logic Vision and Mentor seem to be the leaders, along
with half a dozen or so smaller players.


Boundary scan tools
-------------------

These are often sold in conjunction with scan tools or BIST tools.  You 
should certainly consider your board test environment and/or your customer's
test requirements before using boundary scan.  I would simply look for any
specific JTAG or BSDL compatibility issues you may be concerned about.


There are other test tools, such as test vector translation tools (TSSI);
virtual test or test environment emulation tools (IMS, Teradyne); and mixed-
signal test tools.  I could spend some time discussing their strengths
and deficiencies.  But it may not be useful to most ESNUG subscribers.
What do you think?

  - Shankar Hemmady
    Guru Technologies                        Cupertino, CA


( ESNUG 300 Item 4 ) ---------------------------------------------- [10/7/98]

From: Frederick Hinchliffe 2nd <fredh@tdf.com>
Subject: Synopsys VSS Only Supports VHDL 1987?!  No VHDL'93?  It's 1998 NOW!

John,

One of our customers popped up with a problem that was so outlandish I could
scarcely believe it.  We delivered to them a virtual component (aka "core")
written in VHDL RTL version 1993.  After a few days they came back
and said their VSS system could not process it because *it only comprehends
VHDL 1987*!  They were given these systems on very favorable terms by
Synopsys as an inducement to buy other Synopsys software.  The customer did
not want to use the beta VHDL'93 VSS that Synopsys has, for the obvious
reasons of nervousness, and they could not switch to Model Tech because they
were too far down the road with the rest of their system in VSS.

So a good fairy backed down our model to 1987 for them.

Is this for real?  Did Synopsys really do that to them?!?!

  - Frederick Hinchliffe
    Technical Data Freeway                         Concord, MA


( ESNUG 300 Item 5 ) ---------------------------------------------- [10/7/98]

From: sgolson@trilobyte.com (Steve Golson)
Subject: Experiences, New Scripts, & Benchmarks with DC'98

John,

Here are some notes on my early experiences with DC'98 (1998.02-2).

Compared to DC 3.4b, with *no* changes to scripts and constraints, I'm
getting cycle time reductions of about 12%, from 13.46ns to 11.80ns.
The area increases about 9% overall. Total run time reduced 22%, from
25.5 hours to 20 hours. (Some modules ran much faster.)

This design has

 35717 lines of Verilog
   197 Verilog source files
   193 dc_shell scripts, conventional bottom-up module compilation
   273 'compile' commands
 ~450k gates


set_input_delay on clocks
-------------------------

DC'98 supports applying set_input_delay on clock inputs. In previous
versions this input delay was ignored. Now, the delay is added to the
clock network delay, but only when you propagate your clocks (using
set_clock_skew -propagated).

So be careful if you do something like

   create_clock -period 10 find(pin,CLK)
   set_input_delay 4 -clock CLK all_inputs()

because you will inadvertently set an input delay on port CLK. You
will get some informational TIM-112 and TIM-113 messages when you do
this, but only if you propagate your clocks.

You will get some UID-402 warnings if you try to set the input delay
relative to another clock (or no clock).

This feature is very useful when you have a skew-control PLL and you
are trying to back-annotate onto your clock network. You have to
propagate your clocks to correctly model clock skew, but how do you
account for the PLL insertion delay compensation? The PLL is
effectively acting as a *negative* delay. So we can model this
as follows:

    create_clock -period 10 find(port,CLK)
    set_clock_skew -propagated find(clock,CLK)
    set_input_delay -3.82 -clock CLK find(port,CLK)

In this case 3.82 is the delay from PLL clock input, through the clock
buffer tree, to the PLL reference input. This is the delay that will
be removed by the skew-control PLL. You have to get this number via
timing analysis, and it will be different for min/typ/max conditions.

Generally you try to have your PLL reference delay be near the
midpoint of your clock buffer tree skew distribution. This means that
some flops will be clocked *before* the external clock edge, in effect
an overall negative clock insertion delay for that flop. The timing
report for such a flop shows a negative clock network delay:

  Startpoint: q_reg (rising edge-triggered flip-flop clocked by clk)
  Endpoint: q (output port clocked by clk)
  Path Group: clk
  Path Type: max

  Point                                    Incr       Path
  -----------------------------------------------------------
  clock clk (rise edge)                    0.00       0.00
  clock network delay (propagated)        -0.82      -0.82
  q_reg/CK (DFF)                           0.00      -0.82 r
  q_reg/D (DFF)                            1.29       0.47 r
  U9/Y (BUFX1)                             0.08       0.55 f
  q (out)                                  0.00       0.55 f
  data arrival time                                   0.55

  clock clk (rise edge)                   10.00      10.00
  clock network delay (propagated)         0.00      10.00
  output external delay                   -8.00       2.00
  data required time                                  2.00
  -----------------------------------------------------------
  data required time                                  2.00
  data arrival time                                  -0.55
  -----------------------------------------------------------
  slack (MET)                                         1.45


Back-annotation with SDF
------------------------

SDF files specify delays with triplets that give the min, typ, and max
values for a particular delay. For example

   (23:34:45)

would represent a minimum delay of 23, typical of 34, and a maximum of 45.
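
For readers newer to SDF, the convention is easy to pick apart mechanically.
A throwaway sketch in Python (an illustration only; it accepts ordinary
real-number forms but does not handle SDF's optional empty triplet fields):

```python
import re

def parse_triplet(text):
    """Split an SDF delay triplet like '(23:34:45)' into
    (min, typ, max) floats.  Empty fields, e.g. '(23::45)',
    are not handled."""
    m = re.match(r'\(\s*([\d.eE+-]+)\s*:\s*([\d.eE+-]+)\s*:\s*([\d.eE+-]+)\s*\)',
                 text)
    if not m:
        raise ValueError("not an SDF triplet: %r" % text)
    return tuple(float(v) for v in m.groups())
```

So parse_triplet("(23:34:45)") gives (23.0, 34.0, 45.0).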

In previous versions of Design Compiler, when you back-annotate the
delays from an SDF file using the read_timing command, you could
control which values of the triplet were annotated using four dc_shell
variables:

  sdfin_fall_cell_delay_type
  sdfin_fall_net_delay_type
  sdfin_rise_cell_delay_type
  sdfin_rise_net_delay_type

Setting these to "minimum", "typical", or "maximum" (the default)
would select the particular triplet element.

DC'98 has a nifty new command 'set_min_library' which allows compile
to optimize for max delays and min delays simultaneously. You can also
specify different wire loads and operating conditions to be used for
min and max delays. This is great for compile, but how does it affect
back-annotation?

'read_timing' no longer uses these variables. Instead there are two
new options for read_timing. Here is what the man page says:

          -min_triplet min_triplet_name
                         Specifies which value from the SDF delay
                         triplet should be annotated for minimum
                         delay analysis. By default, no delays
                         are annotated for minimum delay analysis
                         and as a result, the same delay values
                         that were annotated for maximum delay
                         analysis are also used for minimum
                         delays. The legal values for
                         min_triplet_name are "none" (the
                         default), "minimum" (use the leftmost
                         delay triplet), or "typical" (use the
                         middle delay triplet).

          -max_triplet max_triplet_name
                         Specifies which value from the SDF delay
                         triplet to annotate for maximum delay
                         analysis. By default, the rightmost
                         delay value in each triplet is annotated
                         for maximum delay analysis. The legal
                         values for max_triplet_name are
                         "maximum" (the default), "typical" (use
                         the middle delay triplet), or "none"
                         (don't annotate delays for maximum delay
                         analysis).

It also understands the names "max", "typ", and "min".

Here's how to get both max and min timing to use the typical triplet:

  read_timing -min_triplet "typical" -max_triplet "typical" your_sdf_file

and how to get both max and min timing to use the maximum triplet:

  read_timing -min_triplet "none" -max_triplet "maximum" your_sdf_file

But you cannot get both max and min timing to use the minimum triplet!
This is a bug (or a misfeature, take your pick).

My silicon vendor sends me an SDF file that has three values per
triplet, corresponding to best case, typical, and worst case timing
(over voltage, temperature, and process). So I need to do max and min
timing analysis using *each* element in the triplet. The new
read_timing syntax doesn't allow it. The -min_triplet and -max_triplet
options should *each* allow "minimum", "typical", "maximum".

If you try to use a pre-DC'98 script that sets these variables,
read_timing will automatically assume the appropriate values for
-min_triplet and -max_triplet. Here is a snippet from a DC log file
that back-annotates maximum (worst-case) delays:

  sdfin_fall_cell_delay_type = "maximum"
  "maximum"
  sdfin_fall_net_delay_type = "maximum"
  "maximum"
  sdfin_rise_cell_delay_type = "maximum"
  "maximum"
  sdfin_rise_net_delay_type = "maximum"
  "maximum"
  read_timing -load_delay cell -context verilog file.sdf
  Information: Reading 'maximum' values for 'maximum delay analysis'.
               (SDFN-16)
  1

You get similar results from "typical".  But if you try "minimum":

  sdfin_fall_cell_delay_type = "minimum"
  "minimum"
  sdfin_fall_net_delay_type = "minimum"
  "minimum"
  sdfin_rise_cell_delay_type = "minimum"
  "minimum"
  sdfin_rise_net_delay_type = "minimum"
  "minimum"
  read_timing -load_delay cell -context verilog file.sdf
  Error: 'minimum' is not a valid triplet name for max. (SDFN-20)
  0

Wonderful. My old scripts are now broken.

So how do I annotate minimum triplet values onto maximum delays?
Write a perl script!

Here is a perl script that modifies an SDF file to copy the leftmost
triplet element onto the other two elements. This allows me to do what
I want, which is to get the minimum triplet annotated onto both min
and max delays.


#! /usr/local/bin/perl
#
#     min_only
#
# When run on an SDF file, this script
# copies the min (leftmost) delay triplet value to
# the typ (middle) and max (rightmost) values
#
# If you have an empty line in the middle of a triplet,
# this script won't work! So go write your own perl.

$/ = "" ; # paragraph mode for multi-line matches

$rnum = '[e\-\+\.0-9]+' ; # real number

while (<>) {

    s?
        (\(\s*)    # $1 (
        ($rnum)    # $2 min value
        (\s*:\s*)  # $3 :
        ($rnum)    # $4 typ value
        (\s*:\s*)  # $5 :
        ($rnum)    # $6 max value
        (\s*\))    # $7 )

        ?$1$2$3$2$5$2$7?gsx ;

    }continue{

    print $_;

    }

###### end of perl script min_only


So now my dc_shell script looks like:

  sh min_only original.sdf > /tmp/hacked.sdf
  read_timing -min_triplet "min" -max_triplet "max" /tmp/hacked.sdf
  sh /bin/rm /tmp/hacked.sdf

and this gets the min triplet value annotated onto both the min and
max delays.
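
For the Perl-averse, here is roughly the same filter sketched in Python (not
a drop-in replacement: unlike the original it survives a newline inside a
triplet, but it collapses whitespace within each one, which SDF readers
should tolerate):

```python
import re

# An SDF real number, same character class as the Perl original.
RNUM = r'[e\-\+\.0-9]+'

# A full "(min : typ : max)" delay triplet.  \s matches newlines too,
# so a triplet broken across lines is still caught -- the one case
# the paragraph-mode Perl script warns it cannot handle.
TRIPLET = re.compile(r'\(\s*(%s)\s*:\s*%s\s*:\s*%s\s*\)' % (RNUM, RNUM, RNUM))

def min_only(sdf_text):
    """Copy the min (leftmost) value of every delay triplet onto the
    typ and max slots, e.g. (23:34:45) becomes (23:23:23).  Whitespace
    inside each triplet is collapsed in the process."""
    return TRIPLET.sub(lambda m: "({0}:{0}:{0})".format(m.group(1)), sdf_text)
```

Read the SDF file, run min_only over it, write the result out, and feed it
to read_timing the same way as the Perl version's output.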


More back-annotation with SDF
-----------------------------

Be especially careful when you propagate your clocks! Because of the
new simultaneous min/max timing capability, when running a setup
check, the source clock is propagated with max conditions and delays,
while the destination clock is propagated with min conditions and
delays!

Conversely, if you are running a hold check, the source clock is
propagated with min conditions and delays, and the destination clock
is propagated with max conditions and delays.

This is almost certainly *not* what you want. So:

1. If you are using the min/max timing feature, do *not* propagate
   your clocks

or

2. When you back-annotate timing and propagate your clocks, make sure
   the same values are annotated onto max and min delays. Use the
   read_timing examples I gave earlier, and use the perl script if
   necessary.

I haven't tried it, but there is a hidden variable to work around this
problem:

   remove_min_max_pessimism = "true"

Be sure to do an update_timing after you set this variable.  See
Solvit article Synthesis-304 for more information.

  - Steve Golson
    Trilobyte


( ESNUG 300 Item 6 ) ---------------------------------------------- [10/7/98]

Subject: ( ESNUG 299 #5 ) DC 98.08 - 3.4b Double "ANDs" W/ Resets Problem

> We are seeing that Design Compiler is adding double "AND" gates when it is
> not necessary.  The created gate level netlist will has two AND2 gates at
> the end of the path and both AND2's will have reset as one of the inputs.
> 
> 
>    +----+ sel_d1
>    |FLOP|-----------------+
>    +----+                 |
>                           |
>    +----+ vishnu_d1[0]  |\|
>    |FLOP|---------------| \
>    +----+               |M |      +----+       
>                         |U |------|    |     +----+
>    +----+ vishnu_d2[0]  |X |      |AND2|-----|    |     +----+
>    |FLOP|---------------| /    +--|    |     |AND2|-----|FLOP|
>    +----+               |/     |  +----+   +-|    |     +----+
>                                |           | +----+
>    +----+ soft_reset_10        |           |
>    |FLOP|----------------------+-----------+
>    +----+    


From: William Liao <wliao@vadem.com>

Hi, John,

I compiled the given code from this bug, and I did not have any problems.
In my case Design Compiler replaced the 2-to-1 mux and the two AND2 gates
with a complex 2-2 OR-NAND gate (two 2-input ORs followed by one 2-input
NAND).  I saw none of the double AND gates that this user complained about.
  - William Liao
    Vadem

         ----    ----    ----    ----    ----    ----   ----

From: [ Stuck Up Ganges Creek Without A Paddle ]

Hello John:

Please keep me anonymous.

I'm the engineer who submitted the original problem.

While debugging the above problem, we manually deleted the second AND
gate and did a report_constraint -all_violators. The thinking behind doing
so was to check if Design Compiler was adding the second AND gate to fix
some design rule.  No violations were reported by report_constraint.

We then added the following two lines to our synthesis script just
before the compile command.

        set_cost_priority -delay
        set_max_area <Integer>

where <Integer> is a number smaller than the area number reported in our
previous synthesis run.  This solved the problem!

We still don't understand, though, why Design Compiler adds an additional
AND gate and then removes it during Area Optimization.

Thanks for your help.

  - [ Stuck Up Ganges Creek Without A Paddle ]


( ESNUG 300 Item 7 ) ---------------------------------------------- [10/7/98]

Subject: ( ESNUG 299 #12 ) Exporting From Synopsys dc_shell To EPIC PowerMill

> I am using EPIC's PowerMill to estimate power usage for netlists which I
> have created through Synopsys dc_shell.  I would like to export the
> pre-layout parasitic values being calculated for each net from Synopsys's
> wire load models so that I can use them in the PowerMill run.
>
> I haven't been able to find a direct way of doing this, however.  The
> closest "approved" method I've seen of getting the parasitic values out
> from the dc_shell is using the "write_parasitics" command, which generates
> an SPEF-format file.  Unfortunately, the only format for a parasitics file
> which PowerMill accepts is DSPF, which is apparently incompatible.
>
> Is there an easy translation available for SPEF to DSPF, or is there a
> freeware/shareware/commercial product available which can do this?  (Or
> any other solution to this problem?)
>
>   - Kim Flowers
>     TransLogic Technology, Inc.


From: Kim Flowers <kimf@translogic.com>

John, I just wanted to follow up on my post to ESNUG.  One of my coworkers
ended up basically "grepping" (or using Perl) to extract the capacitance
values from the SPEF netlist and construct a PowerMill script with
"add_node_capacitance" commands.  This still doesn't address the resistance
values, which can have a significant effect (depending on the situation),
but we're living with it for now.
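
For anyone facing the same gap, the flavor of that extraction is simple.  A
hypothetical Python sketch (it grabs only the lumped total from each *D_NET
header; SPEF *NAME_MAP index expansion and the detailed *CAP/*RES sections
are ignored, so the resistance caveat above still applies):

```python
import re

def spef_caps_to_powermill(spef_text):
    """Scan SPEF '*D_NET <net> <total_cap>' headers and emit one
    PowerMill add_node_capacitance command per net.  Only the lumped
    total is taken; *NAME_MAP indices and the detailed *CAP/*RES
    sections are ignored."""
    cmds = []
    for m in re.finditer(r'^\*D_NET\s+(\S+)\s+([\d.eE+-]+)\s*$',
                         spef_text, re.M):
        cmds.append("add_node_capacitance %s %s" % (m.group(1), m.group(2)))
    return "\n".join(cmds)
```

Each emitted line is an add_node_capacitance command ready to drop into a
PowerMill configuration script.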

  - Kim Flowers
    TransLogic Technology, Inc.

         ----    ----    ----    ----    ----    ----   ----

From: Chris Jacobs <chris.jacobs@analog.com>

John,

An alternative way to get pre-layout parasitic information from dc_shell is
through the "report_net" command.

Running this command with no options will list the capacitance for each
net, but it lumps the net capacitance and the gate capacitances together
(i.e., C = Cwire + Cgate).
 
After this report is created, a simple script can be used to extract the cap
values.  PowerMill can then accept the capacitance data in a variety of
ways.  Two possible ways are:
 
   A. Via a configuration command syntax of:
            add_node_cap nodeName(s) capValue

   B. Via standard SPICE netlist syntax of:
            Cxx node1 node2 capValue
 
You then need to tell PowerMill to disable its internal gate capacitance
calculations to prevent counting the gate capacitances twice (since DC
already includes them in the report_net report).  This can be done via the
following
configuration command:
 
                 set_ckt_nogatecap value

This command affects the entire circuit. "value" can be set to one of:

  1  =  only gate capacitance will be set to 0.
  2  =  only gate overlap capacitance will be set to 0.
  3  =  both gate and overlap capacitance will be set to 0.  <-- this is what
                                                                 you want
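
The "simple script" step mentioned above might look like this Python sketch
(hypothetical: report_net's column layout varies by Synopsys release, so
both the sample layout and the default cap_column here are assumptions to
verify against your own report):

```python
def report_net_to_powermill(report_text, cap_column=3):
    """Turn a saved dc_shell 'report_net' listing into PowerMill
    add_node_cap commands.  cap_column is the 0-based field assumed
    to hold the lumped capacitance; check it against your report."""
    cmds = []
    for line in report_text.splitlines():
        fields = line.split()
        if len(fields) <= cap_column:
            continue
        try:
            cap = float(fields[cap_column])
        except ValueError:
            continue  # header, ruler, or summary line
        cmds.append("add_node_cap %s %g" % (fields[0], cap))
    return "\n".join(cmds)
```

Point it at a saved report_net listing; non-data lines fail the float()
conversion and are skipped, leaving one add_node_cap line per net.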
  - Chris Jacobs
    Analog Devices, Inc.                                  Wilmington, MA


( ESNUG 300 Item 8 ) ---------------------------------------------- [10/7/98]

Subject: ( ESNUG 299 #6 )  The Synopsys/Mentor "Reuse Methodology Manual"

  [ Editor's Note: The ">"s below are from Michael Keating, the author of
    the "Reuse Methodology Manual" (by Kluwer Academic Publishers); the new
    replies below are from Janick Bergeron of Qualis Design.  - John ]

Author Michael Keating wrote:

> The biggest problem a technical author faces is that engineers don't read.
> I was taught this in a tech writing class many years ago and certainly
> re-learned it with the RMM.  The first version (Synopsys Design Reuse Style
> Guide) which we put together early in 1997, was twice the size of the RMM,
> with a great deal more material, especially in design for test and
> chip-level implementation.  The trouble was, I couldn't get anyone to read
> it!  Five hundred pages was just too thick; people weren't even cracking it
> and reading the first page. 

From: "Janick Bergeron" <janick@qualis.com>

Actually, they do read a lot.  Anybody who has mastered Unix and EDA tools
has had to read an awful lot.  They simply do not want to wade through
long-winded paragraphs to find the few pearls of wisdom on how to do things.
They like easy-to-find, easy-to-follow pointers with some explanation (in
case they happen to initially disagree with them) of the risks involved in
breaking them.  That way, they don't have to read all 500 pages: only the
headings, then read on for more details as required.

I think your chapter 5 was by far the best written for engineers and
probably the one that will be referenced the most.  If only the other
chapters were written in the same style!


> Janick states that the book sins in making all the guidelines very DC
> centric.

Where did I say "all"? *Some* of the guidelines involved DC shell commands
and scripting - can't be more DC centric than that.

I would recommend you structure the guidelines according to applicability
(as we have done in our own web-based reuse document): general coding (e.g.
comments, indentation), HDL coding (e.g. signals vs. variables, assignments),
RTL coding (e.g. synchronous design), Verilog coding (e.g. avoiding race
conditions), VHDL coding (e.g. '87 vs. '93) and DC coding/synthesis (e.g.
defining clocks).


> Behavioral models and testbenches are critical (and show up at the top of
> the flow diagrams) for a large class of designs, especially those with
> algorithmic complexity. (...)   On the other hand, behavioral
> models are not useful for another class of design (like a bus interface)
> where the cycle-by-cycle behavior is the critical design factor.

I definitely didn't get "critical" out of the book.  And I disagree with
your assessment of their usefulness for cycle-by-cycle interfaces.  Here's
a Verilog behavioral model for read/write cycles on an i386SX bus.  It's a
fully synchronous bus with specific cycle-by-cycle behavior.  Yet the model
is completely behavioral, simulates faster than an equivalent RTL model, is
faster to write, is easier to debug, and can be interfaced with any similar
bus model for another bus interface to quickly create a behavioral model
for a bridge.

And it does not convey any intellectual property.

We feel that a behavioral model should be part of *all* IP components
delivered.

// Note: all delay values should REALLY be parameters...

time LAST_RDY;
always @ (RDY) LAST_RDY = $time;
time LAST_D;
always @ (D) LAST_D = $time;

//
// Read Cycle
//

task READ;
   input  [23:1] ADDR;
   output [15:0] RDAT;
begin
   @ (posedge CLK);
   A   <= #4 ADDR;
   RW  <= #4 1'b0;
   ADS <= #4 1'b0;
   @ (posedge CLK);
   
   @ (posedge CLK);
   ADS <= #4 1'b1;
   repeat (2) @ (posedge CLK);
   while (RDY !== 1'b0) begin
      repeat (2) @ (posedge CLK);
   end
   if ($time - LAST_RDY < 19) begin
      $write("WARNING: Setup violation on RDY at %t\n", $realtime);
   end
   if ($time - LAST_D < 9) begin
      $write("WARNING: Setup violation on D at %t\n", $realtime);
   end
   RDAT = D;
   fork
      begin
         #4;
         if ($time - LAST_RDY < 4+19) begin
             $write("WARNING: Hold violation on RDY at %t\n", $realtime);
         end
      end
      begin
         #6;
         if ($time - LAST_D < 6+9) begin
              $write("WARNING: Hold violation on D at %t\n", $realtime);
          end
      end
   join
end
endtask

//
// Write Cycle
//

assign #4 D = DATA;
task WRITE;
   input [23:1] ADDR;
   input [15:0] WDAT;
begin
   @ (posedge CLK);   
   A    <= #4 ADDR;
   RW   <= #4 1'b1;
   ADS  <= #4 1'b0;
   @ (posedge CLK);
   DATA <= WDAT;
   @ (posedge CLK);
   ADS <= #4 1'b1;
   repeat (2) @ (posedge CLK);
   while (RDY !== 1'b0) begin
      repeat (2) @ (posedge CLK);
   end
   if ($time - LAST_RDY < 19) begin
      $write("WARNING: Setup violation on RDY at %t\n", $realtime);
   end
   #4;
   if ($time - LAST_RDY < 4+19) begin
      $write("WARNING: Hold violation on RDY at %t\n", $realtime);
   end
   @ (posedge CLK);
   DATA <= 16'hzzzz;  // DATA is 16 bits wide; release the whole bus
end
endtask


> Again, where he says Verilog simulators are as troublesome for portable
> code as VHDL, this has simply not been our experience.  We test our code on
> VSS, Vantage, MTI, Leapfrog, XL and VCS. Consistently our greatest problems
> are with the VHDL simulators, though we have had some problems with the
> Verilog ones.  With Verilog, the basic key is to stick to fully synchronous
> design, and follow the blocking-non-blocking rules, and you'll run into few
> problems - except for real bugs in the simulators

If all you are porting are RTL models, then of course you won't hit problems.
But what about the 1.5-2.5x as much behavioral code that makes up the
testbench?  In *every* case where we had to port behavioral code from XL to
VCS (or vice-versa), it broke down (the code was not written by Qualis, mind
you :-).  One of our engineers is right now tracking down why a simulation
that works just fine under VCS won't work at all under XL (although
everything compiles fine...).  We even had testbenches that did not work if
we did not use the proper +turbo options in XL.  Some other behavioral
models we have seen simply could not be ported without a significant amount
of rewriting because XL and VCS resolve relative hierarchical names
differently.

A nice thing to use is the shared access warning in the MTI simulator: it
catches some of the race conditions (i.e. multiple writes to a reg in the
same simulation cycle).  If anybody is interested in writing a "lint" tool
that will do some structural and dataflow analysis on behavioral Verilog
code, I have a set of rules to look for to detect race conditions and
questionable constructs...
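As an illustration of the kind of rule such a "lint" tool could check, here
is a deliberately crude Python sketch of one race-condition rule from the
list above: flag any signal assigned inside more than one always block.  It
is a line-based regex scan, not a real Verilog parser, and it assumes one
statement per line -- a sketch of the idea only.

```python
import re

# Crude sketch of one race-condition rule: flag any signal that receives
# an assignment ("=" or "<=") inside more than one always block.  A real
# lint tool needs a Verilog parser and dataflow analysis; this line-based
# scan assumes one statement per line and is only an illustration.

def find_multi_driver_regs(verilog_text):
    assign_re = re.compile(r"^\s*(\w+)\s*(<=|=)[^=]")
    writers = {}            # signal name -> set of always-block indices
    block = -1              # index of the current always block
    for line in verilog_text.splitlines():
        if re.search(r"\balways\b", line):
            block += 1
        m = assign_re.match(line)
        if m and block >= 0:
            writers.setdefault(m.group(1), set()).add(block)
    return sorted(s for s, blocks in writers.items() if len(blocks) > 1)

src = """
always @(posedge clk)
  q = d;
always @(posedge clk)
  q = d2;
always @(posedge clk)
  r <= d;
"""
print(find_multi_driver_regs(src))   # prints ['q']
```

The same scan could be extended toward MTI's shared-access warning by
tracking, per simulation cycle, which process last wrote each reg.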


> The real problems that real chip teams are facing using the IP are
> incredibly basic (...)   But I have to question Janick's assumption that
> there are some design groups that have solved all the technical problems
> in reuse.

If those problems are so incredibly basic, why is it so hard to believe that
some groups have overcome them?  I claim (and know of) groups that follow
almost every technique you outlined in your book, yet little reuse happens
there.  My point was that the content of the RMM is not a *sufficient* (and
sometimes not a necessary) step to enable reuse.


> There are at least two unsolved technical problems: the problem of
> verifying a design to 100% levels of confidence (...)    There is
> a clear need for IP that has no bugs; why reuse code you end up having to
> debug.  But there is no way, today, to produce a significant design with
> (provably) no bugs.

That can *never* be achieved.  Nor is it necessary.  We successfully design
right-first-time ASICs today - no one claims these ASICs to be bug-free or
verified at 100% confidence.  Why not right-first-time IP?


> and the high cost of design for reuse. (...)   And the cost of designing
> for reuse often exceeds 2x the cost of developing a design for a single
> use.  This is a serious liability when engineering teams decide what to
> make reusable.

Here you seem to be arguing against reuse...  Everybody knows that reuse
saves in the long run.  We also have customers who claim that reuse actually
makes design *cheaper* (when designing their own IP) because it forces them
to compartmentalize functionality, write better specifications (especially
of the interfaces) for more manageable blocks, partition and then allocate
designs more effectively, reduce the number of interfaces, verify at a level
where there is more visibility using common interfaces (thus reusing
testbench components as well), perform system-level verification, etc...

These are all items you outline in your book, but actually implementing
them is more a cultural and management issue than a technical one.

  - Janick Bergeron
    Qualis Design Corporation


( ESNUG 300 Item 9 ) ---------------------------------------------- [10/7/98]

From: "Malhotra, Sanjeev" <sanjeev.malhotra@intel.com>
Subject: What Do You Think About Summit Vs. Renoir?

John,

I was reading your article on FSM compilers and also some good things you
had to say about Summit.  Since that article is rather old, I would like to
know if you have any recent data on Visual VHDL (the graphical tool from
Summit) versus Renoir (the similar tool from Mentor).

Renoir came a bit late to this game, but their tool looks pretty slick.

I would appreciate any input on this front.

  - Sanjeev Malhotra
    Intel



"Relax. This is a discussion. Anything said here is just one engineer's opinion. Email in your dissenting letter and it'll be published, too."
Copyright 1991-2024 John Cooley.  All Rights Reserved.

   !!!     "It's not a BUG,
  /o o\  /  it's a FEATURE!"
 (  >  )
  \ - / 
  _] [_     (jcooley 1991)