> Now, you might ask "Why not do the changes under the hood when writing
  > out the files?" ...  Doing under the hood changes is not a good thing
  > to do precisely because it is done under the hood. ...
  >
  >     - Steve Hoeft (in ESNUG 406 #8)
  >       Synopsys, Inc.                         Mountain View, CA


  From: David Simmons <user=simmod domain=taec.toshiba cot mom>

  Hi, John,

  In my office, we have a little game we like to play when we get a
  Synopsys presentation.  Once the presentation begins, we place secret
  bets and then watch the clock to see how long the Synopsys person can
  talk before saying the phrase "under the hood".  

      - Dave Simmons
        Toshiba


( ESNUG 407 Subjects ) ------------------------------------------ [02/26/03]

 Item  1: ( ESNUG 405 #6 ) 0-in, Averant Solidify, Real Intent, Tempus Fugit
 Item  2: ( ESNUG 405 #2 ) Rectilinear Blocks In PhysOpt/Apollo/Astro Flows
 Item  3: I Can't Get My VCS Linux Farm To Run Faster Than Sun Workstations
 Item  4: ( ESNUG 404 #12 ) Separate PrimeTime Extract_Model Min/Max Delays
 Item  5: ( ESNUG 405 #11 ) TetraMAX Is Extremely Picky About UDP Coverage
 Item  6: ( ESNUG 405 #5 ) PrimeTime & DC Results Are Machine Dependent??!
 Item  7: ( ESNUG 339 #5 ) Semantic Design Offers A Verilog Obfuscation Tool
 Item  8: Newbie Problems Using Hierarchical Model In Cadence Spectre Sim
 Item  9: What Will synopsys_translate_off / synopsys_translate_on Do Here?
 Item 10: ( ESNUG 395 #11 ) Denali IP For DDR/SDRAM Interfacing And MMU's
 Item 11: Newbie Trouble Linking Hierarchical Design In Behavioral Compiler
 Item 12: We At Tensilica Have Had Good Experiences With Forte' Perspective
 Item 13: An Ex-PhysOpt User Asks Is Cadence FE-Ultra/PKS Up To Snuff Now?
 Item 14: ( ESNUG 406 #2 ) Synopsys "aman" vs. Lazy Man (lman) Man Script
 Item 15: ( ESNUG 406 #9 ) Using Mentor Fastscan With Synopsys DFT Compiler
 Item 16: User Unhappy About Mentor Changing The Eldo Licensing Mechanism
 Item 17: ( ESNUG 406 #12 ) A Zero WLM May Not Be Best For DC Runs At 0.13

 The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com


( ESNUG 407 Item 1 ) -------------------------------------------- [02/26/03]

Subject: ( ESNUG 405 #6 ) 0-in, Averant Solidify, Real Intent, Tempus Fugit

> We used Averant's Solidify briefly a year ago.  Their property language was
> very powerful, but the tool failed to deliver results.  It is very easy to
> write complex properties for easier checks with the language.  This made
> the tool spend days without any results.  Finally the design engineers
> gave up on using Solidify.
>
> The reason I want to be anon is because I am evaluating Real Intent and
> 0-In automatic checks.  I am afraid that my opinions might not be based on
> all facts.  But at the same time, I would like to know what others are
> thinking.
>
>     - [ Curious George ]


From: Anders Nordstrom <wolf=andersn pack=ellipticsemi wrought prom>

Hi John,

Curious George learnt one of the issues with formal tools very quickly.  It
is annoying to debug thousands of warnings or false errors.  One tool alone
is not enough to get the full benefit out of Assertion Based Verification
(ABV).  The result from a formal engine is limited by the quality and
completeness of your properties.  If your properties or
constraints specifying legal inputs to a block are too restrictive, you're
limiting the state space the tool can explore and, as in Curious George's
Averant example, the tool will not find any violated properties.

On the other hand, if your properties are less restrictive and allow illegal
input combinations, it is very easy to incorrectly find a property violation
that you have to debug.

I have used 0-In's tool suite, which uses both simulation and formal
verification, and it has worked very well.  I have found real bugs without
having to debug false errors from the formal engine, 0-In Search.

On my previous chip, 0-In Search found a bug in a FIFO controller where the
last word of an Ethernet frame was overwritten if a new frame arrived at
the same time as a downstream FIFO was full and the frame size was
incorrect, but the CRC for the frame was correct.  A formal tool would never
have been able to explore enough of the state space to find this bug if the
constraints had not been debugged and verified in simulation.

Right now I am working on a new design at Elliptic Semiconductor.  We
currently use 0-In's Checklist and Check tools.  Checklist finds simple
problems like un-driven logic, incomplete sensitivity lists and un-connected
ports, but the real power of the tool lies in the more complex checks for
multiply driven registers, duplicate case items and clock domain crossings.
(Of course I can find all these things in simulation but I will be spending
a lot more time debugging them there.)  Checklist also checks the syntax of
the assertions in my design so I know they are correct when starting my
simulations.  We plan to use 0-In Search in the future but right now we're
busy writing assertions and simulating them and we will do that as long as
we find bugs in the design and our assertions need refinement.

On a related note, if you can get them, pre-written properties for standard
protocols are a great time saver.  0-In has monitors for a few standard
protocols that work for both simulation and formal analysis.  It is enough
work to write a bus interface; writing a checker for it can take as long as
the actual design.  I guess that this is a pretty large market.  Synopsys
and Denali have pre-written checkers as well.  If you only want to run
simulations you can mix standard interface monitors from different vendors,
but then you won't get the benefit of using them as constraints for formal
analysis.  If you plan to do formal verification as well, your best bet is
to get the monitors from the same vendor as the formal tools. 

    - Anders Nordstrom
      Elliptic Semiconductor                     Ottawa, Canada

         ----    ----    ----    ----    ----    ----   ----

From: James Lee <goldfish=jml bowl=asicgroup naught brawn>

Hi, John,

In response to the "Averant Solidify vs Real Intent vs 0-in vs Verplex vs
Tempus Fugit" question, I have looked at most of these tools.

We use a simple white board approach to eval these tools.  We look at the
spec for the design unit we are interested in checking, then we write a list
of properties or assertions we want to address.  We then reference the
automated checks that the tool implies from the RTL and cross them off.  We
attempt to code the remaining checks in the property language.

Going through the tools in the order [ Curious George ] used:

  Averant Solidify -- I have not looked at it in about 2 years.  Two years
  ago it lacked "implied intent" or "automatic assertions".  Their property
  language was awkward to use and we had difficulty with every property we
  attempted to write.

  Real Intent Verix -- We have the most experience with Verix from Real
  Intent and find that typically 2/3 to 3/4 of the properties we write on
  the initial list are covered by "implied intent" or "automatic
  assertions".  The list of automatic checks increases with each release.
  I give the tool a big thumbs up since the implied checks do the vast
  majority of the work.  My pet peeves with Verix are:

    - Default cases: if you have a default case that can never be entered,
      but instead of setting everything to "x" you set everything to the
      default state, you get a message that the default cannot be entered.
      We would prefer to be warned when it is possible to enter the
      default.  (See the sketch after this list.)

    - State depth: if you have a state machine with a counter that delays
      states for a large count, it causes state explosion and the tool
      dies (see my example later).

    - Intra-assignment delays on non-blocking assignments: Verix yields
      warnings on these during the compile stage, but our style guide
      requires delays on all non-blocking assignments used for sequential
      logic.  Just like the default case, we want to reverse this message
      and warn when one does not exist.

    - Verilog 2001 support: they lack it.  We are rapidly updating our
      style guide to use ANSI port declarations and these are not yet
      supported by Verix.

  In fairness to Verix, let me point out that I have seen many other tools,
  including Cadence's HAL, that warn about the delays, and all the formal
  tools die with state explosion.

  Tempus Fugit -- We looked at Tempus Fugit in its pre-Alpha days so I do
  not have up-to-date information on it.  However, the strength of the tool
  lies with the "IP" they have created for checking standard bus protocols.
  The strength of these types of tools falls into two basic camps: the
  automated checks the tools imply from the RTL, and the "IP" available
  with the tools in terms of "canned" checks for known protocols.
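
Here is a minimal sketch of the default-case peeve from above (a made-up
state machine, not our actual RTL):

    module fsm_sketch (clk, rst_n);
      input clk, rst_n;

      reg [1:0] state;
      parameter IDLE = 2'b00, RUN = 2'b01, DONE = 2'b10;

      // The default arm is unreachable from reset, but because it drives
      // the reset state instead of "x", Verix reports that the default
      // cannot be entered.  Note the #1 intra-assignment delays on the
      // non-blocking assignments, which our style guide requires.
      always @(posedge clk or negedge rst_n)
        if (!rst_n)
          state <= #1 IDLE;
        else
          case (state)
            IDLE:    state <= #1 RUN;
            RUN:     state <= #1 DONE;
            DONE:    state <= #1 IDLE;
            default: state <= #1 IDLE;  // vs. "state <= #1 2'bxx"
          endcase
    endmodule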

Since the ESNUG readers are always hungry for the dirt, let me give you a
test case that will break all of these tools. 

  The design: a UART with built in bit rate generator.

  The properties: the data that goes in parallel should come out serially
  in the proper bit order.  Add a parity check for good measure.  The 
  length of all the bits should be the same (i.e., it should not be possible
  to change the baud rate while sending a byte).

This will break each of these verification tools due to the bit rate
generator causing state explosion.  The tools will handle the 10 or so
states of the shift register, start and stop bits, and parity, but the
bit rate counter feeding them will, in my experience, make each of these
tools give up and not see the state machine.

We may later re-evaluate Tempus Fugit, Verplex, Spyglass and Solidify.  And
I hope my competitors will continue to use only simulation to verify their
designs!

    - James Lee
      The ASIC Group                             Fremont, CA


( ESNUG 407 Item 2 ) -------------------------------------------- [02/26/03]

Subject: ( ESNUG 405 #2 ) Rectilinear Blocks In PhysOpt/Apollo/Astro Flows

> I read the ESNUG 395 #6 question regarding "How to deal with Rectilinear
> Block Designs in PhysOpt/Jupiter/Apollo Flows", and the corresponding
> feedback.  The corresponding feedback in ESNUG 396 #3 was not directly
> dealing with the PhysOpt/Jupiter/Apollo tools.  I was wondering if you
> were aware of any fully user-debugged rectilinear block flows using
> PhysOpt, Jupiter, Apollo and/or Astro tools?  If so, can you please let
> me know?
>
>     - Raghav Yerramreddikalva
>       Intel


From: Jay Pragasam <goose=jlk flock=brecis plot guam>

Hi, John,

I am not sure how exactly the Intel guys try to handle it, but here is
what I did in my rectilinear block.

Apollo doesn't seem to have a problem creating rectilinear floorplans.
Once you create one, the PDEF that you write out from Apollo will have
the boundary, pin and macro locations.  The boundary is still rectangular,
but that doesn't matter.  The pins are placed inside the boundary along
the rectilinear periphery, which is inside the rectangular boundary.  This
is acceptable to PhysOpt.  Now the main task is to restrict the number of
placeable sites inside the block.  For a shape with one piece of rectangle
chopped off from the right corner, here is what the PDEF looks like:

 ( SITE ROW_255 core "2740.0 573040.0" "H" "180-mirror" "660.0" "3155.0" )
 ( SITE ROW_256 core "2740.0 579200.0" "H" "0" "660.0" "3155.0" )
 ( SITE ROW_257 core "2740.0 585360.0" "H" "180-mirror" "660.0" "3155.0" )
 ( SITE ROW_258 core "2740.0 591520.0" "H" "0" "660.0" "3155.0" )
 ( SITE ROW_259 core "2740.0 597680.0" "H" "180-mirror" "660.0" "1429.0" )
 ( SITE ROW_260 core "2740.0 603840.0" "H" "0" "660.0" "1429.0" )
 ( SITE ROW_261 core "2740.0 610000.0" "H" "180-mirror" "660.0" "1429.0" )
 ( SITE ROW_262 core "2740.0 616160.0" "H" "0" "660.0" "1429.0" )

As you can see, 3155 sites represent the bigger chunk and 1429 sites
represent the smaller chunk.  Where there are no placeable sites, PhysOpt
will not place standard cells, effectively making the block rectilinear.
There is one more precaution that you have to follow.  When I created the
floorplan in Apollo, I added routing blockages in all layers on top of the
chopped-off area.  If this is not done, PhysOpt assumes that it has routing
channels available in the chopped-off area and uses those tracks for
congestion calculation, and the design will become overly congested or
unroutable after placement.

Getting back into Apollo after PhysOpt shouldn't be an issue at all.  All
I do is read the standard cell placement into the floorplan that was
created in Apollo.  Since PhysOpt's placements are in absolute coordinates,
they can be read into Apollo without any hassles -- of course, with a data
format translation in between.

    - Jay Pragasam
      Brecis Communications                      San Jose, CA

         ----    ----    ----    ----    ----    ----   ----

From: Raghav Yerramreddikalva <1=raghav.yerramreddikalva 2=intel hot yon>

Hi, John,

The main obstacle we faced was losing the rectilinear shape going from
Apollo/Jupiter to PhysOpt and back.  For whatever reason, when we PDEF-in
the placement back from PhysOpt into Apollo, we were losing the shape and
it was reverting back to the regular rectangle/square.  If Jay can shed
light on any special options/settings we need, that will be very useful.

    - Raghav Yerramreddikalva
      Intel

         ----    ----    ----    ----    ----    ----   ----

From: Jay Pragasam <engine=jlk caboose=brecis not pomme>

Hi John,

I started this question about the implementation hurdles using rectilinear
floorplans for blocks in hierarchical designs.  Thanks to all for sharing
their views.  We went ahead and implemented the chip with rectilinear
blocks, though we put a lot of man-hours into modifying and tuning the
existing rectangular flow, and now the chip is back and working.  Here is
a summary based on my experience.

   1. We had to restrict the shapes of blocks that were instantiated
      multiple times to be rectangular.

   2. Jupiter did not pose too many problems in handling rectilinear
      shapes, though we manually fixed the locations of timing-critical
      pins to overcome Jupiter's timing-blind pin assignment.  We had to
      bug Synopsys (Avanti then) to fix some code and release private
      versions for specific failures in pin generation.

   3. Initially we planned to push down power from the top into the blocks,
      but Jupiter wasn't friendly enough in letting us do that when it
      encountered multiple instantiated blocks.

   4. Our routing methodology is a little different from the conventional
      methods.  Apollo came up with bizarre track definitions for this
      methodology which posed terrible headaches to debug.  Finally I wrote
      my own Scheme scripts to define tracks based on the block specs.

   5. The placed database from PhysOpt was unroutable in Apollo because
      PhysOpt does not look at a rectilinear shape in its true form.
      Rather, it sees it as a rectangular block with blockages.  So PhysOpt
      used the tracks from the chopped-off area for routing congestion
      estimation, even though those tracks were in fact unavailable during
      actual routing.

   6. No issues in parasitic extraction.

   7. No issues in GDSII dumping.

Finally, will I think "rectilinear" for my next chip?  Absolutely.  It adds
an extra dimension to our perspective in looking at the fullchip floorplan.
It certainly opened up a lot of possibilities to best utilize the available
area.  :)

    - Jay Pragasam
      Brecis Communications                      San Jose, CA


( ESNUG 407 Item 3 ) -------------------------------------------- [02/26/03]

From: Philip Strykowski <1st=philip.strykowski last=mindspeed jot awn>
Subject: I Can't Get My VCS Linux Farm To Run Faster Than Sun Workstations

Hi, John,

I would like to post a question regarding VCS simulations.  I am working
to set up a Linux farm as an addition to our Sun compute farm.  However,
during testing of our simulation runs, I have not been able to get
any type of substantial speed-up that everyone seems to be talking 
about.  Our Sun machines are Solaris 2.7, 333-450 MHz, and the Linux 
machines I am testing are Red Hat 6.2/7.2, 1 GHz Intel with 1 GB of RAM.
Now since the simulation images aren't exceeding 150 MB, this doesn't
seem like a memory/swap issue.  I have compiled with the VCS option +prof
to turn profiling on, and the vast majority of time is spent in execution,
so it doesn't seem like a PLI/external problem.

I recently ran the simulations on Sun machines, the 1 GHz Linux machines,
and a new 1.9 GHz Intel.  The Sun 400 MHz was the slowest, the 1 GHz ran
about 15% faster, and the 1.9 GHz about 25-30% faster.  Now according to
most theories, the sims on the Linux machines should be at least 2x as fast
as the Sun.  We have pretty standard installations of Solaris and Red Hat,
so I'm wondering if there are tweaks or features that need to be implemented
to get this working.  Any help/advice is greatly appreciated.

    - Philip Strykowski
      Mindspeed Technologies                     Massachusetts


( ESNUG 407 Item 4 ) -------------------------------------------- [02/26/03]

Subject: ( ESNUG 404 #12 ) Separate PrimeTime Extract_Model Min/Max Delays

> I have a PrimeTime question that I hope you can help me with.
>
> In "extract_model" is there a way to generate model for min or max delay
> and not both in the same file?
>
>     - Shervin Hojat
>       Sun Microsystems


From: Ruma Sen <beginning=r58740 end=email.mot thought qualm>

Hi John,

If Shervin wants to separate the max and min delay for sequential and
combinational arcs, he can use the "-arc_types" option with extract_model:

  extract_model -arc_types {max_seq_delay max_combo_delay ...} -output ...

If he wants separate models for different operating conditions, he can use
"set_operating_conditions" for WC & BC and run extract_model separately.

    - Ruma Sen
      Motorola                                   India


( ESNUG 407 Item 5 ) -------------------------------------------- [02/26/03]

Subject: ( ESNUG 405 #11 ) TetraMAX Is Extremely Picky About UDP Coverage

> the Verilog library reader in Formality is the same one used by TetraMAX.
>
>     - Steve Golson
>       Trilobyte Systems                          Carlisle, MA


From: Howard Landman <playboy=howard magazine=riverrock.org>

That's good news to me.  I had some experience with the TetraMAX parser 
a year or so ago and it was extremely picky about many library issues, 
especially coverage of all possible conditions in UDPs.  The error 
messages were occasionally a bit hard to understand, but it found many 
subtle problems in large, hand-coded FF models which other tools 
completely ignored.  I fixed all of them, and was happy to do so.  Our 
main TetraMAX jockey wasn't exactly displeased either.  :-)
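
As an illustration, the kind of condition coverage TetraMAX insists on
looks like this (a hypothetical UDP, not from any real library):

    primitive udp_dff (q, clk, d);
      output q;  reg q;
      input  clk, d;
      table
      // clk    d  : q : q+
        (01)    0  : ? : 0 ;   // latch 0 on rising edge
        (01)    1  : ? : 1 ;   // latch 1 on rising edge
        (01)    x  : ? : x ;   // drop this row and the rising-edge
                               // conditions are no longer fully covered
        (?0)    ?  : ? : - ;   // falling edge or steady low: hold
         ?    (??) : ? : - ;   // data change, no clock edge: hold
      endtable
    endprimitive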

UDP coding can be tricky and having a good "lint" tool is essential to 
achieve high quality reliably.  Based on my experiences, I would demand 
that any library (whether from an internal group or an external vendor) 
TetraMAX-clean version, whether I was planning to use the tool on this 
chip or not (as long as I already had it around and didn't have to pay 
for a copy just for that).

In fact, I would recommend that Synopsys make the parser available 
widely and cheaply (especially to library vendors) so that everybody can 
do this.  And improve the error messages a little while you're at it, 
OK?  :-)  It's very good, but that would make it better.  Imagine that 
you are going to have to use the message to fix the problem ... there 
are cases where you want more detailed information.

Oh, BTW, if you know of any contract work, I'm available now.

    Howard A. Landman
    Riverrock Consulting                         Fort Collins, CO


( ESNUG 407 Item 6 ) -------------------------------------------- [02/26/03]

Subject: ( ESNUG 405 #5 ) PrimeTime & DC Results Are Machine Dependent??!

> Wait a minute.  The original question was about PrimeTime, not DC.  DC's
> optimization algorithms are pretty complex, and seem to include a "wall
> clock" element.  Running the same scripts on the same machine with the
> same loading will usually produce the same results.  But change the
> machine to a faster or slower model, even with the same OS, and you may
> well get different results.  I have seen this.  Switch architectures and
> all bets are off.
>
> PrimeTime is a different animal.  It's just a big spreadsheet (sorry, PT
> developers - you know what I mean).  You should get the same results no
> matter what machine you run it on.  Take one of the failing traces from
> one of the runs, and report that exact path on the other machine with
> -input_pins, etc. turned on.  If you're running with SDF, you can easily
> track down the culprit because all the numbers should have "*" and you can
> go look the numbers up in the SDF file to see who's right!  If you're
> running with parasitics, first verify that the LIBRARY models are the same
> (the .db files).  If so, you may have to call up Synopsys.
>
>     - Paul Zimmer
>       Cisco


From: Jay Pragasam <cow=jlk herd=brecis brought prawn>

Hi John,

I also have some experience with DC/PhysOpt results being machine dependent.
I faced this a long time back and checked with Synopsys.  This is the
response that I got from them:

     Hi Jay,

     This is regarding your question on "Optimization based on machine
     speed".  The PhysOpt QOR (quality of results) is not machine-speed
     based!  If that is what you are seeing, then it is a bug that needs
     to be reported.  Let me know and I can try it out on a testcase here.

     The quality of results (i.e timing, area, wns, tns) depends largely
     on the options you use with the "physopt/compile_physical" commands
     and of course, the constraints set on the design.  Bad or unreasonable
     constraints can give bad results.  Regarding the number of passes; the
     default PhysOpt run, which is timing-based, will execute coarse
     placement, and 4 runs of detailed placement (apart from the
     optimization steps).  A "physopt -congestion" will execute several
     passes of placement, based on the hotspots in the design.  (You would
     see phrases like "Adjusting Placement..." in the log, for congestion-
     based PhysOpt runs).

     Of course, if you relax the constraints for the "x" paths you
     mentioned, that will affect the optimization results, as the design
     "cost" gets changed (same as compile cost in DC).

     The runs and qor should not be machine-dependent.  Let me know if
     this answers your question, or you need more details.

Paul Zimmer suggests that DC/PhysOpt results could be machine dependent,
but not PrimeTime results.  I hear from Synopsys that even DC/PhysOpt
results should not be machine dependent (though they certainly are
architecture dependent), but I've seen consistently better results on
Linux machines than on Solaris machines.

    - Jay Pragasam
      Brecis Communications                      San Jose, CA

         ----    ----    ----    ----    ----    ----   ----

From: [ The Winchester History Mouse ]

Subject: RE: ESNUG Post 400, Item 7

Hi, John,

We have seen similar issues in the past, specifically with differences in
results not only across platform but also between 32 and 64 bit versions
in PrimeTime 2000.11.  The following is the response from Synopsys R&D when
we reported the 2000.11 issue; it also acknowledges (I think!) issues with
2001.08.  I quote:

    "Regarding this problem, hp32 system result is correct.

     This is quite interesting case.  As you said, there is no difference
     in the result except 2000.11-SP1.  But as you can see in the
     directory 2001.08HP (32 and 64) result are same with HP64 of
     2000.11-SP1 and 2002.03 HP(32 and 64) are same with HP32
     of 2000.11-SP1.  So even though there is no difference in one version,
     2001.08 and 2002.03 are different with each other.  Actually this
     difference comes from the get_timing_path. 

     Some paths are missing when this command is used in 2000.11-SP1
     HP64 and 2001.08 HP32/HP64.

     These paths can be verified it's existence using report_timing command.
     This problem is not happened in case of SUN platform though all
     versions and HP platform no longer from 2002.03."

We have just migrated to PrimeTime 2002.09 and our in-house QA shows it's
now consistent across Sun & HP, and 32 & 64 bit.

Please keep me anon.

    - [ The Winchester History Mouse ]


( ESNUG 407 Item 7 ) -------------------------------------------- [02/26/03]

Subject: ( ESNUG 339 #5 ) Semantic Design Offers A Verilog Obfuscation Tool

> So, what I'm looking for is a utility that:
>
>  1.  Obfuscates references (but with some controllability so
>      it doesn't mess with libraries).
>  2.  Has some controllability to the way it renames things.
>  3.  Can generate a cross reference table.
>  4.  Can obfuscate RTL and netlist with the same cross reference
>      table.  (So "reg a;" in RTL will be "reg a_reg" in the
>      netlist).
>
> Anything like this out there?
>
>     - Tomoo Taguchi
>       Hewlett-Packard                              San Diego, CA


From: Ira Baxter <goat=idbaxter farm=semdesigns clot bon>

Hi, John,

Having just introduced some commercial obfuscators, we were fishing around
DeepChip and found these two requests, ESNUG 319 #5 and ESNUG 339 #5.  While
we do not have obfuscators available for Verilog and VHDL at this moment,
we are very close to having them.

    http://www.semdesigns.com/Products/Formatters/Obfuscators.html

This is by virtue of already being able to read/parse/unparse both
languages.  Obfuscation is quite a minor addition.

    - Ira D. Baxter, CEO
      Semantic Designs Inc.


( ESNUG 407 Item 8 ) -------------------------------------------- [02/26/03]

Subject: Newbie Problems Using Hierarchical Model In Cadence Spectre Sim

> I am using the Cadence Spectre simulator.  I have made a matrix of size
> 20x30, each point of which is a cell view.  This cell is a small circuit,
> made up of resistors and a capacitor connected in parallel.  Also these
> unit cells are connected with each other in a regular fashion in order to
> form the matrix.  The problem is that when I simulate, I need to choose
> the points whose voltage I want to see on the waveform window.  I am not
> able to choose a point inside the unit cell.  It just does not select any
> point on the circuit schematic which is lying inside the cell (small
> circuit) boundary.
>
> Please let me know how I can do this?  It would be a great favour to me.
> I am stuck at this point.  It seems very trivial but is very important
> to me.
>
>     - Ajay Kumar Mishra                          India


From: Jimmy Blue <boat=jimmyblue harbor=hotmail spot fawn>

Ajay,

Change your schematic so there are wires, not just solder dots connecting
the symbol pins.  The graphical selection only works on wires.

    - Jimmy Blue

         ----    ----    ----    ----    ----    ----   ----

From: Ajay Kumar Mishra <computer=mishraka network=iitk.ac.in>

Thanks, but the problem still remains.

Say there's a wire connecting a resistor and a diode.  This combined
connection is connected in parallel with a capacitor.  This forms the unit
cell view.  Now, I want to know the potential at the point of connection
between the resistor and the diode, which are connected in series.  I tried
to do it, but it does not let me choose it on the schematic.  When I try, a
square of dotted lines forms around the unit cell.  It lets me choose only
the points which are at the boundary of the circuit -- in other words, pins
that are connected to the external current source.

Would you please let me know if it is possible to choose such a point whose
output can be observed?

    - Ajay Kumar Mishra                          India

         ----    ----    ----    ----    ----    ----   ----

From: Andrew Beckett <player=andrewb team=cadence sought mom>

Ajay,

So it sounds as if you have instances of sub-cells in your schematic, and
you want to be able to probe inside those?  If that's the case, you'll need
to descend down into those cells and probe there.

                   Design->Hierarchy->Descend

I think off the top of my head.  If that's not it, I don't have a clear
picture of what you've done from what you've described.

    - Andrew Beckett
      Cadence Design Systems Ltd


( ESNUG 407 Item 9 ) -------------------------------------------- [02/26/03]

From: Steven Jorgensen <penny=stevej pot=rosemail.rose.hp shot tom>
Subject: What Will synopsys_translate_off / synopsys_translate_on Do Here?

Hi, John,

OVL places synopsys_translate_off and synopsys_translate_on within the
module.  Will Synopsys Design Compiler optimize out signals attached to the
ports on those modules?  I am thinking that if I attach an intermediate
signal to an OVL module, DC will not optimize out that signal, because
it is attached to a module port.  This would leave me with less than
optimal gates.
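
For example, the hookup I have in mind looks like this (hypothetical
signal and instance names; OVL parameters elided):

  // "mid" only fans out to the checker instance.  The checker module's
  // body is bracketed by the translate_off/on pragmas, but this
  // instantiation and its port connections are still visible to DC.
  module dut_sketch (clk, reset_n, a, b);
    input clk, reset_n, a, b;

    wire mid = a & b;

    assert_never chk_mid (clk, reset_n, mid);
  endmodule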

    - Steven Jorgensen
      Hewlett-Packard                            Roseville, CA


( ESNUG 407 Item 10 ) ------------------------------------------- [02/26/03]

Subject: ( ESNUG 395 #11 ) Denali IP For DDR/SDRAM Interfacing And MMU's

> We are looking for IP in the field of DDR/SDRAM memory interface and MMU.
> Do you know of anyone in this business?
>
>     - Nir Sever
>       Zoran                                      Israel


From: Mike McKeon <celtics=mike nba=denali knot john>

Hello John,

We offer customizable memory controller IP for DDR SDRAM, FCRAM, RLDRAM, and
SDR/DDR SDRAM applications.  Each core is customized to match the exact
application for the ASIC in order to achieve 100% bus utilization.  Our
Databahn customization process is accessed via an online interface at the
http://www.eMemory.com site.  Our browser-based interface creates a machine
readable specification file (SOMA) that drives the generation of the RTL
from a single Verilog code base, which can then be simulated online to
validate performance.  Once your performance targets are met, verification
routines, synthesis scripts, static timing scripts, and documentation are
generated.

Our Databahn IP is library independent and covers solutions from .18 um to
.11 um technologies in DRAM device frequencies from 100-250 MHz (200-500 MHz
data rate).  We have first silicon with 5 Databahn cores in 3 different ASIC
processes.  There are currently over 25 Databahn licensees with controllers
in active development.

    - Mike McKeon
      Denali                                     Palo Alto, CA


( ESNUG 407 Item 11 ) ------------------------------------------- [02/26/03]

From: Lotfi Guedria <alpha=lotfi.guedria omega=cetic.be>
Subject: Newbie Trouble Linking Hierarchical Design In Behavioral Compiler

Hi, John,

I am having some trouble with my design since I cannot successfully link
it because of unresolved references created by Behavioral Compiler.  I run
the Behavioral Compiler (compile_systemc, bc_time_design, and schedule
commands) on my behavioral submodules.  Everything goes all right.  But
after elaborating the Top, the link command fails.  The Behavioral Compiler
creates designs associated with the submodule hierarchy (group1_0,
group3_1, loop_18, and so on) and some of these automatically created
designs have the same name while belonging to different submodules.  That
results in design name conflicts and causes the linker to attempt to link
the reference to the wrong design (Error LINK-1).  How do I avoid this
error?  I tried to use the command "rename_design" without success.  Any
help would be very much appreciated.
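
Is something along these lines even the right idea -- renaming the
auto-created designs to unique names right after scheduling each
submodule, before elaborating the Top?  (A sketch with made-up design
and file names.)

    /* after compile_systemc / bc_time_design / schedule on submodule A */
    rename_design group1_0 subA_group1_0
    rename_design loop_18  subA_loop_18
    write -hierarchy -format db -output subA_scheduled.db

    /* repeat with a different prefix for each submodule, then elaborate
       the Top and link against the renamed .db files */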

    - Lotfi Guedria
      Cetic                                      Belgium


( ESNUG 407 Item 12 ) ------------------------------------------- [02/26/03]

From: Alain Raynaud <stripper=alain nightclub=tensilica pot mom>
Subject: We At Tensilica Have Had Good Experiences With Forte' Perspective

Hi, John,

It is well-known that we use Vera here at Tensilica for our complex
testbenches.  We just added Forte' Perspective to our environment, because
it provides us with unique solutions for coverage and analysis.

There are many things we'd like to measure in our design, as to how well
we exercise all kinds of features of the chip.  I call this functional
coverage, as opposed to the RTL coverage tools that don't know what the
design intent is (they only know what the RTL says).  Forte has done it
with exactly the features I wanted:

  - temporal expressions, so that I can express easily what I care about
  - built-in coverage objects that are flexible enough that I can observe
    whatever I want
  - high-level transactions to raise the abstraction level from the RTL

To give you a quick example of the power of the Forte tool, let's say I'm
looking at my processor memory bus, with reads and writes flying around.
I'm really monitoring two values: the address and the page of the words
being accessed.  Also, I really care about writes followed by reads.  I'm
curious to know what kind of mix we get when both transactions are to the
same page.  Perspective will let me find out with only 2 lines of code:

  coverdef Result {
    dimension page;
    dimension addr1;
    dimension addr2;
  }

  match( Bus.WRITE(addr1, commonpage) --> Bus.READ(addr2, commonpage) ) {
    Result.contribute(commonpage, addr1, addr2);
  }

Some notes on the code -- the interesting part is the "match" statement;
not only can I put in any temporal expression, I can also extract data
values, and have implicit constraints within the expression.  In my
example, "commonpage" constrains the temporal expression to only match
whenever both reads and writes are to the same page.

At the end of the simulation, Perspective will let me explore the coverage
results and look for red flags: did we hit every page? Are the addresses
spread across the page? We actually found real problems this way!

    - Alain Raynaud
      Tensilica                                  Santa Clara, CA


( ESNUG 407 Item 13 ) ------------------------------------------- [02/26/03]

From: Mark S Wroblewski <clown=markwrob circus=attbi got gone>
Subject: An Ex-PhysOpt User Asks Is Cadence FE-Ultra/PKS Up To Snuff Now?

Hi, John,

Before being laid off from Cirrus, we used PhysOpt in our 0.25 and 0.18 um
flow to cover timing closure.  Our flow still had open holes with regard
to floorplanning and power supply design, implementation, and analysis.
Just before I left, we started evaluating Cadence FE Ultra with an eye on
replacing PhysOpt with PKS.  My question is what is the list's experience
with FE Ultra and PKS?  Do these tools do the jobs I mentioned well?
What are the gotchas?

First, a few words about our flow in use, which I refer to below as "the old
tools": DC for RTL-to-gates; PhysOpt in gates-to-placed-gates mode; Cadence
CTgen for clock tree generation;  Cadence SE for Sroute (power & ground) and
Warp route (clocks & signals); Synopsys Arcadia for 2.5D extraction;
PrimeTime for SDF creation and analysis; back-annotation to PhysOpt for IPO.
Floorplan and power & ground implementation was done either with scripts
driving Silicon Ensemble, or a Perl script which built a layout DEF
directly, since the last floorplanning tool we had, Cadence Design Planner,
had already been orphaned by CDN and never gave us all that we really needed
anyway.

Here are some examples of power issues we've dealt with in the past with
much hard work and time spent, and which I'm looking to FE Ultra to handle:

  - Notching power rings: with the old tools (CDN DP) we were using, putting
    down a rectangular power ring around a core was easy; getting the ring
    to have notches around cells in the corner like PLLs took a great deal
    of hand work.  Eventually, we wrote a script to drive SE sroute to do
    this, but it took a while to get it written right, and it must be
    rewritten for each new chip.  Now I see CDN FE Ultra can do this
    automatically if the user selects to "exclude selected blocks" when
    routing the ring.  Or at least Cadence says it can.  Can it?

  - Multiple layers on rings: to reduce the area required for supply rings,
    we used multiple layers.  We also intermingled the nodes in these ring
    stacks, so for example the outer of two rings would be stacked as VDD,
    GND, VDD on 3 of 5 routing layers, and the inner of the two rings would
    be stacked as GND, VDD, GND on some other 3 of 5 routing layers.  Strips
    across the middle on two layers vertically and one layer horizontally
    would tie everything together and deliver the supplies to the row metal.
    We were able to work this by hand, but the old tools (CDN DP, SE Sroute
    in "automatic" usage scenarios) couldn't cope.  Does FE Ultra do any of
    this effectively?

  - Power supply design and analysis: Our old way of design and analysis
    for the power supply metal was an MS Excel spreadsheet.  What I really
    was looking for was a tool that studied the placed netlist and helped me
    beef up or trim down the power supply grid.  FE claims to do this.
    What's the truth?  And what kind of clock trees does it assume?  Zero
    skew?  Useful skew?  Or does it use a netlist with clock trees inserted?

  - Ring macros and other special cases: SE Sroute does a decent job of
    connecting row metal to ring macros (e.g., RAMs, register files) in most
    cases but coughs sometimes where high congestion exists.  (For example,
    where a via was dropped to get from the macro's internal supply to the
    ring around the macro.)  Unfortunately, this happened often enough that
    we couldn't ignore it, so more hand fixing.  How is FE with this today,
    as I understand it uses a new version of SE's Sroute for most heavy
    lifting?

On PKS vs Physical Compiler: Two and a half years ago when we were looking
at physical synthesis, Physical Compiler was ready and garnering design
wins, while PKS required lots of workarounds to produce anything close to
useful results.  So we chose PhysOpt and haven't looked back, taping out a
number of designs.  Two and a half years is a lot of time to get problems
fixed.  Is it enough time for PKS to be ready?  Details?

    - Mark Wroblewski
      ex-Cirrus and looking                      Lafayette, CO


( ESNUG 407 Item 14 ) ------------------------------------------- [02/26/03]

Subject: ( ESNUG 406 #2 ) Synopsys "aman" vs. Lazy Man (lman) Man Script

> I have long enjoyed the use of both Synman (ESNUG 188 #2) and Synapropos
> (ESNUG 291 #6) for finding and viewing man pages.  Thank you, Larry
> Fiedler and Peter Kamphuis.  I have two things to offer in return.
> 
> As a tidbit, the following bash function allows one to get the man page
> for Synopsys warnings like so:
> 
>                           warnman UIO-12  ...
>
> ... I call it lman, as in Lazy Man.  Of course, that's man as in the
> command, not me, really.
>
>     - Steve Ehlers
>       Conexant                                 Austin, TX


From: Steve Golson <city=sgolson state=trilobyte yacht yon>

Hi John,

There is (usually) a script in the Synopsys syn/bin directory called "aman"
that does this.  It should automatically be on your search path (it's in
the same directory as dc_shell).  So to get the man page for a warning
message simply type

      aman OPT-101
      aman OPT-102

Two of my favorite error messages.  :-)

I say "usually" because a recent DC release omitted the aman script, so I
had to copy it from an old release.  Hopefully this was an isolated
occurrence, and aman has returned in all its glory.

    - Steve Golson
      Trilobyte Systems                          Carlisle, MA


( ESNUG 407 Item 15 ) ------------------------------------------- [02/26/03]

Subject: ( ESNUG 406 #9 ) Using Mentor Fastscan With Synopsys DFT Compiler

> Our Mentor AE has informed me that Mentor provides a utility which
> will convert a DCxP generated STIL file into Fastscan dofiles and
> testprocs.  My first question is whether or not anybody out there has
> experience either directly with this flow, or with a similar one
> involving using DCxP in conjunction with Fastscan, or other "foreign"
> ATPG tool?  My second question is how do people feel about DCxP's STIL
> file generation capabilities?  Any "tricks" one should know about
> before basing a design flow on it?
>
>     - Rich Conlin
>       Paradigm Works, Inc.                       Andover, MA


From: Gary Gebenlian <nairobi=garyg kenya=plxds aught psalm>

Hi John,

This is in response to Richard Conlin's inquiry into running a block level
DFT Compiler flow along with a Fastscan ATPG flow.

At Plexus we have used this exact flow on some very large designs, and
found the interface to be very smooth.  The procedure we used was as
follows:

  (1) Run scan insertion with DFT Compiler and write out a STIL protocol
      file using the command "write_test_protocol -f stil -out netlist.spf"

  (2) Use Mentor's test protocol conversion utility "stil2mgc" to convert
      this STIL file into a Fastscan native test procedure file with the
      command:

        "stil2mgc -stil netlist.spf -tpf netlist.tpf -dofile atpg.do"

      In addition to writing out a Fastscan test procedure file, this
      utility also generates a Fastscan dofile which defines the same
      clocks, pin constraints, and scan chains that were specified in
      DFT Compiler.

  (3) Invoke Fastscan with this dofile and perform a quick and compact
      test pattern generation with "create patterns -fast" (a command
      introduced in the latest 2002_4.10 release).

One point to remember is that if any "set_test_isolate/set_test_assume"
statements were used in DFT Compiler to constrain known values on
internal nodes, these will not be preserved by default when moving to
Fastscan (or any other ATPG tool).  Typically, the "test_setup" procedure
in the Fastscan tpf file will have to be edited to supply the
initialization cycles necessary to set up the internal nodes in the
block to the assumed values.  Alternatively, the internal constraints
can be forced in Fastscan as follows:

        add primary input -cut ublock/umodule/ff_reg/Q
        add pin constraint /ublock/umodule/ff_reg/Q C1

Good luck.

    - Gary Gebenlian
      Plexus Design Solutions, Inc.              Sudbury, MA


( ESNUG 407 Item 16 ) ------------------------------------------- [02/26/03]

From: [ I Wear My Sunglasses At Night ]
Subject: User Unhappy About Mentor Changing The Eldo Licensing Mechanism

Hi John,

Please keep me anon.

We've been using Mentor's electrical simulator "Eldo" for a long time now.
Until recently (version 5.7), a user could run several Eldo simulation jobs
simultaneously on a given machine using a single key -- one key per
(user+machine).  I call this a user-based licensing mechanism.

In the newer versions of the Eldo simulator (version 5.8), each simulation
job checks out a key, even if it is the same user on the same hostid.
I call this a job-based licensing mechanism.

This change is very important, as it means buying as many licenses as you
run jobs in parallel, compared to the previous mode where you needed one
license per user of the tool.  The old licensing mode allowed users to
take advantage of multiprocessor workstations.
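
One way to see the difference is to watch the license server while your
jobs are running (a sketch; the license file path and feature name are
made up):

    lmutil lmstat -a -c /tools/mentor/license.dat | grep -i eldo

Under 5.7 the checkout count stayed at one per (user+machine); under 5.8
it tracks the number of running jobs.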

The price of the simulator did not change.

I have several questions now:

  - Is this a license terms violation?
  - How do other simulators (HSPICE, Spectre, etc.) behave?  User-based
    or job-based?
  - Is Eldo really faster than HSPICE?

Thank you for your help.

    - [ I Wear My Sunglasses At Night ]


( ESNUG 407 Item 17 ) ------------------------------------------- [02/26/03]

Subject: ( ESNUG 406 #12 ) A Zero WLM May Not Be Best For DC Runs At 0.13

> Both Tim Lantz and Gregg Lahti have seen a benefit from using a zero
> wireload model with tightened timing constraints to encourage DC to
> produce a design that is more suitable for physical synthesis
> downstream.  We have independently found that same strategy at
> Monterey and it works pretty well in most cases.  The tighter clock
> cycles seem to encourage Design Compiler to pick faster architectures
> (just be sure to restore your actual timing constraints before entering
> into physical synthesis.)  Using zero wireload models keeps synthesis
> from wasting time on pointless buffering and sizing.
>
>     - Benny Winefeld
>       Monterey Design Systems                  Sunnyvale, CA


From: Jay McDougal <paris=jay_mcdougal france=agilent scott calm>

Hi John,

I couldn't help but sigh a little when reading the ESNUG 404 #16 and
ESNUG 406 #12 follow-up.  Remember my SNUG paper back in '96?

An excerpt from the conclusions section:

  "High Performance Design 

   Optimistic wire load models helped to achieve higher performance.
   The default wire load model from the library led to a 62 MHz design
   after routing.  By using a more optimistic wire load model in the
   40th to 70th+ percentile range, we were able to achieve 62 MHz
   with a much smaller area.  With the addition of IPO the optimistic
   wire loads did even better and we were able to achieve a routed
   design running at 68 MHz using a zero wire load model."

This was prior to the availability of physical synthesis.  

Until 0.13 um we found that zero wire load models during RTL-to-gates
compiles continued to give the highest performance (and lowest area),
even using PhysOpt or PKS physical synthesis.  However, with 0.13 um
designs we have found that a wire load model closer to the 50th percentile
gives the best results.  The cell area and structure given with zero
wire loads seem to be too far off, given the increasing contribution of
wire delay to total delay.

The accuracy of wire load models is an issue that is now well discussed,
and accurate models are fortunately widely seen as impossible/irrelevant.
The real issue is what effect the model you chose has on the downstream
tools/results.

Another interesting topic I foresee in the near future is very similar to
the wire load model problem: the correlation of the linear-multiplier-based
capacitance estimation used in physical synthesis (Steiner and/or global
routed) vs. the actual extracted capacitances with the effects of true 3D
extraction (wide varieties of layer usage, signal spacing/density, and
route over/unders).  If you do a comparison in 0.13 um of extracted caps
vs. those predicted by linear multipliers, you will find the variances are
quite large (especially in high density/congested designs) and look sort
of similar to wire load models vs. extracted data back in 0.25 um.
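
As a toy illustration (made-up numbers, just to show the failure mode): a
linear model might estimate a 500 um net at 0.2 fF/um = 100 fF, while a
true 3D extraction of that same net, densely coupled to its neighbors and
routed over varying layers, might report 160 fF.  No single multiplier can
capture a miss that depends on the net's surroundings.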

As we move to 90 nm and lower, I predict we will start to have similar
convergence issues between routed and extracted timing that we now have
with WLM vs. placed.  Maybe this leads to custom linear/edge cap
multiplier models, just like the custom WLMs of the past?  In fact, this
is exactly what is being done with estimate_rc in PhysOpt.  Eventually, I
think we will need something closer to a real extraction during physical
synthesis instead of linear multipliers.

    - Jay McDougal
      Agilent                                    Corvallis, OR


============================================================================
 Trying to figure out a Synopsys bug?  Want to hear how 16,288 other users
  dealt with it?  Then join the E-Mail Synopsys Users Group (ESNUG)!
 
     !!!     "It's not a BUG,               jcooley@TheWorld.com
    /o o\  /  it's a FEATURE!"                 (508) 429-4357
   (  >  )
    \ - /     - John Cooley, EDA & ASIC Design Consultant in Synopsys,
    _] [_         Verilog, VHDL and numerous Design Methodologies.

    Holliston Poor Farm, P.O. Box 6222, Holliston, MA  01746-6222
  Legal Disclaimer: "As always, anything said here is only opinion."
 The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com



