Editor's Note: I've heard through the rumor mill that Cadence paid between
  $200 million and $300 million (partially in Cadence stock) for Silicon
  Perspectives.  Wow.  For a company that was grossing ~$10 million, that's
  a 20X to 30X premium!  My congrats go to the new SPC millionaires!

  Now you might ask: why would Cadence pay such a high price for SPC?  From
  my viewpoint, it's a pretty easy decision.  PKS, as a product, just hasn't
  caught on with users yet.  (Compare those 250+ PhysOpt tape-outs to PKS'
  13 tape-outs and you'll see what I mean.)  Today's 'power user' flow:

            1.) Verilog simulation (VCS or NC-Verilog)
            2.) Design Compiler for RTL-to-gates
            3.) Silicon Perspectives for netlist block floorplanning
            4.) Synopsys PhysOpt for gates-to-placed-gates optimization
            5.) Avanti for CTS and final legal route

  Sometimes Cadence Silicon Ensemble is used in step 5, but even Gary Smith
  of DataQuest agrees that the power users generally prefer Avanti.  Buying
  Silicon Perspectives lets Cadence back into the 'power user' design flow.

  Now the big questions are: is Cadence going to quietly poison SPC/PhysOpt
  interoperability in an attempt to force customers to switch to PKS?  How
  soon is Hidden Dragon coming out and will it be roughly on par with SPC?

  An awful lot of money is riding on the answers to these questions...

                                             - John Cooley
                                               the ESNUG guy

( ESNUG 381 Subjects ) ------------------------------------------ [11/08/01]

 Item  1: How We Get Around DC's Crazy Naming Styles For Our PhysOpt Runs
 Item  2: Help!!  ECO's Can't Be Re-Timed In Our Arcadia/PrimeTime Flow!
 Item  3: ( ESNUG 380 #9 ) Users On The New DFT/PhysOpt Flow For 2001.08
 Item  4: ( ESNUG 380 #1 ) Conflicting Stories On Mosys RAM Soft Error Issue
 Item  5: ( ESNUG 380 #7 ) Synopsys Appreciates User Frustration With Presto
 Item  6: Tuxedo LTX With Verplex LEC Failed To Find Our Dynamic Glitches
 Item  7: ( ESNUG 380 #2 ) Synchronicity, Clearcase, Cliosoft SOS, Perforce
 Item  8: 5 Cadence SKILL/Virtuoso/DFII 4.3.4 Migrating To 4.4.6 Questions
 Item  9: PhysOpt Tcl Script To Simplify Stitching Together Top Level Blocks
 Item 10: Seeking User Experiences With New VHDL To Verilog Translation Tools
 Item 11: ( ESNUG 380 #4 ) Formality 2000.11 Is A Year Old; Look At 2001.08
 Item 12: ( ESNUG 380 #5 ) I Accidentally Used DFT Compiler, Not TetraMax
 Item 13: ( ESNUG 380 #13 ) I Found A Laborious Way To Handle Resets In DC
 Item 14: ( ESNUG 380 #6 ) Parasitics And PrimeTime In Hierarchical Mode
 Item 15: ( ESNUG 380 #12 ) But Our SysAdmin Says Suns Are More Reliable!
 Item 16: How Do I Detect When 2 Events Occur In The Same Verilog Time Tick?

 The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com


( ESNUG 381 Item 1 ) -------------------------------------------- [11/08/01]

From: "Gregg Lahti" <gregg.lahti@corrent.com>
Subject: How We Get Around DC's Crazy Naming Styles For Our PhysOpt Runs

Hi, John,

We've experienced some oddities with DC 2000.11 and 2001.08-1 when going
into our PhysOpt flow (gates-to-placed-gates).  DC appears to keep multiple
names (or references) of particular items within the database for nets and
ports and may write out a different name to a Verilog file vs a .db file.

What may be called port "foo" in Verilog could be referenced as "nm156"
in the .db file.  Your chances of finding the right one are about as good
as hitting a home run into the right-field pool off Schilling or Johnson.

Case in point: our flow does some heavy grouping so that we can clusterize
chunks of logic (100-150K gates in size) that PhysOpt can handle.  Steps
go as follows:

  1) Once we clusterize the netlist, we then write out the entire
     netlist and the clusters into separate .db and .v files.
  2) We run the full netlist through PrimeTime to apply our budget
     constraints and then write out a constraints script for each cluster.
  3) Then we go back into DC for the individual clusters, source the
     constraint script, and then write out an annotated .db for PhysOpt.
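
Roughly, steps (1) and (3) look like this in dc_shell-t.  This is only a
sketch: the cluster, cell, and file names are made up, the grouping criteria
are whatever you use, and the PrimeTime budgeting in step (2) is unchanged:

   # -- step 1: clusterize (group) the netlist, then write it all out --
   group {u_core/u_blk0 u_core/u_blk1} \
       -design_name CLUSTER0 -cell_name u_cluster0
   change_names -rules verilog -hierarchy
   write -format verilog -hierarchy -output full_chip.v
   write -format db      -hierarchy -output full_chip.db
   foreach_in_collection d [get_designs CLUSTER*] {
       current_design [get_object_name $d]
       write -format verilog -output [get_object_name $d].v
       write -format db      -output [get_object_name $d].db
   }

   # -- step 3: back in DC, constrain a cluster and hand it off to PhysOpt --
   current_design CLUSTER0
   source CLUSTER0_budget.con   ;# constraint script from PrimeTime in step 2
   write -format db -output CLUSTER0_annotated.db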

If we use the .db's created from the step (1) clustering, PrimeTime will
apply the constraints and produce a constraint script that is completely
different from the one we would have gotten with the Verilog netlist.  The
resultant constraint script will cause copious errors (incorrect port names,
incorrect net names) if I apply it against the .db file in DC.  If I use the
Verilog netlist written in step (1) for the PrimeTime run and for the DC
step before PhysOpt, things are fine and I can write out a properly
constrained .db file for my PhysOpt run.

Apparently, this is a known issue to Synopsys R&D.  Our AE wrote a Solvit
article (MISC-323 I think) which had the following to get the port names
written out correctly:

   set change_name_solve_bus_members false
   define_name_rules eq -equal_ports_nets
   report_names -rules eq
   change_names -rules eq
   report_names -rules verilog
   change_names -rules verilog

Unfortunately, it didn't solve all of the issues of using the .db file, but
it did save us on name changes through the grouping phase.  We wound up
adding these variables in our *dc.setup file to make the grouping easier
to swallow:

   set hdlout_internal_busses true;
   set change_names_dont_change_bus_members true;
   set verilogout_no_negative_index true;

That last variable is odd.  In DC 2000.11-SP2 and later, the default value
was set to false, which resulted in Verilog arrays containing negative
indices (e.g. foo[1:-3]).  Why anyone would want this feature is beyond me.

The end result for us was to use the Verilog-generated netlist for the
grouping and everything else going into the PhysOpt run.  This makes
carrying constraints around more painful and getting a timing budget
through PrimeTime more work, but it works.

    - Gregg Lahti
      Corrent Corp                               Tempe, AZ


( ESNUG 381 Item 2 ) -------------------------------------------- [11/08/01]

From: "Eyal Landesberg" <eyall@zoran.co.il>
Subject: Help!!  ECO's Can't Be Re-Timed In Our Arcadia/PrimeTime Flow!

Hi John,

In our sign-off procedure we use Arcadia and PrimeTime.  We use Arcadia
for RC extraction, and Primetime reads the DSPF file generated by
Arcadia for static timing analysis.

In our regular flow, the inputs to Arcadia are the LEF and DEF files.
When we're doing a manual ECO, the ECO data is in GDS2 format only.  To
extract the data, we need to run LVS on the GDS2 data and then invoke
Arcadia.  The DSPF generated by Arcadia in this case is at the transistor
level, while PrimeTime annotates only gate-level DSPF!!!

This is a very severe limitation!  The timing on the ECO nets cannot be
verified.  Solutions/work-arounds will be appreciated.

    - Eyal Landesberg
      Zoran                                      Israel


( ESNUG 381 Item 3 ) -------------------------------------------- [11/08/01]

Subject: ( ESNUG 380 #9 ) Users On The New DFT/PhysOpt Flow For 2001.08

> I want to alert your readers to the recent changes in the DFT Compiler /
> PhysOpt flow in the new 2001.08 release.
>
>     - Vandana Kaul
>       Synopsys, Inc.                             Mountain View, CA


From: Mark Wroblewski <markwrob@colorado.cirrus.com>

Hi John,

I read with great interest Vandana's pointers on the differing flows for
DFT/PhysOpt in ESNUG this morning.  Send her my thanks as a Synopsys
customer.

    - Mark Wroblewski
      Cirrus Logic                               Broomfield, CO

         ----    ----    ----    ----    ----    ----   ----

> Adding A Lockup Latch To The End Of Scan Chains
> -----------------------------------------------
> The "set_scan_configuration" command has a new option in 2001.08 which
> allows customers to add a lockup latch to the end of the scan chain.
> The command syntax to enable this capability is:
> 
>     "set_scan_configuration -insert_end_of_chain_lockup_latch true"
> 
> The default value for this option is false.


From: Paul Schnizlein <paul@agere.com>

Hi, John,

For us, the end of a scan chain is a flop that is also a primary output,
so we don't need a MUX.  Why would one want a lockup latch at the end of a
scan chain?  We know what those things are used for: shifting reliably
across different clock domains.  But I always associated the "end of the
chain" with the primary scan out, and where's the other domain after that?

    - Paul Schnizlein
      Agere Systems                              Austin, TX

         ----    ----    ----    ----    ----    ----   ----

From: Paul Fletcher <paul.fletcher@motorola.com>

Hi, John,

I read Vandana's entry in ESNUG 380 #9.  It was very good.  ESNUG is one
of the best things that Synopsys has going for it and it is great when
the Synopsys AEs contribute.

I have a question on a small section of Vandana's entry:

  "When starting with a design that has the scan chains stitched, use the
   command "set_scan_state scan_existing" instead of "set_scan_state
   test_ready" to indicate the design is scan chain stitched, not just
   scan replaced.

   The command "report_test -state" can also be used when starting with an
   existing scan replaced .db file to verify the scan state of the design."

If you are starting with a scan-stitched design and you set the scan state
to scan_existing, will PhysOpt know to ignore the scan chain connections in
its placement?  This seems a very important issue; if PhysOpt does not
ignore these connections, then the placement will be non-optimal.  PhysOpt
should be able to do this because it has all the information it needs.  It
knows the scan chains are stitched and it knows what nets are involved.
However, if PhysOpt does not use this information when it does the
placement, it is not doing as good a job as it could.
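
For reference, the hand-off Vandana describes boils down to something like
this in psyn_shell (a sketch only; the design and file names are invented):

   read_db my_block_scan_stitched.db
   current_design my_block

   set_scan_state scan_existing   ;# stitched chains, not just scan-replaced
   report_test -state             ;# confirm what the tool thinks the state is

   # ... read the floorplan / physical constraints here ...
   physopt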

I would be interested in knowing what PhysOpt does in this case.

    - Paul Fletcher
      Motorola                                   Chandler, AZ


( ESNUG 381 Item 4 ) -------------------------------------------- [11/08/01]

Subject: ( ESNUG 380 #1 ) Conflicting Stories On Mosys RAM Soft Error Issue

>  b) MoSys hasn't addressed the soft-error issue, so make sure you leave
>     plenty of room for (non-MoSys) RAMs for parity and ECC (especially
>     if your application is byte-wide).  (It's a shame MoSys didn't
>     adjust their form-factors for this).
>
>     - [ Captin Krunch ]


From: [ "Kitty", from Monsters, Inc. ]

John,

Funny he should mention that.  I was just talking to a MoSys field guy,
and he recommends ECC, for exactly that reason.  I'm not sure that the guy
would want me to talk about _exactly_ what he said, since he indicated that
the product had considerably worse than expected performance versus alpha
particles.  Please make me anon if you print this.

What I was told was that the performance versus soft errors was so bad as
to make the devices useless for their nominal purpose.  When I suggested
that parity connected to a reset circuit would be good enough for most
applications, he grunted and said that you really want to use ECC.  I didn't
ask for details, like error/second numbers, but the implication was pretty
clear.

I am _not_ an expert on MoSys's technology, just have bumped into it a few
times.  The word on the street is that MoSys uses an unusual, low-transistor
design, which allows them to get a lot more bits onto a given chip.  The
word is also that there are more problems with soft errors, which figures.

    - [ "Kitty", from Monsters, Inc. ]

         ----    ----    ----    ----    ----    ----   ----

From: "Mark-Eric Jones" <mejones@mosys.com>

Hi John,

I was surprised in your last ESNUG to see a letter from someone [ Captin
Krunch ] who said that MoSys isn't addressing the SER issue.  MoSys is
VERY aware of the soft error issue, and our position is that our 1T-SRAM
has great SER characteristics!  As a matter of fact, I gave a paper at Design
Con 2001 regarding this very issue.  Could you please put it in your
DeepChip.com download section so users can read it?  Here is an excerpt
from our Design Con paper:

  "So what are the reasons for the very good SER characteristics of
   1T-SRAM technology compared to traditional SRAM?  Firstly, the
   relatively high capacitance in the bit cell helps ensure a high
   critical charge is required to upset the cell.  In addition, the fact
   that MoSys' architecture divides the memory into a very large number
   of small banks (typically many hundreds of banks in a normal macro)
   results in a memory with very short bit lines for sensing the memory
   cell contents.  This results in large internal margins during
   sensing combined with less bit line charge collection area.  Also
   the exclusive use of p-channel structures in the memory core array,
   protected by an n-well helps improve the SER characteristics of
   1T-SRAM memory."

Also, in an article that Anthony Cataldo wrote, "SRAM soft errors cause
hard network problems," I stated that "the failure in time (FIT) of MoSys'
1T SRAM is below 1,000 and will stay that way down to 0.13 um, while SRAMs
are on track to hit 10,000 FITs at 0.15 um."  Here's the link:

          http://www.eetimes.com/story/OEG20010817S0073

MoSys does take the soft error issue seriously and our 1T SRAM has good SER
characteristics.

    - Mark-Eric Jones, VP
      MoSys                                      Sunnyvale, CA


( ESNUG 381 Item 5 ) -------------------------------------------- [11/08/01]

Subject: ( ESNUG 380 #7 ) Synopsys Appreciates User Frustration With Presto

> We experienced issues with the 2001.08 version as well.  We were getting
> some core dumps and such related to Presto.  To get around this issue in
> the short term we: "set hdlin_enable_presto false" in our .syn* setup
> file.  This resolved our issue for now by making the Presto stuff not
> run.  Maybe that will help other users?  Obviously if they rely on Presto
> features, this won't be a good solution...
>
>     - John Pane
>       Teradyne


From: Priti Vijayvargiya <priti@synopsys.com>

Hi, John,

I would like to point out that most of these issues are related to Presto
Beta releases and have been resolved in the 2001.08 production release (see
below).  A few weeks ago we released a patch version, 2001.08-1, addressing
some critical customer issues (this is not to be confused with 2001.08-SP1,
which is coming out soon).  Here's the status of the Presto issues discussed
so far in ESNUG:

  ESNUG 380 #7  - Gathering further information regarding the issue
  ESNUG 379 #5  - Resolved in 2001.08 release 
  ESNUG 378 #4  - Resolved in 2001.08 release 
  ESNUG 371 #14 - Released a new manual with 2001.08 release
  ESNUG 370 #9  - Resolved in Power Compiler patch for 2001.08-1 release 
  ESNUG 364 #2  - Resolved in 2001.08 release 
  ESNUG 357 #4  - Resolved in 2001.08 release
  ESNUG 357 #4  - Will be resolved in 2001.08-SP2 release

Presto has demonstrated 33% higher capacity, 5x faster run times on average,
and better QoR.  In addition, it supports a subset of the new IEEE 1364
Verilog-2001 Standard while maintaining compliance with the Verilog LRM.

Thanks for keeping us honest.  :)

    - Priti Vijayvargiya
      Synopsys, Inc.                             Mountain View, CA


( ESNUG 381 Item 6 ) -------------------------------------------- [11/08/01]

From: "Dhrubajyoti Kalita" <dhrubajyoti.kalita@intel.com>
Subject: Tuxedo LTX With Verplex LEC Failed To Find Our Dynamic Glitches

Dear John,

We are trying to verify our standard cells by ensuring that the
vendor-provided Verilog models are equivalent to the SPICE netlists.  Using
Tuxedo LTX, we tried extracting a Verilog model from the SPICE netlist and
then verifying equivalency between the extracted Verilog model and the
golden Verilog model by running Verplex LEC.

This flow failed to detect a previously known dynamic glitch caused by a
pass transistor in one of the standard flops.  It appears that the only way
to detect this kind of glitch is through SPICE simulation.

I would like to know if there's an automated way to generate exhaustive
input stimulus to verify functionality of sequential cells.  For
combinational cells, this can be easily automated, but I could not find
an automated way to generate test input for sequential cells.  Any user
input will be highly appreciated.
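
For a combinational cell, exhaustive stimulus is just a counter over all
2^N input combinations.  Here is a generic Verilog sketch; the cell function
is made up, and in practice the two assigns would be instances of the golden
and extracted models:

   module exhaustive_tb;
     reg  [2:0] vec;                  // 3 inputs -> 8 combinations
     wire       y_golden, y_dut;
     integer    i;

     // stand-ins for the golden model and the LTX-extracted model
     assign y_golden = ~((vec[2] & vec[1]) | vec[0]);
     assign y_dut    = ~((vec[2] & vec[1]) | vec[0]);

     initial begin
       for (i = 0; i < 8; i = i + 1) begin
         vec = i;                     // apply the next input combination
         #10;
         if (y_golden !== y_dut)
           $display("MISMATCH vec=%b golden=%b dut=%b", vec, y_golden, y_dut);
       end
       $finish;
     end
   endmodule

The catch, as Dhruba says, is that a sequential cell's response depends on
its state and on input ordering, so a simple counter is not enough, and the
sequence generation is the part that is hard to automate.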

    - Dhruba Kalita
      Intel


( ESNUG 381 Item 7 ) -------------------------------------------- [11/08/01]

Subject: ( ESNUG 380 #2 ) Synchronicity, Clearcase, Cliosoft SOS, Perforce

> I'm interested in some revision control software.  I searched DeepChip.com
> site in detail on this issue.  Synchronicity seems to have a lot of issues.
> Clearcase seems hard to work with if you have not worked with it before.
> I saw some articles on SOS from Cliosoft but these were really old postings.
> Our company is looking for a tool that will support backend layout process
> also.  I'd like to know which version control tool is a good choice today.
> 
>     - Vijay Govindarajan
>       Quicksilver Technology                     San Jose, CA


From: Anders Nordstrom <andersn@nortelnetworks.com>

Hi John,

I am currently using ClearCase for revision control and I have used
DesignSync from Synchronicity in the past.  I had several technical issues
with DesignSync, such as a corrupted database and being unable to access
the latest version of files.  This was over a year ago.  I am sure
Synchronicity has fixed these issues, but they caused me to switch to
ClearCase at the time.

Both ClearCase and DesignSync are like RCS in that you check out a file in
order to edit it and then check it in so that others can view and edit it.
Where I see a major difference is how the files you see and can access are
updated.

In DesignSync, you get the files to your work area, or in their terminology,
you populate your work space.  That is, you have to manually update the files
you want to access to a certain version.  You can populate with the latest
version or by a label you have set on a previous version of files.  One
advantage is that you control when you want the files you access to be
updated so your simulation will not crash because someone checked in a file
with a syntax error.  A drawback is that you can potentially have files that
are way out of date and if you want to make sure you are (for example)
simulating with the latest version of all files, you have to execute a
populate command before every simulation.  This adds to my simulation time.

In ClearCase, files are automatically updated to a version you specify.  I
find ClearCase very easy to use, but before you'll be productive with it you
must understand the concept of a view.  It is not difficult, just their way
of describing the files you see.  A view is all the files of a certain
revision that you can see and edit.  It includes private files on your local
disk as well.  You can set your view to the latest version of all files, to
all files with a certain label, or to all files as they looked on a
particular day.  When you set a view, files are automatically updated for
you if someone else changes a file in the same view.  This means that if you
set your view to latest, you will never work with an out-of-date database
unless you choose to.  If you don't want files to change, set the view to an
earlier version that is known to be good.
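
In ClearCase terms, those three choices are just different config spec
rules for the view.  An illustrative spec (the label and date here are
made up):

   element * CHECKEDOUT
   # pick one background rule:
   element * /main/LATEST                     # track the newest of everything
   element * REL_1_0                          # or freeze on a known-good label
   element * /main/LATEST -time 05-Nov-2001   # or as things looked that day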

Another advantage of views in ClearCase is when several designers are
working on the same problem.  If I have a VCD file with some changed files
in my view and someone else is going to work on them as well, they just set
their view to be the same as mine and now we see the same files.  No need
to copy files back and forth!

In the past, we handed off gate-level netlists to our ASIC vendors for
layout, so I cannot say how well either tool works for layout files.

    - Anders Nordstrom
      Nortel Networks Ltd.                       Ottawa, Canada

         ----    ----    ----    ----    ----    ----   ----

From: "Joseph Yoon" <jyoon@nortelnetworks.com>

Hi John

I found Cliosoft's SOS program to be an easy-to-use RCS and have used it to
manage our design files for our recent microchip.  Based on my tutoring of
co-workers, it took them on average 30 minutes to learn how to use the
basics of SOS.  Technical support from Cliosoft was great.  All of my
questions were handled promptly.

One thing to note though, I've only used SOS to manage text files.  I've
never tried it on other types of file formats.

    - Joseph Yoon
      Nortel Networks

         ----    ----    ----    ----    ----    ----   ----

From: "Don Monroe" <don_monroe@tenornetworks.com>

John -

Our hardware group has been using Cliosoft for 3 years and hasn't had any
major problems.  Our software organization uses ClearCase, but they have a
different use model than we do in hardware.

In the hardware group, we NEVER have a file being edited by more than one
person at a time and therefore never have to do a merge.  Also, we never
create branches.  These are the two main reasons the hardware group is
using Cliosoft rather than Clearcase.  SOS (Cliosoft) is much simpler and
costs less than Clearcase.

Our software group, on the other hand, uses MERGE and BRANCHES all the time.
They also make heavy use of ClearMake (parallel) while we in hardware just
use Unix (Linux) make and LSF.  They also have a full-time ClearCase
administrator (which seems to be required).

    - Don Monroe
      Tenor Networks

         ----    ----    ----    ----    ----    ----   ----

From: Paul Mitchell <paul.mitchell@pobox.com>

John,

I've worked with Synchronicity and Clearcase in the past (as well as plain
old RCS, CVS, VSS etc).  We are currently using an RCS system by a company
named Perforce (http://www.perforce.com).  This is by far the best RCS
system I have ever used.  It is easy to administer, relatively easy to use,
and very well thought out.  It does not have anything built in for backend
support, but it will store any kind of file, and is very easy to use from
within scripts.  In addition, their tech support has been excellent.

    - Paul Mitchell
      ATI Research, Inc.

         ----    ----    ----    ----    ----    ----   ----

From: "Tom Tessier" <tomt@hdl-design.com>

John,

As I explained to a client several years ago when they were looking for
an answer to this problem: you could stick with RCS, which your engineers
hate; you could switch to CVS and pay me to write wrapper scripts to make
it work; or you could buy Cliosoft SOS and have everything you need.  They
chose SOS.

Briefly, the clients that use SOS are working on 1 Mgate ASICs, FPGAs,
and systems.  As a testament to the stability of SOS, I have a client who
is managing ~10K files using SOS 1.80b, which was released over 2 years
ago.  There have not been any bugs.

This same client is migrating other teams who are doing hardware design
over to the 2.40b version of SOS, managing ~3K files and growing.  This
hardware design team had several tests which ran on the PC (Windows 95)
and SOS worked seamlessly.  The database resides on an HP-UX server with
the clients a mix of PCs, Suns, and HPs.  That is one of the big
differences between SOS and the others: they have the multi-platform
approach working.

For my clients I manage repositories from a Linux Server and allow my
clients and other subcontractors access via the SOS client software. 
They can be on any of the supported platforms and integrate with the
database.  This has allowed me to hire subcontractors from around the
world to work on my projects.

The ease of use cannot be overlooked; this package can be mastered by
anyone in under 30 minutes.  Sure, there are operations that take a little
deeper understanding, but 90% of the most common operations can be learned
in that 30-minute timeframe.  For those of you familiar with tags, ask
yourself this question: "What does it take to move a tag on a group of
files with RCS/CVS/ClearCase and the others?"  In SOS you select the group
of files and hit the tag button, pick a tag from the list, and select OK.
Now all the selected files have the same tag.

Another problem that comes up is "It worked last week!"  How do you get
back to that point?  SOS has the ability to back up to a date and time.
This is very powerful in a multi-person environment.

Finally, administration doesn't take a full-time engineer.  You do need
someone who will take the time to study setting up the environment; about
4 hours in most cases.  You know engineers don't RTFM!  There are lots of
options here, but the way my clients and I have used it is basically out
of the box.  Again, a testament to the developers of SOS thinking through
the design and what is needed by HW engineers.

The SOS team has focused on hardware design engineers writing HDL code
and everything that goes with the development of ASICs, FPGAs, and systems.
They have integration for Cadence and Visual HDL; I just started using
the Visual interface and it works great and supports binary formats.
Hardware engineers don't want to use software approaches to data
management; SOS provides them a powerful tool to manage data.

    - Tom Tessier
      t2design Incorporated                      Louisville, CO


( ESNUG 381 Item 8 ) -------------------------------------------- [11/08/01]

Subject: 5 Cadence SKILL/Virtuoso/DFII 4.3.4 Migrating To 4.4.6 Questions

> We are going through a migration from Cadence DFII 4.3.4 to 4.4.6.  There
> are some things that are kind of annoying, and others that are just
> broken.  Anyone run into any of these, and have some workarounds?
>
>   1) leSearchHierarchy will no longer find non-valid layers.
>      This is a lightning fast function that can't really be replaced
>      by anything else without a big slowdown.  Has anyone written
>      a wrapper for this function that makes all layers valid,
>      runs the function, then puts layers back the way they were?
>      You would need something like this to keep it from breaking
>      SKILL code everywhere it's used.  We depend on the fact that
>      it finds non-valid layers in some of our code.
>
>   2) Minor nit - how can you make the LSW have a grey background
>      for layer colors like it used to instead of a black background?
>
>   3) You can no longer select edges of shapes without changing to 
>      partial select mode.  In 4.3.4 you could pass your mouse over
>      the edge of a shape or the end of a path in full select mode,
>      and it would dynamically highlight it and you could select it.
>      In 4.4 it will do neither unless you change to partial select mode.
>
>   4) Has anyone found that leChopShape doesn't always behave right in 4.4?
>      It was bad in 4.3.4, where it would delete shapes in -nograph mode
>      if you didn't have your LSW initialized.  It seems to be worse
>      in 4.4, where it deletes shapes even if you have initialized the LSW.
>
>   5) If you use VersionSync, have you noticed that dbCopyCellView will
>      not always check out the destination cell, so the copy doesn't happen?
>
> Thanks for all your help in advance!
>
>     - Karl Johnson


From: phz@cadence.com (Pete Zakel)

I can answer one of the questions:

   "2) Minor nit - how can you make the LSW have a grey background
       for layer colors like it used to instead of a black background?"

I don't think you can in 4.4.6.  It was changed so that the background of
the pattern matches the background of the design window, since that makes
more sense and was the recommendation from usability testing.  In 4.4.7 a
new layout env variable, useEditorBackgroundColorForLSW, has been added to
allow using the LSW background color instead of the design window background
color, since not everyone likes the "fix".  The default is t, which means
the design window background is used.

    - Pete Zakel
      Cadence Design Systems

         ----    ----    ----    ----    ----    ----   ----

>   3) You can no longer select edges of shapes without changing to 
>      partial select mode.  In 4.3.4 you could pass your mouse over
>      the edge of a shape or the end of a path in full select mode,
>      and it would dynamically highlight it and you could select it.
>      In 4.4 it will do neither unless you change to partial select mode.


From: reb@cypress.com

And it still works the old way if your cellViewType is schematic or
schematicSymbol.  Go figure.  It's not very palatable to use bindkeys to
toggle cellViewType back and forth.  Somewhere in the binaries (or context
files?) there must be a list of types that are either eligible or ineligible...

    - Reb
      Cypress

         ----    ----    ----    ----    ----    ----   ----

>   2) Minor nit - how can you make the LSW have a grey background
>      for layer colors like it used to instead of a black background?

From: Grant Erwin <grant@tinyisland.com>

This one came up at work today.  Here's the CDN Sourcelink solution page
text:

  *Problem:
  After upgrading to IC446, the background color of the LSW has become
  black, compared to the grey of previous versions.  How do I get back
  the previous background color?
  
  *Solution:
  To get back the behavior of the previous version there is an LE
  environment variable, useEditorBackgroundColorForLSW, available.
  This allows the user to control whether the LSW will take the
  background color of the LE design window as the background color of
  the LSW's LP icon or not.
  
  The user can specify this in ~/.cdsenv or in the local directory
  ./.cdsenv by setting:

           layout useEditorBackgroundColorForLSW  boolean nil
  or
           layout useEditorBackgroundColorForLSW  boolean t

  The default value is t, meaning the LSW will take the background color
  of the LE design window as the background color of the LSW's LP icon.
  
  If the user specifies

           layout useEditorBackgroundColorForLSW  boolean nil

  in .cdsenv, the LSW uses the background color of the HI form as the
  background color of the LSW's LP icon.  This restores the LSW behavior
  of the previous versions.

Hope this helps.

    - Grant Erwin                                Kirkland, Washington


( ESNUG 381 Item 9 ) -------------------------------------------- [11/08/01]

From: "Mike Montana" <montana@synopsys.com>
Subject: PhysOpt Tcl Script To Simplify Stitching Together Top Level Blocks

Hi, John,

One problem my PhysOpt customers sometimes face is getting the top level
ports in their logical netlist to align with the top level pads in their
physical design.  PhysOpt makes this association automatically if the
cells in the logical library have the proper attributes.  That is:

  1. Each I/O pad cell has the "pad_cell" attribute
  2. The physical pad on each I/O cell has the "is_pad" attribute

Unfortunately, some libraries do not contain these attributes.  In this
case, the PhysOpt commands will not run and the user will see:

  Error: Cannot inherit location from pad pin 'reddog_iox/io9/IOBUFx/N01'
         to port 'AD09X' because this cell reddog_iox/io9/IOBUFx is not a
         real pad cell. Please check the library. (PSYN-117)
  Error: Port AD09X has no location. (PSYN-007)

When these errors occur, the user has a couple of options.  Contact the
library vendor and ask them to add the proper attributes.  (A great long
term solution, but not necessarily the fastest!)  Or the user can identify
the I/O cells and modify the library to have the proper attributes.  Again,
a great solution, but what happens if you do not have an ASCII version of
the vendor's library?

An AC up in Denver came up with a very fast and efficient way to solve this
problem.  He wrote a tcl script which parses a log file from a PhysOpt run
to identify I/O cells missing these key attributes.  The script's output can
be sourced into PhysOpt and it will then apply all of the necessary
attributes to the libraries.  Here is how the flow works:

  1) Load the design and PDEF into PhysOpt.  Run physopt -check_only and be
     sure to redirect the result to a log file, e.g.

	      psyn_shell> physopt -check_only > check_only.log

  2) Examine the log file and see if there are any problems related to top
     level port locations (i.e. PSYN-117 or PSYN-007).  If so, run the tcl
     script provided in a UNIX shell environment.

              UNIX> is_pad.tcl check_only.log fixio.tcl

  3) Source the resulting fixio.tcl script in your current psyn_shell
     session and your top level ports will now automatically be located by
     PhysOpt based on the top level I/O cells.  Here is a sample fixio.tcl
     script:

        set top_name [current_design]
        current_design [ get_attribute [ find cell bigchip_iox ] ref_name ]
        current_design [ get_attribute [ find cell io0 ] ref_name ]
        set_attribute -type boolean [get_lib_cells \
            -of_objects [find cell IO1Cx ] ] pad_cell true
        current_design $top_name
        set_attribute -type boolean [find pin \
            bigchip_iox/io0/IO1Cx/N01 ] is_pad true

        set top_name [current_design]
        current_design [ get_attribute [ find cell bigchip_iox ] ref_name ]
        current_design [ get_attribute [ find cell io1 ] ref_name ]
        set_attribute -type boolean [get_lib_cells \
            -of_objects [find cell IO1Cx ] ] pad_cell true
        current_design $top_name
        set_attribute -type boolean [find pin \
            bigchip_iox/io0/IO1Cx/N01 ] is_pad true


Two things to remember:

  1) You will need to modify the first line of the script to point to your
     tcl-tk install path.
  2) Once you create the fixio.tcl script, you will need to source it each
     time you start a PhysOpt session.  The script applies the proper
     attributes to the library loaded in memory.  It does NOT modify the
     library.db file provided by the vendor.

The script has been tested using PhysOpt 2000.11 and 2001.08.

    - Mike Montana
      Synopsys, Inc.                             Dallas, TX


#!/depot/tk-8.1/bin/wish8.1
# Script name: is_pad.tcl
#
# Input:  Transcript/logfile of a Physical Compiler run that
#         encounters the PSYN-117 error.
# Output: A script that you can source in your synthesis
#         script. Be sure to source this generated script
#         after you have read in the design and linked it
#         (the key is to make sure the libraries exists in
#         memory).
# Disclaimer: Scripts have been tested for Physical Compiler
#         versions 2000.11 and 2001.08
#
# Example Execution:
#         is_pad.tcl check_only.log fixio.tcl
#

puts "is_pad.tcl processing beginning..."

set debug 0
set verbose 1
set i 0
foreach arg $argv {
    incr i
    if {$debug == 1 } {
	puts "arg = $arg"
    }
    if {$i == 1} {
	set inputfile $arg
    }
    if {$i == 2} {
	set outputfile $arg
    }
}

if {$debug == 1 } {
    puts " inputfile is $inputfile"
    puts " outputfile is $outputfile"
}

if [catch {open $inputfile r} inputID] {
    puts "cannot open file '$inputfile'" ; exit
}
if [catch {open $outputfile w} outputID] {
    puts "cannot open output '$outputfile' file for writing." ; exit
}

while { [gets $inputID line] >= 0 } {
    if {$debug == 1 } {
	puts $line
    }

    # remove tabs, replace with single space
    regsub -all \t+ $line { } line
    # remove beginning spaces from line
    regsub -all {^[ ]+} $line {} line

    set pad_pin_name ""
    set pad_name ""
    set Keyword ""

    # if first word on the line is Error: then parse the line.
    scan $line "%s" Keyword
    if {$Keyword == "Error:"} {
	# Parse the current line
	# look for the error code of PSYN-117 in the line
	set value [ string first "(PSYN-117)" $line ]
	if { $value != -1 } {
	    # if found the error code, then parse out the name of the pad pin.

	    # Note $position_of_pad_pin_name is the length of the error
	    # string to the first ' in the following line:
	    # Error: Can not inherit location from pad pin '
	    set position_of_pad_pin_name [expr [string first "'" $line] + 1 ]

	    # find the position of the last character in the pad name
	    set len [string length $line]
	    set new_string [string range $line $position_of_pad_pin_name $len]
	    set i [expr [string first "'" $new_string ] - 1]

	    #strip out the pad name from the line
	    set pad_pin_name [string range $new_string 0 $i]
	    if {$debug == 1 } {
		puts $pad_pin_name
	    }

	    # find the position of the last character in the pad name
	    set i [expr [string first "cell" $line ] + 5]
	    set new_string [string range $line $i $len]
	    set i [expr [string first " " $new_string ] - 1]
	    #strip out the pad name from the line
	    set pad_name [string range $new_string 0 $i]
	    if {$debug == 1 } {
		puts $pad_name
	    }
	}
    }

    # set the appropriate attribute
    if { $pad_name != ""} {
	# now need to traverse the hierarchy to the leaf cell
        # if find hierarchy then set the current design to the leaf cell
	puts $outputID "set top_name \[current_design\]"
	set i [expr [string first "/" $pad_name ] - 1]
	while { $i > 0 } {
	    set name [string range $pad_name 0 $i]
	    puts $outputID "current_design \[ get_attribute \
               \[ find cell $name \] ref_name \]"
	    set i [expr $i + 2]
	    set len [string length $pad_name]
	    set pad_name [string range $pad_name $i $len]
	    set i [expr [string first "/" $pad_name ] - 1]
        }

	puts $outputID "set_attribute -type boolean \
                     \[get_lib_cells -of_objects \[find cell \
                       $pad_name \] \] pad_cell true"
	puts $outputID "current_design \$top_name"
    }
    # if found pad pin name, then output to tcl script to
    # set the appropriate attribute
    if { $pad_pin_name != ""} {
	puts $outputID "set_attribute -type boolean \[find pin \
           $pad_pin_name \] is_pad true"
    }
}

puts "   Completed parsing $inputfile"
close $inputID
close $outputID
exit


( ESNUG 381 Item 10 ) ------------------------------------------- [11/08/01]

From: Daniel Szoke <Daniel.Szoke@nsc.com>
Subject: Seeking User Experiences With New VHDL To Verilog Translation Tools

Hi, John,

We need to do a design involving mixed VHDL & Verilog.  Our first preference
would be to translate the VHDL parts to Verilog (our standard language) for
maintainability.  I would like to question your readers regarding their
experience with translation tools.  (ESNUG dealt with this issue back in
1995 but I expect there have been some advances since then.)  Especially:

   1. What VHDL constructs are/were translated
   2. Quality (readability) of the translated Verilog
   3. Compare synthesis of: VHDL -> gates VS. VHDL -> Verilog -> gates
   4. Compare sim of: mixed VHDL-Verilog VS. translated Verilog-Verilog

Thanks to all.

    - Daniel Szoke
      National Semiconductor                     Israel


( ESNUG 381 Item 11 ) ------------------------------------------- [11/08/01]

Subject: ( ESNUG 380 #4 ) Formality 2000.11 Is A Year Old; Look At 2001.08

>      Chip 2 (Gate2Gate)  4 M gates, flat design:
>
>                                Time            Memory
>                              --------        ---------
>      Avanti Chrysalis 3.0    3596 min        14870 Mbyte
>      SNPS Formality 2000.11   112 min         2399 Mbyte
>      Verplex Tuxedo 2.0.8.a    89 min         2817 Mbyte


From: "Nathan Bailie" <NBailie@amcc.com>

Hi John -

I noticed that [ Mr. Bigglesworth ]'s comparisons in ESNUG 380 #4 benchmarked
Synopsys Formality 2000.11 vs Verplex vs Chrysalis.  That's almost a year
out of date.  I have seen *significant* improvement in Formality 2001.06
(an early release of the 2001.08 version) as compared to last year's
2000.11 version.  I recently used both versions on individual design blocks
with up to 1 million gates of logic in each.  RTL-to-gate comparisons ran
2-3X faster with 2001.06 than with the previous version.  Also, 2001.06
breezed through the checking of large complex portions of logic that had
been trouble for 2000.11.  

Unfortunately, I haven't compared the results with other tools such as
Chrysalis, Verplex, or FormalPro yet, so I can't offer comparative hard
metrics yet.  Of course, speed isn't the only factor in choosing the tools.
I would like to see more information and tool comparisons on the ability
to debug RTL-to-gate miscompares.  I know that Mentor Graphics is claiming
advances in this area with FormalPro.

Still, anyone who was disappointed with the performance of Formality 2000.11
should definitely take a look at the newer 2001.08 version.  

    - Nathan Bailie
      AMCC                                       Raleigh, NC


( ESNUG 381 Item 12 ) ------------------------------------------- [11/08/01]

Subject: ( ESNUG 380 #5 ) I Accidentally Used DFT Compiler, Not TetraMax

> With TetraMax, (via dc_shell) I used
>
>   create_test_patterns -output core.vdb
>   write_test -input core.vdb -output corepatterns.v -format verilog -first
>   1
>
> This generated a huge test that applies patterns at the core inputs,
> toggles clocks, and checks outputs.
>
>     - Don Dattani
>       Zucotto Wireless, Inc.                     Ottawa, Ontario, Canada


From: Don Dattani <dond@zucotto.com>

Hi, John,

To follow up on my ESNUG 380 #5 question, the short answer is (and it came
directly from Synopsys): don't use DFT Compiler to create_test_patterns!!

DFT Compiler (although it uses a separate license) is invoked through
dc_shell -- which is what I did the first time.  I had mistakenly thought
I was using TetraMax through dc_shell there!  DC directives such as
create_test_patterns are legacy commands (they were used in the past and
are still around), BUT via dc_shell it is not easy to set the scan options
so that it recognizes a design with:

      a) full-scan methodology elements
      b) test logic elements (no scan insertion)

So their recommendation is that I should use the real TetraMax.  TetraMax
allows you to fully specify the nature of the test logic (how to set up the
state in scannable mode for ATPG).
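
For anyone else heading down the same path, the stand-alone TetraMax flow
is roughly the following.  This is a sketch from memory; the file names are
made up and option spellings may differ by version, so check the man pages:

   read_netlist core.v
   read_netlist scan_cells.v -library
   run_build_model core

   # the protocol file (STIL/SPF) is where you describe the scan setup and
   # test logic -- the part that was hard to express through dc_shell
   run_drc core.spf

   add_faults -all
   run_atpg
   # ...then write_patterns in whatever format your simulation flow wants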

I have been using TetraMax now, but there seems to be a TetraMax bug
(that Synopsys is aware of) in its reading of UDP files.  The fun
never stops.

    - Don Dattani
      Zucotto Wireless, Inc.                     Ottawa, Ontario, Canada


( ESNUG 381 Item 13 ) ------------------------------------------- [11/08/01]

Subject: ( ESNUG 380 #13 ) I Found A Laborious Way To Handle Resets In DC

> ... meant for each block being synthesized there were effectively 2 reset
> trees, one with an inverter as its driver.  Consequently it was not
> possible to simply apply the "set_ideal_net" type commands and expect DC
> to not put buffers in the inverted reset tree, as the inverter protected
> the inverted tree from such constraints.  ...  In the end it WAS possible
> with some fiddling around using commands like clean_buffer_tree to strip
> out all reset buffers, but I still had the problem that my layout tool had
> to cope with a few dozen inverters midway down the reset network.  ...
>
>     - Jon Harris
>       Siroyan                                    Reading, Berkshire, UK


From: Jon Harris <jharris@siroyan.com>

John, I know you like follow-ups.  I did find something on SolvNet which
would help with this issue, but it is very laborious.  Essentially you label
*every* sequential 'always' block in your source code(!), create hierarchy
around these in DC, synthesize them to the appropriate flop & reset logic,
and set_dont_touch.
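
For the curious, that recipe amounts to something like this (a sketch with
made-up names; check SolvNet for the exact group options your DC version
supports):

   // RTL: the label on the sequential always block is what DC groups on
   module rst_reg (q, d, clk, rst_n);
     output q;
     input  d, clk, rst_n;
     reg    q;

     always @(posedge clk or negedge rst_n) begin : u_rst_regs
       if (!rst_n) q <= 1'b0;
       else        q <= d;
     end
   endmodule

and then in dc_shell:

   group -hdl_all_blocks       ;# one level of hierarchy per labeled block
   compile
   set_dont_touch [get_designs *u_rst_regs*]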

Needless to say I haven't gone down this route! 

    - Jon Harris
      Siroyan                                    Reading, Berkshire, UK


( ESNUG 381 Item 14 ) ------------------------------------------- [11/08/01]

Subject: ( ESNUG 380 #6 ) Parasitics And PrimeTime In Hierarchical Mode

> By the way, I would also be really interested in knowing if others are
> able to run PrimeTime in a hierarchical mode using detailed parasitics.
> This involves using read_parasitics -increment to read in each block
> level dnet.spef file and then the top level.  I have tried this and
> the tool gives a load of false errors since it has not gotten the full
> RC network yet.  ... The report_annotated_parasitics command does not
> work well, so you have to write tcl scripts to figure out if all the
> parasitics are annotated. ... The complete_net_parasitics command is also
> error prone and should not be a global command.
>
>     - Bruce Zahn
>       Agere Systems                              Allentown, PA 


From: Al Jacoutot <Al.Jacoutot@SiliconAccess.com>

John,

I believe the "-quiet" option on read_parasitics will give Bruce what he is
after.

Use the -quiet when reading the lower blocks -- in the same way you are
using the -incremental option -- and don't use it with your last annotation.
You will hold off running the annotation report till the very end in this
case.
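
In pt_shell that would look something like this (a sketch using only the
options mentioned in this thread; file names are invented):

   # block-level SPEFs first, quietly; incomplete-RC complaints are expected
   read_parasitics -increment -quiet blockA.dnet.spef
   read_parasitics -increment -quiet blockB.dnet.spef

   # the last (top-level) annotation completes the RC networks
   read_parasitics -increment top.dnet.spef

   # only now is the annotation report meaningful
   report_annotated_parasitics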

    - Al Jacoutot
      Silicon Access Networks                    Raleigh, NC


( ESNUG 381 Item 15 ) ------------------------------------------- [11/08/01]

Subject: ( ESNUG 380 #12 ) But Our SysAdmin Says Suns Are More Reliable!

> Here's my contribution to the growing number of Linux/Sun/IBM synthesis
> benchmarks.  Task: synthesize top-level design in IBM cu11, 8,669,551
> cells, 121,179 instances
>
>     usertime   systime   elapsed    cpu             mem/mhz
>     --------------------------------------------------------------
>     19757.0u   83.0 sec  5:33:44    Sun Ultra60     2gb/333mhz
>     15514.5u  335.5 sec  4:30:54    IBM rs6000      4gb/400mhz
>      9833.5u   35.2 sec  2:44:32    Intel P3-1000   1gb/1000mhz *
>
>     * - we used the Abit vp6 motherboard
>
> Interestingly enough, SUN wants $39,000 for 4 gb of memory on their new
> systems, while 4 gb of ECC Registered DDR can be had for around $2000
> (about $500 for a 1gb DIMM).
>
>     - Shannon Hill
>       Tenor Networks, Inc.                     Acton, MA


From: Paul Min <pmin@terayon.com>

Hi John,

I enjoy reading your ESNUG letters.  I found it particularly interesting
reading about the Linux benchmark results by various users - Scott Evans,
Shannon Hill, Russ Petersen, et al - in ESNUG 380 #12.  Their benchmark
results gave us good reasons to look further.  However, it would be great
if we could get the overall picture in terms of reliability and support
(both OS and HW) as well as the performance/cost comparison with Unix
workstations.  Our SysAdmin says PC hardware is much less reliable than
workstations, so there are more potential problems when you need the server
up and running for months without crashing.  Also, the OS and HW come from
two different vendors for Linux+PC, whereas the OS and HW of a workstation
come from a single vendor (e.g. Sun), which makes it a lot easier for
SysAdmins to manage.

    - Paul Min
      Terayon Communication Systems              Santa Clara, CA


( ESNUG 381 Item 16 ) ------------------------------------------- [11/08/01]

From: [ Curious George ]
Subject: How Do I Detect When 2 Events Occur In The Same Verilog Time Tick?

John, anon, please.

Do you know of any way to detect when 2 events occur on the same time tick
using Verilog-XL or VCS?  For example, I have the following code:

       module mylatch (Q, D, Clk);
         output Q;
         input  D, Clk;
         reg    Q;

         always @(Clk or D)
           if (Clk) Q = D;
       endmodule

I want to detect the times when posedge Clk coincides with posedge D or
negedge D.  Thanks.
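
One way to get close is to timestamp the D events and compare at the clock
edge.  A sketch (the module is mine; note that same-tick ordering between
separate always blocks is not guaranteed, so D events scheduled after the
clock edge may need a further #0 or a $strobe-style check):

       module coincidence_check (Clk, D);
         input Clk, D;
         time  d_event_time;

         // remember when D last changed
         always @(D) d_event_time = $time;

         // at the clock edge, see if D also changed in this same time tick
         always @(posedge Clk) begin
           #0;   // let other events in this tick settle first
           if (d_event_time == $time)
             $display("%0t: posedge Clk coincides with a D edge", $time);
         end
       endmodule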

    - [ Curious George ]


============================================================================
 Trying to figure out a Synopsys bug?  Want to hear how 11,000+ other users
    dealt with it?  Then join the E-Mail Synopsys Users Group (ESNUG)!
 
       !!!     "It's not a BUG,               jcooley@world.std.com
      /o o\  /  it's a FEATURE!"                 (508) 429-4357
     (  >  )
      \ - /     - John Cooley, EDA & ASIC Design Consultant in Synopsys,
      _] [_         Verilog, VHDL and numerous Design Methodologies.

      Holliston Poor Farm, P.O. Box 6222, Holliston, MA  01746-6222
    Legal Disclaimer: "As always, anything said here is only opinion."
 The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com



