From: Dave Peeters <peeters@crestmicro.com>

  Hi, John,

  Since you raised a similar issue with your AAA comments in ESNUG 399, I
  thought that I should let my fellow engineers know about this one.

  I have used IEEE Term Life insurance (great rates!) for the last 20
  years.  In 1998, they changed underwriters and instituted a 20% discount
  for non-smokers.  Apparently they sent out a notice that this was going
  to happen and that the DEFAULT for all current insureds was that they
  were SMOKERS.  If you were not a smoker you were supposed to proactively
  notify them.  On the policy renewals that are sent semi-annually there
  is no indication that you are classified as a smoker.  Long story short,
  I reviewed my recent renewal, thought the rates were high and went to
  the web site to get an on-line quote.  There was a big difference in the
  price (about 20%) so I called and found out that since 1998 I have been
charged the SMOKER's rate!

  I inquired about a refund for the excess premiums - no deal.  If any of
  the ESNUG subscribers signed up for this insurance prior to 1998 then
  they should check to make sure that their smoking status is correct.
  The whole deal seems unfair and I wonder how many others were caught in
  this "default" decision.  It seems that at a minimum the renewal notice
  should indicate this important distinction.

      - Dave Peeters
        Crest Microsystems                     Cerritos, CA


( ESNUG 401 Subjects ) ------------------------------------------- [10/17/02]

 Item  1: Does Either Verilog Or VHDL Give Better Results During Synthesis?
 Item  2: User Benchmarked VCS 7.0 At 2X VCS 6.2 On 4 Million Gate Netlist
 Item  3: ( ESNUG 400 #3 ) Tech Details Of The LSI Logic Monterey Tape-out
 Item  4: User Seeks General Rule-of-Thumb For set_clock_skew -uncertainty
 Item  5: ( DAC 02 #7 ) Sequence Addresses Its VHDL And HP Runtime Issues
 Item  6: We Found Verplex LEC Caught Some Errors That Formality Had Missed
 Item  7: ( DAC 02 #2 ) Synchronicity Was So Bad We Wrote Our Own RCS Tools
 Item  8: Engineers Shifting Away From Verilog/VHDL/SystemC And Towards SoC
 Item  9: EE Times Censorship & Perl vs. Tcl/Tk vs. Awk vs. Python vs. Ruby

 The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com


( ESNUG 401 Item 1 ) --------------------------------------------- [10/17/02]

From: Walter Rau <wrau@Dorsalnetworks.com>
Subject: Does Either Verilog Or VHDL Give Better Results During Synthesis?

John,

Used to be more actively involved in ASIC design, now doing mostly FPGAs
(medium to small scale)...  My limited experience has been that Verilog
simulators seem to run faster than VHDL ones (these were gate-level
sign-offs).
But, does either language give better results during SYNTHESIS??  I would
quantify results as:

     (1) Number of gates
     (2) Speed of design

So, if I took a design, coded it in VHDL and Verilog, ran both through
their respective synthesizers (preferably same vendor such as Synopsys),
would one language achieve better results??  (And yes, I know there are
"good" and "bad" ways to write synthesizable code; let's ignore that.)

    - Walter A. Rau, Jr.
      Dorsal Networks                            Columbia, MD


( ESNUG 401 Item 2 ) --------------------------------------------- [10/17/02]

From: Robert A. Clark <rac@paramanet.com>
Subject: User Benchmarked VCS 7.0 At 2X VCS 6.2 On 4 Million Gate Netlist

Hi John, 

Just wanted to pass on some information about the gate level performance of
the VCS 7.0 product.  We have a large ASIC that was taking 4 hours to
complete a fairly simple diagnostic with the entire design in gates with
the VCS 6.2R13 version.  Here are some of our netlist stats:

   - 85 SDF files being read
   - all gates are SDF annotated
   - 1,744,963 module instances
   -   367,798 combinational UDP instances
   -   264,410 sequential UDP instances
   - 4,039,330 total gates

We tried the VCS 7.0 version on the same netlist, and it has consistently
beaten the VCS 6.2 release by a factor of 2.  There was also a compile time
improvement of 3X, due to a faster SDF reader that was introduced somewhere
between the 6.2 and 6.2R13 release.  With the "old" SDF reader, it was
taking over 35 minutes to compile our design.  It now takes just over 10
minutes.  The newer SDF reader is also in the VCS 7.0 release.

    - Robert Clark
      Parama Networks, Inc.


( ESNUG 401 Item 3 ) --------------------------------------------- [10/17/02]

Subject: ( ESNUG 400 #3 ) Tech Details Of The LSI Logic Monterey Tape-out

> The 10 additional Monterey tape-outs came from DAC 02 #15 plus a detailed
> Monterey tape-out you'll find in next week's ESNUG post.


From: Jim Jensen <jensenja@lsil.com>

Hi John,

We first started working with Monterey at the beginning of last year.  We
engaged with Monterey to develop a secondary / backup layout system at LSI.
Our primary layout system (FlexStream) runs off the Avanti Milkyway
database, and we use a combination of Avanti, LSI internal, and other third
party tools in the FlexStream system.
 
We've taped out one real chip using Monterey's tools.  The chip is a 3
million gate Ethernet channel switch.  It contains 700K standard cells,
~80 RAMs, and 2 clock trees - the larger one with 66K flops.  Our approach
with this
design was a flat methodology using Monterey's Dolphin system.

The initial floorplanning and some timing critical cell placement were done
using our FlexStream tools.  The output from these tools was then taken into
Dolphin to do place and route, with the results being brought back into the
FlexStream flow for final backend checks and clean-up.  Dolphin required a
lot of hardware for this design - the systems that we ran it on each had
8-12 CPUs and 19-24 GB RAM.  We worked closely with Monterey over the next
few months, and in October, we were getting pretty good results out of
Dolphin.  It was able to place the entire design flat and route it with only
36 routing violations and 2 antenna errors being detected by Dolphin.  This
is very good considering that there are over 700,000 nets.

We had to run a bunch of ECOs to fix the clock tree because the clock
constraints did not completely define the balancing requirements of the
trees.  Initially we had some very unbalanced trees that we had to fix, but
eventually we got the skew down to 0.7 nsec.

Dolphin works well when all constraints and initial conditions are set up
correctly and do not change.  This being a real design, though, that wasn't
the case and it did cause some issues with the Dolphin flow.  The netlist
had some changes, and there were some user errors in setting up the initial
floorplan.  This resulted in our re-running the entire design all the way
through detailed route about 8 to 10 times.

We ran our sign-off DRC tools on the Dolphin results, and found that the
violations detected by Dolphin correlated very well with our DRC tools.  For
the most part, DRC violations that were not caught by Dolphin were due to
setup errors or boundary conditions that it would not be able to detect in
any case.
 
At this point, we ran up against the wall in terms of schedule.  Although we
had achieved very good results with Dolphin and our team was confident that
we could complete the design with Dolphin, the overall consensus favored
finishing the design with our existing flow because we are more familiar
with its ECO capability than Dolphin's.  We decided to use the results that
we had achieved so far with Dolphin, and finish the design with a series of
ECOs using our established ECO flow.  We felt that this would give us the
best chance of finishing the chip quickly.


Monterey Issues
---------------

Here are some of the issues that we ran into along the way.

At the time this design was started, Monterey did not have a front-end tool
for Dolphin that would do initial floorplanning.  We had to use our LSI
in-house tools, and come up with a way to transfer the data correctly into
Dolphin.  For the most part this worked, but it was a detailed exercise
which led to many errors.  Now Monterey has IC Wizard for floorplanning
from their acquisition of Aristo.

The flow with Dolphin is very restrictive with respect to making changes
during layout.  It was very difficult to implement a change midway through
the process.  Typically any change meant a restart.

User errors are going to happen.  Errors will occur even in an environment
you are familiar with, but with the Dolphin layout we were using tools that
were new to everyone.  Besides this, the floorplanning used some advanced
in-house techniques that required special handling.  On several
occasions we had to restart the Dolphin layout due to errors in the initial
floorplan.  Had the Dolphin tool been more flexible, some of these may not
have required a restart.  An example is a section of logic that had pre-
routes.  These were showing up in Dolphin shifted by a few microns from
where they should be, which caused severe routing problems.  We tried to
fix these in Dolphin after the physical optimization, but were unable
to do it successfully.

We should have been analyzing DRC errors earlier.  We didn't realize until
late in the program that Dolphin was not detecting certain DRC errors.
This turned out to be caused by not setting up our technology rules
properly.  It was not detected until we hit a point where we could
not afford any design restarts and had to fix existing problems with an
ECO approach.  The end result was that we were forced into using in-house
tools to complete the routing, even though Dolphin was capable of doing it.

Over-constrained timing from synthesis caused problems with Dolphin, which
needs to see the timing constraints as they would be supplied to an STA
tool.  The size of the initial timing constraint file also caused problems
for the Dolphin tools.  However, Dolphin has a really useful feature called
Constraint Compression Technology that reduced the initial constraint file
by about a factor of 10.

As mentioned before, Dolphin takes a lot of hardware.  We asked them about
a multi-CPU, shared memory Linux port, but so far they haven't shown us the
roadmap.  However, it's not clear that Linux would help much since we are
using 64-bit Dolphin code on a multi-CPU Sun server and Linux (on Intel
CPUs) is still a 32-bit platform.  The multi-threading in Dolphin really
does a good job of utilizing all available hardware resources.

We spent a lot of time on the clock trees.  For the largest clock, Dolphin
ended up with a skew of between 1.2 nsec and 1.3 nsec after detailed route.
The estimate prior to detailed route was 0.5 nsec.  Essentially, this is
because on sparse designs such as this one, fast paths tend to speed up.
Dolphin over-estimates interconnect delay before final placement, and when
the design was routed, it turned out that the skew estimates had been too
optimistic.
We were able to manually identify portions of the clock where we could use
resizing and removal of delay buffers to correct the clock skew.  We got
the skew on the largest clock down to 0.7 nsec.  This was not good enough,
though, and we had to use our in-house tools to massage the clock tree to
get the skew down farther. Eventually the main tree was brought down to
~240 psec of skew.

At that time Dolphin didn't support "set_case_analysis" constructs.  We were
able to get Monterey to implement this feature in time for us to use it.

The timing correlation between Dolphin and our sign-off delay calculator was
close, but there were still issues that had to be addressed through ECOs
even though Dolphin thought that timing was met.  Some of these could be
traced to library issues or to setup errors (such as bad clock tree
definitions).  Regardless, there were still setup/hold/ramptime issues that
needed to be fixed through an ECO process.  These were not totally
unmanageable, but were more numerous than would have been the case with our
FlexStream tools which have gone through extensive correlation efforts.
Monterey is moving toward using OLA libraries, which would resolve any
correlation issues.  We were unable to try this feature on this design,
but what we have seen looks good.


What Needs Work
---------------

Dolphin needs a huge machine with as many processors (up to 12) and as much
memory as you can load on it.

Run times are long - about five days from reading the netlist to completing
final route.

At first, Dolphin crashed a lot.  This improved as time went by.

Timing correlation before and after routing was not as good as what we were
hoping for, particularly on the clock signals.

Dolphin should have automatically done some of the things that we had to do
manually to fix the clock trees.

The reporting is not as helpful as it could be.  For example, there is no
easy way to get a report of wire lengths for all nets, and we had to write
scripts to parse some of the clock reports in order to get any useful
information out of them.
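
The report-parsing chore Jim mentions is a small script in any language.
A minimal sketch in Python, assuming a purely hypothetical report format
of "net <name>  sinks <count>  skew <psec>" per line (Dolphin's actual
report format is not shown here):

```python
import re

# Hypothetical clock-report lines -- NOT Dolphin's actual output format.
SAMPLE_REPORT = """\
net clk_main    sinks 66000  skew 1240
net clk_aux     sinks 4200   skew 310
net clk_test    sinks 150    skew 85
"""

LINE_RE = re.compile(r"net\s+(\S+)\s+sinks\s+(\d+)\s+skew\s+(\d+)")

def worst_skew(report_text):
    """Return (net_name, skew_ps) for the clock net with the worst skew."""
    rows = [(m.group(1), int(m.group(3)))
            for m in LINE_RE.finditer(report_text)]
    return max(rows, key=lambda r: r[1])

print(worst_skew(SAMPLE_REPORT))   # the worst offender: ('clk_main', 1240)
```

The same dozen lines, pointed at a real report, are what turn an unhelpful
log into a sorted worklist.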


What Works Now
--------------

The command script for Dolphin was very compact - about 220 lines to
initialize the design (effort knob settings, variable definitions, timing
library setup, read input files, and define scan chain) and 100 lines to
run Dolphin (effort knob settings, write reports and output files, detailed
routing regions).

The routing results that we got out of Dolphin were very good - fewer than
50 routing violations out of almost a million nets.

Antenna fixing during routing worked very well in Dolphin.

The multi-threading works very well.  If you have sufficient HW resources,
you can get very good run times.

Dolphin was able to abstract a .lib timing model for an ARM core.

Dolphin's ability to distill the constraints down to only the essential
ones is very useful for multi-million gate chips.

The command scripts are very compact which makes the system easier to use
and maintain.

The Monterey R&D team was very responsive, fixing bugs overnight and, in
some cases, implementing new features as we needed them.

    - Jim Jensen
      LSI Logic                                  Bloomington, MN


( ESNUG 401 Item 4 ) --------------------------------------------- [10/17/02]

From: Eric Ryherd <eric.ryherd@arc.com>
Subject: User Seeks General Rule-of-Thumb For set_clock_skew -uncertainty

Hi, John,

I'm looking for what the current industry "standard" for clock skew is for
.18um and .13um.  In the distant past, I've used 500 psec.  In the more
recent past of .35um & .25um chips, I've used 400 psec or less.  Now that
I'm down to .18um, we're trying to be conservative and various people think
we need to keep the uncertainty at 400 psec.  But since the clock frequency
is now up to 250 MHz, this represents 10% of the total timing budget and
seems quite excessive.  In talking with various back-end designers, I've
seen general consensus that 200 psec is a more realistic clock uncertainty
and is still fairly conservative.

So... I am asking the design community: is 200 psec a reasonable clock
uncertainty for .18um?  Is 200 psec still reasonable for .13um?  Or should
it scale to 150 psec?
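
The arithmetic behind Eric's 10% figure is worth making explicit.  A quick
Python check, using the frequencies and uncertainties he quotes:

```python
def uncertainty_fraction(freq_mhz, uncertainty_ps):
    """Clock uncertainty expressed as a fraction of the clock period."""
    period_ps = 1e6 / freq_mhz        # clock period in picoseconds
    return uncertainty_ps / period_ps

# 400 psec at 250 MHz eats 10% of the 4 nsec cycle...
print(uncertainty_fraction(250, 400))   # 0.1
# ...while 200 psec leaves a more workable 5%.
print(uncertainty_fraction(250, 200))   # 0.05
```

Whatever number the back-end team settles on is then what gets fed to
dc_shell's set_clock_skew -uncertainty (set_clock_uncertainty in newer
SDC-style scripts).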

    - Eric Ryherd
      ARC International


( ESNUG 401 Item 5 ) --------------------------------------------- [10/17/02]

From: Ian Gibbins <igibbins@sequencedesign.com>
Subject: ( DAC 02 #7 ) Sequence Addresses Its VHDL And HP Runtime Issues

Hello John,

My name is Ian Gibbins and I am an AE for Sequence Design, supporting our
power products.  I'd like to address two issues in your DAC Trip Report:

  "We have been looking at Sequence PowerTheater and have had nothing but
   problems with it.  I'll be very surprised if we buy it.  From talking
   to a user here our problems are due to our using VHDL and not Verilog."

Addressing the VHDL flow issue is the top priority for Sequence PowerTheater
R&D for the next major release, due out in the middle of November.  We have
worked with our VHDL customers solidly over the past year to identify the
areas of weakness in our VHDL analysis.  The issues that we have seen have
mainly been attributed to coding styles and how they are interpreted by our
inferencing engine.

  "We have been using Sequence tools.  But our recent comparison between
   Synopsys PrimePower and Sequence's PowerTheater showed us that the
   power estimation numbers are close while PrimePower was a lot faster
   than PowerTheater.  Please treat me anon for the power analysis tools.
   I think Sequence will not be happy that we're not buying their tools."

The performance issue mentioned with respect to Synopsys PrimePower was
evident on an HP platform.  We have no customer complaints about
PowerTheater's performance for average power analysis or RTL peak power
analysis on the Solaris platform.  The HP platform issue has been resolved
in the latest release, and now PowerTheater's performance on HP is on par
with that of Solaris.  We are also actively working on improving our
gate-level peak power analysis capability, and will be announcing new
technology in that area in the near future.

    - Ian Gibbins
      Sequence Design, Inc.                      Chinnor, Oxon, UK


( ESNUG 401 Item 6 ) --------------------------------------------- [10/17/02]

From: Urban Jangren <urbanj@QThink.com>
Subject: We Found Verplex LEC Caught Some Errors That Formality Had Missed

Hi, John,

On a recent project we completed a 7 M gate design in 0.13u technology.  The
front-end team used Formality for formal verification & during the back-end
work that I participated in we used Verplex LEC.  I was surprised that we
uncovered some floating signals (inputs to an ARM core) in the netlist that
were not detected by Formality.  I am not sure if this was an operator error
or a lack of capability in the tool, but it confirmed my belief that Verplex
is an easier tool to work with and it produces good results. 

    - Urban Jangren
      QThink


( ESNUG 401 Item 7 ) --------------------------------------------- [10/17/02]

Subject: ( DAC 02 #2 ) Synchronicity Was So Bad We Wrote Our Own RCS Tools

>  "We use Synchronicity for configuration management and issue tracking.
>   It does a solid job of both.  For configuration management, I still
>   wonder if you couldn't piece together equally useful tools using free
>   open source packages, e.g., CVS."
>
>       - John Busco of Brocade Communications


From: Shiv Sikand <sikand@matrixsemi.com>

Hi, John,

I have used DesignSync DFII from Synchronicity and found it to be one of the
worst tools I have ever used in my life.  In fact, it was so bad I wrote my
own interface while at SGI and subsequently open sourced it.  I wrote to you
about this a while ago.  Here is the URL:

       http://public.perforce.com/public/perforce/cdsp4/index.html

A little history lesson:

Cadence 4.3 was based on the Edge database.  The Edge database was not very
amenable to software configuration management because it used a proprietary
'catalog' file which contained the list of cells in the library.  This
catalog was also the source of many corruption issues in the DB because it
was a single access bottleneck in a multi-user environment.  However, this
DB contained a checkin/checkout mechanism and a versioning system.  This
worked well at the cell level, but the configuration feature which allowed
you to manage versions across a set of objects was flawed and never worked
correctly.

In Cadence 4.4, they shifted away from the catalog idea and moved to a 
different architecture that was more amenable to UNIX style manipulation 
like cp and mv. This database was almost like normal files, except for 
one feature: co-managed file sets, i.e., it takes multiple files to
represent a single 'cellview' in Cadence.  In conjunction with this new
scheme, they also announced Team Design Manager, TDM, their configuration
management software.  Unfortunately, this was an overly complex and buggy
piece of software and after a while Cadence decided that they were going 
to outsource this capability and end-of-life TDM.  They released an
abstraction layer called GDM (or Generic Design Manager) which was meant 
to be a general API for managing all Cadence database objects.  However, 
it too is fundamentally flawed since it has no error handling mechanism 
nor a communication channel to allow for error recovery and messaging. 
It was not intended for customer use, but rather for third party 
integrations.  The Spectrum Design Services folks came up with an RCS 
integration which they called CRCS.  This was the time that Synchronicity 
first showed up to the party.  They built a GDM integration for their 
DesignSync tool and later a stripped down 'free' version called Version 
Sync.

The Synchronicity revision engine is based upon RCE, the follow-up by
Walter Tichy (the author of RCS).  This had a particular claim to fame: the
Byte Differencing Engine, or BDE, which they claimed allowed incremental
storage of binary files.  Apart from the storage format, RCE is really
still just RCS.  Synchronicity wrapped it in an HTTP protocol as a means to
enable easy access through enterprise intranets and extranets.  However,
their extremely poor implementation of this system meant an extremely
slow, unresponsive and very buggy system that needed expensive UNIX
servers to run.  Coupled with the flaws in GDM, this was a nightmare.  I
used it at three separate companies and was involved with replacing it 
at all three. However, Sync had a very good marketing department and 
sold it to Intel, Lucent, Philips and a bunch of other players.  These 
companies were big Cadence houses and bought it because no-one else was 
providing a solution at the time.  Clio entered the market much later.

The horrors of DesignSync forced me to consider writing my own.  I 
convinced SGI/MIPS management (then headed by Lavi Lev, who is now 
ironically the VP at Cadence responsible for IC Tools) to allow me to do 
so.  We purchased a GDM license from Cadence, who were extremely helpful
and sympathetic to our cause at all times.  After the work was finished
and proved to be a success, SGI and Cadence agreed to allow this 
interface for Cadence, using the Perforce SCM system to be Open Sourced. 
Perforce Software is now the copyright holder and the interface is 
available under the BSD license.  I avoided the GDM pitfalls by using the
Skill trigger model to manage the data, providing a NULL operation GDM 
server.  The GDM piece is Cadence proprietary and is available as a 
binary.  However, since it performs a NULL operation, it never changes. 
In addition, through the use of Perforce's atomic transactions I was 
able to avoid the use of extra marker files to handle the co-managed set 
problem. This results in an ultra-high performance integration with a 
very rich feature set, possibly the richest and fastest available today.

My whole point is this: Synchronicity is so bad that I spent hundreds and
hundreds of hours coming up with an alternative and I make the interface
available for free. In addition, I have recently completed a paper on
using Perforce for Hardware Design.  This is available at

           http://www.perforce.com/perforce/hardware.html

Anyway, enough of my rambling.  I've probably gone on way too much about 
my own stuff but I thought that this would be a good way to prove that I 
suffered and then actually did something about it and made it available 
for no cost rather than cry at EDA vendors.

    - Shiv Sikand
      Matrix Semiconductor


( ESNUG 401 Item 8 ) --------------------------------------------- [10/17/02]

From: Stuart Sutherland <stuart@sutherland-hdl.com>
Subject: Engineers Shifting Away From Verilog/VHDL/SystemC And Towards SoC

Hi, John,

Following is a tidbit that your readers might be interested in...

First, thank you for sending out the reminder a couple of weeks ago on the 
deadline for submitting paper proposals for DVCon.  Prior to your reminder, 
we had received about 25 paper proposals.  It speaks very highly of the 
effectiveness of ESNUG and caliber of readers to note that in the days 
following your reminder, we received 70 additional paper proposals!

As the technical program chair for DVCon (formerly HDLCon) for the past
three years, I've noticed that the topics we receive in paper proposals are
a direct reflection of where engineers are spending most of their time.  A
couple of years ago, most of the proposals submitted were on various
aspects of modeling hardware with Verilog or VHDL.  As I tallied the topics
of this year's 95 paper submissions for DVCon, I was surprised to find that
less than 20% of the papers are directly related to modeling with HDLs, and
about 5% are related to SystemC.  Nearly 50% of the paper proposals we
received have to do with verification, and another 25% with System-on-Chip
design.  We also received several proposals for panels, every one dealing
with some aspect of design verification.  Just thought you'd like to know
about this change in what interests engineers these days.

    - Stu Sutherland
      Sutherland HDL Inc.                        Tualatin, OR


( ESNUG 401 Item 9 ) --------------------------------------------- [10/17/02]

Subject: EE Times Censorship & Perl vs. Tcl/Tk vs. Awk vs. Python vs. Ruby

> It's sort of like having a venereal disease.  When you do crisis
> intervention consulting (like I do) for chip design projects, you get to
> see everything.  And one of the most embarrassing problems I've found at
> client sites is when some of their key engineers on a troubled project
> really don't know Perl.  (You may balk and exclaim that this is impossible
> because Perl scripts are the bread and butter of chip design, but that's
> what makes this problem so embarrassing.)  Like a doctor treating VD, one
> has to treat this problem very discreetly -- which means I give those
> clients my sequential list of recommended reading so they can quickly and
> quietly learn Perl at home.
>
>     - from "Monkey See, Monkey Do"  (original text)


From: Dave Chapman <dave@goldmountain.com>

Hmmm, John.  Learning Perl is sort of like getting the clap.  Never actually
thought of it that way.  I'll have to consider this one very carefully...

    - Dave Chapman
      Goldmountain Consulting                    Sebastopol, CA

         ----    ----    ----    ----    ----    ----   ----

From: Prasad Sakhamuri <prasad@velio.com>               

Hi John,

This is so true and so funny (the first paragraph, I mean) that I just
couldn't resist forwarding it to the entire engineering community in my
company.  May your witticisms go forth and multiply.  :-)

    - Prasad Sakhamuri
      Velio Communications, Inc.                 Milpitas CA

         ----    ----    ----    ----    ----    ----   ----

From: Dennis Brophy <dennisb@Model.com>

Hi, John,

WOW!  I had just read the EE Times print version of your column when I saw
your email with the same title.  In the print version, the editors
completely removed your reference to VD!!!???

Now I know how the kids feel when they have me play their rap music in the
car.  Just about every other word is censored on the radio and they can't
wait to go out and buy the CD to hear the real thing.  Although censorship
can be a bad word, it does drive behavior.  In the case of the kids, it
drives them to buy the actual CD.  And I'm certain the recording industry
marketing brain trust enjoys that.  In your case, it might actually drive
one to only read your email version.  I wonder: is that what EE Times wants?

    - Dennis Brophy
      Mentor / Model Tech                        Wilsonville, OR

         ----    ----    ----    ----    ----    ----   ----

> Here again you want to focus on regular expressions & pattern matching in
> chapter 7.  Perl's greedy matching is its biggest strength, but it's also
> the most obtuse part of the language.  It's here where you'll get your
> first insights into those occult regex incantations that the UNIX man pages
> so often reference.  (For more advanced regex, I recommend "Mastering
> Regular Expressions" by Friedl, but be warned that it's ugly.  You should
> also be warned about the "Perl Cookbook" by Christiansen.  He likes to
> drift off into UNIX arcana a bit too much for practical use, but on
> occasion he can help.)
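
The greedy-matching gotcha the column alludes to is easy to demonstrate.
A minimal sketch, shown here in Python (whose regex engine follows Perl's
semantics on this point):

```python
import re

text = "<b>bold</b> and <i>italic</i>"

# Greedy: ".*" grabs as much as it can, so "<.*>" swallows everything
# from the first "<" to the last ">".
print(re.findall(r"<.*>", text))    # ['<b>bold</b> and <i>italic</i>']

# Lazy: ".*?" stops at the first closing ">", matching the tags alone.
print(re.findall(r"<.*?>", text))   # ['<b>', '</b>', '<i>', '</i>']
```

One character of difference, two very different match sets - which is
exactly why this corner of the language earns the word "obtuse".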


From: David Weisgerber <A.Weisgerber@infineon.com>            

I don't think so.  The Perl Cookbook rules!  I own it myself.  I think it is
one of the best computer books available because the examples are more
practical than those listed in other books.  (Who wants to hardcode car
vendors and models to be stored in arrays and hashes?  I used to have a
MySQL server.)

    - David Weisgerber
      Infineon

         ----    ----    ----    ----    ----    ----   ----

From: Billy Vitro <bvitro@cisco.com>        

OK, John, that does it!

I've generally agreed with you, especially in your "Industry Gadfly"
columns, but this is the last straw...

You have dissed "The Perl Cookbook".  Them's fightin' words where I come
from.  That is easily the best Perl book I have ever used.  It has clearly
written examples, concisely expressed, which show a depth of knowledge of
Perl that is sorely lacking in all three of the other books you cite.

OK, so I won't really go to the mat with you over this, but I've found it
more and more useful the more I write Perl.  Which may be the reason you
wouldn't recommend it for people learning the language - you need to know
more about Perl than the average bear, more than most newbies have.

    - Billy Vitro
      Cisco Systems

         ----    ----    ----    ----    ----    ----   ----

From: Yaron Kretchmer <yaronkretchmer@hotmail.com>

John, No!

The "Perl Cookbook" is by far the best value-for-money Perl book you can
get.  It will give some stuff to help you with your chip design, some stuff
to help you with CGI scripting, and some insights into other Perl arcana
so you can help your wife (my wife actually) with her bioinformatics stuff.

Learn Perl first, then get the cookbook and thrive.

    - Yaron Kretchmer

         ----    ----    ----    ----    ----    ----   ----

From: Tomasz Prokop <prokop@lucent.com>

John,

You forgot to mention the camel book everyone uses: "Programming Perl" by
Larry Wall, Tom Christiansen & Randal L. Schwartz

    - Tomasz Prokop
      Lucent Technologies

         ----    ----    ----    ----    ----    ----   ----

From: John Providenza <johnp@probo.com>

John,

Another Perl helper I find invaluable is Swain's "Perl 5 Reference Guide"
that can be downloaded from the web.  If you've used Perl as a 'helper',
ie, not your primary language, this cheat-sheet can help you quickly
remember/find some of the obscure variables, etc.

    - John Providenza
      ProBo

         ----    ----    ----    ----    ----    ----   ----

From: Eric Hawkins <eric.hawkins@philips.com>

Hi John,

An excellent way to learn Perl, in conjunction with the books you listed,
is to take a good Perl script and add your comments to each line.  When I 
first started at Motorola I had a DSP verification environment dumped into 
my lap and was told to make it run.  It was about 1000 lines of Perl that 
generated random assembly instructions according to rules defined by the 
instruction set architecture.  I did not know Perl at the time.  I copied 
the scripts into a temporary directory, and line-by-line began to comment 
the code. I call it "Perl boot camp."

The only book I used was "Programming Perl" by Wall, Christiansen and 
Schwartz.  Sure enough, I learned Perl!

Caveat: Make sure you start with good quality Perl.  The guys over at Perl 
Monks have a good handle on things: 

     http://www.Perlmonks.org/index.pl?node=Code%20Catacombs

Follow their guidelines for programming style and you will not go wrong.
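
The kind of script Eric inherited - constrained-random instruction
generation - has a surprisingly small core.  A toy sketch in Python (the
original was ~1,000 lines of Perl; the mnemonics and operand rules below
are invented purely for illustration):

```python
import random

# Hypothetical mini-ISA rules -- invented for illustration only.
# Each opcode maps to the kinds of operands it legally takes.
ISA_RULES = {
    "add":  ("reg", "reg", "reg"),
    "addi": ("reg", "reg", "imm"),
    "ld":   ("reg", "mem"),
    "st":   ("mem", "reg"),
}
REGS = [f"r{i}" for i in range(8)]

def gen_operand(kind, rng):
    """Produce one legal operand of the requested kind."""
    if kind == "reg":
        return rng.choice(REGS)
    if kind == "imm":
        return str(rng.randint(-128, 127))
    return f"0x{rng.randint(0, 0xffff):04x}"   # "mem": random address

def gen_program(n, seed=0):
    """Emit n random-but-legal assembly lines per ISA_RULES."""
    rng = random.Random(seed)      # seeded, so failures are reproducible
    lines = []
    for _ in range(n):
        op = rng.choice(list(ISA_RULES))
        args = [gen_operand(k, rng) for k in ISA_RULES[op]]
        lines.append(f"{op} " + ", ".join(args))
    return lines

for line in gen_program(5):
    print(line)
```

Commenting a real version of this line by line, as Eric did, is indeed a
fast way to meet most of a scripting language at once.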

    - Eric Hawkins
      Philips Semiconductor                      Duluth, GA

         ----    ----    ----    ----    ----    ----   ----

From: Sy Wong <sywong@hermix.markv.com>

John,

Are you kidding or serious about "key engineers on a troubled project 
really don't know Perl", given the piles of books one must read?  If 
serious, maybe you should say more in ESNUG posts about how Perl is used 
and what for.  I am a hobbyist IC designer who cannot afford Verilog/VHDL 
with the necessary tools, and I use a simple ISO standard programming 
language that is not Perl.

    - Sy Wong
      MarkV

         ----    ----    ----    ----    ----    ----   ----

From: Premysl Vaclavik <premysl.vaclavik@analog.com>

Hi, John,

I like the survey of the Perl books you have written, but I do not share
your view about the "bread and butter".  I would say there is also awk,
nawk, mawk and gawk.  If you program your scripts in shell and awk, you
can do the same chip design scripting as when you use Perl.  If you want
to filter data approaching 2G and above, the line-oriented awk is in my
opinion better.  If you want to program for interactive processes, you
may choose expect.  So, in my opinion the most important thing is the
designer's ability to write scripts rather than being fixed to the
syntax of one scripting language.

    - Premysl Vaclavik
      Analog Devices

         ----    ----    ----    ----    ----    ----   ----

From: Victor Chen <victor@vitesse.com>

Mr. Cooley,

How about Tcl then?  It seems to be a trend now for EDA tools to use Tcl as
their user-interface platform.  If I'm not mistaken, Tcl can also do lots of
the wonderful stuff Perl has.

    - Victor Chen
      Vitesse Semiconductor

         ----    ----    ----    ----    ----    ----   ----

From: Richard Auletta <rauletta@orci.com>

John,

Learning Tcl/Tk is infinitely better for the engineers in the EDA field.

The reasons are many; the best is that almost every EDA tool is built on
Tcl/Tk.  There's a good chance that if engineers know Tcl/Tk they can skip
the Perl script altogether by extending the tool with both simple and
complex scripts and GUIs (and accessing the database extensions of tools
like PrimeTime and First Encounter).  As to regexp, Tcl regular expressions
are implemented using the package written by Henry Spencer, based on the
1003.2 spec plus some (not quite all) of the Perl5 extensions (thanks,
Henry!). 

Perl is really old news, and for the average DA/ASIC/FPGA/SoC engineer, Perl
is probably the worst scripting language they could try to learn, for as
many reasons as Tcl/Tk is the right first language.

I agree with the concept of the article, that the problem is VERY
widespread and can be largely blamed on universities that teach what is
easy to teach, not what is important to learn.  But your suggestion of
Perl in fact makes the problem worse by offering a language difficult to
learn and impossible to read and maintain.  If C got its name for the
grade it received in a compilers course, then the P in Perl must have
come from the Pass/Fail it received.

    - Rich Auletta
      ViaWest                                    Boulder, CO

         ----    ----    ----    ----    ----    ----   ----

From: Randy Findley <Randy_Findley@LayerN.com>

John,

I would still take a hard core Verilog or VHDL coder with Tcl skills over
a Perl guy any day of the week.

Still I admit, Perl is an excellent language.

    - Randy Findley
      Layer N Networks                           Austin, TX

         ----    ----    ----    ----    ----    ----   ----

From: Erich Whitney <ewhitney@axiowave.com>

Hi John,

Really great advice.  But I want to pick on one thing you said.  I think the
best feature in Perl to exploit, second to regexp, is associative arrays.
If you're trying to do any sort of data processing in Perl and you're not
using associative arrays, you're working too hard.  Spend the extra half
hour it will take you to figure them out and you've just justified all the
time you spent learning Perl.

    - Erich Whitney
      Axiowave, Inc.
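
  [ Editor's Note: For anyone who hasn't met associative arrays yet, here's
    a minimal sketch of Erich's point.  The report format and the names in
    it are invented for illustration; the hash does all the bookkeeping
    with no setup.  - John ]

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Tally record types from a made-up report where the first word of each
# line is the record type, e.g. "setup  u_core/reg_12  -0.35".
my %count;
while ( my $line = <DATA> ) {
    my ($type) = split ' ', $line;
    $count{$type}++;    # new keys spring into existence automatically
}
print "$_: $count{$_}\n" for sort keys %count;   # prints: hold: 1, setup: 2

__DATA__
setup  u_core/reg_12  -0.35
hold   u_io/reg_3     -0.02
setup  u_core/reg_47  -0.11
```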

         ----    ----    ----    ----    ----    ----   ----

From: [ The Gerbil Boy ]

Keep me anon if you're going to post; otherwise it's for your consumption
only.  We recently got a chip back in the lab.  We're still running random
tests on it.  I recently noticed that all of the seeds are even!  The
random seed algorithm failed to take into account that Perl was only
compiled with 15 "randbits"!

    - [ The Gerbil Boy ]

         ----    ----    ----    ----    ----    ----   ----

From: Jesse Jenkins <Jesse.Jenkins@xilinx.com>

To me, it is a sad thing that you have to know Perl to design chips.
What is the point of CAD?

    - Jesse Jenkins
      Xilinx

         ----    ----    ----    ----    ----    ----   ----

From: William Lenihan <wlenihan@raytheon.com>

What is Perl?

I'm an FPGA designer using ModelTech for simulation, Synplify for synthesis,
and Xilinx Alliance for P&R, static timing, floorplanning, etc.  Since I see
no 'holes' in my design flow, what, if anything, would Perl do for me?  If
it's not applicable in the realm of FPGA design, then what is it about chip
(read "ASIC"?) design that demands its use?

Is Perl related at all to "Tcl/Tk"?

    - William Lenihan
      Raytheon

         ----    ----    ----    ----    ----    ----   ----

From: [ Clifford, the Big Red Dog ]

Hi John,

Perl is fine, but in the FPGA world I hardly ever use it.  I learned it when
I was doing ASICs.  Lately, I've used Tcl/Tk to control a simulation, but I
don't know it well.  I seem to be the only FPGA designer here who knows
even a little Tcl/Tk or Perl.  We use Xilinx tools.

Question: Is Tcl/Tk worth learning, and why?  Or should we use Perl, and
why?  What good is either one?  Our tools (including EMACS) seem to have
all the functionality we need.  Color me anonymous.

    - [ Clifford, the Big Red Dog ]

         ----    ----    ----    ----    ----    ----   ----

From: Vladimir Orlt <Orlt.V@ems-t.ca>

Hi John,

What do you think of Tcl?  It's used by most tools, so it seems more useful.
I don't know the limitations of Perl vs. Tcl.

    - Vladimir Orlt
      EMS Technologies                   Ste. Anne de Bellevue, Quebec

         ----    ----    ----    ----    ----    ----   ----

From: William Liao <wliao@amcc.com>

John,

I think Tcl is as important as Perl, if not more important.  I find Tcl
easier and faster for navigating, examining, or modifying a design in
Design Compiler or PrimeTime, and one must know Tcl well to do that.

    - William Liao
      AMCC

         ----    ----    ----    ----    ----    ----   ----

From: Phil Tomson <ptkwt@aracnet.com>

John,

Perl was my 'language of choice' for many tasks for several years, but
after learning Ruby I find I don't use Perl anymore.  It's a bit like how
I never used Awk after learning Perl.  Ruby is a sort of cleaned up Perl
which is much more object oriented and more consistent.  Some have called
Ruby Perl's younger prettier sister.  ;-)  Ruby also has all the regular
expression power that Perl has.  Check it out:

    http://www.ruby-lang.org
    http://rubycentral.com   (the book "Programming Ruby" is available
                              free at this site)
    http://rubygarden.org

You may not have heard much about Ruby yet, but it is gaining users.  I
recently contracted at Intel and discovered that several design groups
there have dumped Perl and are now using Ruby instead.

    - Phil Tomson
      Aracnet

         ----    ----    ----    ----    ----    ----   ----

From: Steve Waterbury <waterbug@beeblebrox.gsfc.nasa.gov>

John,

You wrote that "Perl scripts are the bread and butter of chip design."
I didn't know that (but then I'm not a chip designer).  My thing is 
developing software to support engineers of all shapes and sizes.  I'd 
be interested to find out if you've ever tried Python ( http://python.org ) 
and if so, what your opinion is.  Languages are a religious topic, of 
course, so I would not try to convert you, but I'd be interested in your 
view of the pros and cons anyway.  I've found Python code to be generally 
clearer and easier to maintain than Perl.  And Python's built-in O-O 
nature appeals to me (but not to everyone!  ;^).

    - Steve Waterbury
      NASA

         ----    ----    ----    ----    ----    ----   ----

> Ignore the remaining 14 chapters of this book because most chip designers
> won't be dabbling in XML, Java, databases, HTML, object oriented
> programming, associative arrays, nor downloading fancy CPAN packages just
> to clean up some PhysOpt DEF output for Silicon Ensemble to read.


From: Mysore Sriram <mysore.sriram@intel.com>

Hi John,

I'm surprised you lump associative arrays into the same category as HTML,
etc.  For almost any non-trivial application, associative arrays are
invaluable for making the script efficient (O(N^2) algorithms become
almost linear time, etc.)  Perl would be useless to me without them.

    - Mysore Sriram
      Intel Corp.
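
  [ Editor's Note: A quick sketch of the O(N^2)-vs-linear point Mysore
    makes.  The net names are invented; the idea is that one pass builds
    the hash, after which each membership test is a constant-time lookup
    instead of a scan of the whole list.  - John ]

```perl
use strict;
use warnings;

my @special = qw(clk rst_n scan_en);            # invented net names
my @nets    = qw(clk data_0 scan_en data_1);

# Slow way, O(N*M): re-scan @special for every net in @nets.
my @slow = grep { my $n = $_; grep { $_ eq $n } @special } @nets;

# Fast way, O(N+M): build the lookup hash once, then test in O(1).
my %is_special = map { $_ => 1 } @special;
my @fast = grep { $is_special{$_} } @nets;

print "@fast\n";    # both ways find: clk scan_en
```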

         ----    ----    ----    ----    ----    ----   ----

From: Wilson Li <W.Li@AcceLight.com>

Why do you say chip designers don't need associative arrays?  I think this
is one of the most useful Perl constructs.  You can easily create a lookup
table on the fly.

    - Wilson Li
      AcceLight

         ----    ----    ----    ----    ----    ----   ----

From: Matt Welland <mattwell@us.ibm.com>

John,

I don't have the "Perl for Dummies" book to check exactly what is in the
chapters you are talking about, but did you really mean to imply that
associative arrays are not something designers necessarily need to master?
I find associative arrays to be almost as indispensable as regexes, even
for everyday tasks, and I would teach them very early on.  So long as you
don't get into complex de-referencing, associative arrays are quite
intuitive and easy to learn and use.  The rest of your list of stuff not
to learn I'd agree with.

    - Matt Welland
      IBM                                        Essex Jct, VT

         ----    ----    ----    ----    ----    ----   ----

From: Paul Gerlach <paul.m.gerlach@exgate.tek.com>    

Hi John, 

I'm sure you'll be getting lots of responses of the form, "What? How can you
think chip designers won't be using [insert feature here] in Perl?  I use it
all the time!"

Well, here is (possibly) your first:  Ignore associative arrays?  These are
hashes, right?  I use them all the time!  One of the best features in Perl
for the things I do as a chip designer is hashes (and it was a great day
when Perl added multi-dimensional data structures).  It may not be as
important as regex knowledge, but it sure is important to me.

Interesting, I have never read the books on your list.  I guess I'm an old
school camel man.

    - Paul Gerlach
      Tektronix, Inc.                            Beaverton, OR

         ----    ----    ----    ----    ----    ----   ----

From: Richard Gordon <rgordon@terasystems.com>

John,

First, couldn't agree with you more, Perl is a bread-and-butter tool for
chip design.  I make sure all my new hires know Perl to some degree.  DBs,
HTML, XML, object oriented programming, CPAN, etc. are not requirements.

Couldn't disagree with you more, "most chip designers won't be dabbling
in ... associative arrays ..."  I'll take an extreme position: If you
don't know associative arrays, you don't know Perl.  The associative
array, a.k.a. the hash table, is the single most important data type in
Perl.  I would even go so far as to say that if you wanted to fully
capture every attribute of a chip using Perl, you could do so with little
more than a hash of a hash of a hash. 

For example, a simple Perl quiz I give to prospective employees is to
parse a Verilog gate-level netlist and print a count of every type of
gate in a chip.  The heart of this function is a hash table and takes
just 1 line of code to be completely general purpose (no need to store
lists of valid gate types or anything like that).  To do the same thing
without hashes is many orders of magnitude more complex and virtually
impossible to make general purpose.

    - Richard Gordon
      Tera Systems
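
  [ Editor's Note: For the curious, here's a rough take on Richard's quiz.
    The netlist snippet and cell names are invented, and a real netlist
    needs more filtering (module headers, wires, comments), but the heart
    of it really is the one $count{...}++ line.  - John ]

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Count every gate type in a toy structural Verilog netlist.  Any line of
# the form "CELLTYPE instance_name ( ... );" bumps that cell type's count;
# no list of valid gate types is needed -- the hash learns them as it goes.
my %count;
while ( my $line = <DATA> ) {
    $count{$1}++ if $line =~ /^\s*(\w+)\s+\w+\s*\(/;
}
printf "%-8s %d\n", $_, $count{$_} for sort keys %count;

__DATA__
NAND2X1 u1 ( .A(a),  .B(b), .Y(n1)  );
INVX2   u2 ( .A(n1), .Y(n2) );
NAND2X1 u3 ( .A(n2), .B(c), .Y(out) );
```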

         ----    ----    ----    ----    ----    ----   ----

From: Chris Gori <cgori@Sanera.net>

John,

I hope _you_ didn't ignore the part on associative arrays.  You can make
some pretty kick-ass netlist handling tools using them (can you say "use
instance path to lookup module reference"? -- I built an entire automated
IPO flow out of associative arrays).  On the rest of your column, I agree
100%.  It is fairly frightening when ASIC-folks shy away from learning one
of the most powerful tools available to them, but your book list
recommendations are solid.

    - Chris Gori
      Sanera Systems

         ----    ----    ----    ----    ----    ----   ----

From: Matt Billenstein <mattb@sjs7.lsil.com>

Hi John,

I couldn't agree more.  A healthy dose of software in college and on the job
has been the best preparation for my current "hardware" job.

    - Matt Billenstein
      LSI Logic                                  Milpitas, CA

         ----    ----    ----    ----    ----    ----   ----

From: Paddy McCarthy <paddy3118@tiscali.co.uk>

You missed out the question of Perl script maintenance.

You really want to impress on those same engineers that most 'one off'
scripts tend to linger, or grow into monstrosities that are hard to
maintain and re-use.  Time spent learning how to write clear, maintainable
code is usually time well spent - same as for your HDL.

    - Paddy McCarthy                             UK

         ----    ----    ----    ----    ----    ----   ----

From: Paul Zimmer <pzimmer@cisco.com>

John,

I LOVED the monkey-see-monkey-do column!  That very day I had been trying
to convince one of our engineers to learn Perl, and we had a good laugh
over the column.  

    - Paul Zimmer
      Cisco

         ----    ----    ----    ----    ----    ----   ----

From: Matt Jones <matt.jones@cox.net>

Hi, John,

I want to thank you for your INDUSTRY GADFLY column in the 8/12 issue of
EE Times titled "Monkey see, monkey do."  I'm a circuit designer in a server
chipset group with a BS in computer engineering, an MSE in electrical
engineering, and an MBA.  Know what I do?  I write Perl scripts.  The
reason?  I'm the only one of twelve engineers in our group who knows how!!
And believe me, they'd crash and burn without me.  The problem is that they
don't know it because no one really understands that I do.  I'm hoping your
column will help.  It's posted outside my cube and I will be sending a copy
of it to my boss before my next review.  Thanks!

    - Matt Jones                                 Chandler, AZ

         ----    ----    ----    ----    ----    ----   ----

> ... nor downloading fancy CPAN packages just to clean up some PhysOpt DEF
> output for Silicon Ensemble to read.


From: Mike Klein <mike@kleinnet.com>

John,

I really picked up on this point because one of our P&R engineers is trying
to find exactly what to do to make this flow work.  I spent an hour
searching for this package but didn't find it.  Are you pulling our 
collective legs on this one example, or is there really such a package?

    - Mike Klein

  [ Editor's Note: No, Mike, there aren't any specific CPAN packages that
    I know of to clean up PhysOpt DEF for Silicon Ensemble.  I was just
    giving that as an example of complexity I didn't need.  Sorry if it
    misled.  - John ]


============================================================================
 Trying to figure out a Synopsys bug?  Want to hear how 14,488 other users
  dealt with it?  Then join the E-Mail Synopsys Users Group (ESNUG)!
 
     !!!     "It's not a BUG,               jcooley@TheWorld.com
    /o o\  /  it's a FEATURE!"                 (508) 429-4357
   (  >  )
    \ - /     - John Cooley, EDA & ASIC Design Consultant in Synopsys,
    _] [_         Verilog, VHDL and numerous Design Methodologies.

    Holliston Poor Farm, P.O. Box 6222, Holliston, MA  01746-6222
  Legal Disclaimer: "As always, anything said here is only opinion."
 The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com



