Editor's Note: Sorry, I just can't be witty today -- I'm just too stunned
  about the news in Item 1 below.  God Damn!   For once in my life, I was
  RIGHT!!!  I've been so used to people ridiculing me on this topic, I now
  have to ask: "Does my one time of being 'right' fulfill one of those
  spooky prophecies from the Bible that precede the End of the World???"

  It's not supposed to be this way...  I've always been told I'm 'wrong' on
  this...  It's NOT supposed to be this way...  

  (I'm scared.)
                                            - John Cooley
                                              the ESNUG guy

( ESNUG 303 Item 1 ) ---------------------------------------------- [11/4/98]

From: [ A Little Bird ]
Subject: Cadence Lays Off 1/3 Of Its Spectrum Consulting Division

John,

Looks like your noisy predictions about Cadence Spectrum Services finally
came true.  Good call!  Got any other predictions I should know about?
By any chance do you know what the Megabucks numbers are going to be next
Wednesday?  :)  If you print this, keep my name out of it.

  - [ A Little Bird ]


From www.eet.com:

   Cadence Closes Multiple Design Centers, Lays Off 600

   By Michael Santarini
   EE Times
   (11/98, 10:36 a.m. EDT)
   
   SAN JOSE, Calif. (EE Times) - Cadence Design Systems will close down
   several of its design centers in the coming months, laying off roughly
   1/3 of its front-end design services division, according to sources
   inside the company.
   
   The sources, who were told they are losing their jobs at the end of
   the year, said the company is laying off 400 front-end and
   mixed-signal engineers from its design services division and 200
   people from sales and management, roughly a third of its 1,800 design
   services employees.
   
   Over the next few months the company will close design centers in
   Chelmsford, Mass.; Rochester, N.Y.; Toronto; Orlando, Fla.; Tempe,
   Ariz.; Arden Hills, Minn.; Milan, Italy; Bracknell, United Kingdom; and
   Tredyffrin, Pa., and will consolidate its services into its centers in
   Cary, N.C.; Austin, Texas; San Jose and San Diego, the sources were
   told by management.
   
   A Cadence division manager told the sources that Cadence was having
   troubles in the design services business and needed to cut down its
   cost structure.
   
   News of the layoff comes as a surprise in light of recent comments
   made to EE Times in Edinburgh, Scotland by Jack Harding, president and
   chief executive officer of Cadence. Harding said that Cadence's Design
   Services business was doing well and that the company planned to hire
   more engineers for its system-on-a-chip design center in Livingston,
   Scotland.
   
   The layoffs at Cadence seemingly put to rest the belief that design
   jobs would not be affected by the industry's economic woes.
   
   John Cooley, an independent consultant and long time critic of
   Cadence's expansion into the design services arena, said the layoffs
   at Cadence were inevitable.
   
   "I said it three years ago and, surprise, surprise," said Cooley. "In
   April of 1995, I wrote a letter to the editor in EE Times that said
   pimping out Cadence's designers wasn't a good idea, and if you sell
   $10 million of design consulting, you can end up paying $6-to-$17
   million in labor to actually service the customer. And if you sink $10
   million in creating a new EDA/CAD tool, you can bring back $10-to-$50-
   to-$150 million in revenue. Consulting just isn't as lucrative as
   selling good products. Now Cadence, and unfortunately the engineers
   Cadence stole from customers, are learning that the hard way."
   
   Late last month, Cadence reported lower than projected design services
   revenue for its third fiscal quarter of 1998. Services revenue for the
   third quarter was $67.704 million, while cost of services was $50.061
   million. Analysts expected the company's Q3 revenues would be well
   over $70 million.
   
   Cadence is expected to officially announce the layoffs later today
   (Nov. 4).


( ESNUG 303 Item 2 ) ---------------------------------------------- [11/4/98]

From: [ Kenny from South Park ]
Subject: Secret Variable For Annoying Translate_off's Buried Inside Code

John, company politics says I stay anon.

I encountered the following code in a VHDL file I wanted to run through
Synopsys:

   -- synopsys translate_off
   library IEEE;
   use IEEE.VITAL_Timing.all;
   -- synopsys translate_on

However, when I tried to actually use this code, I got the following:

   Error: The package 'VITAL_Timing' depends on the package 
          'std_logic_1164' which has been analyzed more recently.
          Please re-analyze the source file for 'VITAL_Timing' and
          try again. (LBR-28)

The Magic Synopsys Variable (tm) to get around this is to do a:

          hdlin_translate_off_skip_text = true;

BEFORE you compile the "untranslated" code.
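
For completeness, a sketch of where that variable goes: in your
.synopsys_dc.setup, or at the dc_shell prompt, before the analyze/read of
the file containing the translate_off region (the file name below is
hypothetical):

```
/* set BEFORE reading the file with the translate_off region */
hdlin_translate_off_skip_text = true
analyze -format vhdl my_block.vhd
```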

    - [ Kenny from South Park ]


( ESNUG 303 Item 3 ) ---------------------------------------------- [11/4/98]

Subject: ( ESNUG 302 #9 ) To Users: Is VERA A Good Tool For Verification ?

> I am looking for lessons learned from someone who has used VERA in the past.
> Our full-chip Verification environment has Diag that issues commands to the
> HW Abstract Layer which spawns Verilog simulation. The Verilog side then
> forks, and execvp to execute a child process to run C simulation of C
> Reference models.
>
> During the course of C simulation, stimulus and responses around each
> module are envelope-captured. The stimulus are sent to an IPC (Interprocess
> Communication) FIFO to be applied to the Verilog counterpart. The responses
> are sent to another IPC FIFO to be compared to that of the Verilog
> counterpart. The Verilog and C sides are run in parallel and handshake thru
> semaphore and shared memory.
>
> I have two questions of which answers would help us in evaluating the tool.
> First, what is your detailed lesson you learn from using VERA in your
> environment. Any caveat and pitfall of the tool contrary to mktg claim ?
> How does VERA stack up against Specman ?
>
> Second, does anyone have a similar environment as ours and use VERA as a
> cosimulator between C and Verilog sides successfully ? If yes, is there any
> catch ? How are its RPC (Remote Procedural Call) and IPC ?  Please keep
> me anonymous because we're looking into a possible purchase of VERA.
>
>   - [ Curious Minds ]


From: Rudolf Usselmann <rudi@logic1.com>

John,

The interesting environment that [ Curious Minds ] built allows its
verification team to generate transactions, determine on the fly if they
are correct, and keep the golden and Verilog models in sync. In essence,
they've built a self-checking environment. This environment, combining
Verilog and C, supports some of the capabilities that would be available
with less effort, and in more general ways, in VERA - which is good
news, in that the verification philosophy is aligned.

Apparently the first-level interest is in how VERA would help link all
this C and Verilog, and VERA can do that and more.  Regarding the IPC,
VERA lets you make both blocking and non-blocking calls from C to
Vera/Verilog, and vice-versa, and allows any number of cooperating C and
Vera/Verilog processes (in case they also want to distribute the
simulation). In those calls one can pass arguments back and forth too.
To use Vera's IPC, one does not need to know one iota of IPC or PLI:
there's just half a dozen vanilla C routines the user interacts with.
Very simple to use.

However, VERA can do more than that in the direction of a self-checking
environment. Curious Minds have created a very elaborate synchronization
and data exchange mechanism to verify that the actual and predicted
results match. VERA fits well within that approach, and would simplify
the creation of additional transactors, with more flexibility and
varying granularity levels. Some of the finer-grain transactors can
catch many corner cases which are harder to handle with the full-fledged
model that looks only at the boundary. Also, they are easier to write,
and the checkers/tasks which are developed for the module-level tests
can be re-used at system level test.

Beyond that, of course, there are all the other VERA capabilities in
coverage, stimulus generation, floating expects, etc. Finally, the tight
integration of VERA with Flex Models and Eagle allows system-level
verification completely beyond the reach of competing products.

Please also see my VERA write-up in ESNUG 296.

    - Rudolf Usselmann, Consultant
      Logic One, Inc.


( ESNUG 303 Item 4 ) ---------------------------------------------- [11/4/98]

Subject: ( ESNUG 302 #3 )  Finding Logic0 Cells On Unconnected Scan Ports!!

> Since the scan chains in my design are inserted 'outside' Synopsys, I
> have unconnected (scan out) ports in my design.  After a compile run, DC
> connects these unconnected port to logic0.  These logic0 cells cause
> problems in the rest of the design flow.  I can fix this with a simple
> DC-script that removes these cells, but I was wondering if I can prevent
> the inserting of the logic0's by with some variable or 
> attribute???
>
>     - Charles E. Klaasen
>       Philips Semiconductors                Eindhoven, The Netherlands


From: William Liao <wliao@vadem.com>

Hi, John,

There are a few things Charles did not clarify:

  1.  I don't understand why creating scan chains outside
      Synopsys can leave unconnected ports.  If the design
      has scan capability, Test Compiler should integrate
      these scan chains with those it created.

  2.  Why are the "scan out" ports connected to Logic0?  I
      think Charles meant "scan in" ports.  Otherwise, isn't
      bus contention a problem?

  3.  Exactly what problems is Logic0 causing?  It's possible
      that the problems he has aren't caused by Logic0.  I have
      encountered a few situations like that.

Until I have more info, I am not sure what else to say except that scan out
ports connected to Logic0 will probably cause contention problems.

    - William Liao
      Vadem

         ----    ----    ----    ----    ----    ----   ----

From: "Charles Klaasen" <klaasen@natlab.research.philips.com>

Hi, John,

Let me clarify some things regarding my logic0's.  In our design, we have
a number of in- and output ports that are unconnected: the scan in- and
outputs.  OUTSIDE ANY SYNOPSYS TOOLS, these ports are connected to some
newly inserted scan FFs.  Before that step, that is, within DC, our design
does NOT contain any scan FFs.  So that leaves us in DC with unconnected
output ports that DC connects to a logic0 as soon as we start any form
of compile run.

If you set:

           compile_preserve_subdesign_interfaces = true

you can prevent DC from breaking up your (scan) buses and inserting
logic0's at the top-level.  It will leave the buses untouched and push
the logic0's one level down in the hierarchy.

With a script, I can check for logic0's in the design and remove them.  But
it would be much nicer if I could prevent DC from creating the logic0's in
the first place.  Does anyone have any clues???

    - Charles E. Klaasen
      Philips Semiconductors                Eindhoven, The Netherlands

         ----    ----    ----    ----    ----    ----   ----

From: William Liao <wliao@vadem.com>

Hi, John,

I think I understand the problem now.  Let me repeat it, just to be sure.

  1. A design has regular FFs and unconnected output ports.

  2. Outside Synopsys, regular FFs are replaced with scan FFs and the
     unconnected output ports are used as scan out ports.

  3. But if compiles are done between steps 1 & 2, Synopsys connects the
     unconnected ports to Logic0.  This is undesirable.

If I am right, then do this:

                set_unconnected [unconnected ports]

to solve your problem.  It worked on my test case.

    - William Liao
      Vadem


( ESNUG 303 Item 5 ) ---------------------------------------------- [11/4/98]

Subject: ( ESNUG 297 #4 299 #7 ) Even More EDA-Should-Support-Linux Debate

> Most of the responders are hackers who want the source code and therefore
> miss the point.  I do not think they are representative of the EDA
> community.  But, they are the vocal minority.
> 
> I think.
>
>     - Elliot Mednick
>       Wellspring EDA


From: landmh@taec.toshiba.com (Howard Landman)

John,

I can't tell you how many times *every* *week* I wish I had source code
access to, say, Design Compiler or Aquarius-XO.  There have been times
when it has literally taken me a half-dozen emails and 3 or 4 face-to-face
meetings spread over several *months* to even convince some EDA vendor
that they had a *problem*, let alone get them to start working on it or
actually deliver a fix.  And this when I'm paying maybe $75-100K per
year in *maintenance*!

If I had the source code, I could at least debug the problem and tell
them "Here. Fix *THIS*.", or fix it myself and get on with my work.
Heck, I could chuck the maintenance and *hire* someone full time
to do nothing but fix that one vendor's problems for the same money.

I just finished taping out a chip on EDA software that had so many
bugs, we ended up writing numerous scripts to find and fix the
damage it did to the design and *THEN* we ran Chrysalis after every
single major operation to make sure it was fixed and that we hadn't
broken anything ourselves.  Given the expected market volume and the
amount of time it would have saved, I think full source access to
the tools would have been worth easily over $100K.  And *that* assumes
that *I'm* the only one with source access and I can't share fixes
with anyone else - which would not be the case with open-source tools.

I think the real problem here is finding a viable economic model.
There aren't nearly as many users of EDA tools as of generic PC
software, and the tool algorithms are often difficult, requiring
talented and highly trained people to implement well.  How do these
people get compensated?  Why should anyone work on, say, a free
DC substitute when they can make more money working for Synopsys
or writing video games?

On the other hand, the sad fact is it's quite possible to pay some
EDA vendor half a million dollars for software that simply doesn't
work well enough to get the job done, and users have virtually no
recourse in such cases.  At least with open source you'd have a
fighting chance of not having to write off the whole investment
and start over with a different vendor.

And finally, as we keep pushing into the deep submicron, the game
is changing.  New problems arise and have to be dealt with.  The very
same tool that worked *perfectly* on your last chip may suddenly
show some signs of weakness on your next one.  The fast response
time of open source could be a *big* advantage in these cases.

    - Howard Landman
      Toshiba                   in Kawasaki, Japan (for a month or so)

         ----    ----    ----    ----    ----    ----   ----

From: Premysl Vaclavik <tnepva@neuroth.co.at>

Hi John,

Please let me add some points to this discussion.

 0. As an IC design house, we are using commercial design SW running
    on Unix-based workstations.  So I hope we are representative
    enough.

 1. The only reason why we are NOT using high end PCs for the design is
    that none of the UNIX OS alternatives running on a PC is supported by
    the CAE vendors.  Try asking Mentor, Avant!, Cadence or Synopsys for
    a UNIX-based PC platform!

 2. The only reason why we are using PCs for the design support is that
    MS Word and Excel are now a de facto standard and everybody
    expects that we are able to read and write this format.  We tried
    to use so-called compatible products but had a lot of trouble and in
    the end gave up and moved to NT 4.0 and Office 97.

Conclusion:

We would pay the usual industry price for LINUX PC-based professional CAE SW
because it could boost our Unix workstation computing power by using high-end
PCs, without changing our usual environment and without any problems with:

  * OS administering and reliability
  * running existing scripts
  * SW installation / deinstallation
  * file system, user permissions and environment variables
  * terminal environment
  * usage of floating licenses between different platforms

If something goes wrong under LINUX, we still have at least a theoretical
chance to find it, understand it, and fix it.  If the same happens under
Win9[5,8] or NT, we have real problems.

    - Premysl Vaclavik
      T. Neuroth Ges.m.b.H.                           Vienna, Austria

         ----    ----    ----    ----    ----    ----   ----

From: Alec Stanculescu <alec@fintronic.com>

John,

Fintronic supports Verilog simulators on many platforms (SUN-OS, SUN
Solaris, HP-Unix, Dec-Alpha-Unix, Dec-Alpha-NT, Win95, Win98, Intel-NT,
Sony-News, SGI-Unix, and Linux). 

However, since 1995 our development platform has been Linux.  Linux is the
first platform for which a new version is ready. Since the release of
Undertow by Veritools on Linux our simulator benefits from a high
quality waveform display on Linux, making it possible for many of our
customers to use Linux for COMMERCIAL development of ASICs.

There is definitely enough critical mass for Verilog simulation under
Linux, as Design Acceleration announced the porting to Linux of their
excellent SignalScan product.

Customers pay between $3,400 to $13,500 for various flavors of FinSim
on Linux bundled with Undertow. I say this to counter the argument
that Linux users want all software to be free. Not one of our
customers was sorry that they purchased the Linux version of FinSim!
Many expressed their happiness about FinSim on Linux on public bulletin
boards.

Several customers asked, and were granted their request, to exchange their
Unix or NT licenses for Linux licenses!

For the general user (as opposed to the hacker) there is only one rule
to observe when intending to use Linux: buy the PC with Linux
pre-installed to make sure that there is no unsupported component on
your machine. There are several places where PCs with pre-installed
Linux can be purchased. This may save a lot of time and money. 

Best regards,

  - Alec Stanculescu
    Fintronics

         ----    ----    ----    ----    ----    ----   ----

From: Howard Pakosh <hpakosh@avanticorp.com>

Good day all,

Since I arrived at Avant!, one of the most common questions I've heard
during my visits is "what is Avant! doing to support LINUX?" 

As some of you know, we are now supporting LINUX from Red Hat Software.  Our
benchmarks have shown that test cases run 50% faster on Polaris (LINUX) on a
450 MHz Pentium II than on the top-end Sun UltraSPARC.  This is significant
and worth looking into.

The following article appeared on the FRONT PAGE of the 10/19 issue of EE
Times. This is very positive press coverage for Avant!, and obviously for
Polaris, too!  The full article is also available online at

     http://www.techweb.com/directlink.cgi?EET19981019S0001

Please check it out and if you're curious how well it could perform for you,
please let me know and we can arrange an on-site evaluation.

    - Howard Pakosh
      Avant! Corp.

         ----    ----    ----    ----    ----    ----   ----

From: [ Something Smells Rotten In Denmark ]

John-

something you might want to share with your readers is at:

http://www.tuxedo.org/~esr/halloween.html

it's an (alleged) internal Microsoft memo analyzing open-source software
(in other words, Linux), the potential threat to MS's business, and how to
counteract it.  The proposed solution is to de-commoditize as many of the
standards-based protocols that exist now as possible and replace them with
moving targets that are hard to develop replacements for.

Eric Raymond's analysis of the memo includes a section that particularly
rings true for me:

  "The `folding extended functionality' here is a euphemism for introducing
   nonstandard extensions (or entire alternative protocols) which are then
   saturation-marketed as standards, even though they're closed, undocumented
   or just specified enough to create an illusion of openness. The objective
   is to make the new protocols a checklist item for gullible corporate
   buyers, while simultaneously making the writing of third-party symbiotes
   for Microsoft programs next to impossible. (And anyone who succeeds gets
   bought out.)"

  "This game is called ``embrace and extend''. We've seen Microsoft play
   this game before, and they're very good at it. When it works, Microsoft
   wins a monopoly lock. Customers lose."

I think we've seen similar tactics within the EDA world that we didn't /
don't stand for either (synopsys .lib, verilog itself, DEF formats, etc).
When the standards are closed, the customer certainly doesn't win.  I
really hope we don't climb into bed with the master of such techniques,
Microsoft.

(Please don't quote me by name for ESNUG, etc.)

    - [ Something Smells Rotten In Denmark ]


( ESNUG 303 Item 6 ) ---------------------------------------------- [11/4/98]

Subject: (ESNUG 302 #5)  PLEASE Help W/ Cadence Verilog-XL Licensing Problem

> Earlier this month I posted it on comp.cad.cadence as a description of a 
> licensing problem when running Verilog XL on Solaris 5.6.  I was told that
> updating to 97AQSR5 is required for Sol5.6.  I did but to no avail. 
> Verilog-XL refuses to connect to the license daemon but works fine under
> Sol 2.5.1...  The following are transcripts of everything I tried...
>
>     - Janick Bergeron
>       Qualis Design                                    Lake Oswego, OR


From: janick@qualis.qualis.com (Janick Bergeron)

John,

We finally resolved the problem!

When we installed our new E450 server, we tuned the system to be able to
support more users/processes than the out-of-the-box configuration allows.

The following two parameters in the file "/etc/system" are what broke
Verilog-XL:

       * set rlim_fd_max = 2048
       * set rlim_fd_cur = 1024

If we comment them out (in /etc/system a leading "*" marks a comment, as
shown above), XL works fine.  If they are built into the kernel, Verilog
sees a time-out when talking to the license server.
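
A quick way to see the per-process descriptor limits those tunables
produce, from any POSIX shell (a sketch; exact values vary by machine):

```shell
# rlim_fd_cur sets the default soft limit, rlim_fd_max the hard ceiling;
# ulimit reports what the current process actually got:
ulimit -n      # soft file-descriptor limit (cf. rlim_fd_cur)
ulimit -Hn     # hard file-descriptor limit (cf. rlim_fd_max)
```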

I'm sending this problem description and solution to Cadence, Globetrotter,
and Sun.

Thank you to all of you who sent valuable information to help us resolve our
problem.

    - Janick Bergeron
      Qualis Design                                    Lake Oswego, OR


( ESNUG 303 Item 7 ) ---------------------------------------------- [11/4/98]

Subject: (ESNUG 301 #2) New DC 98 "Design Budgeting" Has Serious Problems

> If you've got DC 1998.08, and are interested in the new design budgeting
> features, let me save you some time and frustration:
>
>   * It looks good if you run it from the new budget_shell tool (though I
>     have not done a detailed analysis of results yet).
>
>   * It just plain does not work from dc_shell.  Don't bother.
>
> I downloaded the EST update that's supposed to add design budgeting to
> dc_shell.  I suppose I should have taken it as a bad omen when the EST
> file was called "db-1998.08.zip" and I was using a Unix system.  Synopsys
> support insisted that Solaris 2.5.1 has an "unzip" command.  I didn't
> find one, and no tool I could find would open it.  I went back and forth
> with support on this for a week before they decided to send me an unzip
> tool.  ...
>
> My overall impression is that Synopsys just completely skipped any testing
> on the dc_shell -> design budgeting interface.  The EST could not be opened
> with standard Unix tools.  And once installed, the tools either failed to
> work or failed to exist.  To top it off, they were accompanied by
> documentation that, between missing crucial options and adding nonexistent
> ones, seem to be little more than bad science fiction.
>
>     - Tom Harrington
>       Ford Microelectronics                       Colorado Springs, CO


From: "Tom Harrington" <tharring@ford.com>

John,

Here's an update on Design Budgeting from DC.

After my last message appeared in ESNUG, I was contacted by [ Name Deleted ]
at Synopsys.  [ Deleted ] told me that yes, the EST update I had was buggy,
but that there was a newer update available which fixed the problems.  I got
the new update, and it works as advertised; I can now run design budgeting
from dc_shell without problems.  If anyone else has tried to use this
feature, and encountered the problems I did, they should go and get the
latest update (<ftp://ftp.synopsys.com/pub/db_1998.08/db-1998.08.zip>).

I don't have any definitive comparisons on budgeting vs. characterization
and the overall impact on results yet, but I am running some experiments
and I hope to have data soon.

I'm still a little puzzled by the way things played out.  Firstly, Synopsys
Support never told me about the revised update.  If they had, I could have
avoided a lot of frustration.  Secondly, the revised update has the same
filename as the previous one.  If I'd gone looking for a newer update, I
would have thought that I already had it.

In any case, though, the problems that I had before have been resolved.

    - Tom Harrington
      Ford Microelectronics                     Colorado Springs, CO

         ----    ----    ----    ----    ----    ----   ----

From: [ A Synopsys CAE ]

John,

In ESNUG 301, Tom Harrington wrote that he ran into problems when using the
design budgeting command in dc_shell.  The problem that Tom found was caused 
by dc_shell calling the incorrect budget_shell command.

This problem was found and fixed on October 1st and a new EST image was
created.  The Solv-It article STAR-59389 explains the problem and how to
download the corrected image from EST.  (Note that if you downloaded the
Budgeting release from EST on or after October 1st, you do not need to 
download again.)

If you are using Design Budgeting, I recommend you take a look at the
following articles on Solv-It!:

  STAR-59389          The allocate_budgets Command Fails in Design Compiler
  budget-release      Synopsys 1998.08 Design Budgeting Release
  Methodology-69      Budgeting to mimic Characterize
  BudgApps            Design Budgeting Applications Note

You can grab these articles with a simple email request to Solv-It!  Here
is the email that you would send to solvit@synopsys.com:

  start:
  email: <your Synopsys registered email address>
  get:STAR-59389
  get:budget-release
  get:Methodology-69
  get:BudgApps
  end:

These articles may also be found via the web in a Solv-It search on the word
"budget".

    - [ A Synopsys CAE ]


( ESNUG 303 Item 8 ) ---------------------------------------------- [11/4/98]

From: Victor_Duvanenko@truevision.com
Subject: What's The Latest Scoop On Fixing Hold-Time Violations?

John,

Fixing hold-time violations is tricky, and I am wondering how Synopsys users
approach this problem (since Synopsys doesn't deal with it adequately at
the moment).

    - Victor J. Duvanenko
      Truevision


( ESNUG 303 Item 9 ) ---------------------------------------------- [11/4/98]

From: [ Echoing From Somewhere Inside Motorola ]
Subject: Dc_shell Sucks!  Give Us A REAL Shell To Use With Synopsys!
 
John, (anonymous please)

Considering the history and current discussions regarding wrappers and
shells around dc_shell, will anyone at Synopsys get the hint that no one
likes dc_shell?  OK, its not horrible, but we designers are used to much
more efficient shells (tcsh, perl-style shells).

It doesn't take an MBA or marketing wizard to figure out that putting a
little effort into the shell would be good for Synopsys.  Don't we spend
just as much of our time actually designing/typing as we do running the
tools?  A better shell could improve design productivity just as much as
tweaking the who-knows-what optimization algorithm.  Give us a real shell
(perhaps more than one), make the users happy, and Synopsys will benefit.
Since Synopsys likes buying things so much, why don't they just buy one of
the available perl-like shells?  (Could I get a commission out of this,
Steve?)

Perhaps Synopsys has R/D money devoted to this.  It just doesn't feel right
when you have to spend valuable time installing a freeware/shareware tool
to make a $100K tool user-friendly.

    - [ Echoing From Somewhere Inside Motorola ]


( ESNUG 303 Item 10 ) --------------------------------------------- [11/4/98]

From: Vercelli Stefano <vercelli@sisun10.cselt.stet.it>
Subject: Help Needed In Translating Specific VHDL To Specific Verilog

Dear John,

I'm one of the many people who have had to move from VHDL to Verilog.  While
searching for documentation about Verilog, I saw the web page with your
challenge, and so I thought that maybe you could help me.

First, the environment I'm using.  For VHDL, I use the Synopsys tools (now
with the 98.02 version); for Verilog, I have the Cadence Verilog-XL.

My problems concern some VHDL constructs (synthesizable!) that I have not
succeeded in translating into Verilog.  Maybe you know some "tricks" to
solve these problems.  Here they are:

  - Libraries: from what I read in the online documentation, it seems
    that Verilog libraries may contain only components and primitives.
    Can't I put a function or a procedure in there, as I used to do in
    VHDL packages?

  - Sometimes I needed to define a signal whose range is computed by a
    function.  For example, I had a generic (let's call it SIZE), and I
    needed to declare a signal whose width had to be log2(SIZE).  I
    defined a synthesizable function (let's call it mylog2), and then I
    declared:

          signal A: std_logic_vector(mylog2(SIZE)-1 downto 0)

    I tried something like this in Verilog, but I didn't succeed.  I defined
    the parameter SIZE and the function mylog2, then I tried

          reg[mylog2(SIZE)-1:0] A;

    But the compiler told me "Vector range must be a constant".  I also
    tried defining an intermediate constant, that is:

          parameter LOGSIZE = mylog2(SIZE)

    But here too I got an error.  Is there some solution to this problem?

  - Are all my generic VHDL functions which accepted and returned
    unconstrained std_logic_vectors no longer possible here?  Must I write
    a different function for EACH width of array?
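
Two common Verilog-1995 workarounds for the first two questions, sketched
with hypothetical names: shared functions go in an `include file (the
closest thing Verilog has to a VHDL package), and since a function result
cannot size a declaration, the derived width travels as its own parameter:

```verilog
// log_utils.vh -- functions shared via `include "log_utils.vh":
//
//   function integer mylog2;          // ceil(log2(n))
//     input integer n;
//     integer v;
//     begin
//       v = n - 1;
//       for (mylog2 = 0; v > 0; mylog2 = mylog2 + 1)
//         v = v >> 1;
//     end
//   endfunction
//
// A function result cannot size a port or reg in Verilog-1995, so the
// derived width is usually passed in as a second parameter instead:

module fifo_ctl (clk, addr);
  parameter SIZE    = 16;
  parameter LOGSIZE = 4;              // caller keeps this = mylog2(SIZE)
  input  clk;
  output [LOGSIZE-1:0] addr;
  reg    [LOGSIZE-1:0] addr;
  always @(posedge clk) addr <= addr + 1;
endmodule

// instantiation, overriding both parameters together:
//   fifo_ctl #(32, 5) u1 (.clk(clk), .addr(a));
```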

Then something about the test benches.  I used to have test benches whose
behavior was customizable through an external file (say, a command file
that was read by the tester, which then generated the stimuli according to
the commands in the file).  I read in the manuals that in Verilog it is
possible to WRITE to a file.  But I found nothing about READING a file.
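
One reading facility does exist in Verilog-XL: $readmemb and $readmemh load
a text file into a memory array, which is how many testbenches take in
their command files; anything fancier required a user-written PLI routine
at the time.  A sketch, with hypothetical file and signal names:

```verilog
// Drive a testbench from an external command file using $readmemh.
// File format: one hex word per line; "//" comments are allowed.
module cmd_tb;
  reg [15:0] cmds [0:255];          // up to 256 command words
  integer i;
  initial begin
    $readmemh("commands.txt", cmds);
    for (i = 0; i < 256; i = i + 1)
      if (cmds[i] !== 16'hxxxx)     // unloaded entries stay all-x
        $display("cmd %0d = %h", i, cmds[i]);
  end
endmodule
```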

  - Stefano Vercelli
    CM/TM


( ESNUG 303 Item 11 ) --------------------------------------------- [11/4/98]

Subject: (ESNUG 289 #13)  Can I Test Embedded RAMs W/ My Full SCAN Chain?

> I'm curious if anyone in the chip CAD market can test the embedded RAMs
> through the full scan chain.  Are there any EDA products/tools that do
> this?  Theoretically, you should be able to accomplish this since there
> are Flip-Flops all around the RAMs, which the scan chain has complete
> control over.  Plus, I don't really care how many vectors it takes, as
> long as the process is automagic.
>
>     - Victor J. Duvanenko
>       Truevision


From: Dominic Botti <dominic.botti@netvantage.com>

Hi Victor:

Did you ever find any tools to solve or half solve your problem?  I am
considering the same option, except I am possibly going to generate the WGL
file myself and use a tool to generate a test bench to verify the pattern.
For me, the approach is not too efficient, unless I can use the scan unit
on the tester.  If the scan path is designed in a reasonable order, the WGL
file might be possible with a script, but if you were asking for a tool
which works with an arbitrary scan chain, that is a tall order.  Anything
you can tell me will be of use.

  - Dominic Botti
    NetVantage                                     Sunnyvale, CA

         ----    ----    ----    ----    ----    ----   ----

From: Victor_Duvanenko@truevision.com

Hi Dominic,

I'll have to look through my e-mail messages, but I vaguely remember that
Mentor may have had a tool that tested the RAMs through the scan chain.
LSI Logic has RAM-bist, but you'll have to figure out how to control it and
I don't believe that they can automatically control it through scan vectors.
IBM has RAM-bist that they control through their LSSD scan chain, but
this tool is not available commercially and is only available internally
(it sure would be nice if the rest of the CAD world would realize how nice
of a capability it is to test your entire chip through a single interface
automagically - now that's a productivity gain - plus making perfect silicon
is a silicon vendor's problem, and not a designer's problem!  Vendors need
to realize this fact!).

Good luck and let me know what you find,

    - Victor Duvanenko
      Truevision


