> I guess that's the heart of romance for many women -- seeing what they
> can get out of men.
From: [ An Ear In The Valley ]
But John, why is this surprising?
Evolutionary theory posits that the optimum reproductive strategy for males
is to impregnate as many women as possible (which implies not committing
too many resources to each one), whereas for females it's to ensure
enough resources to rear the offspring (which they can be quite certain
is theirs). Being able to attract, mislead, cajole, hoodwink, and
bamboozle men is a very strong survival trait in a woman! Just like
being able to seduce women is a very strong survival trait in a man.
In birds, which were long assumed to be mostly monogamous by reason of
lengthy and detailed observational studies, newer research based on DNA
fingerprinting is showing that females are having offspring with multiple
males. This means that even scientists with binoculars can be bamboozled.
The whole notion of romantic love has its basis in the code of chivalry and
courtly love in medieval times. The game for women there was to get some
poor schmuck of a knight to declare his undying love, and then go off on
a crusade and perish nobly with her name on his lips ... all without
giving so much as a kiss. I like the modern system better - at least we
get kissed sometimes.
Current estimates are that somewhere between 10% and 30% of all people born
in the US were *not* genetically fathered by the father of record.
"One never knows, do one?" - Fats Waller
> just like that "Bridges of Madison County" where wifey gets _bored_ with
> loyal hubby so she flings with wandering photographer and that "English
> Patient" movie where wifey flings with an English playboy because she's
> _bored_ with her loyal hubby.
If you really want to understand this, you could read "The Erotic Silence
Of The American Wife" which talks about why married women have affairs,
what they get out of it, and why they often don't even feel guilty. But
I warn you, it's strong stuff, and you may never see another woman quite
the same way again afterwards. I know that a lot of my wife's female
friends are having affairs, which their husbands don't know about. (And
NOT with me! I'm just a good listener sometimes.)
Please don't put my name with any of this if you publish it.
- [ An Ear In The Valley ]
( ESNUG 304 Item 1 ) --------------------------------------------- [11/98]
From: Andrew Frazer <Andy.Frazer@idt.com>
Subject: Whew! The "New" VCS License Daemon Really Scared Me At First...
John,
I received a letter from Synopsys announcing that VCS4.2.1 would be using a
new Synopsys-based licensing scheme. I immediately panicked and called the
VCS Product Line Manager, Mary Ann White.
She explained to me that they are changing the license daemon, but not the
license server. This is good news because we've been pressuring all our
vendors over the past few years to migrate *to* the FlexLM license server.
There's nothing more confusing for a CAD group than to have to manage a big
mix of different license server vendors. We used to have four different
ones: FlexLM, Ilmadmin, Sunrise's proprietary system, and Viewlogic's
proprietary system.
Now, we're down to only two: FlexLM and Ilmadmin; and we still have a long
way to go.
- Andy Frazer
Integrated Device Technology Santa Clara, CA
( ESNUG 304 Item 2 ) --------------------------------------------- [11/98]
Subject: ( ESNUG 303 #8 ) What's The Latest On Fixing Hold-Time Violations?
> Fixing hold-time violations is tricky and I am wondering how Synopsys users
> approach this problem (since Synopsys doesn't deal with it adequately at
> the moment).
>
> - Victor J. Duvanenko
> Truevision
From: Scott Evans <scott@NPLab.Com>
John,
After struggling with this on earlier versions of the Synopsys tool and
never coming up with a "workable" solution, we basically adopted the
approach of only fixing hold violations after layout is complete. We
found that almost always after layout, the CLK->Q delay combined with
routing delay fixes all hold times. This gives you the added benefit of
not having a large number of delay cells taking up area on your chip.
Your mileage may vary....
But if you must, note that in 98.02 Synopsys introduced the ability to
handle both min and max timing in one session. This allows the tool to fix
hold times without violating setup times as it did before. How you use this
capability depends a bit on the libraries you are using. One of our
libraries has both best/worst case timing in the same file, so you only
need to specify
set_operating_conditions -max WCCOM -min BCCOM -library foo
If you have separate best/worst case libraries you need to "combine" the
libraries before issuing the above command:
set_min_library slow.db -min_version fast.db
The following may work as well, but I'm not using it.
set_operating_conditions -max WCCOM -min BCCOM -max_library slow -min_library fast
When you get to the point where you apply set_fix_hold, etc., it will
work as you intend, fixing hold violations by adding in appropriate
delay cells while at the same time watching to make sure it doesn't
violate any setup times.
As always you need to make sure that all hold violations you are fixing
are "real". A lot of times, timing constraints are not set up completely
(set_input_delay -min mostly) and so you end up with a number of false
hold violations which the tool will fix for you at the cost of a large
number of delay cells. So, be sure to generate a timing report and review
it before trying to fix any hold violations.
- Scott Evans
NeoParadigm Labs San Jose, CA
( ESNUG 304 Item 3 ) --------------------------------------------- [11/98]
From: "Andi Carmon" <ANDI@Orckit.Com>
Subject: Scripts/Commands To Find Those 3 Unloaded Nets Out Of 300,000 Nets ?
John,
I have a huge file of set_load commands, obtained from the layout tool. The
command report_internal_loads can confirm what nets have been assigned via
the set_load command.
I am looking for a command, or script, to report the nets that have NOT been
loaded by a set_load command. This is because I want to determine if there
is any net that escaped the set_load command, and hence the timing report
may be erroneous.
- Andi Carmon
Orckit Communications Ltd. Tel Aviv, Israel
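[ Editor's Note: since there's no built-in report for the inverse, one
offline approach is to dump all the net names (e.g. from "report_net" or
the netlist) and diff that list against the set_load file with a short
script. A hypothetical Python sketch -- the set_load line format assumed
here may not match yours, so adjust the regex to whatever your layout
tool actually emits. - John ]

```python
import re

def unloaded_nets(all_nets, set_load_text):
    # Collect every net name that appears as the target of a set_load
    # command.  Assumed format:  set_load <value> "<net_name>"
    loaded = set(re.findall(r'set_load\s+[\d.eE+-]+\s+"?([^\s"]+)"?',
                            set_load_text))
    # Anything in the full net list that never got a set_load is suspect.
    return sorted(n for n in all_nets if n not in loaded)

script = 'set_load 0.12 "n1"\nset_load 0.34 "n2"\n'
print(unloaded_nets(["n1", "n2", "n3"], script))   # ['n3']
```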
( ESNUG 304 Item 4 ) --------------------------------------------- [11/98]
Subject: ( ESNUG 303 #1 ) Cadence Lays Off 1/3 Of Spectrum Consulting
> SAN JOSE, Calif. (EE Times) - Cadence Design Systems will close down
> several of its design centers in the coming months, laying off roughly
> 1/3 of its front-end design services division, according to sources
> inside the company. ... "I said it three years ago and surprise,
> surprise" said Cooley. "In April of 1995, I wrote a letter to the editor
> in EE Times that said pimping out Cadence's designers wasn't a good idea,
> and if you sell $10 million of design consulting, you can end up paying
> $6-to-$17 million in labor to actually service the customer. And if you
> sink $10 million in creating a new EDA/CAD tool, you can bring back $10-
> to-$50-to-$150 million in revenue. Consulting just isn't as lucrative as
> selling good products," said Cooley. "Now Cadence, and unfortunately the
> engineers Cadence stole from customers, are learning that the hard way."
From: "Ted Frederick" <tedf@Cadence.COM>
Well, you *almost* got it right, John... but I knew you'd take this
opportunity to blow your horn.
Your allusions regarding the non-profitability of services are inaccurate.
Cadence simply over-staffed for the amount of design services that were
coming in. Paying salaries for idle designers does not make good business
sense... so away they went.
The Design Services business is good; however, you need enough of it to
justify keeping that many people on board, and with the slowing economy and
Asian flu, clients have been less inclined to farm out Design Services work.
It's just that simple. ;]
- Ted Frederick
Cadence
---- ---- ---- ---- ---- ---- ----
From: Yatin Trivedi <trivedi@seva.com>
Hey John,
I can't pass up this opportunity. The services business isn't bad. In
fact, a lot of design places that cut down their engineering force are
asking for help. Send me your tired (I mean Cadence's laid off) folks and I
will show you where they can be very happy.
We have been flush with Design Services work, have grown in the past six
months and still can't keep up with it.
My take on this layoff is that either:
(a) Cadence removed some large amount of dead wood accumulated through
recent (3 years?) acquisitions and hiring binges, or
(b) strategically they have looked for projects (and places)
where there is little money.
It is a lot easier to get ten $1M projects than to get one $10M project
(and implement it successfully). If you spend $5M in marketing and sales
to get a $10M project, that leaves only $5M to implement and nothing for
profits! Your customer won't accept $10M split as $5M for engineering labor
and $5M for overhead -- he expects all $10M to be spent on engineering
labor. So now you have to cover $10M of engineering cost with the $5M of
revenue you have left, and you end up with a $5M loss. Real simple, isn't it?
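[ Editor's Note: Yatin's back-of-the-envelope math, worked through as a
quick sketch below. All figures are in $M and taken straight from his
argument. - John ]

```python
revenue = 10             # the fixed price of the one big project
sales_and_marketing = 5  # spent up front to win the deal
engineering = 10         # customer expects the full $10M to go to engr labor

profit = revenue - sales_and_marketing - engineering
print(profit)            # -5  =>  a $5M loss on the $10M project
```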
Your comment is valid - "Consulting just isn't as lucrative as selling good
products". Every customer wants a fixed-cost bid for a poorly defined spec
and then wants to negotiate based on time & materials cost. Go figure!
By the way, this isn't a negative criticism of Cadence. It applies to
similar outfits that call themselves Design Factories but largely
remain marketing and sales organizations. For consulting services,
the organization must remain largely engineering with some (necessary)
marketing and sales support.
- Yatin Trivedi
SEVA Technologies Fremont, CA
---- ---- ---- ---- ---- ---- ----
From: [ Royally Pissed ]
Dear John,
I am royally pissed at EE Times and Gary Smith [ the Dataquest EDA analyst ]
for trying to pass me off as a fucking 'Arthur Andersen' type. I have worked
as a fucking DESIGN engineer for over 7 years sweating out the fucking
details of 6 large ASICs. I am _not_ one of those deadwood Arthur Andersen
pretty boys who makes his living sweet talking CEOs! I fucking design chips
for a living. I _work_ for a living! I really resent this way of being
characterized by Gary Smith and EE Times. The only mistake I made was to
take a job consulting at Cadence and to then be in one of the Cadence design
centers that Jack Harding [ the CEO of Cadence ] closed after some fucking
Arthur Andersen pretty boy probably convinced him to do it. John, very anon
PLEASE!
- [ Royally Pissed ]
[ Editor's Note: What this letter is referencing is on pg. 4 of the Nov. 9
issue of "EE Times" in their article discussing the Cadence layoffs. The
quote: "What's happening," said Gary Smith, principal EDA analyst at
Dataquest, "is a much-needed housecleaning in which 'Arthur Andersen'
types are being replaced by skilled ASIC designers. Actually, Cadence's
design-services group is working better now than ever. They're starting
to concentrate on a sustainable service business." - John ]
---- ---- ---- ---- ---- ---- ----
From: [ the voyeuristic one ]
john: i don't think this repudiates the model. mentor thinks it does and
others think it does but i don't. companies are cutting back left and right
and this is indicative of that. now if the economy was roaring and the
semiconductor industry was up 45 percent this year AND cadence did this,
then the model goes out the window. but in a down market, i don't think
you can make the claim that the model isn't working.
if they kill the operation someday, then it's a different story.
- [ the voyeuristic one ]
---- ---- ---- ---- ---- ---- ----
From: jcooley@world.std.com (John Cooley)
Dear [ the voyeuristic one ],
Positive fundamental changes in business models that are supposedly superior,
by definition, outperform their markets. For example, cheapie Walmarts
everywhere worked whether the economy was *booming* OR in *recession*
BECAUSE it's a better idea. Let's take your it's-just-the-economy-stoopid
argument and flip it on its head with what I said all along: the reason why
the Cadence Spectrum Services model initially worked *WAS* because of a
booming economy where everyone was desperate for engineers. That is, the
idea wasn't tested because everyone, everywhere was experiencing a deep need
for more engineers to get their hot projects done. (Shit, you oughta know
this -- look at all those employment ads that practically filled the back
third of EE Times for the last 2 years.) Cadence would steal the engineers
from customers and the customers were stuck having to rent the very same
designers back. The true test of the Cadence consulting business model is
how it held up when the feast was over. Did it still keep growing? Or, if
it shrank, did it shrink less than its competitors? On both counts, Cadence
failed if it's laying off 1/3 of its consulting force. And that's
why everyone's telling you the Cadence consulting business model has failed.
- John Cooley
the ESNUG guy
( ESNUG 304 Item 5 ) --------------------------------------------- [11/98]
Subject: ( ESNUG 302 #3 303 #4 ) Logic0 Cells On Unconnected Scan Ports!!
> I think I understand the problem now. Let me repeat it, just to be sure.
>
> 1. A design has regular FFs and unconnected output ports.
>
> 2. Outside Synopsys, regular FFs are replaced with scan FFs and the
> unconnected output ports are used as scan out ports.
>
> 3. But if compiles are done between steps 1 & 2, Synopsys connects the
> unconnected port to Logic0. This is undesirable.
>
> If I am right, then do this:
>
> set_unconnected [unconnected ports]
>
> to solve your problem. It worked on my test case.
>
> - William Liao
> Vadem
From: "Charles E. Klaasen" <klaasen@natlab.research.philips.com>
Hello John,
The command set_unconnected does not work in my case, since the Logic0s
appear not at the top level, but one level lower. The command only works
on ports, and not on pins, unfortunately.
- Charles E. Klaasen
Philips Semiconductors Eindhoven, The Netherlands
( ESNUG 304 Item 6 ) --------------------------------------------- [11/98]
From: Shuhui Lin <shuhui@aur.alcatel.com>
Subject: What Are The Pros & Cons To Running EDA Apps On SUN Compute Farms ?
Hello, John,
We are investigating converting our individual high performance desktop Sun
workstation computing model to a compute server, specifically Sun's E10000
server with multiple processors and up to 1 Gig of RAM on each processor.
Does anyone have any experience with this type of compute server for EDA
applications? What are the pros and cons of a single compute server model
versus a "compute farm" network of high performance workstations?
- Shuhui Lin
Alcatel Telecom Raleigh, NC
( ESNUG 304 Item 7 ) --------------------------------------------- [11/98]
Subject: ( ESNUG 302 #9 303 #3 ) Experiences Testing With C, VERA, & Specman
> I have two questions of which answers would help us in evaluating the
> tool. First, what is your detailed lesson you learn from using VERA in
> your environment. Any caveat and pitfall of the tool contrary to mktg
> claim ? How does VERA stack up against Specman ?
>
> Second, does anyone have a similar environment as ours and use VERA as
> a cosimulator between C and Verilog sides successfully ? If yes, is
> there any catch ? How are its RPC (Remote Procedural Call) and IPC ?
> Please keep me anonymous because we're looking into a possible purchase
> of VERA.
>
> - [ Curious Minds ]
From: Tom Symons <tsymons@level1.com>
John,
I am a new user of VERA, having just adopted the tool earlier this year.
I have worked with both Verilog and VHDL, but I now prefer using VERA
over either HDL for test bench design.
In my view, a testbench is much more a software function than it is a
hardware function. I believe that is one reason why you see so many
projects using C to implement at least part of their testbench. But C
has no concept of hardware, and this makes it an awkward choice for test
benches. VERA solves this problem by giving you the software-like feel
and functionality that you're looking for with C, while still leaving
the easy HDL-like interface to hardware that C lacks.
The first thing you need to realize about VERA and cosimulation is
that VERA itself may eliminate the need to use C code in your test
bench. VERA is different than C or C++, but it does provide the same
functionality and the same ease of high-level coding. However, if you
require C code anyway (to simulate a driver, use existing code, etc.),
then VERA can help you by providing an easy link between the two
languages.
Once you make this shift to viewing a testbench as a software function,
and you have a hardware-aware tool that shares this view, then you
slowly start to see new ways that you can approach your verification.
There are not really any features in VERA that reach out and knock your
socks off at first glance, but when you take them all together, and you
realize that your test benches are becoming both more sophisticated and
easier to follow, then you'll find that it is very difficult to go
back to using HDL or C to implement your testbench.
One simple example might help you get a feel for this:
Our device was required to support both big- and little-endian host CPUs.
This required that our testbench support both modes as well. In
particular, we wanted all our tests to run (and self-check) in either
mode - we didn't want to have to write separate tests. We started out
by conditionally coding memory accesses by testing a mode flag, or by
just coding it in the default mode and figuring we'd come back to add
the other mode later. Both approaches would mean lots of work and lots
of debug. Note that our testbench included test generation and
self-checking code, as well as simulation of the host CPU. If you've done
this before, you'll know that it can be a really annoying problem.
Then we realized that we could make the endian mode transparent to the
testbench by using an object-oriented approach. We were definitely in
the software realm here. We created an 'object' that represented host
memory. We allocated the memory in this object and defined all memory
access functions (methods) within it. We then 'inherited' two memory
objects from this base object, one for big-endian access and one for
little-endian access. The testbench code could then be written to
access memory via the base object, which was instantiated as one of the
two inherited types, depending on which memory mode was desired. So we
just had to test each of the objects, and then access all memory without
worrying about which mode we were using.
So an annoying, error-prone problem was easily eliminated. And our
approach was straightforward and easy to understand. Now, you can be
clever and do something similar without object oriented programming
(OOP) methods, but the end result will probably just look like something
trying to mimic OOP. I think it's much easier (not to mention more
readable) to just use the OOP constructs directly.
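[ Editor's Note: for readers without VERA, the inheritance pattern Tom
describes can be sketched in plain Python (the names here are hypothetical,
not Tom's actual code). The testbench talks only to the base object's
interface; the subclass chosen at instantiation hides the byte order.
- John ]

```python
class HostMemory:
    """Base memory object: the testbench only ever uses this interface."""
    ORDER = None                      # subclasses pick the byte order

    def __init__(self, size):
        self.mem = bytearray(size)

    def write32(self, addr, value):
        self.mem[addr:addr + 4] = value.to_bytes(4, self.ORDER)

    def read32(self, addr):
        return int.from_bytes(self.mem[addr:addr + 4], self.ORDER)

class BigEndianMemory(HostMemory):
    ORDER = "big"

class LittleEndianMemory(HostMemory):
    ORDER = "little"

# Pick the mode once; every later access is endian-transparent.
mem = LittleEndianMemory(16)
mem.write32(0, 0x12345678)
print(hex(mem.mem[0]))                # 0x78 -- low byte lands first
```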
Then, moving on to the co-simulation arena, we made new versions of the
memory objects that allocated the host memory in C, versus having them
allocated in VERA. This allowed the memory to be easily accessed by
either C or VERA code - while still maintaining our endian transparent
access from VERA.
Our co-simulation environment consists of a host driver for a PCI device
running in a VHDL simulation. We have only just begun this stage of our
verification, but we have got both C and VERA communicating with each other.
We are using a C library format where VERA can make a call to any C routine,
but C cannot call back to VERA. This was adequate for our purposes, and
seemed a little easier to set up. You can get C callbacks if you want, but
this requires that the C code run in a separate process that communicates
with VERA over sockets.
We hired an intern to help us out with this co-simulation setup, and he
was able to get it up and running fairly quickly. I didn't do it
myself, but the good news was that I didn't have to. There is some
tedium required to set up the cross-language calls, but they were
relatively straightforward once we understood the process.
The writer also asked about Specman. We evaluated both tools, and
selected VERA largely on price. The capabilities of the tools appear
fairly similar, but we feel that the VERA language overall is cleaner
and its OOP constructs are more complete. But Specman probably does do
a better job with the overall random generation environment.
VERA is not without its shortcomings (the debugger in particular is
rather weak), but overall we find it to be a worthwhile tool. I really
would hate to have to write a testbench now in just HDL or C.
- Tom Symons
Level One Communications
( ESNUG 304 Item 8 ) --------------------------------------------- [11/98]
From: Nukala Ravikanth <ravikanth@msemi.com>
Subject: Having Doubts About The Synopsys Bottom-Up SCAN Methodology
Hi John,
I have a small doubt about the bottom-up scan methodology. At the submodule
level, is it better to specify dedicated scan-out ports, or should I allow
Test Compiler to use the default? I want to know about the cases (if any)
where dedicated scan-out ports become a necessity.
- Nukala Ravikanth
Meridian Semiconductors Irvine, CA
( ESNUG 304 Item 9 ) --------------------------------------------- [11/98]
Subject: ( ESNUG 303 #2 ) Variable For Annoying Buried Translate_off's
> I encountered the following code in a VHDL file I wanted to run through
> Synopsys:
>
> -- synopsys translate_off
> library IEEE;
> use IEEE.VITAL_Timing.all;
> -- synopsys translate_on
>
> However, when I tried to actually use this code, I got the following:
>
> Error: The package 'VITAL_Timing' depends on the package
> 'std_logic_1164' which has been analyzed more recently.
> Please re-analyze the source file for 'VITAL_Timing' and
> try again. (LBR-28)
>
> The Magic Synopsys Variable (tm) to get around this is to do a:
>
> hdlin_translate_off_skip_text = true;
>
> BEFORE you compile the "untranslated" code.
>
> - [ Kenny from South Park ]
From: miller@symbol.com (Wayne Miller)
Hi John,
I'm not so sure I agree with Kenny's solution. If you check the time
stamps on the IEEE std_logic_1164, VITAL_Timing and VITAL_Primitives
packages, you'll see that the files are really out of date (.sim and
.syn). This is a software release problem that Synopsys has finally
admitted to in the 1998.02 and 1998.08 releases. They have posted a
fix for this on SolvNet, but I wouldn't recommend it. It should be
fixed in the next VSS release (?April '99?), not in the DC release
scheduled for February '99.
- Wayne Miller
Symbol Technologies, Inc.
( ESNUG 304 Item 10 ) -------------------------------------------- [11/98]
Subject: ( ESNUG 303 #9 ) Dc_shell Sucks! Give Us A REAL Shell To Use !!
> Considering the history and current discussions regarding wrappers and
> shells around dc_shell, will anyone at Synopsys get the hint that no one
> likes dc_shell? OK, its not horrible, but we designers are used to much
> more efficient shells (tcsh, perl-style shells). ...
>
> Perhaps Synopsys has R/D money devoted to this. It just doesn't feel right
> when you have to spend valuable time installing a freeware/shareware tool
> to make a 100K tool user friendly.
>
> - [ Echoing From Somewhere Inside Motorola ]
From: [ Tickle Me Elmo ]
Dc_shell does suck and designers have complained about it for years (it
always feels good to know that you're not the only one suffering). Dc_shell
was a hack that got the job done, and you have to give Synopsys credit for
that (it got you 80% there very quickly). Dc_shell is only part of the
problem - inconsistent command structure and arguments are another (forcing
designers to play adventure).
Next year Synopsys will incorporate Tcl/Tk into their tools (Synopsys was
one of the investors in Professor Ousterhout's company that will take
Tcl/Tk to the next level). Definitely Anonymous.
- [ Tickle Me Elmo ]
( ESNUG 304 Item 11 ) -------------------------------------------- [11/98]
From: "Paul.Zimmer" <paul.zimmer@cerent.com>
Subject: My Fast-And-Slow-Clocks-Mixed-With-Logic DC Constraining Nightmare
John,
The reason why I asked for "ied" on Suns (in ESNUG 301) is because ied sits
between you and any stdin/stdout program, and provides ksh-like command line
history and editing. It works on most everything, including ftp, and
(ta-da!) dc_shell.
The reason I have to wrestle with the Synopsys set_input_delay and
set_output_delay commands (aside from the fact that set_arrival has
disappeared from the man pages) is because I am
now dealing with modules that have multiple clocks all mixed in with logic
to be compiled (don't ask).
Now, this is what set_input_delay/set_output_delay was invented for, so
this should be easy, right? Well, wrong. The set_input_delay/
set_output_delay approach assumes that the constraint setter KNOWS what
inputs and outputs are related to what clocks. That's fine at the top of
the chip, where I've been using set_input_delay/set_output_delay for years
so that I can break bidirectional loops, but my lower-level generic scripts
DON'T know that, and don't WANT to know that.
If you do the set_input_delay/set_output_delay for each clock and clk_per
in a foreach loop,
you'll get silly things like a path from the slow clock, plus the input
delay of the slow clock, ending at a flop clocked by the fast clock.
(Well, I guess this isn't really silly from DC's point of view. You've
effectively told dc that there is a source for each input from each
clock domain with a certain delay, and a sink for each output in
each clock domain with a certain delay, and it doesn't know any better.)
But, this isn't what you really WANT. What you want is for the input
and output delays for a clock to only be applied to paths that end/start on
that clock. So, an input that goes to flops on both the fast and slow
clocks (for example) should have the slow clock's input delay budget on
the path going to the slow clock flop, and the fast clock's input delay
budget on the path going to the fast clock flop. Likewise for outputs.
How do you do this? I'm still trying it out, but the only way I've found
so far is to apply the set_input_delay/set_output_delay for each clock
anyway, and then set false paths across all the clock boundaries:
foreach(_clock, all_clocks()) {
  foreach(_other_clock, all_clocks() - {_clock}) {
    set_false_path -from _clock -to _other_clock
  }
}
That's a shame, since I've already gone to a lot of trouble to define the
clocks using harmonics of the fastest clock, then shrinking them to the
target period using minus clock skew. So, I shouldn't have to disable
all the cross-clock paths.
Does anybody know of a better way to do this?
- Paul Zimmer
Cerent Corporation
( ESNUG 304 Item 12 ) -------------------------------------------- [11/98]
Subject: (ESNUG 302 #5 303 #6) Sun/Cadence Verilog-XL Licensing Problem
> When we installed our new E450 server, we tuned the system to be able to
> support more users/processes than the out-of-the-box configuration allows.
>
> The following two parameters in the file "/etc/system" are what broke
> Verilog-XL:
>
> * set rlim_fd_max = 2048
> * set rlim_fd_cur = 1024
>
> If we comment them out (as shown above), XL works fine. If they are
> built-in the kernel, Verilog sees a time-out when talking to the license
> server.
>
> - Janick Bergeron
> Qualis Design Lake Oswego, OR
From: hugh@Eng.Sun.COM (Hugh McIntyre)
You should not normally increase rlim_fd_max beyond 1024 or rlim_fd_cur
beyond 256, otherwise lots of things may break. If the current limit is
above 256 then any program that uses the C stdio library is subject to
problems (except for 64-bit applications under Solaris 7). If the limit is
raised above 1024 then any program that uses select() will also fail (again,
it is possible to get round this with Solaris 7).
If you really need to raise these limits for a specific application, then
the recommended solution is to write a small wrapper script (or program) to
increase the limit and then run the program. For example:
#! /bin/sh
ulimit -n 256
exec verilog "$@"
Hope this helps.
- Hugh McIntyre
Sun Microsystems
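[ Editor's Note: the same per-process trick can also be done from inside a
launcher program rather than a shell wrapper. A hypothetical Python sketch
(the 256 figure is the stdio-safe limit Hugh mentions; only this process's
soft limit is touched). - John ]

```python
import resource

# Drop only this process's soft fd limit back to the stdio-safe value;
# the hard limit, and every other process on the machine, are untouched.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (min(256, hard), hard))

# ...then hand off to the real tool, e.g.:
# os.execvp("verilog", ["verilog"] + sys.argv[1:])
```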
( ESNUG 304 Item 13 ) -------------------------------------------- [11/98]
Subject: (ESNUG 289 #13 303 #11) Test Embedded RAMs W/ My Full SCAN Chain?
> I'm curious if anyone in the chip CAD market can test the embedded RAMs
> through the full scan chain. Are there any EDA products/tools that do
> this? Theoretically, you should be able to accomplish this since there
> are Flip-Flops all around the RAMs, which the scan chain has complete
> control over. Plus, I don't really care how many vectors it takes, as
> long as the process is automagic.
>
> - Victor J. Duvanenko
> Truevision
From: Andrew Hulbert <andrew.hulbert@st.com>
John,
Try looking at the Macrotest feature in Mentor's Fastscan DFT tool. Our DFT
engineer used this feature to test a large percentage of 110 small embedded
RAMs in our latest design. The small number of RAMs that couldn't be tested
had insufficient test points to allow Macrotest to work. Of the RAMs that
were Macrotested some needed a few additional test points added to the
netlist in the back end flow.
- Andrew Hulbert
STMicroelectronics Limited Bristol, Great Britain
---- ---- ---- ---- ---- ---- ----
From: Stephen Sunter <sunter@lvision.com>
John,
A friend of mine forwarded your ESNUG discussion to me, regarding testing of
embedded memories via the scan chain. I'm not a sales person, but the
company that I work for supplies a solution to your problem. LogicVision's
specialty is BIST. We have automation software to insert BIST circuitry at
the RTL level, to test everything via instructions through the JTAG/1149.1
port (or other scan access of your choosing), including:
* memories (embedded or off-chip; SRAM, DRAM, or ROM; single or
multi-port)
* random logic (full or mostly full scan, multi-domain/frequency clocks)
* PLLs (automatically measure loop gain, lock range/time, RMS & pk-pk
jitter)
* I/O and boundary scan logic
* board-level interconnect between ICs with JTAG/1149.1
When testing memories, using the scan access alone can mean very long test
times, does not apply the vectors at-speed, cannot detect certain types of
faults, and is not automatic. BIST allows an at-speed test using vectors
optimized for your specific memory - and it's automatic and diagnostic.
- Steve Sunter
LogicVision, Inc. San Jose, CA
---- ---- ---- ---- ---- ---- ----
From: Bejoy Oomman <boomman@genesystest.com>
Hi, John,
We have a product, Memory BistCore(TM), which is a library of synthesizable,
parameterized, RTL models for implementing Built-In Self-Test of embedded
memories. This can be added to memory behavioral models to achieve fault
coverage without external patterns. If you have an asynchronous memory, it
can also be used as a synchronizing wrapper with BIST capabilities.
- Bejoy G. Oomman
Genesys Testware Fremont, CA
( ESNUG 304 Item 14 ) -------------------------------------------- [11/98]
From: rjoshi@duettech.com ( Rajesh Joshi )
Subject: How Is Process Numbering Done In Synopsys Cell Library Modeling ?
Hi, John,
I am writing a Synopsys model for a cell. In it we need to specify the
operating conditions in terms of "PROCESS", "TEMPERATURE" and "VOLTAGE"
for BEST, TYPICAL and WORST case conditions. I know how to specify the
temperature & voltage. However we are arbitrarily putting the process
numbers as 1, 2 & 3 for BEST, TYPICAL & WORST operating conditions
respectively. But we are not sure about their correctness since the SDF
file generated by using these process numbers does not seem to have
correct timing values. When we changed these process numbers to 1.9,
2.0 & 2.1 for BEST, TYPICAL & WORST operating conditions respectively
we got the correct SDF files in terms of delay & transition time
values. Also I have seen Synopsys models of some other cell libraries
in which the process numbers have values like 1.28, 0.728 etc.
These cells have been designed for a specific foundry & we are ourselves
characterizing these cells.
Can anyone please let me know how this process numbering is done ?
- Rajesh Joshi
Duet Technologies