( ESNUG 315 Item 1 ) ---------------------------------------------- [3/24/99]
Subject: ( ESNUG 314 #1 ) These CoreBuilder/CoreConsultant Tools Kinda Suck
> I've just spent the past couple of days working with Synopsys' CoreBuilder
> and CoreConsultant tools. ... They basically work like:
>
>        IP Maker                         IP Buyer
>       ----------                   ----------------
>      (e.g. MIPS )                (e.g. Hewlett-Packard)
>
>     CoreBuilder     -->    CoreKit files    -->    CoreConsultant
>     captures IP into       (encrypted in           converts CoreKit files
>     CoreKit files          .kb format)             into gates
From: "Clifford E. Cummings" <cliffc@sunburst-design.com>
John -
Starting with ESNUG 297, you had significant discussion on the "Reuse
Methodology Manual" and reusable code in general. Michael Keating (author
of the manual), Janick Bergeron from Qualis and Yatin Trivedi from SEVA
posted extensive comments on the topic.
Now, in ESNUG 314, you just published a scoop on a pair of IP-oriented
Synopsys tools.
What a waste of time and energy!
Design reuse is a myth; it's this year's EDA hobby, a trendy industry
buzzword that'll be long forgotten 2 to 3 years from now.
Design reuse largely fails for one main reason: motivation. Engineers
(myself included) do not want to reuse somebody else's design. Engineers
want to "create" and, for the most part, they do _not_ want to modify another
engineer's code to do it.  The prevailing attitude is: "I'm the design
engineer now, I'll do it my way!"
The design blocks that I do see being reused most often include:
   (1) Block diagrams
   (2) Some special core-logic cells
   (3) Engineers reusing their own code -- not another engineer's code.
       (Reusing your own code is "cool"; reusing somebody else's code is
       frustrating, boring, and a sign of technical weakness.)
Arguments will fly that a good coding style and good naming conventions and
good comments and good this and good that would make reuse easier, less
frustrating and less boring. As my teenage daughters would say, "no, duh!"
And as long as you can do all of this without impacting schedule, your
management team will support it.
But guess what, reusable designs take a lot longer to create!
Re-engineering a design is often easier than trying to understand an
existing design (for the most part, designs are not well commented). Even
minor modifications to an existing state machine design can require more
effort and cause more grief than just re-drawing the state diagram and
re-coding the new design.
Engineers enjoy the exhilaration that comes from creating new designs and
will often unintentionally sabotage a prior-existing design concept just to
find a reason to start over and do the entire design again ("Oops! One of
the I/O pins has changed, we have to start completely over from scratch!").
Even Ron Collett's research backs this engineering bias. In the Dec. 7, '98
issue of "EE Times" (pg. 49), Collett studied 200 IC & ASIC design projects
and found that, on average, designs were 70 percent new material and 30
percent reused IP.  Of that 30 percent reused, 88 percent came from
recycled in-house designs.  Mathematically, this works out to designs
containing only about 3.6 percent outside-supplied IP.  (And I'll bet you
that these were almost all RAM cores and PCI interfaces, John!)  When he
asked the engineers to
project into the next 12 months, the amount of externally sourced IP grew
to 8.3 percent of their designs. Still, small potatoes!
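As a quick sanity check on those numbers (a trivial sketch; the only inputs
are the two percentages Collett reported):

```python
# Collett's figures: 30 percent of a design is reused IP, and 88 percent
# of that reuse is recycled in-house, so the outside-supplied share is
# 30% of the remaining 12%.
reused = 0.30            # fraction of a design that is reused IP
in_house_share = 0.88    # fraction of that reuse that is in-house

outside_ip = reused * (1 - in_house_share)
print(f"outside-supplied IP: {outside_ip:.1%}")   # about 3.6% of a design
```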
But let's now talk testbenches. There, reuse is loved. I have seen design
teams go out of their way to reuse a testbench. I have seen examples almost
as absurd as: "well, we have this old testbench to test Intel's 8088, the
8-bit-bus microprocessor.  Let's see if we can adapt it to the Pentium-Pro!!"
Whenever I teach Verilog and Verilog synthesis classes, I always take a
short poll. The question: "How many engineers in this room prefer writing
testbenches over doing ASIC design work?" Of more than 600 engineers taught
in the last two years, fewer than five preferred writing testbenches.
At last year's IVC-VIUF conference, one of the hot topics was the fact that
as designs have become larger, the testing effort has become even more
complex.  In the past, the split between design effort and verification
effort was approximately 50/50.  Last year I heard engineers claiming the
split for million-gate ASICs to be more like 30/70.  Design is not the
main problem; TESTING IS!!!!
A testbench typically tests a design at its I/O pins.  A testbench does
not have to be exceptionally sophisticated (testbench code is never
synthesized, so it can't decrease the gate count or increase the speed of
an ASIC).  A testbench is used to apply stimulus and verify outputs.  If a
design team
already has a testbench that can be used to test a SCSI interface on an
existing design, why re-write the testbench for the second generation
component?
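The apply-stimulus-and-verify-outputs job is simple enough to caricature in
a few lines.  A minimal sketch, in Python rather than HDL, with a
hypothetical 4-bit adder standing in for the design under test (a real
testbench would, of course, drive the design's I/O pins in Verilog or VHDL):

```python
# Toy "testbench": drive every input combination (stimulus) into the
# DUT and compare each output against a reference model (verify).

def dut_adder(a: int, b: int) -> int:
    """Stand-in for the design under test: a 4-bit adder with carry-out."""
    return (a + b) & 0x1F          # 5-bit result: 4-bit sum plus carry

def run_testbench() -> int:
    """Apply exhaustive stimulus; return the number of mismatches."""
    failures = 0
    for a in range(16):
        for b in range(16):
            if dut_adder(a, b) != a + b:   # 4 bits + 4 bits fits in 5 bits
                failures += 1
    return failures
```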
Engineers are very willing to reuse a test suite because they never wanted
to write the testbench in the first place!  Engineers always jump at the
opportunity to reuse a testbench. But engineers will stomp on anyone
suggesting they reuse somebody else's design!
Since we all know that the verification effort is growing much faster than
design complexity, why is the EDA industry focusing so much effort on the
_easy_ part of the job (reuse) and not on TESTING? New, fluffy, IP-oriented
EDA tools like Synopsys CoreBuilder and CoreConsultant are a waste of
everyone's time. What we engineers _really_ want are EDA tools that make
_testing_ more and more effortless!
- Cliff Cummings
Sunburst Design Beaverton, OR
[ Editor's Note: I just have one question, Cliff. I noticed you're citing
Ron Collett in your letter. Is this the same Ron Collett who predicted
the utter & grave demise of Verilog so many years ago? :) - John ]
---- ---- ---- ---- ---- ---- ----
From: Dave Brier <dbrier@ti.com>
Hi, John,
I'm not too happy about the new Synopsys IP delivery tools. The problem
I have with them is that they are tied to proprietary formats once again.
Yuck.
Rather than just sit around complaining, my engineering group discussed the
IP delivery problem and came up with the following solution. Our idea is to
use a true encryption engine, with _no_ proprietary anything, to create
secure source files for the exchange of IP via PGP.  PGP is readily
available around the world and easily used.  What we are proposing is that
all EDA tools be able to call the PGP algorithm, when required, as they
read a file.  The flow looks something like this:
   1.) Create Verilog / VHDL model of IP
   2.) Encrypt model using PGP from within the EDA tool
   3.) E-mail file.v.pgp to customer
   4.) Design Compiler, VCS, or whatever "receiving" EDA tool
       automatically decrypts the file.v.pgp
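A minimal sketch of that flow, with textbook RSA on toy primes standing in
for PGP (illustration only: the keys and helper names are made up, and this
is nothing like the strength of the real PGP algorithm):

```python
# Hypothetical customer keypair: public (e, n) is published, private
# (d, n) is held only by the customer's licensed tool.
p, q = 61, 53
n = p * q            # modulus: 3233
e = 17               # public exponent
d = 2753             # private exponent: e*d = 1 mod (p-1)*(q-1)

def encrypt_model(source: str) -> list:
    """Step 2: the IP maker encrypts with the customer's *public* key."""
    return [pow(ord(ch), e, n) for ch in source]

def decrypt_model(ciphertext: list) -> str:
    """Step 4: the receiving EDA tool decrypts with the *private* key."""
    return "".join(chr(pow(c, d, n)) for c in ciphertext)

secured = encrypt_model("module ip; endmodule")    # step 3: e-mail this
recovered = decrypt_model(secured)                 # what DC would compile
```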
The basic steps and requirements in each step of the flow are:
Keys:
Either there could be a central IP key server or every provider of IP
would have a client key server. Whatever works.
Each customer would have a "public" and a separate "private" key.
Encryption:
Nothing more than running PGP on your files and applying the destination's
"public" PGP key.  (The way PGP works, the only way a file can be decrypted
is by using the matching "private" PGP key.  This way the customer, and
*only* the customer, can read the encrypted file.)  Our idea is that this
PGP encryption/decryption process be embedded *within* the EDA tools.  That
way, customers could receive and use IP without having direct access to
the source Verilog or VHDL.
Protection & Licensing:
What we're suggesting somewhat mimics what many Internet e-mail tools
(Eudora, Netscape, Outlook) already do invisibly.  They automatically
apply the proper PGP key to each message depending on who you send it to.
This works even on a broadcast message, since each message gets its proper
PGP key applied.
Our suggestion is that a license server be added to the process.
To protect the encrypted source at all times, the "private" PGP key that
the destination has cannot be the complete key. There will have to be
some sort of server, similar to the LM license server, that is capable of
delivering the complete PGP key only to a properly licensed tool.  So, for
example, when you read an e-mailed file into DC, DC is capable of pulling
the complete PGP key and decrypting the file.  You can't have just anyone
able to run PGP on the source file themselves.
Decryption Happens Only On Reading:
Our overall idea is that when you read the file into DC, Verilog-XL,
Modeltech, etc., and when these tools find a .pgp extension, they would
automatically poll the license server and pass the file (IN RAM) and the
key (IN RAM) to PGP and get back (IN RAM) the original decrypted source
to compile in DC, Verilog-XL, Modeltech, etc.
It seems simple enough.  We have a dc_shell script that does this, albeit
in a crude manner (it does dump a temporary decrypted file to disk, so it's
not secure at all), but it demonstrates the concept:
  # Assume the existence of some pre-encrypted files:
  #   file1.v.pgp, file2.v.pgp, file3.v.pgp
  files = { file1.v, file2.v, file3.v }

  foreach ( filename, files ) {
     # Decrypt the file.  PGP appends a .pgp extension to the
     # filename by default as it searches for something to
     # decrypt.  Note that if the files were encrypted with
     # different keys (say, from different vendors), you'll need
     # multiple pass phrases.
     sh pgp filename

     # Read in the decrypted Verilog file.
     read -f verilog filename

     # Remove the decrypted Verilog file.  This leaves just the
     # .pgp version on disk.
     sh rm -f filename
  }
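The in-RAM version of that loop -- the one without the temporary-file
hole -- might look like this in Python, with a toy XOR stream standing in
for the PGP call (hypothetical names throughout; a real tool would hand the
bytes to the actual PGP engine):

```python
# Decrypt-on-read: the ciphertext comes off disk, the plaintext exists
# only in memory, and nothing decrypted is ever written back out.

def pgp_decrypt_in_ram(ciphertext: bytes, key: bytes) -> str:
    """Toy stand-in for PGP decryption: XOR with a repeating key."""
    return bytes(b ^ key[i % len(key)]
                 for i, b in enumerate(ciphertext)).decode()

def read_encrypted_model(path: str, key: bytes) -> str:
    """Read file.v.pgp and return the decrypted source, all in RAM."""
    with open(path, "rb") as f:
        ciphertext = f.read()
    # Handed straight to the compiler's front end -- no temp file.
    return pgp_decrypt_in_ram(ciphertext, key)
```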
Required tool limitations for PGP-encrypted models (that we thought of) to
keep total security:

  o Must suppress all hierarchical messages (includes reg/latch reports,
    timing loops, etc.).
  o Must suppress reporting of constraints.
  o Restrict schematic generation (it could be argued that this isn't a
    big deal).
  o Restrict descending into encrypted modules, as you can do with
    Modeltech and VCS, etc.
  o Restrict tracing of signals internal to models that are encrypted.
You could also encrypt STA scripts, DC scripts, and any other information
that might contain sensitive data.  Log files could likewise be output in
encrypted form for design support, along with many other useful items.
The most important issue here is that the flow be *non-vendor* specific --
something of a standard -- so that you don't have to support and maintain
dozens of different formats.  You avoid all of the PLI/FLI socketing
issues that you get when you create C models, and there should be fewer
performance hits this way.
Ultimately, what is needed for a scheme such as this to work is the
cooperation of the EDA industry.  I am certain that someone can shoot holes
in this method; anyone who tries hard enough can crack a model and steal
your IP, even if they have to do it polygon-by-polygon.
The concept here is to provide a reasonable amount of protection with a
minimal amount of heartache.
- Dave Brier
Texas Instruments Dallas, TX