> P.S.  John, I'm writing another longer, very detailed technical review that
 > discusses the design reuse ideas they proposed and how they clashed with
 > our real world experiences.  (It's about 900 lines long.  This review was
 > ~150 lines.)  Do you think your readers would be interested in it?  
 >
 >   - Janick Bergeron
 >     Qualis Design Corporation


  From: Vi Chau <Vi.Chau@emulex.com>

  Hi, John,

    I am interested in reading a more detailed review of this book.  Janick
  has done an excellent job describing the book and I want to hear more of
  his opinion.  I have been thinking about taking a look at the book, and if
  Janick has done it, I probably don't need to do it. :)

    Thanks for running the ESNUG; it is the most valuable information source
  for me.

     - Vi Chau
       Emulex Corp.                                Costa Mesa, CA

         ----    ----    ----    ----    ----    ----   ----

  From: "Donegan, Peter J." <DONEGANP@pcmail.systems.gec.com>

  John,

  At the end of his article, Janick Bergeron of Qualis Design Corporation
  writes about a longer review; I would be very interested in seeing it.
  I just ordered a copy of this book so that I can make my future
  designs as reusable as possible.

     - Peter Donegan
       GEC-Marconi Hazeltine                        Wayne, NJ


  [ Editor's Note: Because of the strong interest in Janick's detailed
    review (we've both received a steady stream of e-mails requesting
    it) this week's ESNUG is devoted just to that topic.  (It appears
    that design reuse is going to be a hot topic this year.)  - John ]


( ESNUG 298 Item 1 ) ----------------------------------------------- [8/4/98]

From: janick@qualis.com (Janick Bergeron)
Subject: A Detailed Review Of The Synopsys/Mentor "Reuse Methodology Manual"

Hi, John,

Here's the detailed, context-based review of the book.  It's 951 lines long.
I've started with their section "1.1  Goals of this document" and gone all
the way to the end of the book, section by section.  For those of you who
are in a hurry, feel free to read the summary critique previously published
in ESNUG 297 #12.  This detailed review _assumes_ you have a copy of Keating
& Bricaud's "Reuse Methodology Manual for System-on-a-Chip Designs" in hand.

I look forward to seeing what others think about this book and/or my review of
it in subsequent ESNUGs.

  - Janick Bergeron
    Qualis Design Corporation

-----------------

Introduction

1.1  Goals of this document
This section offers a good review of the pressures currently faced by 
designers and of the issues involved in design reuse. It promises to cover 
issues from two perspectives: the provider and the user.  However, the book 
later delivers a lot more to the former than to the latter.

1.1.1     Assumptions
The assumption that the reader will be familiar with design-for-test and 
full-scan techniques is a major one. Based on our experience, design-for-
testability is a technique still not widely used in the industry.  Also, the 
principal audience of this manual, namely new designers or designers with 
basic levels of experience, will likely not be familiar with these 
techniques.  Fortunately, the book does not stay true to this assumption 
and a lot of design-for-testability guidelines are outlined.

1.1.2     Definitions

1.1.3     Virtual Socket Interface Alliance

1.2  Design for Reuse: The Challenge

1.2.1     Design for Use
Makes an important statement: before being reusable, a design must first be 
useable.  Lays the groundwork for the material included in the book: how to 
create high-quality designs.

1.2.2     Design for Reuse
Introduces the additional criteria that make a design reusable.  These 
criteria are all valid, but the book later fails to issue guidelines as to 
"how much is enough".  For example, to be reusable, a block should be 
designed to solve a general problem - but what is "general"?  Taken to the 
extreme, this goal implies that we should only design microprocessors.  
Companies successful in designing for reuse tend to design for the problem 
that is at hand, looking ahead no more than a single generation and making
sure the design can be easily adapted to new requirements.  That way, 
precious engineering resources are not wasted solving a problem that will 
never occur.

Most of the criteria for designing for reuse are covered well in subsequent 
chapters.  However, very few guidelines are presented to ensure that a design 
will work with a variety of simulators.  The Verilog language is notorious 
for creating race conditions that, when using different simulators or 
different command-line options, can result in different simulation results. 
In VHDL, the use of shared variables creates similar problems.
An important criterion is also not mentioned and never addressed in the book: 
The architecture of SoC designs should be based on the available reusable 
blocks, even if it initially means lower performance or increased area.

1.2.3     Fundamental Problems
A good statement of the problems that designers face when they try to reuse 
another design.  This is probably the primary reason why a culture of reuse 
is so difficult to develop.

2    The System-on-a-Chip Design Process
This would have been an ideal section to address the issues of architecting 
a design with the intent to reuse components as much as possible.  Instead, 
the authors discuss nomenclature and the shortcomings of ideal design 
processes.

2.1  A Canonical SoC Design
A "typical" SoC design is introduced.  This section seems out of place and 
adds little to the discussion.  The design is not really referenced until 
chapter 11.

2.2  System Design Flow

2.2.1     Waterfall vs. Spiral
A new design using leading technology never uses a waterfall process.  A 
spiral approach evolves naturally.  The characteristics outlined for the 
spiral model are excellent and eye-opening for managers who expect a more 
traditional linear process.  It would have been valuable to see these
characteristics cross-referenced with the guidelines or sections of the 
book that describe specific techniques for enabling or facilitating them.

2.2.2     Top-Down vs. Bottom-Up
Here it is in print: an admission that no one really does top-down design 
anymore.

2.2.3     Construct by Correction
A cute description of an actual project that successfully planned for and 
used the spiral development process.  One thing which is not mentioned, but 
we hope came out of the process, was a methodology to reduce the number of 
iterations for subsequent designs.

2.3    The Specification Problem
The greatest contribution this book makes is to highlight the importance of 
up-front specification. However, it is too RTL-centric in its definition of 
necessary specification.  An RTL model is one possible implementation among 
many.  In our experience, a specification sufficiently detailed to write a
functional behavioral model is adequate - unless special functional or 
technological requirements force a specific implementation.  In that case 
a specification detailed enough for RTL coding is required.  A specification 
document that implies a particular implementation is too restrictive and may 
hinder the use of more powerful front-end capture tools, such as Synopsys's
Behavioral Compiler or Protocol Compiler.

2.3.1     Specification Requirements
The hardware section lacks an important requirement: the specification must 
address the interface to the outside world.  Too often the interface 
specification is a by-product of the implementation simply because "that's 
the way the device expects/produces data".  Interfaces to other hardware 
blocks should be specified up-front.  Without this early specification
of interfaces, it is impossible to design and implement the verification 
infrastructure in parallel with the implementation.  And with today's huge 
designs, you want all the parallel effort you can get.

This section also seems to imply that an English specification can be 
replaced by an executable specification.  It should be clearly stated that 
executable specifications (or behavioral models) supplement the English 
specification and that the process of writing these executable specifications 
can be used to ensure that the English-language document does not contain 
ambiguities or is incomplete.  To that effect, the executable specification 
should be written by someone other than the author of the English document.

Also, languages other than English are perfectly useable as specification 
languages: witness the United Nations which uses French as one of its 
official languages :-).

2.3.2     Types of Specifications

2.4  The System Design Process
A very good description of how verification should be addressed and performed
at the system level.  The third paragraph highlights an important point that
needs mentioning: If properly designed, the algorithmic system verification
environment can be reused later to verify the hardware components.

Item #5 presents a rather rosy picture of the hardware/software cosimulation 
problem.  This section does not hint at the problem's complexity and usually 
slow performance unless specific resources are allocated to that problem.  
Fortunately, the book redeems itself for that omission later.

It is ironic that this section describes a linear design process after a 
spiral model was recommended.

3    System-Level Design Issues: Rules and Tools

3.1  Interoperability Issues
Basically, an introduction for the next sections.  In our opinion, it 
should not be a separate section.

3.2  Timing and Synthesis Issues

3.2.1     Synchronous vs. Asynchronous Design Style
The book politely says, "Anyone using or designing with latches should be 
shot."  We agree.  The book also fails to mention why flip-flop-based 
designs are superior for timing analysis, formal verification, testability, 
etc.

3.2.2     Clocking
A good list of rules.  An example circuit implementing the first guideline 
would be nice because of its simplicity.  A reference to documents 
explaining metastability and the little-known fact that double-sampling 
registers only reduce the probability of metastability to an acceptable 
level, rather than eliminating it, would also be nice.
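
Since the review brings up double-sampling registers, here is a minimal
Verilog sketch of the classic two-flip-flop synchronizer (module and signal
names are ours, not the book's):

```verilog
// Two-flip-flop synchronizer.  The second register gives a
// metastable first stage one full clock cycle to resolve; this
// reduces the probability of metastability to an acceptable
// level -- it does not eliminate it.
module sync2 (clk, async_in, sync_out);
  input  clk;        // destination clock domain
  input  async_in;   // signal crossing from another clock domain
  output sync_out;
  reg    meta, sync_out;

  always @(posedge clk) begin
    meta     <= async_in;  // may sample mid-transition
    sync_out <= meta;      // resynchronized, one clock later
  end
endmodule
```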

3.2.3     Reset
The issues presented with asynchronous reset fail to mention that it may 
make it difficult or impossible to use other tools and methodologies 
downstream, such as cycle-based simulators or static timing analysis.  
Also, scan test structures positively hate asynchronous resets that are 
not directly controllable from the outside.
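
For contrast, a synchronous-reset register (our sketch, not the book's): the
reset is just another data-path input, so cycle-based simulation, static
timing analysis, and scan insertion see nothing unusual.

```verilog
// Synchronous reset: sampled only on the active clock edge,
// so the reset net is ordinary logic as far as downstream
// tools are concerned.
module sreg (clk, reset, d, q);
  input  clk, reset, d;
  output q;
  reg    q;

  always @(posedge clk)
    if (reset) q <= 1'b0;   // reset wins, but only at the edge
    else       q <= d;
endmodule
```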

3.2.4     Synthesis Strategy and Timing Budget
The third paragraph describes a cool approach!

3.3  Functional Design Issues

3.3.1     System Interconnect and On-Chip Buses
This is an issue that even the VSIA does not want to deal with.  Instead, 
the VSIA has chosen to standardize a transaction layer independent of a 
physical bus.  This is probably one of the most significant barriers to 
design-with-reuse.  This standardization favors internal reuse within large 
companies who can enforce a single standard.  For commercial blocks, this 
standardization will probably translate into optional interfaces to make a 
block directly compatible with whichever processor bus the customer is
using.

Two on-chip buses are mentioned, but no reference is available to look 
further into their appropriateness for a particular design.

3.3.2     Design Structures
As a designer, you have to admit and accept that bugs will happen.  You have 
to take steps to make your life easier when it is time to locate them. A few 
good suggestions are mentioned without specific implementation details.

3.4  Physical Design Issues

3.4.1     Hard Macros

3.4.2     Floorplanning

3.4.3     Clock Distribution
A good description of the challenge faced by designers in large synchronous 
designs.  The guideline describes a good approach to mitigate some of the 
challenges.

3.5  Verification Strategy
OK. Now what are the characteristics of a good verification strategy?  This 
section should do more to highlight the importance of planning verification 
in the early stage of the design, and the need to change the design to meet 
the requirements for verification.

3.6  Manufacturing Test Strategies
A good overview of the various test strategies that are applicable to various 
types of components.  However, there is no mention of the characteristics of 
a good test strategy.  The last subsection mentions the advantage of Logic 
BIST, but fails to explicitly recommend this approach for reusable blocks.

4   The Macro Design Process

4.1  Design Process Overview
The authors do a great job of describing the importance of specification 
and review before RTL coding begins.  Item #2 states that the subblock 
designer is responsible for RTL coding, synthesis, testbench, and test suite 
without offering a description of the difference between subblock-level 
testcases and macro-level testcases, or the objective of each.  Items #1 and 
#3 talk about a behavioral testbench and a series of testcases, but do not 
mention the characteristics of a good test suite or the process used to 
determine which testcases compose this test suite.

Figure 4-2 shows a box where multiple configuration tests are developed and 
run against the top-level RTL model after functional verification.  The 
effort can be done in parallel by developing and running those testcases 
against the behavioral model during the specification stage.

Figure 4-3 shows a large box labeled "Check against exit criteria".  What 
are those criteria?  Who defines them?  What are the characteristics of 
appropriate exit criteria?

4.2  Contents of a Design Specification
An excellent breakdown of the content of a thorough specification document.  
The topic of specification is a recurring one throughout the book, which 
serves to heighten its importance.

4.3  Top-Level Macro Design

4.3.1     Top-Level Macro Design Process
This section states that a behavioral model may be of little use in designs 
that are dominated by state machines and have little algorithmic content, 
because all of the interesting behavior would be abstracted out of it.  We 
passionately disagree with that statement.

Let's take it from the top.  State machines are an artifact of the 
implementation and are functionally irrelevant.  A behavioral model must 
present to the outside world an accurate description of the functionality 
of the block it models, and that is all the interesting behavior that is 
required.  If latency and clock-cycle allocations created by the 
implementation are important to the overall functionality, then the 
behavioral model must model those characteristics - without implying a 
specific state machine implementation - and those same characteristics must 
be stated in the specification document.  The issues related to coverage 
when verifying state machines are orthogonal to the development of 
functional testcases; the behavioral model cannot be used to evaluate 
their effectiveness, whether the content is highly algorithmic or not.

4.3.2     Activities and Tools

4.4  Subblock Design

4.4.1     Subblock Design Process
The text introduces two important tools that help in the verification 
process: code coverage and linting.  The requirement of 100% statement and 
path coverage at the subblock levels is, in our opinion, too aggressive.  
To achieve this level of coverage will require a lot of effort at the 
subblock testbench level, which is a level that has not been formalized and 
is later described as being ad hoc.

Because the interface of a subblock is subject to change when and if the 
overall macro is redesigned, a lot of that effort will be wasted.  It is a 
better investment of the design team's time to perform basic functional 
verification at the subblock level, and then seek thorough functional and 
coverage verification at the macro level.

Of course, being removed one level from the subblock may not make a goal of 
100% statement or path coverage feasible.  The uncovered statements and 
paths must be examined to check whether they represent false paths or 
non-functional code within the context of the subblock's usage.  If not, 
the macro testsuite needs to be augmented to cover the specific statement 
or path.

4.4.2     Activities and Tools
The order in which the activities are listed suggests that RTL should be 
completely developed and tested before attempting synthesis.  This is the 
correct approach.  However, mentioning linting as an activity that follows 
synthesis is misleading.  Linting should be done throughout the RTL 
development process (as stated in the text).  Linting may catch functional 
errors without a single simulation cycle being spent.  Think of the time
savings.

4.5  Macro Integration

4.5.1     Integration Process

4.5.2     Activities and Tools
Steps 1 and 3 state that a script may be necessary to properly instantiate 
parameterizable subblocks. Is this an acknowledgment that Design Compiler 
sometimes has difficulties with parameters/generics?

Several activities mention scan insertion and later use of static timing 
analysis. Yet, the book mentions no guidelines specifically intended to make 
these activities easier.  Running lint on the entire design is listed last 
and the text also suggests that it is a final step.  We disagree.  Linting
should be performed throughout the top-level macro integration process even 
before a single CPU cycle is devoted to simulation.  It is by far the most 
cost-effective and rapid way of identifying a class of problems that may be 
very difficult to locate by simulation.

4.6  Soft Macro Productization

4.6.1     Productization Process
This is the first time RCS and source code control are mentioned.  A bit late 
in our opinion, given the imperative of its use.

4.6.2     Activities and Tools
The book recommends that both macro models and testbenches be provided in 
both VHDL and Verilog.  Unless the model and testbench are at the RTL level, 
any behavioral style will be onerous to translate.  Writing testbenches at 
the RTL level, although amenable to emulation, requires a significant 
increase in effort over a similar behavioral testbench.  With integrated 
VHDL/Verilog simulation environments available today from all major EDA 
vendors, there is no real need to support two different languages.  Any 
customer serious about reusing blocks will be indifferent to the choice of 
VHDL or Verilog for any block.  Another activity mentions the need to test 
the model and testbench on several simulators, especially with regard to 
VHDL.

Although different VHDL simulators interpret and adhere to the standard 
differently (especially with regard to the availability of VHDL-93 features), 
the book does not mention Verilog's inherent race conditions, nor possible 
different simulation results between different simulators by using different 
command-line options that change the ordering of event processing.  The book 
should contain guidelines to make this activity simpler.  For example, use 
only VHDL-87 features and present rules to avoid race conditions or other 
unspecified behavior in Verilog.
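
To illustrate the kind of Verilog race in question (our example, not the
book's), consider two always blocks triggered by the same edge; nothing in
the language defines which runs first:

```verilog
// A classic Verilog race: a blocking write in one always block
// and a read of the same reg in another, both on the same edge.
// Different simulators (or different event-ordering options)
// may legally execute them in either order, giving different
// results.
module race_demo (clk, d, a, b);
  input  clk, d;
  output a, b;
  reg    a, b;
  always @(posedge clk) a = d;   // blocking write
  always @(posedge clk) b = a;   // reads old or new a?  A race.
endmodule

// Nonblocking assignments remove the race: all right-hand sides
// are sampled before any left-hand side is updated.
module no_race (clk, d, a, b);
  input  clk, d;
  output a, b;
  reg    a, b;
  always @(posedge clk) a <= d;
  always @(posedge clk) b <= a;  // always sees a's previous value
endmodule
```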

5    RTL Coding Guidelines
This is the longest and most detailed chapter in the book. Rather than 
outlining every section and subsection, we will only include the ones that 
require commenting.

There is nothing really new in this chapter to anyone who has been following 
industry-leading conferences such as SNUG, IVC, or VIUF. However, it is good 
to have all of the sometimes-conflicting guidelines that were issued over 
the years consolidated in a single reference book.

5.2.1 General Naming Convention
Table 5-1 shows a naming convention for internal tristate signals.  Although 
such a naming convention is a good thing, internal tristate signals are not 
recommended. Unless special care is taken to ensure that it is impossible to 
have contention on the tristate signal or a floating value, internal tristate 
signals will lower the manufacturing test coverage.

5.2.8 Indentation
The first guideline suggests using 2 spaces for indentation. We find this 
spacing too narrow because it is ineffective in highlighting code structure 
at a glance.  Most language-aware editors use a default indentation level of
3 or 4.  

5.2.12 VHDL Entity, Architecture, and Configuration Sections
The first guideline recommends putting entity, architecture, and 
configuration sections in the same file.  We strongly disagree. Putting them 
in separate files will minimize the amount of required recompilation when 
changes are made to an architecture.  To eliminate the need to know the 
compilation order, the files can be concatenated into a single file for 
delivery to a customer.

5.2.14 Use Loops and Arrays
The recommended coding style shown in Example 5-12 uses a construct that 
does not work with Leapfrog. Instead of using "(others => bbit)", use 
"(temp'range => bbit)".

5.3.1 Use Only IEEE Standard Types
The note about standardizing on either std_logic or std_ulogic fails to 
mention that using std_ulogic causes inadvertent multiple drivers on the 
same signal or internal busses to be detected automatically.  However, as 
also stated, 
because std_logic is somewhat better supported by the EDA tools and available 
packages, it tends to be the default choice.
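
A small VHDL sketch of the point (our example): because std_ulogic is an
unresolved type, a second driver is rejected at compile time, whereas
std_logic would silently resolve the conflict at run time.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity drv_demo is
  port (a, b : in std_ulogic; y : out std_ulogic);
end drv_demo;

architecture rtl of drv_demo is
  signal s : std_ulogic;  -- unresolved: exactly one driver allowed
begin
  s <= a;
  s <= b;  -- second driver: a compile-time error with std_ulogic;
           -- with std_logic this would silently resolve to 'X'
  y <= s;
end rtl;
```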

The guidelines about not using types bit and bit_vector are very 
Synopsys-centric.  There is an IEEE package that provides arithmetic 
operators for these types, but it is not supported by Synopsys.

5.3.4 Include Files
`define symbols are global to the compilation once they are defined and 
should be avoided as much as possible.  There is no way to ensure 
independence and uniqueness of `define'ed symbols between blocks 
independently designed or externally sourced.  Instead, use parameters where 
possible.

If you have to use `define symbols, declare them in the file in which they 
are used, then `undef them at the end of the file.  If they must be used by 
more than one file, `define them in an included external file as per the 
guideline, but also provide an include file to `undef them at the end of 
every file.
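
A sketch of that `define / `undef discipline (file and symbol names are
ours, for illustration only):

```verilog
// ---- block_defs.vh : symbols used by this block ----
`define BLK_WIDTH 16

// ---- block_undefs.vh : remove them again so they cannot ----
// ---- collide with an independently sourced block        ----
`undef BLK_WIDTH

// ---- block.v ----
`include "block_defs.vh"
module block (d, q);
  input  [`BLK_WIDTH-1:0] d;
  output [`BLK_WIDTH-1:0] q;
  assign q = d;
endmodule
`include "block_undefs.vh"  // symbol no longer leaks downstream
```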

5.3.7 Coding for Translation
The last guideline recommends not using code that "modify constant 
declarations".  What?  Are the authors talking about deferred constants?  
In VHDL, constants are constants.  They cannot be modified.  Verilog-XL 
2.4.2 still allows assignments to parameters - albeit with a warning since 
version 2.

5.4.3 Avoid Gated Clocks
It should be mentioned that these guidelines will also help scan
insertion, test coverage, and static timing analysis.

5.4.4 Avoid Internally Generated Clocks
Again, it should be mentioned that these guidelines will also help scan 
insertion, test coverage, and static timing analysis.

5.5.2 Avoid Latches
Avoiding the use of latches is a must.  There is no mention that it helps 
scan insertion, test coverage, and static timing analysis.  The guideline 
suggests three ways to avoid inferring latches. Since the first way mentioned 
is the easiest way to guarantee that no latches will be inferred and 
typically yields the most maintainable RTL description, it is preferred over 
the other two.

A useful guideline not mentioned is the need for a unique naming convention 
for signals intended to be inferred as latches.
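
One reliable way to guarantee no inferred latches (we cannot confirm it is
the book's first suggestion) is to give every output a default value before
any branching, so that all paths through the block assign it:

```verilog
// Combinational block: the default assignment at the top covers
// every case the branches below leave out, so no latch can be
// inferred regardless of how the case statement evolves.
module decode (sel, a, b, y);
  input  [1:0] sel;
  input  a, b;
  output y;
  reg    y;

  always @(sel or a or b) begin
    y = 1'b0;            // default: no unassigned path, no latch
    case (sel)
      2'b00: y = a;
      2'b01: y = b;
    endcase
  end
endmodule
```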

5.5.4 Specifying Complete Sensitivity Lists
When will the standardization committees listen to the users and standardize 
an automatically derived sensitivity list using constructs, such as 
"always @ (*)" and "process (*)" ? The Combinatorial Blocks paragraph only 
lists the VHDL signal assignment and the Verilog non-blocking assignment 
operator.  It should also include VHDL's variable assignment (:=) and 
Verilog's blocking assignment (=).

5.5.5 Blocking and Nonblocking Assignments (Verilog)
The rule states that the non-blocking assignment must be used in 
always @ (posedge clk) blocks.  That is true only for regs that are to be 
inferred as flip-flops.  The computation of intermediate values, a technique 
that often minimizes area, requires the use of the blocking assignment.

Furthermore, the only justification for the rule is that, otherwise, 
simulation results could differ between RTL and gates.  If the rule is not 
followed, race conditions will be created and results may differ between 
RTL simulation on different simulators or when using different command-line 
options.
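
The intermediate-value technique can be sketched as follows (our names):
blocking for the temporary computed and consumed within the same edge,
nonblocking for the inferred flip-flops.

```verilog
// Blocking for the temporary, nonblocking for the registers.
module accum (clk, a, b, q, carry);
  input        clk;
  input  [7:0] a, b;
  output [7:0] q;
  output       carry;
  reg    [7:0] q;
  reg          carry;

  always @(posedge clk) begin : add
    reg [8:0] sum;        // intermediate value, not a flip-flop
    sum   = a + b;        // blocking: usable immediately below
    q     <= sum[7:0];    // nonblocking: the inferred registers
    carry <= sum[8];
  end
endmodule
```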

5.5.6 Signal vs. Variable Assignments (VHDL)
Apparently, this section was written for symmetry with Verilog.
Unless one uses shared variables (which are a VHDL-93 feature -
and should be avoided), it is impossible to get different simulation 
results between RTL on different simulators, or between RTL and gates 
(assuming bug-free simulator and synthesis tools). Because there is no way 
for any other process to use the value in variable "c", the example shown as 
poor coding style in Example 5-31 would be optimized by synthesis anyway.  
The recommended coding style is the only one that will work.  There is no way 
for someone to develop a working model using the poor style.  Hence, the poor 
style is actually an invalid style.

5.5.8 Coding State Machine
The second guideline recommends using `define symbols in Verilog.  See the 
comment on section 5.3.4 for an objection.  The third guideline recommends 
keeping FSM logic in separate modules or in entities from other logic.  
This would quickly create a large number of files and hinder maintainability.
Maintainability concerns should be the primary criterion for deciding when and 
where to partition code into separate modules/entities.  The promise of a 
justification later in the chapter is not fulfilled.

6    Macro Synthesis Guidelines

6.1  Overview of the Synthesis Problem
Toward the end of the section, the book states that coding for functionality 
first, then fixing timing problems later, can cause delays and poor overall 
performance.  But how does one know whether the RTL will meet timing?  Unless 
a designer knows that a module or entity currently being coded is going to be 
a critical section, the primary criterion should be functional correctness and 
maintainability.  On the other hand, designers should always be aware of the 
implications of the design architecture implied by the RTL and of the timing 
difficulties it may introduce.  This statement also contradicts the earlier 
guideline in section 5.2.14, which was to use loops. Typically, unrolled 
loops generate long cascades of decode logic that can quickly become the 
critical path.

6.2  Macro Synthesis Strategy

6.2.1     Macro Timing Budget
The method for specifying combinational delays through a macro described 
as "preferred" is definitely a Synopsys-centric statement.

6.2.2     Subblock Timing Budget
The good nominal starting point mentioned for loading specification is 
definitely too optimistic.  Why specify the lowest possible driving 
capability for an input (pessimistic), and the lowest possible load for 
an output (optimistic)?  We recommend an output load of 4 flip-flop data 
inputs.

6.2.3     Synthesis in the Design Process

6.2.4     Subblock Synthesis Process

6.2.5     Macro Synthesis Process
No mention of a suitable load factor to assume on primary outputs. Since one 
must include the expected wireload on the primary outputs, an output load of 
4-8 flip-flop data inputs is probably appropriate.

6.2.6     Wireload Models

6.2.7     Preserve Clock and Reset Networks
Definitely Synopsys-centric.

6.2.8     Code Checking Before Synthesis

6.2.9     Code Checking after Synthesis
Now, who wrote this section?  The explicit recommendation to use Formality, 
which is not the industry-leading formal verification tool at this time, to 
verify the equivalence of the post-synthesis netlist and RTL, is clearly 
Synopsys-centric.  Of course, if the macro is being shipped as RTL code 
(or soft-macro), this step is not required.

6.3  High-Performance Synthesis

6.4  RAM and Datapath Generators
A big sales pitch for Module Compiler. Stay tuned for more in later chapters!

6.5  Coding Guidelines for Synthesis Scripts
Oddly enough, there is no recommendation to use TCL or DC perl.

7   Macro Verification Guidelines
Overall, this is an excellent chapter that describes how to design powerful 
testbenches and a proper verification strategy.  Some minor weaknesses are 
outlined below.

7.1  Overview of Macro Verification
The fourth bullet states that the investment in a properly designed testbench 
can be reused in the future as a macro may be substantially redesigned.  This 
argument goes against their recommendation that a significant portion of the 
verification be spent at the subblock level, as a redesigned macro only has 
to guarantee equivalence at its top-level interface, not down to the 
subblocks.  All the effort invested in reaching the required 100% code and 
path coverage will likely not be reusable.  This result further supports our 
claim that only basic functional tests should be performed at the subblock 
level.

7.1.1     Verification Plan
One bullet is missing: the plan lets you know when you are done!  Otherwise, 
how do you know that you have written all the testcases that are necessary 
and sufficient?  A well thought-out and reviewed testplan is an invaluable 
tool in planning and scheduling the verification effort.  Otherwise, 
verification is simply an activity that is done until the design must be 
shipped, with the designers never knowing whether it has been sufficiently 
tested.

7.1.2     Verification Strategy
The authors give an excellent definition of a nomenclature for various 
testing strategies.

The simulation section is too focused on RTL simulation.  A proper 
verification strategy starts with the development of the recommended 
behavioral model during the specification phase, and allows the parallel 
development of testbenches during the RTL coding phase.  The behavioral 
model that is created is also a valuable tool for the users of the macro.  
The users can quickly simulate the functionality of the macro in their 
system design, while preserving the confidentiality of the macro's 
implementation details for the developer.

The Prototyping section implies that there should be guidelines for synthesis 
into an FPGA prototype.

7.1.3     Subblock Simulation
As mentioned earlier in sections 4.4.1 and 7.1 above, we disagree with the 
goal of 100% code and path coverage at the subblock level.  While it is much 
easier to achieve at that level, it will result in a lot of duplicated effort 
with the macro-level verification and may not be reusable when the macro is 
redesigned.  Also, a lot of time may be spent trying to exercise
paths that are not functional when integrated in the macro (that is, false 
paths).

The second paragraph makes an important statement: Output should be checked 
automatically.  However, the paragraph continues with the statement that the 
checking logic must be more robust than the logic being checked.  How does 
one ensure that or measure it?  Currently, the best way to achieve confidence 
in the testbench and checking code is to have it developed independently from 
the logic designer (which is not what is recommended in the book for 
subblocks), and to have it tested against a behavioral model (again, not 
recommended at the subblock level).  For even greater confidence, the 
behavioral model should include known faults to ensure that the checkers 
detect the problem.

7.1.4     Macro Simulation
Because of our stance on the subblock-level verification strategy, our 
macro-level verification strategy differs from the authors' recommended 
strategy.

100% code and path coverage is not the ultimate goal.  Any uncovered path 
and statement should be checked, and it must be understood why they are not 
exercised in the current environment.  If they should be exercised, then 
the verification plan must include such tests.

Macro-level simulation is the most important verification area, as it is 
the ultimate judge of the correctness of the design.  It should be reused in 
any redesign of the macro and a subset of it should be shipped with the 
macro to customers.

7.1.5     Prototyping
The book states that the proposed methodology encourages rapid prototyping.  
However, no guidelines or techniques are presented to speed up prototyping 
efforts.

The last paragraph is excessively pessimistic.  A properly executed 
verification plan, even without prototyping, can achieve first-pass fully 
functional silicon (as we have proven on numerous occasions).  In certain 
applications such as video processing, microprocessors and asynchronous event 
handlers, simulation times may be prohibitively long, and therefore
prototyping is an excellent solution for verifying the correctness of the 
design.  Relatively simple macros with few combinations of internal states 
and data units usually do not require prototyping.

7.1.6     Limited Production

7.2  Testbench Design

7.2.1     Subblock Testbench
The authors state that subblock testbenches tend to be ad hoc, which is a 
further admission that they should not be the primary vehicles for 
high-coverage verification.  The Output Checking section is a nice overview.  
Guidelines and specific techniques should be described.

7.2.2     Macro Testbench
The verification set-up shown in Figure 7-2 is a very simple testbench 
structure that looks more complicated than it really is.  Very complex 
testcases and reliable regression can be accomplished with such an 
environment.  In general, we do not use command files due to an excessive 
proliferation of files; instead, we use a higher-level module/entity that 
dynamically controls the bus-functional models.

Testcases using the environment shown in Figure 7-3 would not be as efficient 
as testcases using the previous environment.  It would be preferable to 
recreate the interesting scenarios in the latter, or use execution traces 
instead of HW/SW cosimulation in the former.  However, where simulation using 
representative algorithms from the application software is required, an 
environment akin to the one shown is required.

7.2.3     Bus Functional Models
Contrary to what is stated in the book, bus-functional models are typically 
written as behavioral descriptions, not RTL.  The bus models from Synopsys 
LMG and some other models use some form of external command language.  The 
more powerful ones can be controlled directly from VHDL or Verilog.  These 
languages allow multiple bus-functional models to interact without requiring
complex multi-processing programming techniques, extraneous synchronization 
artifacts, or devising yet-another control language.

7.2.4     Automated Response Checking
This is a very weak section.  Automated response checking is a very powerful 
technique for ensuring that no errors slip by.  Guidelines and techniques 
should be presented.

The "effective" technique of comparing a simulation model against a known 
good model or implementation is only rarely applicable.  Using a behavioral 
model as a reference is not much better: who made sure that it was correct, 
and how?  Automated response checking should be used when verifying behavioral 
models, too.

7.2.5     Code Coverage Analysis
Great overview and explanation of the available metrics.  The second-to-last 
paragraph on page 133 ought to be in bold characters.  The section should 
state clearly that code coverage tools indicate when your verification effort 
is complete or close to completion.  Code coverage is not an audit of the 
quality or correctness of the testcases.

The last paragraph in the section mentions the lack of tools to verify the 
coverage of state or path conjunctions between separate state machines.  That 
is exactly what 0-in's tool attempts to do.
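The state-conjunction coverage we refer to is easy to define, even if the
tools of the day do not report it.  Here is a minimal sketch (the state names
and the trace are illustrative, not from any tool): take the cross product of
the states of two independent machines and measure which pairs the simulation
actually visited.

```python
from itertools import product

# Two independent state machines (illustrative states)
STATES_A = ("IDLE", "REQ", "GRANT")
STATES_B = ("EMPTY", "FILL", "DRAIN")

# (state_of_A, state_of_B) pairs observed during a hypothetical simulation
trace = [("IDLE", "EMPTY"), ("REQ", "FILL"), ("GRANT", "FILL")]

all_pairs = set(product(STATES_A, STATES_B))
hit_pairs = set(trace)
cross_coverage = 100.0 * len(hit_pairs) / len(all_pairs)
print(f"cross coverage: {cross_coverage:.1f}%")   # 3 of 9 pairs -> 33.3%
```

Plain statement coverage could report 100% on both machines individually while
two thirds of the state conjunctions above were never exercised.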

7.3  RTL Testbench Style Guide
This section leaves us cold.  One must have very good reasons to implement a 
testbench in RTL.  With the significant effort that goes into behavioral 
testbenches, the additional burden to make them synthesizable simply to suit 
the simulation technology had better be worthwhile.  If it must be done, 
minimize the portion written in RTL: only the part of the bus-functional 
model that needs to drive or monitor signals on a clock-by-clock basis.  
Leave the data generation and verification to high-level behavioral routines.  
Exchange data between the behavioral and RTL sections of the testbench at 
well-defined times with the largest amount of information possible every time 
to minimize the interaction.
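The split we advocate can be sketched as two layers exchanging a whole data
unit at a time.  Python stands in for both sides here, and the packet example
and function names are our own illustration: the behavioral side builds a
complete packet in one call, and the cycle-level side serializes it beat by
beat.

```python
def behavioral_generate(n):
    """High-level side: build a complete data unit in one call."""
    return [i & 0xFF for i in range(n)]

def cycle_level_drive(packet):
    """Low-level (RTL-style) side: one beat per 'clock'; returns beats driven."""
    beats = []
    for word in packet:            # one iteration ~= one clock cycle
        beats.append(word)         # a real BFM would assert bus signals here
    return beats

pkt = behavioral_generate(4)       # one hand-off for the whole packet
assert cycle_level_drive(pkt) == [0, 1, 2, 3]
```

Only `cycle_level_drive` would need to be synthesizable; all data generation
and verification stays behavioral, and the two sides meet once per packet
rather than once per clock.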

7.3.1     General Guidelines

7.3.2     Generating Clocks and Resets
Do not generate clocks, data, and resets from a single process.  OK, now 
tell us why.

The next guideline recommends that asynchronous clocks be generated from 
separate processes. OK. How do you ensure skew coverage?

The fourth guideline talks about reading and applying vectors one clock cycle 
at a time.  Vector-based verification techniques are not an efficient way to 
reach the desired functional and code coverage goals.
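The "skew coverage" question above can be made concrete.  Assuming two
free-running asynchronous clocks with the illustrative periods below, one can
bin the relative phase of their edges and check that every alignment bucket
gets exercised -- something separate clock processes alone do not guarantee.

```python
# Illustrative periods; real values would come from the design spec.
PERIOD_A = 10          # ns
PERIOD_B = 7           # ns, asynchronous (co-prime) to clock A
BUCKETS = 7            # phase-alignment bins we want to see covered

hit = set()
t_a = 0
for _ in range(100):               # 100 edges of clock A
    t_a += PERIOD_A
    phase = t_a % PERIOD_B         # alignment of A's edge within B's period
    hit.add(phase * BUCKETS // PERIOD_B)

print(f"phase buckets hit: {len(hit)}/{BUCKETS}")   # all 7 buckets hit
```

With co-prime periods the alignments sweep naturally; with related or jittery
clocks, the same measurement will expose alignments that never occur and
therefore never stress the synchronizers.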

7.4  Timing Verification

8    Developing Hard Macros
Section 8.6 has an excellent overview of possible modeling abstractions and 
their typical use and benefits.  Section 8.6.1 sounds like a pitch for VMC 
and section 8.7 is a blatant pitch for CBA.  The entire chapter contains a 
lot of Synopsys-centric statements and would benefit from a broader view of 
the available tools and technologies across the industry.

9    Macro Deployment: Packaging for Reuse
Good chapter but it has the feel of a grocery list.  Much of the information 
is a repeat of material and arguments previously made.  However, the 
presentation has the benefit of making this chapter stand on its own.

10   System Integration with Reusable Macros
This is the only chapter answering the promise of addressing the issues faced 
by those using reusable blocks.  The previous chapters were all devoted to 
the reuse provider.  This lack of coverage of the user's side is probably 
because the industry has a lot of experience in creating designs and knows 
the characteristics of good designs, but has little experience with the 
actual reuse of these (good) designs.  With Synopsys and Mentor Graphics 
pushing reuse so strongly, why is this chapter on integration so weak?

In short, this book lacks an important chapter: system design with reusable 
macros.

10.1 Integration Overview

10.2 Integrating Soft Macros

10.2.1    Soft Macro Selection
The authors provide a shopping list of broad characteristics to look for.  
The reader is still left to his or her own devices to define what is an 
acceptable level of quality or robustness.  In reality, a soft macro will 
always remain an "experienced product".  The experience is similar to 
purchasing a car: until the car (or macro) has been fully experienced, well 
beyond the point where it can be returned or exchanged, one cannot know 
the full extent of the quality of the product.  The selection will usually 
come down to trust between the customer and the supplier, the quality of 
past experience, and the popularity of the product.

10.2.2    Installation
A trivial and self-evident section.

10.2.3    Designing with Soft Macros
Surely, there is more to designing with soft macros than simply setting 
parameters and instantiating them.  Otherwise, everyone would be reusing 
designs, right?  How do you architect a system to be able to use a macro? 
What are the cost/benefit trade-offs between using an obviously 
overpowered macro versus designing only the required functionality from 
scratch?  What are the challenges of integrating macros designed by 
different vendors using different interface standards?  How do I integrate 
the software drivers for all of the macros in my design?  How do I integrate 
test structures from hard or soft macros, with scan or logic BIST, into 
my overall chip test strategy?  More proof of a weak chapter.

10.2.4    Meeting Timing Requirements
Again, nothing very specific here.  What makes a script robust, yet 
flexible?  How do I evaluate a macro so I can be assured I will be able to 
meet timing?

10.3 Integrating Hard Macros
Nothing new here.  This entire section does not present anything different 
from what has been done in the past when integrating RAM or ROM blocks, an 
older form of hard macro. 

10.4 Integrating RAMS and Datapath Generators
RAMs have been integrated in designs for a long time, so the issues and 
techniques associated with them are not new.  However, they are worth 
repeating for completeness.

The second half of the section simply encourages the use of Module Compiler 
and is a blatant Synopsys sales pitch.

10.5 Physical Design
This is a GREAT overview of the back-end process. A must-read for any new 
designer.

11   System-Level Verification Issues

11.1 The Importance of Verification
YES! YES! YES!  Verification must be planned and scheduled, and resources 
must be allocated for it.  As with design, there is a process that can be 
followed for verification.

11.2 The Test Plan
The suggested step-by-step approach is excellent.  However, leaf nodes may 
not be visible for testing as stand-alone units from the boundary of the 
"black box".  It may be necessary to introduce features in the design that 
will make verification easier.

11.2.1    Block Level Verification
Because we disagree with the recommended thoroughness of the block-level 
testing in favor of a more extensive top-level verification approach, we do 
not agree with the conclusion of the second paragraph.  The test plan 
should be (and can be) thorough enough and properly executed to ensure 
first-pass success.

11.2.2    Interface Verification
If only block interfaces were always as regular as outlined in the first 
paragraph.  Reality is often more complex, with interfaces not directly 
accessible from the top-level.  The level of transaction controllability 
on a particular interface may not be sufficient for such a neat verification
model.

Transaction Verification
If there is a large number of different interfaces, listing all possible 
transactions may not be feasible.  This is likely to occur if macros are 
sourced from different providers using different interface standards.

The second paragraph describes what used to be the recommended verification 
approach from Synopsys, under the term "pins-out" verification.  Is that no 
longer the case?

The last paragraph in this section describes a scenario that we have never 
seen occur.  It would be nice to be able to perform a verification that way, 
but RTL interfaces are usually so tightly coupled with the rest of the block 
that it is not easy to replace the core of a macro with a behavioral model 
without touching the interface logic, especially if the macro was purchased 
or designed externally.  And why wasn't this capability mentioned in the 
macro packaging section?  Too little, too late.

Data and Behavioral Verification
Figure 11-1 is not a very clear illustration of what they are talking about 
in the second paragraph.

The third paragraph is too pessimistic.  Automated response checking can be 
verified using a slave bus-functional model (e.g. "watch this bus and expect 
a write cycle at address X with data Y and generate a parity error on the 
acknowledge").  Also, the expected response must be an integral part of the 
verification plan.  The verification strategy must take into account how the
expected response will be specified and monitored.
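The slave bus-functional model we describe amounts to an expectation queue
checked by the bus monitor.  Here is a minimal sketch (the transaction fields
and API are our own illustration, not from the book): the testcase posts what
it expects, and every observed cycle is compared automatically.

```python
from collections import deque

class SlaveBfm:
    """Slave BFM with automated response checking (illustrative API)."""
    def __init__(self):
        self.expected = deque()
    def expect(self, kind, addr, data):
        """Testcase side: 'watch this bus and expect ...'."""
        self.expected.append((kind, addr, data))
    def observe(self, kind, addr, data):
        """Monitor side: called for every bus cycle it decodes."""
        assert self.expected, f"unexpected {kind} cycle at {addr:#x}"
        want = self.expected.popleft()
        got = (kind, addr, data)
        assert got == want, f"got {got}, expected {want}"

bfm = SlaveBfm()
bfm.expect("write", 0x20, 0x55)     # expect a write at address X with data Y
bfm.observe("write", 0x20, 0x55)    # monitor sees the matching cycle
assert not bfm.expected             # nothing left unchecked at end of test
```

Error injection (such as the parity error on the acknowledge mentioned above)
fits the same structure: the expectation simply carries the response the slave
should generate as well as what it should see.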

The fourth paragraph makes an important statement: The verification plan 
should also cover unexpected and invalid input patterns and the expected 
response of the design.  If not for testing the design, these are at least 
needed to test the testbench!

11.3 Application-Based Verification

11.3.1    A Canonical SoC Design

11.3.2    A Testbench for the Canonical Design
The suggestion of replacing the CPU and memory model with a cosimulation 
environment is an excellent one.  An easier, but less flexible, approach 
would be to use an execution trace played back on a bus-functional model of 
the CPU.  The next paragraph mentions abstracting the RTL models with C++
models.  Interesting concept, but how does one ensure the conformance of the 
two models?

11.4 Fast Prototypes vs. 100 Percent Testing
The first paragraph mentions the old "90% of the ASICs operate correctly, but 
50% fail in the system" quote.  We heard these metrics back in 1988.  Have 
there been no improvements or changes in the last 10 years?

The rest of the section is an excellent analysis of the value of reuse.  This 
value can be translated into dollars, although it is situation-specific.

The analysis also talks about blocks and chips verified to a 90%
or 99% bug-free probability level.  How does one measure this probability 
with this kind of resolution?

11.4.1    FPGA and LPGA Prototyping
Another disadvantage of the FPGA prototype not mentioned: the interconnect 
between FPGAs is not synthesized from the "golden" RTL that is being 
verified.  As a result, this kind of FPGA prototyping does not provide any 
real verification of what will ultimately be implemented.

11.4.2    Emulation Based Testing

11.4.3    Silicon Prototyping

11.5 Gate-Level Verification

11.5.1    Sign-Off Simulation
There is no mention of Nextwave's Epilog simulator that can perform a 
min/max simulation with full timing spread and correlated operating 
conditions in a single run.  What if one of the blocks is near minimum 
timing, while another block is near maximum timing? An all-best-case or 
all-worst-case simulation will not catch these timing hazards.

11.5.2    Formal Verification

11.5.3    Gate-Level Simulation with Unit-Delay Timing

11.5.4    Gate-Level Simulation with Full Timing

11.6 Choosing Simulation Tools
The authors make a key statement in the last bullet: Once designs have 
stabilized and are relatively bug-free, they are migrated to emulation.  
Emulation requires long cycle times between debug iterations and should be 
used only to uncover long-run defects or to perform application-based 
testing.

The requirements for synthesizable testbenches make us shudder.

11.7 Specialized Hardware for System Verification

11.7.1    Accelerated Verification Overview

11.7.2    RTL Acceleration

11.7.3    Software Driven Verification

11.7.4    Traditional In-Circuit Verification

11.7.5    Support for Intellectual Property

11.7.6    Design Guidelines for Accelerated Verification

12   Data and Project Management
These are absolutely essential tools.  Along with maintaining an 
always-working model, one should perform regular tagging or lineups so that 
any specific version of the model can be recreated at any time.

Bug tracking is essential.  With large teams distributed across many sites, 
the 3M Post-It note and water-cooler bug tracking systems do not work.  Bugs 
must be tracked, assigned, measured, and reviewed as part of the regular 
team meetings.  Anything that does not work or appears not to work should be 
entered in the bug tracking system.  That includes errors in the 
specification document, ambiguous descriptions, and omissions, as well as
functional errors in testcases.
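The point of the paragraph above can be put in data form.  This sketch is
purely illustrative -- the record fields and categories are our own -- but it
shows the discipline: spec problems and bad testcases are tracked in the same
system as RTL bugs, and the open items are what the weekly review walks
through.

```python
# Everything that looks wrong is a tracked record, not a Post-It note.
bugs = [
    {"id": 1, "kind": "rtl",      "state": "open",   "owner": "alice"},
    {"id": 2, "kind": "spec",     "state": "open",   "owner": "bob"},    # ambiguous spec wording
    {"id": 3, "kind": "testcase", "state": "closed", "owner": "carol"},  # bad expected response
]

# Material for the regular team meeting: open items grouped by owner.
open_by_owner = {}
for b in bugs:
    if b["state"] == "open":
        open_by_owner.setdefault(b["owner"], []).append(b["id"])

print(open_by_owner)
```

Whatever tool is used, the essential properties are the same: every report is
assigned, its state is visible, and the counts can be measured over time.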

13   Implementing a Reuse Process
A critical chapter that falls short.  It is probably the key to a successful 
reuse design methodology.  Section 13.1 does not mention the creation of a 
reward system for creating reusable designs, for reusing designs, or for 
architecting designs so that reusable components can be included.



Copyright 1991-2024 John Cooley.  All Rights Reserved.

   !!!     "It's not a BUG,
  /o o\  /  it's a FEATURE!"
 (  >  )
  \ - / 
  _] [_     (jcooley 1991)