!!!     "It's not a BUG,                           jcooley@world.std.com
  /o o\  /  it's a FEATURE!"                                 (508) 429-4357
 (  >  )
  \ - /                  
  _] [_                  Boston SNUG'00 Trip Report
                                  - or -
   "17 Engineers Review Boston SNUG 2000 in Newton, MA, Sept. 20-22, 2000"

                              by John Cooley

       Holliston Poor Farm, P.O. Box 6222, Holliston, MA  01746-6222
     Legal Disclaimer: "As always, anything said here is only opinion."


    "I was the guy who had his 6 year old daughter in tow during 'Vendor
     Night' where all the non-Synopsys EDA vendors came.  She was amazed.
     Thought it was the coolest thing in the world.  Shirley Temple gets
     free toys.  On the way out, we had this conversation:

             Me: So, what'd you think?

       Daughter: It was really neat.

             Me: What'd you think of all the stuff we saw.

       Daughter: There was lots of stuff on the computers but it didn't do
                 a lot, did it?  Those guys just talk a lot, right?

             Me: I can't argue with that.

     It's amazing what truths children see, John."

         - Mark Garber of Equipe Corp.


    "I guess it was because I made those anti-VHDL statements.  I knew those
     people were going to get me some day."

         - Cliff Cummings, who lost out on the 'Boston SNUG Best Paper'
           competition by 0.03 points to a Library Compiler paper by Steve
           Start of AMI.  Cliff won the 'Best Paper' for the prior 2 SNUGs.


    "Our paper and presentation went very well.  The presentation was given
     on the afternoon of the first day of the conference and we received
     several nice compliments on our work.  On Thursday night, it was a nice
     surprise to receive the conference 1st place best paper award as voted
     by conference attendees.  Later that evening at "Synopsys Night," I
     learned from several attendees that winning this award also came with
     a particular obligation, by SNUG tradition, namely that several members
     of the SNUG Technical Committee adjourn to the local "watering
     hole" for refreshments (with whoever else they can round up at the
     time) and do their best to run up a large bar tab that the SNUG Best
     Paper winner is obligated to pay.  It's a tradition.  Last year, the
     bar tab was over $300.  I got off this year having only to pay $90."

         - Steve Start of American Microsystems, Inc.


( BSNUG 00 Subjects ) ----------------------------------------- [ 10/13/00 ]

Item  1 The Numbers
Item  2 The Bigwig's Big Speech
Item  3 SystemC C++, Superlog, Verilog, VHDL, C-Level, Vera, Verisity
Item  4 VCS, Verilog-XL, Scirocco VHDL, VSim, Roadrunner, Radiant, FSMs
Item  5 Veritools, Tensilica, Synopsys LM Models, Silicon Perspectives
Item  6 Verif., Formality, C++ Verif., A Synthesizable 'Watchdog', 0-In
Item  7 Altera, Xilinx, FPGA Express, FPGA Compiler II, Synplicity
Item  8 Library Compiler, Non-Linear Delay Models, IBM's DCL, IEEE 1481
Item  9 PrimeTime Static Timing Analysis (STA)
Item 10 Dc_shell vs. DC-Tcl, Tcl Gotchas, TOPS Tcl Synthesis Environment
Item 11 Design Compiler 2000, Presto, Floorplan Manager, ACS
Item 12 Automatic Chip Synthesis (ACS)
Item 13 Design-For-Test (DFT), Scan, ATPG
Item 14 Power Compiler, Prime Power, Module Compiler
Item 15 Physical Compiler (PhysOpt), Chip Architect, FlexRoute, FlexPlace
Item 16 Linux Farms, Sun Workstations
Item 17 Synopsys DesignSphere Website
Item 18 The Hotel, Red Sox, Vermont Inns


( BSNUG 00 Item 1 ) ------------------------------------------- [ 10/13/00 ]

Subject: The Numbers

VOTING WITH THEIR FEET:  To get an idea of user interest in a particular
topic, there is no better way than a headcount of the number of attendees in
each talk.  Overall, 256 people attended this year's Boston SNUG gathering.


  Wednesday, Sept. 20                                  Number Of Attendees

    9:00 - 12:15  (WA1) Tutorial on PrimeTime              104 + 17 standees
    9:00 - 12:15  (WA2) Tutorial on VCS                         28
    9:00 - 12:15  (WA3) Tutorial on Power Compiler              32

   12:15 -  1:30  Lunch                                        157

    1:30 -  3:30  (WB1) Users on Makefiles, Tcl, Scripting     103
    1:30 -  3:30  (WB2) Configurable Regs, DW Debugger          33
    1:30 -  3:30  (WB3) Library Compiler, Libs                  21

    3:30 -  5:00  Sun EDA Compute Farm Talk                    146

    5:00 -  8:00  EDA Vendor Fair                          Est. 250


  Thursday, Sept. 21

    9:00 - 10:15  Keynote Address (Aart's Speech)              161

   10:30 - 12:30  (TA1) PhysOpt, PrimeTime, PowerMill           98
   10:30 - 12:30  (TA2) FPGA Express / FPGA Compiler II         28
   10:30 - 12:30  (TA3) C++ BFM Verif, Watchdog Design       49 + 7 standees

   12:35 -  1:45  Lunch                                        188

    1:45 -  3:45  (TB1) Module Compiler, Behavioral Compiler    23
    1:45 -  3:45  (TB2) ACS, FSM Synthesis, Low Power          107
    1:45 -  3:45  (TB3) Formality, Verification                 38


    4:00 -  4:45  Fireside Chat                            Est. 150

    5:30 -  8:00  Synopsys AE Night                        Est. 250


  Friday, Sept. 22

    9:00 - 12:15  (FA1) Tutorial on Physical Compiler           88
    9:00 - 12:15  (FA2) Tutorial on TetraMax, DFT, Test         38
    9:00 - 12:15  (FA3) Tutorial on Advanced Formality          17

   12:15 -  1:30  Lunch                                        136

    1:30 - 4:45   (FB1) Tutorial on DC 2000 Improvements        52
    1:30 - 4:45   (FB2) Tutorial on Adv. FPGA Compiler II       11
    1:30 - 4:45   (FB3) Tutorial on SystemC C++ Modeling        36
    1:30 - 4:45   (FB4) Tutorial on Scirocco VHDL                6


( BSNUG 00 Item 2 ) ------------------------------------------- [ 10/13/00 ]

Subject: The Bigwig's Big Speech

WHAT WAS NOT SAID:  As usual, the Boston SNUG keynote was given by the CEO
of Synopsys, Aart De Geus.  Afterwards, as usual, Aart was at the customer
cocktail party.  Everyday Synopsys users love this aspect of SNUGs because
they get to connect *directly* to the Head Honcho of the company.  (Many
software company execs, once their company hits the big time, tend to become
very aloof from their average everyday user -- usually stupidly rationing
their face time to only the management of their biggest customers.)  The
big news at this keynote address wasn't in what Aart said as much as what
he *didn't* talk about...


    "Aart said nothing about SystemC.  Isn't this his baby?"

         - an anon engineer


    "Thursday Morning I attended the keynote speech of Art de Geus.  His
     talk centered on the following top 5 issues:

           1) Timing Closure
           2) Functional Verification
           3) IP reuse
           4) Signal Integrity
           5) Test vector generation/flows

     SystemC and C synthesis were not mentioned.  Timing closure is the big
     issue.  At .18 um and below, interconnect delay is 70-80% of the total
     delay; the gate delay is small.  Physical Compiler has been used by
     NVidia for 750K gates / 245K instances.  PhysOpt has over 20 tapeouts.

     For verification, Linux farms seem to be taking off, with all Synopsys
     verification tools running on Linux.  Verification is 70% of the
     project and growing.

     As far as test vectors, Synopsys is starting to work w/ Test Companies
     to deliver complete solutions.  They already own 95% of the test vector
     market with Tetramax and other tools.

     Some statistics on Synopsys: 3,000 employees, #1 in product revenue and
     customer satisfaction.  20-25% of revenue goes back into R&D, which is
     in the top 3 of all companies, not just EDA.  Total revenue last year
     was $806 million.  They are moving to about 75% time-based licenses
     from the traditional permanent licenses to help smooth their cash flow."

         - Tim Wilson of Intel


    "Keynote Address:

     Aart de Geus, Chairman and CEO, gave the usual address describing the
     EDA roadmap and where Synopsys sits.  He also gave a Wall Street
     explanation for the recent switch to time based licensing.  Some
     interesting technical tidbits:

        o Synopsys Professional Services is presently doing RTL to GDSII.
        o PrimeTime is now in beta with crosstalk analysis capability
        o plans to integrate signal integrity capability into Physical
          Compiler
        o Synopsys is working with various tester companies to further
          bolster their test offerings.

     Nothing on SystemC."

         - Bob Wiegand of NxtWave


( BSNUG 00 Item 3 ) ------------------------------------------- [ 10/13/00 ]

Subject: SystemC C++, Superlog, Verilog, VHDL, C-Level, Vera, Verisity

THE SUPERLOG EVOLUTION:  Instead of hardware design languages converging on
one standard, they're splintering into at least 7 different flavors with
each splinter having its own special constituency.  Out of this chaos, it
appears (oddly enough) that Superlog is gaining in popularity with many of
the very experienced (and influential) veteran chip designers.  Also, it
was surprising not to find a single paper on Vera at the Boston SNUG.


    "I don't have the time to write a trip report but I did notice a few
     things you might want to include in your report.  The most interesting
     part of Aart de Geus' keynote address was what he did not talk about.
     He did not mention SystemC at all.  Most of the focus was on timing
     closure and how Physical Compiler is going to help you with both
     timing closure and crosstalk.  When asked, he mentioned SystemC for
     verification, but said that it is important not to get into a
     language war such as the one with Verilog and VHDL and therefore you
     should use C.  Strange, when there are SystemC, C-Level, and SpecC, to
     mention a few.  Now we have five languages instead of two, if you do
     not count Vera and E.  My vote for verification language still goes
     to Superlog since it is an evolutionary path from what we do today
     coding Verilog.  Ideally, I want all Superlog features in Verilog so
     I can truly have one language for design and verification."

         - Anders Nordstrom of Nortel


    "The bar gathering was a good opportunity to discuss technical topics
     in a round-table setting.  Among attendees were Kurt Baty (consultant),
     Anders Nordstrom (Nortel Networks), and John Cooley (the ESNUG guy).
     Kurt and Anders are members of the 1364 Verilog Behavioral task force
     and know Steve Wadsworth.  One of the interesting things that we talked
     about was SystemC.  I asked Kurt and Anders what they thought.  Kurt
     said, as a consultant, there were definitely things that he would use
     it for but implied that Verilog or VHDL are still more suitable for
     other aspects of design problems that he works on.  Anders said he
     preferred Superlog over C.  I agree.  Superlog looks like the better way
     to go.  It's geared toward SoC as a one-language solution and
     simplifies aspects of software/hardware co-design because design teams
     use a common language.  Being a superset of Verilog, it incorporates
     some of the more powerful constructs of C while allowing use of older
     proven design blocks or IP written in Verilog.  It's hard to be very
     enthusiastic about SystemC.  I came away with a "wait and see"
     impression of SystemC."

         - Steve Start of American Microsystems, Inc.


    "Specman v. Vera v. Rave?  Please.  We're a startup.  We're doing lots
     of good verification, finding a high degree of correlation with the
     lab and best of all, finding lots of bugs using good old Verilog.
     Vera/Specman/Rave have a place, I guess.  Just not here."

         - Mark Garber of Equipe Corp.


    "Didn't hear anything about SystemC (C++ used for hardware modeling and
     eventually direct to synthesis), there were zero papers on the subject,
     still hear about it alot in the press, but the users aren't using it
     yet."

         - Brian Fall of Microchip Technology, Inc.


    "SystemC/C++ Modeling:

     This is cool stuff.  It allows the use of C++ to write cycle based
     behavioral models and RTL.  The RTL subset is small and restrictive
     compared with what you can simulate.  However, it seems extremely
     powerful
     because it gives you the full use of C++ to write simulation models at
     a very high-level.  This seems perfect for architecture development.
     Also, it's free and synthesis companies are supporting it.

     Drawback:

     It is a cycle-based simulation language, so it cannot mingle with
     Verilog's event-based engine.  This means no structural simulations.
     Hence the dependence on formal methods and static timing checks for
     post-synthesis verification.

     Verilog and C without PLI

     Co-Design Automation Inc. (www.co-design.com) has a language called
     "Superlog".  The marketing says that this language allows C/C++ and
     Verilog to mingle in the same modules.  This would be a big help in
     writing simulation models.  It sounds very interesting, but they may
     use the PLI as an interface under the covers.  Benchmarks may need to
     be done to determine performance."

         - an anon engineer


    "On the subject of SystemC, I attended the tutorial and was absolutely
     unimpressed.  The presenter explained that he was an instructor of the
     language and not the inventor and then made several apologies for its
     Byzantine methods and structures.  SystemC and all of the other C and
     C++ based efforts keep confusing their goals based on what they think
     the customer may be wanting today.  Is it portability and simulation
     speed?  It shouldn't be.  What we need is the next quantum leap in
     level of abstraction from RTL so we can describe SOC.  When that is
     coupled with a tool suite and methodology that allows automated or at
     least procedural refinement to hardware and software implementation,
     then they will have something to sell.  From the tutorial, SystemC does
     not raise the level of abstraction from RTL and is actually more
     difficult to code.  In fact, the real holdup for SOC is not capturing
     the design but verifying it.  Here is a question I have asked several
     SOC designers:

         If you had to reuse some big block of IP on your next chip and
         you were given the choice of:

         1) a really good implementation in fully synthesizable Verilog
            with constraints and scripts and everything you need to
            implement it
                                    - OR -
         2) a 100% thorough testbench of the functionality of the IP that
            was guaranteed to verify your use of the IP in your system and
            application

         Which would you choose given that you might have to create the
         other?

     The answer is usually number 2.  So do we need better tools, languages
     and methods for capturing designs or testbenches?  As you have noted
     before, John, where was Vera at this Boston SNUG?"

         - Martin Gravenstein of TDK Semiconductor Corp.


    "SystemC C++ is starting to remind me of Gateway selling Verilog before
     Synopsys came along...  you know you can use it for something really
     good.  So does the guy selling it to you.  The trouble is you just
     aren't sure what you'll wind up with or how you'll put its "results"
     to good use yet. 

     Put it in the rocket-science-for-big-companies column, for now anyway."

         - Mark Garber of Equipe Corp.


( BSNUG 00 Item 4 ) ------------------------------------------- [ 10/13/00 ]

Subject: VCS, Verilog-XL, Scirocco VHDL, VSim, Roadrunner, Radiant, FSMs

SPEED IS EVERYTHING:  So there's some controversy around SystemC.  Today's
designers still use Verilog and/or VHDL to make chips, so the tech meat
on how to maximize your VCS and/or Scirocco simulations was eagerly lapped
up by the Boston SNUG audience -- especially the detailed talk on how to
use the Roadrunner and Radiant speed-ups in VCS.


    "FB4 - Maximizing VHDL Simulation Performance        3 stars (out of 3)

     This presentation is definitely worth reading.  Even though it was
     geared toward VHDL simulators, and we are utilizing Verilog
     simulators, the concepts transfer pretty much intact to Verilog-XL
     and VSim.  The main thrust of this session was to understand how
     typical simulators work and thus, how to code efficiently to minimize
     simulation runtimes.  Keep in mind: the Scirocco simulator is a true
     cycle-based simulator; VSim and Verilog-XL are not.  Some of the
     techniques described will not apply to our (current) simulators.
     The techniques presented can, and should, be used both in
     non-synthesizable and synthesizable Verilog code.  Main themes: reduce
     the number of events that the simulator has to handle; use as high an
     abstraction level as possible when modeling; make sure you are using
     optimally coded RAM models; don't use gate level models if at all
     possible - they are very compute intensive; watch the number of inputs
     in sensitivity lists; perform operations only when needed - don't
     set up default conditions first before other conditions are tested;
     structure 'if' statements to reduce common sub-expressions... yada,
     yada, yada.  You get the picture.  There are a *host* of things one
     can do to speed up simulations.  Read this presentation!"

         - Brian Fall of Microchip Technology, Inc.


    "We don't use VCS.  We did our own benchmarks of it against NC-Verilog
     and found NC-Verilog to be ~2X VCS.  We're a small customer so Synopsys
     never had the time to teach us the VCS switches to make it run faster."

         - an anon engineer


    "VCS Tutorial: John Girad, Massoud Eghtessad both of Synopsys.
     Overall: Somewhat of a repeat, but some useful new stuff.

     a) Roadrunner:  If you use a coding style (RTL synth subset for
        the most part), Roadrunner (RR) will automagically divide your
        code up into 4state/event and 2state/cycle divisions.  The mapped
        2-state logic should run much faster, but of course accuracy is
        sacrificed.
     b) VCS now takes always blocks with the same sensitivity list and
        merges them into one big block.  This speeds up sims.
     c) Don'ts for VCS speed:
           - no async
           - no feedback
           - no '#' or time variables 
           - do not use case, for, etc (all but simple if)
           - no '<=' (non-blocking)
     d) Do's for VCS speed:
           - sync
           - full sensitivity list
           - simple 'if'
           - blocking
     e) '#1' delays:  +nbaopt option gets rid of them.
     f) Instead of adding a # delay in the middle of an always block
        (which kills VCS speed optimization), put the code in a task
        and call the task with the '#' delay in front (# taskname;)
     g) **** Even if you have an `ifdef SCAN that is not turned on to enable
        your $recordvars, this still slows down the sim a lot.
     h) VCS has a new PLI learning mode, where it monitors all the
        PLI read and write calls, and then makes updates to optimize
        the PLI interface to speed things up.
     i) Got a Non response (political) to my question about if the
        non-PLI Vera interface is still happening.  My guess is no.
     j) ****  VCS will actually stop (at any line of code, randomly)
        and start running any other 'always' block with the same
        sensitivity list.  It can do this recursively through all
        'always' blocks with the same sensitivity list.
     k) VCS version 6.0 has a new trigger called +always trigger, where
        the compiler will compile all 'always' blocks first, then do the
        'initial' blocks.
     l) VCS 5.1 and higher has race catcher/analyzer (use +race=all).
        Not a dynamic checker, but more a lint/parser deal that looks
        for common code mistakes.
     m) Radiant Technology: 
          - PreVCS Preprocessor: have to turn on
          - config file allows directing Radiant on certain blocks
          - use +rad switch to turn it on.
          - up to 20x improvement
          - more people using this
          - kills debugging, SDF, timing checks
          - basically Radiant massages the code to get rid of redundant
            events.  Of course the code no longer matches and thus any
            wave output files might be hosed.
          - +rad_1 is a relaxed (sort of a 'Radlite') mode
     n) 2-state technology:
          - 'off' by default
          - up to 2x
          - not too many people using it
          - useful on RTL regression and functional verif
          - no strengths
          - X's go to 1 and Z's go to 0 (can't change this)
          - tri-state maps to logical ORs
          - regs initialize to all 0's, wires initialize to all 0's
          - '===' identical statements, as well as 'casez', case z,
            and even if statements are all complicated by the 2-state
          - to check it out a) run VCS with -Xman=4 to get a global
            Verilog file of your design (tokens.v).  Then parse the file
          - Can't do 4-state = 2-state assignments.  Must be 4 = 4
          - Can do 2-state = 4-state, but strength is lost and x and z
            map to 1 and 0 respectively.
      o) Codecover: Covermeter
         New: functional coverage with user defined expressions.  Little
         Verilog-esque language to write checkers to check for or against
         something happening.  Could do most of these in E or Vera, but
         these are tied into the entire code coverage tool.

     In other parts of the Boston SNUG, the move-to-Tcl synthesis talks and
     the $assert watchdog approach for verification were personally my
     favorite highlights.  My talk on how to generate a verification plan
     quickly was well received, too."

         - Peet James of Qualis Design


    "TB2 - FSM Designs with Glitch-free Outputs           2 stars (out of 3)
                        
     This paper received the '2nd Best Paper' award for the conference.
     Second time the author, Cliff Cummings, has captured a SNUG title.  He
     promotes a unique method of coding Finite State Machines so that all
     the outputs are registered without incurring the heavy area penalty of
     simply slapping registers on all the combinatorial signals output from
     a module.  Instead of having separate registers for the state encoding
     and for any module outputs, a one-hot state encoding strategy is used
     and any combinatorial outputs are assigned to be asserted during their
     appropriate state.  For all outputs which assert in unique states, no
     other registering is required.  For outputs which do not occur in unique
     states, simply add an additional 'state' to the state encoding and the
     output is now registered.  The idea is fairly straightforward, but
     requires a little bit of paperwork to map out exactly which outputs get
     assigned in what states and which outputs will require an extra 'state'
     for assertion.  Worth a read for another design
     technique."

         - Brian Fall of Microchip Technology, Inc.


( BSNUG 00 Item 5 ) ------------------------------------------- [ 10/13/00 ]

Subject: Veritools, Tensilica, Synopsys LM Models, Silicon Perspectives

THE REST OF THE WORLD:  One of the nice things about SNUG gatherings is that
rival, non-Synopsys EDA vendors are given a night to push their wares, too.
Sometimes a delicate issue, it's good to see hot competitors like Verisity,
Monterey, and C-Level giving their pitches at the Boston SNUG -- them being
allowed into the Synopsys tent ensures that SNUGs are *user* driven instead
of Synopsys marketing driven.  (And it appears that Silicon Perspectives is
still missing those 16 tape-outs they claimed to have at DAC this year...)


    "During the Vendor Fair I let some of the marketing/sales weenies talk
     my ear off.  These were the highlights:

      - Undertow suite looked *really* cool!  We looked at Veritools'
        Verilog linter, but never looked at the tools suite that included
        it, Undertow.  The tool essentially is a waveform viewer that
        incorporates the source code into the equation.  This allows the
        designer to, for instance, click on a waveform event and be shown
        where in the RTL code that event occurred.  It also infers state
        machine diagrams from RTL, provides schematic views (like Design
        Analyzer from Synopsys, but seemed more powerful, linked the
        originating code and allows tracing of simulation values), allows
        for searching the sim database for complex event occurrences similar
        to a logic analyzer, and it can compare two simulations and display
        the
        differences.  The tool also offers true single step trace
        capability to simulations similar to what the software guys offer
        our customers via our emulation suites.  I definitely think we
        should get this tool in and evaluate it.
 
      - Synopsys LM models group has a lot of stuff to pick from when
        developing a system testbench.  Also the capability to set up a
        hardware/software co-simulation/validation testbench.  This might
        be of use in letting software/apps get a head start developing
        compilers, application code, etc.
 
      - Tensilica just released a customizable, synthesizable processor with
        a DSP co-processor option for use in system-on-chip designs.  Really
        neat stuff; you customize it via a GUI!  Just select your
        data path size (8-, 16-, or 24-bit), data width, and memory bus widths,
        push a button, and it plunks down a parameterized microprocessor/DSP
        core.  Pricey @ $150K.

     Had no time for anything more."

         - Brian Fall of Microchip Technology, Inc.


    "John,

     We had a visit from our friendly Silicon Perspective sales team today
     and the demo they showed sparked some interest.  They claimed to have
     several tape-outs under their belt at this point, which is when I
     thought, "Yeah, but have you told Cooley about them yet?".   Heard any
     more from the masses on this one?  Their "don't flatten the hierarchy"
     approach to floor planning certainly has some major benefits.  Just
     thought I'd look for some further references before engaging in any
     kind of benchmark."

         - Pete Churchill of Conexant Systems


( BSNUG 00 Item 6 ) ------------------------------------------- [ 10/13/00 ]

Subject: Verif., Formality, C++ Verif., A Synthesizable 'Watchdog', 0-In

THE CHECK IS IN THE MAIL:  On the not-so-sexy side of chip design, users
discussed the growing problems involved in functionally verifying their
chips.  Peet James of Qualis gave a well received "5 Day Verification
Plan" talk pointing out all the groupthink and politics involved with
validating a real, live design.  Paul Whittemore & Frank Wong of SUN gave a
detailed talk on BFMs and C++.  Others discussed Synopsys' Formality, but,
oddly, there were no Synopsys Vera papers or tutorials given at the Boston
SNUG.  Design weenies like myself, Kurt Baty, and Steve Golson were found at
the "Synthesizable Watchdog Code" talk by Duane Galbi and Wilson Snyder
of Conexant.  (It was a talk on making homebrew 0-In 'checkers'.)


    "TB3 - The Five-Day Verification Plan                3 stars (out of 3)

     Peet James presented a set of guidelines that can be followed to
     develop a verification plan for your next design.  Peet's presentation
     slides do not have nearly as much meat as the paper, but the actual
     presentation was very good.  Basically, the author is a real designer
     who knows that other real designers hate meetings and really hate
     dealing with formal verification plans.  So he designed a method to
     get a bunch of cranky overworked engineers together to flesh out a
     verification plan with a minimum of stress.  From what I've seen so
     far here at Microchip, we could use some of the ideas contained in this
     presentation.  The critical points are that the plan should be put in
     place as soon as possible in the design cycle and that it needs to have
     buy-in from everyone (which means soliciting everyone's input).  This
     is a good read."

         - Brian Fall of Microchip Technology, Inc.


    "VI) Verif Planning and Formality

       A) 5 day plan: My paper.  People seemed to really like the aim of the
          paper of focusing more on how to make the plan and get engineers
          to buy into it.  They also liked the yellow sticky method.  Good 
          questions both afterwards and during the fireside chat.
       B) Formality: Military/Space redo of a PowerPC core.  Good divide
          and conquer approach with scripting.  Easy iteration turnaround.
          RTL to RTL compare.  Debugging, which is usually the hassle of
          formal verification seemed smooth.  Overall results were good.
          Time saved over pure simulation.
     VII) Fireside chat: This was a new feature of SNUG.  Each presenter
          stood by a paper easel.  People came by and asked questions.
          I had a steady stream of people.  It was fun and informative.  It
          lasted almost an hour, and I would have liked both a stool to
          sit on and a beverage.  Gregg Lahti was giving away free Tcl code
          on a floppy.  He had 'FREE CODE' written across his easel.  Not to
          be outdone, I wrote in large letters 'FREE BEER'."

         - Peet James of Qualis Design


    "Formal Verification:

     These tools compare the RTL with the Netlist, then report differences.
     The compare coupled with static timing checks in Primetime can replace
     netlist simulations.  This allows the functional simulations to remain
     in the behavioral/RTL world, which runs much faster than simulating the
     structural netlist.

     Thoughts:

     It sounds good in theory, but everyone I talked with who considered it
     said they didn't have the guts to go without structural simulations.
     We may want to try this Post-[Project Name Deleted] or on a test run."

         - an anon engineer
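
An aside for those who haven't driven it: Formality's fm_shell is Tcl based,
just like PrimeTime.  Here's a minimal sketch of the RTL-vs-netlist compare
described above.  The file names are made up, and the exact command options
vary from release to release, so treat it as an illustration only, not as
anyone's production script.

      # minimal fm_shell (Tcl) sketch -- file names are hypothetical
      read_verilog -container r  rtl/chip.v        ;# reference      = RTL
      set_top      r:/WORK/chip
      read_verilog -container i  gates/chip_syn.v  ;# implementation = netlist
      set_top      i:/WORK/chip
      # (the technology .db libraries get read into each container as well)
      match                        ;# pair up the compare points
      verify                       ;# prove the two sides are equivalent
      report_failing_points        ;# debug whatever didn't match up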


    "V) High Level Verification

     A) Random C based Verification: Paul Whittemore & Frank Wong of SUN

      1) CVI is their in-house tool that uses C to accomplish what Vera or
         E does in producing a more random verification approach. 
      2) The goal of the CVI environment is to use a C++ object oriented
         approach that will allow for quicker and more effective testbench
         generation.
      3) DUV still has some vlog BFMs.  C++ BFMs hook to the vlog BFMs.  C++
         API is used to direct the BFMs.  The BFMs are low level, mainly
         handshaking.  Higher functions are done by the C++.
      4) Decomposition: Logical breakdown of the Verification pieces to
         divide and conquer.  (BFMs, MON, TC, DUV, CHK, GEN)
      5) Layering: Levels of abstraction to enable average testcase writer
         to generate tests fast.  Still can access lowest level to make
         special tests.  Most tests are meant to be automatic.
      6) Transaction Generators: Private and Public coding style via header
         files that hide the overall complexity from the average user.  Random
         elements with autochecking and parameters are built in.  Some
         parameters initialized at start & remain, others change on the fly.
      7) Constraints: Has E and Vera type mechanisms to default, weight,
         or hardcode parameters.
      8) **** Used UML class hierarchy diagram to show the OOP interaction
         between classes and methods. 
      9) No coverage built in.
     10) Test Control: Uses an FSM model to run the tests.  Master FSM,
         with sublayers of other FSMs.  Has arbiter built in.  Chose this
         over thread driven control.
            a) Load BFM data
            b) Program I/O
            c) Check results
     11) Test manager: handles random test parts."

         - Peet James of Qualis Design


    "Formal verification: should we invest in this, especially since we will
     be doing RTL modifications for scan and power savings?  Main use is to
     formally verify RTL source code against a gate level 'equivalent'.
     This can be done at various points along the design flow, but is most
     important after we have added all the post-synthesis junk like clock
     tree buffers, scan logic, etc.  These tools will *not* formally verify
     gate level schematics with any transistor level switches."

         - Brian Fall of Microchip Technology, Inc.


    "B) Synthesizable Watchdog Code by Duane Galbi of Conexant.

        I found this paper the most interesting.  It describes sort of a
        poor man's 0-In technique of putting in little watchdog checkers
        and reporters throughout your RTL code.  This code is done in such
        a way as to not cause any logic in DC.

     1) Method: if done right, does not even need translate on/off 
        constructs in the code.  DC will simply ignore it.
     2) Preprocessor used to implement the technique.  Available at
        http://www.ultranet/~wssnyder/veripool
     3) Watchdog code is put into a $ system call.  There are about 5
        (more used by the author) different calls. Each has parameters
        that are fed to the system call. It can do notes or warning or
        errors, but also more advanced checking. The actual $assert
        commands are kept to one line, in order to not screw up line
        numbering.
     4) Has runtime message levels, so you can turn stuff on and off.
     5) Simple and easy enough so that RTLr's can install as they write
        code.  Has to be simple and transparent.
     6) Examples are fsm trackers, counter checkers, etc.
     7) Poke holes: You can put these in the behavioral models as well,
        but being an RTL entry item, they still sort of promote the
        old school methodology of doing RTL first and verification
        2nd. Concurrent Verification and RTL would not reap the benefit
        of any of the checkers firing, until the RTL was fully 
        incorporated."

         - Peet James of Qualis Design


( BSNUG 00 Item 7 ) ------------------------------------------- [ 10/13/00 ]

Subject: Altera, Xilinx, FPGA Express, FPGA Compiler II, Synplicity

THE SHAME OF BEING EDA:  The laughing buzz overheard at the Boston SNUG in
the FPGA world was how Synplicity was IPOing itself not as an EDA vendor,
but as an 'Internet infrastructure provider' and how this opened them up to
possible future securities fraud allegations.  On the technical side, Al
Czamara of ASIC Alliance warned designers against skimping on verification
of their FPGA designs, and another set of presentations discussed block level
incremental synthesis in FPGA Express/FPGA Compiler II. 


    "I think it is a joke that some companies are hiding behind their
     thumbs saying 'we are not an EDA company, we are an Internet
     infrastructure provider'"

         - Lucio Lanza, general partner at U.S. Venture Partners
           (EE Times, 10/9/2000)


    "Re-programmability is *not* a license to ignore verification -- don't
     fall into that trap.  There is little distinction between FPGAs and
     ASICs when it comes to verification requirements."

         - Al Czamara of ASIC Alliance Corp.


    "I own FPGA Express and Synplify seats.  The easiest way to explain?
     They each do well what they each do well.  My gripe?  It'd be nice if
     they both understood what "reproducible across versions" means.  That
     is, a new version usually gives new results - not usually better."

         - Mark Garber of Equipe Corp.


    "5) Programmable Logic Synthesis.

     This session was composed of two presentations:

       1) Block level incremental synthesis runs are now doable in FPGA
          Express/FPGA Compiler II.  This has been available since version 3.4.
          Blocks of the design have to be defined by the user as "block
          roots" and they will be re-synthesized only if changes occur
          in the source (relies on timestamps).

          This improves synthesis runtime and place and route runtime.
          Examples showed 42% reduction in runtime for ~8500 logic cells
          (didn't show how big the change was.)

          A feature that is good to know now exists.

       2) A flow for integrating PrimeTime with Altera FPGA.  This flow
          could be seen in the presentation I have or the SNUG web-site
          (soon).  This allows ASIC designers to continue with their regular
          flow.  The disadvantage of this is that the SDF file Altera outputs
          can only be used to perform worst-case (setup violations) analysis
          in PrimeTime (best-case SDF is being worked on these days and
          will be added soon).

     The same capability, and the same disadvantage, applies to Xilinx too."

         - an anon engineer
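
For reference, here's roughly what the post-place-and-route FPGA timing flow
described above looks like in pt_shell.  The library, file, and clock names
below are made up, and (as the presenter noted) the vendor SDF currently only
supports worst-case analysis, so only max-delay checks are meaningful.

      # minimal pt_shell (Tcl) sketch -- names are hypothetical
      set link_path    "* altera_cells.db"    ;# vendor cell library in .db form
      read_verilog     chip_routed.v          ;# post place-and-route netlist
      link_design      chip
      read_sdf         chip_routed.sdf        ;# worst-case delays from FPGA tools
      create_clock     -period 20 [get_ports clk]
      set_input_delay   5 -clock clk \
          [remove_from_collection [all_inputs] [get_ports clk]]
      set_output_delay  5 -clock clk [all_outputs]
      report_timing    -delay max             ;# setup (worst-case) checks only
      report_constraint -all_violators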


( BSNUG 00 Item 8 ) ------------------------------------------- [ 10/13/00 ]

Subject: Library Compiler, Non-Linear Delay Models, IBM'S DCL, IEEE 1481

SURPRISES:  In one of the most unexpected quirks at this year's Boston SNUG
gathering, the Best Paper was not given to a typical mainstream presentation
on something like FSMs, Verilog coding styles, or DC tricks -- but to a
paper on Library Compiler, Synopsys NLDM libraries and how to fix them
(by Steve Start of American Microsystems, Inc.)  Why?  Those facing NLDM's
complicated issues really, really liked Steve's paper.


    "In one of the user sessions, an IBM engineer presented a paper on DCL.
     DCL was adopted as an IEEE standard (IEEE 1481) in June 1999 and is
     aimed at providing a common set of library rules for all EDA tools.
     DCL is an industry initiative aimed at moving away from the "Synopsys
     .lib standard" to an industry standard that will improve accuracy,
     consistency, and functionality for all EDA tools that use it.  DCL
     still hasn't gained wide acceptance but larger players such as IBM
     and Synopsys are investing a lot of effort to incorporate and utilize
     the IEEE standard.  Not yet ready for prime time but steady progress
     is still being made.

     Dallas Semiconductor/Silicon Metrics presented a paper which reinforced
     and reiterated much of what we covered in our paper about the potential
     for inaccuracies in Synopsys NLDM libraries and how to fix them.
     Dallas Semi. had purchased design libraries for synthesis and
     simulation that turned out to be very poor in quality.  They reported
     delay calculation differences between Synopsys NLDMs and HSPICE that
     were in excess of 40% in some cases.  They used tools from Silicon
     Metrics to recharacterize the Non-Linear Delay Models in their Synopsys
     libraries, reducing the delay calculation error to 1 to 2%.  Silicon
     Metrics' solution lists for about $150k."

         - Steve Start of American Microsystems, Inc.


    "Library Compiler:

     We only had about 15 people in the room for the presentations, with the
     majority involved in their own library development.  Obviously, that's
     not an area of interest to many Synopsys users.

     Two presentations focused on the difficulties of modelling with
     non-linear delay models in .lib.  Steve Start gave a great overview of
     the work he has done at American Microsystems on running tests to
     refine non-linear delay models through repositioning grid points
     to reduce maximum error & adding an offset to cancel out more error."

         - Chris Kiegle of IBM


( BSNUG 00 Item 9 ) ------------------------------------------- [ 10/13/00 ]

Subject: PrimeTime Static Timing Analysis (STA)

A WALL STREET LOVE AFFAIR:  I've gotten used to the Wall Street analysts
calling me for explanations of EDA.  (There are about a dozen of them
active, and collectively they call me about 5 times per month on something.)
Usually it's about some controversial EDA issue du jour like licensing or
Physical Synthesis, with an emphasis on the money side.  Recently, one of
them called and we somehow stumbled onto static timing analysis tools.
When I said "you probably aren't interested in that because it's almost a
Synopsys monopoly", I was quickly corrected with: "No!  Monopolies are
good, John.  They're guaranteed cash flow.  Tell me more about this
'PrimeTime' software.  What does it do?"  (Dataquest's 1998 market share
numbers gave Synopsys PrimeTime 74.1 percent, Cadence Pearl 22.6 percent,
and Mentor SST Velocity 2.7 percent.  That was 1998.  My gut says it has
gone to even more PrimeTime in the upcoming 1999 Dataquest numbers due out
this month.)


    "I found the PrimeTime tutorial to be very informative.  I don't have
     much experience with STA, and this tutorial helped to bridge the gaps
     in my understanding of STA.  What I especially appreciated was the
     explanation of STA from first principles, at the transistor level."

         - an anon engineer


    "I thought most of the user presentations and tutorials were good.
     The one on Primetime was good, though I didn't learn a whole lot of new
     things.  It was useful to hear about the bug on the "set_clock_latency"
     command and I liked hearing the explanation of how the tool acts when
     you use the "set_clock_latency" command as opposed to creating a
     clock and moving out the edges with the "-waveform" switch."

         - Tamar Barry of Honeywell
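
For anyone who hasn't played with this, here's a minimal DC-Tcl/PrimeTime
sketch of the two approaches Tamar contrasts.  The clock name and the numbers
are made up; the point is only that set_clock_latency shifts an existing
clock while -waveform defines the clock with its edges already moved out.

      # approach 1: 10 ns clock, then model 2 ns of insertion delay as latency
      create_clock -period 10 -name clk [get_ports clk]
      set_clock_latency 2.0 [get_clocks clk]

      # approach 2: define the clock with its edges already moved out by 2 ns
      create_clock -period 10 -name clk -waveform {2 7} [get_ports clk]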


    "Advanced Static Timing Analysis in Design Compiler and PrimeTime

     Unfortunately, I was fooled by the word "Advanced" in the title of this
     presentation.  It was a good intro to static timing, but I did pick up
     a few nuggets.

        o 2000.05 has a new report_timing switch, -path full_clock.
          This switch will include the buffers in the clock tree,
          instead of just total propagated clock time.
        o The create_generated_clock command is now in DC 2000.05.
          This is great for divided clocks, and eliminates the need for
          two-pass STA to determine correct insertion delays before
          netlist level optimization with propagated clocks.
        o There is a set_min_pulse_width command in DC2000.05
        o There is a new algorithm in PrimeTime 2000.05 for calculating
          transition times with back annotated net parasitics.

    "In the past I've been reluctant to use any PrimeTime specific features
     because I have to optimize in DC.  I'm happy to say the feature gap
     between PrimeTime and DC is closing nicely."

         - Bob Wiegand of NxtWave


( BSNUG 00 Item 10 ) ------------------------------------------ [ 10/13/00 ]

Subject: Dc_shell vs. DC-Tcl, Tcl Gotchas, TOPS Tcl Synthesis Environment

NO ONE'S LAUGHING:  About 2 years ago, when Synopsys seriously jumped into
Tcl by offering DC-Tcl and making PrimeTime's interface Tcl, most of the
users were clueless about what Tcl was.  There were 103 people in the room for
the Boston SNUG user presentations on Tcl.  Of those 103, 18 raised their
hands when asked "Raise your hand if you're using Tcl today".  Of those
same 103, 14 raised their hands when asked if they were using Tcl because
they were using PrimeTime.  These stats may not be scientific, but they do
clearly point to the fact that *most* Synopsys customers currently still
prefer their time-tested standard non-Tcl dc_shell scripts over making the
painful switch into pure play Tcl.


    "WB1 - A Tcl Synthesis Environment (TOPS)            3 stars (out of 3)

     This was a very interesting session that detailed an entire synthesis
     environment based on Tcl scripting.  Synopsys is slowly getting rid of
     the DC shell scripting environment and replacing it with Tcl scripting
     so all of our new synthesis scripts should use Tcl.  Their proposed 
     environment was straightforward in design, but the scripts supplied
     may seem daunting to the novice reader.  The Intel authors crafted the 
     environment to allow for ease of portability and reuse between projects
     and to allow for a common setup between Design Compiler and Primetime.
     The scripts all reside in one synthesis directory.  There are a
     multitude of issues handled by the scripts including pre-synthesis,
     synthesis, and post-synthesis issues (such as buffer insertion between
     back-to-back flops to resolve hold time issues on Q->D paths).  I plan
     on using the script package presented in this session as the basis for
     the synthesis environment on the DSC project.    (The Tcl source code
     for TOPS is on http://www.DeepChip.com in the downloads part.)
 
     WB1 - Tcl: The Good, the Bad, and the Ugly          2 stars (out of 3)

     Pointing out the good and bad things about Tcl was the basis for this
     session.  Synopsys apparently bought the kernel for Tcl from some
     company and inserted it wholesale into DC with some modifications and
     extensions.  That means that no attempt was made to address and/or fix
     some of the inherent bugs or quirks in Tcl.  Tcl is an improvement over
     DC-shell scripts, but from the standpoint of Microchip users, that will
     not be evident.  Trust me, it is a better environment for scripting :^)
     The presentation and paper are probably a good read for those of you
     who will be working with DC and Primetime.  The Intel author points out
     a host of issues to watch out for when using Tcl and some tricks to
     make life easier."

         - Brian Fall of Microchip Technology, Inc.
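
For readers who haven't seen a Tcl-based synthesis environment, here's a bare
bones sketch of what such a wrapper looks like.  This is NOT the actual TOPS
code (that's in the DeepChip downloads area); the directory names, variables,
and the single-clock constraint are all made up for illustration.

      # generic dc_shell -tcl_mode skeleton -- names are hypothetical
      source scripts/setup.tcl           ;# per-project vars: block, clk_period, libs
      set target_library  $my_lib
      set link_library    "* $my_lib"

      read_file -format verilog rtl/${block}.v
      current_design $block
      link

      source constraints/${block}.tcl    ;# same file gets sourced in pt_shell later
      create_clock -period $clk_period [get_ports clk]

      compile -map_effort medium

      report_timing > reports/${block}.timing
      report_area   > reports/${block}.area
      write -format verilog -hierarchy -output netlists/${block}_syn.v

The win is that one constraint file per block feeds both DC and PrimeTime,
which is exactly the portability the Intel authors were after.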


    "II) Make/Tcl:

     A) Design Independent topdown or bottom up synth script by
        Mariagrazia Graziano of U. Torino, Italy

        1) 4 scripts:
            L Preprocessor
            R Revise (custom constraints)
            W Write (Design output)
            S ??? Transparent to user, maybe main script
              that calls others.
        2) Clean directory structure
        3) scripts are perl based with user file input
        4) Intermediate Makefile generated

        Pretty basic/classic setup. Clean, has time budgeting, has default
        setup for newbie or first run, has expert higher run mode as well.
        Some code available.  Rest possible if asked (some NDA).

     B) TCL for Synth (TOPS) by Tim Wilson of Intel

        1) 7 steps of synth (good succinct format: setup, libs, readin,
           constraints, compile, write, report)
        2) 8 accompanying Tcl files.  Some global, others override global
           (with a block specific Tcl file).
        3) Some ugliness with incompatibility between the Tcl interface of
           DC and Primetime
        4) Good list of DC-Tcl improvements
        5) No legacy DCscripts.  Must go cold turkey.  DCscript is going
           away.
        6) No ACS tie-ins, just basic DC run.  No time budgeting.
        7) Some people still resistant to converting (maybe 30% converted
           so far).
        8) Modular approach allows for easier update to new DC revs.  They
           have a new release (out of a CVS database) a couple weeks after
           a new DC rev.
        9) Future: ACS type of time budgeting added next.
       10) Ditched legacy perl script. Some minimal perl used.
       11) Code available at http://www.deepchip.com

     C) Good, Bad, & Ugly of TCL Language by Gregg Lahti of Intel.  Very
        funny sound effects.  Clint Eastwood spaghetti western stuff.  Had
        a picture of Cooley for the ugly part.

        1) Good: Synth is easier overall, free code avail (Solvnet/Deepchip)
            - better scripting environment than dc
            - procedures and functions calls
            - variables
            - hashes and associative arrays
            - time and I/O functions
            - file operators
            - TK not available, but can use Sockets to do most GUI stuff
         2) Bad: Non-mainstream language syntax
            - Braces and quotes get mixed up
            - global variables are backwards (have to be declared in the proc)
            - line termination is picky and flakey
            - oddities of conditionals and braces
         3) Ugly: DC specific tcl commands
            - collectors (accessed differently)
            - attributes
            - Synopsys specific commands

     Useful because less than 30% of customers use Tcl."

         - Peet James of Qualis Design
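
To make Gregg's "Bad" and "Ugly" bullets concrete, here are three of the
gotchas as a quick sketch.  The variable and cell names are made up; the
behavior of braces, 'global', and collections is just plain Tcl / DC-Tcl.

      # braces defer substitution, double quotes don't
      set period 10
      puts {clock period is $period}    ;# literal text, $period NOT expanded
      puts "clock period is $period"    ;# prints: clock period is 10

      # globals are "backwards" -- a proc must declare them before using them
      set effort medium
      proc my_compile {} {
          global effort                 ;# without this, $effort is undefined here
          compile -map_effort $effort
      }

      # DC-specific ugliness: get_cells returns a collection, not a Tcl list,
      # so you walk it with foreach_in_collection instead of foreach
      foreach_in_collection cell [get_cells -hierarchical *_reg*] {
          echo [get_object_name $cell]
      }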


( BSNUG 00 Item 11 ) ------------------------------------------ [ 10/13/00 ]

Subject: Design Compiler 2000, Presto, Floorplan Manager, ACS

THANK YOU, AMBIT!:  Even though the press gives more emphasis to C/C++ HW
design and physical synthesis, as a Synopsys user I've been overjoyed with
Cadence's Ambit-RTL synthesis.  Yup, you read that right.  I said "Cadence's
Ambit-RTL".  Why?  Because even though Ambit-RTL isn't used by anyone other
than Philips and a few independent consultants who can't afford to buy a
Synopsys DC license, the mere threat of Cadence Ambit-RTL has pushed
Synopsys R&D into beefing up DC 2000 with all sorts of new bells and
whistles!  Customers always win when there's some good olde competition
keeping the EDA vendors honest.  DC just keeps getting better because of
this.  Thank you, Ambit-RTL, for improving Design Compiler.


    "The Design Compiler 2000 Tutorial (FB1) featured a real live Synopsys
     R&D manager!  Jay Adams discussed his team's product, the Presto
     parser.  Although the Corporate AE's usually do a fine job presenting
     these tutorials, the presence of an R&D person greatly enhanced the
     presentation and resulting Q&A.  We should encourage this level of
     involvement in future Boston SNUG tutorials."

         - Keith Silva of IBM


    "The synthesis discussions were very interesting.  I will be interested
     in trying the Presto compiler and comparing the results to the
     current default HDL compiler.  The new "sticky" load command
     (set_rtl_load) and generated clocks command will be useful.  When I
     get back to a register-based design, I'm interested in trying
     behavioral retiming."

         - Tamar Barry of Honeywell


    "Logic Synthesis Improvements in DC 2000

     This was the session to go to for all the gearhead details of DC2000.05.
     Here's some of the good stuff:

     o New HDL compiler (Presto), enabled by a hidden switch, will be the
       default in 2000.11.  They claim 6x average runtime improvement and
       35% less memory used than 1999.10.   It supports Verilog 2000
       'generate' statements, shows state machine constructs on elaboration,
       supports defparam, array instantiation, allows resource sharing with
       Verilog conditional operator (but still won't guarantee a MUX!), can
       infer MUXes from if statements with some unspecified switch or
       variable.  There was a claim of 5% area improvement using Presto.
     o New Verilog netlist reader enabled by enable_verilog_netlist_reader
       along with read -format verilog -netlist claims 3x memory reduction
       and 3x average runtime improvement.
     o ACS now uses RTL budgeting to eliminate the pass 0 compile.  The
       partitioning, directory structure, compile scripts and ACS commands
       themselves are now user customizable.
     o New commands set_input_parasitics, set_rtl_load to override wireload
       models for known long nets.
     o New command calculate_rtl_load to derive set_rtl_load values using
       set_load and SDF from trial layout.  Use for top level and RAM
       connectivity nets.
     o The set_clock_latency command adds -early and -late switches for
       -source.
     o Generated clocks now in DC!
     o The report_timing command adds -path full_clock to show clock tree
       drivers, -net -capacitance to show capacitance of nets.  It also adds
       -sort_by switch to sort by group or slack.  Slack sorting is by
       absolute value, ignoring any path group weighting values.
     o Implicit dont_touch cells are now implicit size_only.
     o Area optimization improvements yield 3-5% area improvement (average)
       with some cases up to 20% using medium effort compile.  Some designs
       saw 0-5% increase but improved delay.
     o Module Compiler's datapath synthesis is now used by DesignWare
       Foundation, claiming up to 2x faster runtime, 12% better delay, and
       10% better area.  Adding an Ultra license gives the capability of
       replacing transform_csa with partition_dp to enable Module Compiler
       datapath partitioning of vector sums and sum-of-products.
     o Min delay improvements include faster runtime on hold fixing, ability
       to prioritize cell count over area, and multiple cell support for
       set_prefer -min.
     o BRT (Behavioral retiming w/Ultra license) now supports asynchronous
        set/clear, synchronous set/clear, and load-enabled flops without
       decomposing.
     o Floorplan Manager (part of DC Ultra) improvements include the ability
       to specify pin and port locations for RAMs and macros, and
       improvements to buffering and hold fixing.

     Finally, here's a list of stuff on the horizon for DC:

     o 64 bit
     o VHDL Presto
     o QOR improvements for simple compile mode
     o RTL budgeting in simple compile mode
     o HLO (High Level Optimization) speed up w/ lots of operators present
     o Clock gating check like PrimeTime
     o Case analysis like PrimeTime (won't remove logic!)
     o Improved balance_buffer for non-skew controlled high fanout nets
     o Automatic ungrouping of hierarchies
     o Floorplan Manager support for overlapping obstructions, routing
        obstructions, and reoptimize_design -top
     o ACS supporting autopartitioning and acs_read_hdl

     I also found out that the ideal_net attribute was supposed to be
     enhanced to eliminate max_transition in addition to max_fanout and
     max_capacitance.  I was VERY disappointed to find out that it got
     killed before beta.  The lack of max_transition causes major grief when
     trying to leave scan enables and resets for CTS.  I have posted on
     ESNUG about this in the past.   I have some creative workarounds for
     this, but I would really like to see the problem fixed in the tool."

         - Bob Wiegand of NxtWave
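
Just to show how a couple of Bob's 2000.05 goodies look in use, here's a tiny
dc_shell -tcl_mode sketch.  The instance, pin, and clock names are invented;
only the commands and switches themselves come from the talk.

      create_clock -period 10 [get_ports clk]

      # divided clock created directly in DC now -- no more two-pass STA dance
      create_generated_clock -name clk_div2 -divide_by 2 \
          -source [get_ports clk] [get_pins u_div/q_reg/Q]

      # early/late source latency on the master clock
      set_clock_latency -source -early 1.0 [get_clocks clk]
      set_clock_latency -source -late  1.4 [get_clocks clk]

      # after compile: walk the clock network cell by cell, worst slack first
      report_timing -path full_clock -sort_by slack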


( BSNUG 00 Item 12 ) ------------------------------------------ [ 10/13/00 ]

Subject: Automatic Chip Synthesis (ACS)

THE PATH OF LEAST RESISTANCE:  Anticipating that RTL synthesis is going to
take a back seat to Physical Synthesis, Synopsys R&D has been working on
automating your typical RTL synthesis flow.  Whether it gets better results
and whether customers will "take" to ACS is another matter altogether.


    "6) Automatic Chip Synthesis (ACS) in Design Compiler

     ACS is included in Design Compiler and enables automatic chip
     synthesis.  When doing bottom-up synthesis, a constraint file for each
     module needs to be provided, which is usually a tough job.  When
     doing top-down synthesis, only a top-level constraint file is needed,
     but this does not work well for designs over ~100K gates.  ACS is
     the combination of both.  It is part of DC and needs only a
     top-level constraint file.  It reads in the RTL files and the top-level
     constraint file and generates synthesis scripts and constraints for
     each module (or hierarchy of modules).  The timing and electrical
     constraints for each block are generated using design budgeting.
     ACS also outputs a makefile that is used for the rest of the synthesis.
     After having a constraint file for each block, ACS automatically
     runs the makefile, which compiles each block in parallel.

     Advantages:

      1) Automatically compiles blocks in parallel.
      2) Needs only a top-level constraint file.
      3) The constraint files can be evaluated and changed if needed.
      4) Generates an automatic makefile.
      5) After the results are analyzed, if they are not good ACS can
         be run again (the acs_recompile_design command), or if they are
         partially good the acs_refine_design command can be used.

     Disadvantages:

      1) When compiling in parallel, it uses several licenses at once.
      2) Accurate constraints?

     ACS seems like a nice thing to try out.  The presenters were from R&D
     and could be asked questions about this.  Even if accuracy is an issue,
     ACS could at least be used for writing an initial constraint file for
     reference and generating the makefile."

         - an anon engineer
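
To make the flow above concrete, here's a bare-bones dc_shell-t sketch of an
ACS run.  The design and file names are placeholders, and I'm assuming
acs_compile_design is the entry point that writes the budgets, scripts, and
makefile -- check the ACS command list in your release before trusting any
of this:

    # Read the RTL and link the top-level design (placeholder names).
    read_verilog { top_chip.v blk_a.v blk_b.v }
    current_design top_chip
    link

    # Apply only the *top-level* constraints; ACS budgets them per block.
    source top_constraints.tcl

    # Generate per-block scripts/constraints plus the makefile, then let
    # it kick off the parallel compiles.
    acs_compile_design

    # Later iterations, per the review above:
    #   acs_recompile_design   -- full second pass with fresh budgets
    #   acs_refine_design      -- incremental fix-up of the failing blocks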


    "Synthesis and Coding Techniques

     A presentation from DSP Group described the use of ACS (Automated Chip
     Synthesis) on their DSP cores.  It was pointed out that top down
     ACS synthesis did not give the best results on a 300K gate DSP core.

     Cliff Cummings presented a paper about designing FSMs with glitch-free
     outputs using one-hot encoding.  Good paper.

     Tensilica did an interesting presentation about power optimization
     with their cores.  Their techniques include reducing gate count by
     selective implementation of features, clock gating, and performance
     trade-offs.  Their tools provide a graphical tradeoff analysis of
     power, speed, and size.  They presented some interesting data
     comparing compile strategies, and showed their best QOR (Quality Of
     Results) came from a combined bottom up/top down multi-pass strategy
     (which is what I do).  They particularly pointed out that top down
     ACS gave inconsistent results, huge runtimes, and sometimes didn't
     finish on 40K-50K gates."

         - Bob Wiegand of NxtWave


( BSNUG 00 Item 13 ) ------------------------------------------ [ 10/13/00 ]

Subject: Design-For-Test (DFT), Scan, ATPG

BRAVE NEW WORLDS:  One of the side issues in the Physical Synthesis battle
has been "How do we insert scan inside such a flow?".  Synopsys R&D has
been working to expand their test offering with AutoFix, Boundary Scan
Compiler, RTL Test DRC, and TetraMAX.  TetraMAX has been very
enthusiastically adopted by customers, but it's primarily a frontend type
of tool.  It'll be interesting to see it grow into the backend, and to see
how customers use or don't use these other recent test enhancements, too.

If you want to understand the growing dominance Synopsys is getting in the
test world, just look at the Dataquest numbers.  For 1998, Synopsys owned
94.4 percent of the test-chain-insertion market; Mentor owned 94.6 percent
of the ATPG market.  For 1999, Dataquest reports that Synopsys owned 94.0
percent of the test-chain-insertion market (no change); for ATPG, the new
breakout became 46.4 percent Synopsys and 45.6 percent Mentor (in one year
Mentor lost half of its ATPG market share to Synopsys!)
 

    "FA2 - Design-for-Test and ATPG

     This session was developed and given by Synopsys personnel, so it was
     inherently biased, but overall was a good presentation of the
     capabilities of the DFT Compiler and TetraMAX ATPG (automatic test
     pattern generation) tools.  Synopsys seems to have integrated DFT
     Compiler well into their design flow after having initially purchased
     it and premiered it as a stand-alone tool.  They have added an RTL
     Test DRC (design rule checker) that is run early in the design cycle
     to grade RTL code on its potential testability.  An AutoFix
     capability allows the designer to let DFTC automatically add
     logic to the RTL code for added testability.  That scares me a bit;
     I wouldn't use that feature too often.  They also added a Shadow Logic
     DFT function which inserts MUXed-flop test points around designated
     elements such as memories.  They covered Boundary Scan Compiler (BSD)
     for JTAG scan chain insertion, probably not applicable to us.

     Finally, covered the TetraMAX tool.  It, too, is well integrated into
     the design flow.  It seemed to run quite well and is straightforward.
     The obvious downside is that you have to add a good portion of the
     suggested scan-for-test logic in order to get good results with the
     ATPG tool, but hey, that's what it's for, right?  Issues: our standard
     cell library will need to be modified for DFT and ATPG; I got the
     name of the Synopsys weenie who handles this and she is shipping us
     an ASIC library design kit that should tell us what we need to know.
     Tidbits: the ATPG tool supports pattern mapping (e.g., if we have a
     fault pattern for RAMs that we want followed, it will insert that
     into its auto-generated patterns); it supports pattern compression
     and sorting; and it has multiple pattern generation algorithms so
     fast initial runs can be made."

         - Brian Fall of Microchip Technology, Inc.
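
For reference, a stripped-down TetraMAX command file for a basic stuck-at
run looks something like the sketch below.  The netlist, library, and
protocol file names are placeholders, and the exact command and option
spellings should be checked against the TetraMAX manuals:

    # Read the scan-stitched netlist plus the ATPG library models.
    read_netlist chip_scan.v
    read_netlist asic_vendor_atpg.v -library
    run_build_model chip_top

    # Test design-rule checks against the scan protocol (STIL) file.
    run_drc chip_top.spf

    # Plain stuck-at pass; compression, sorting, and the faster first-pass
    # algorithms Brian mentions hang off set_faults / run_atpg options.
    add_faults -all
    run_atpg
    report_summaries
    write_patterns chip_atpg.stil -format stil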


    "Design For Test (DFT) and ATPG

     There are some very cool DFT enhancements in the 2000.05 tools
     including RTL testability analysis, automatic fixing of certain DFT
     violations, and automatic shadow logic for RAMs.  The RTL checking
     requires the Test-RTL-Check license, which is included with DC-XP.
     This feature will identify testability problems at the RT level, like:

        o Uncontrollable clocks and asynchronous resets
        o Latches enabled at the beginning of the test cycle
        o Clock used as data
        o Source register launch before destination register capture
        o Registered clock gating circuitry
        o Combinational feedback loops
        o Multiple clocks feeding into Latches and Flops
        o Black boxes

     The idea is to give module-level designers the ability to easily
     identify and correct testability problems in the RTL before any
     compile takes place, and avoid most of the headaches of debugging
     testability problems during full chip integration.

     Autofix, when enabled, will automatically fix uncontrollable clocks
     and asynchronous preset/clears.  I have mixed feelings about this, as
     I would prefer clean testable RTL.  This would be very useful,
     however, in the case of legacy designs in gate form with no access
     to the RTL.  This would have been very useful in a few previous lives.

     The shadow logic feature will add "sink" registers for observability in
     parallel to the address and data inputs of RAMs to capture any
     combinational logic feeding into them.  It will add "source" registers
     for controllability in parallel to the data outputs of the RAMs along
     with a test_mode controlled MUX to drive any combinatorial logic after
     the RAMs.  It will also add logic to deal with the output enable and
     drive the tristate nets on the output side of the RAM.

     Another new feature is ScanPlanner.  It provides a mechanism to write
     out scan chain information in formats for Silicon Ensemble or Apollo,
     and provides another mechanism to reorder the prelayout scan chains to
     match the post layout reordered chains.

     Coming soon is the ability to specify different clock periods for
     mission mode and scan mode."

         - Bob Wiegand of NxtWave
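
And on the DC side, the basic scan flow these features plug into is still
just a few commands.  A hedged dc_shell-t sketch (placeholder names
throughout, and the option details vary by release):

    current_design chip_top

    # Map straight to scan flops during compile, then set up the chains.
    compile -scan
    set_scan_configuration -style multiplexed_flip_flop -chain_count 2

    # Pre-stitch test DRC, stitch the chains, re-check.
    check_test
    insert_scan
    check_test

    # Hand the scan-stitched netlist to TetraMAX (see the sketch after
    # Brian's writeup above).
    write -format verilog -hierarchy -output chip_scan.v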


( BSNUG 00 Item 14 ) ------------------------------------------ [ 10/13/00 ]

Subject: Power Compiler, Prime Power, Module Compiler

ANOTHER MONOPOLY:  The other little monopoly that most Wall Street weenies
don't know Synopsys has is in power optimization.  Yes, Sente, Cadence,
Avanti, Simplex, and Mentor all offer some sort of power analysis set of
tools; but Synopsys is the only EDA company that offers users a tool that
actually does something about cleaning up your power problems.  Of course,
the moment you even say the words "gated clocks", you've immediately made
yourself the lifetime enemy of the engineer on your team who's responsible
for the scan testing of your chip, but at least it's low power!  On the
flip side, Module Compiler got only a moderate notice (because datapath
design is a specialized niche) and nobody mentioned Behavioral
Compiler in their reviews of this SNUG.


    "TB1 - Module Compiler, Design Power                 3 stars (out of 3)

     How to design for lower power consumption was the focus of this
     presentation.  This session picked up on one of the themes of the week
     that low power design is becoming more of a critical issue.  It was
     targeted at the Actel ProASIC FPGA library, but most of the information
     is easily applied to any standard cell flow.  The author suggested that
     for optimal results, low power should be considered at each level of
     abstraction of a design, i.e., 1) pick power-friendly datapath
     elements, 2) use power-driven synthesis, 3) do power-driven floor
     planning, and 4) do chip-level power verification.  Most of the
     presentation, however, dealt with optimal selection of datapath
     elements and their synthesis.
     Major concepts were: power consumption is switching-activity dependent,
     so strive for a datapath with the fewest nets switching; consider
     pipelining to reduce length of datapath combinatorial runs; turn off
     portions of a datapath when not in use; reduce fanout if possible via
     selection of data path components.  The author characterized quite a
     few test cases based upon adder and multiplier circuits to verify his
     suggestions."

         - Brian Fall of Microchip Technology, Inc.


    "Power Compiler
     --------------

     This tool uses 3 techniques to reduce power.

     1) Inserts Clock Gating:

     This will allow the clk input of a register to switch only when it is
     necessary.  The recommended technique includes adding a negative
     enabled latch and an AND gate into the path before certain registers.
     Alternative methods such as a single AND gate or OR gate are bad
     techniques because they can result in glitches.

     2) Automatic Operand Isolation:

     This technique involves re-arranging combinational logic to keep it
     from "rippling" through a result when it is not needed.  An example was
     given with 3 MUXes in series.  Each of the 3 MUXes has an independent
     select line.  Power Compiler would change the design to insert AND
     gates before the first MUX data inputs.  The data would pass through
     the AND gates only IF both the second and third MUX are selected.  This
     would reduce data switching through the first MUX.  Naturally,
     depending on the width of the data path, a good number of gates will
     be added to the design.

     3) Gate Level Optimization:

     Techniques at the gate level include sizing, tech mapping, pin
     swapping, factoring, buffer insertion, and phase assignment.  Not all
     were covered in the presentation.  The ones I caught were:

       - Tech Mapping: hiding high toggle rate nets inside cells.  For
         example an AND gate with a high toggle rate output feeding an
         OR gate would be combined into an AND-OR gate.

       - Factoring: reduce network toggling by moving a high activity net
         farther down a combinational cone of logic towards the result.
         This reduces the number of gates connected to the high toggle
         rate net.

     Simulation:

     Power Compiler can collect switching activity from SAIF (Switching
     Activity InterFace) files generated by monitoring your simulations.
     These SAIF files guide Power Compiler as to where to reduce power in
     your design.  The SAIF files can be extracted through the Verilog PLI,
     Modelsim MTI (DPFLI) or a VCD conversion.  Multiple SAIF files can be
     merged together before feeding them to Power Compiler.

     Prime Power
     ------------

     Prime Power is a separate power analysis tool.  It can be used to view
     power consumption "hot spots" graphically or through reports.  It is a
     more comprehensive tool than the DesignPower analysis tool that comes
     with Power Compiler.  The GUI looks like an oscilloscope screen
     showing power consumption at key nets over time.  Prime Power
     takes in 3 input files:

       - PIF file: includes switching info on each net from gate simulation.
         Different from SAIF.
       - Wire Cap file
       - Synopsys .db file.  Includes library and power info.

     Prime Power works at the gate level as opposed to another tool called
     PowerMill which looks at power consumption at the transistor level.

     Random Notes:

       - The worst case power data will be extracted running at best case
         simulation conditions (P,V,T).  Switching rates are fastest then.

       - PowerArc tool can be used to generate a library with power info to
         allow you to run Power Compiler and Prime Power.

       - The instructors say some vendors are pushing towards doing "power
         signoff" in the future.

       - Clock skew is always an issue when logic is inserted into the clock
         path.  Supposedly, most vendor clock tree insertion & optimization
         tools can handle this.

       - To get accurate power toggle info, your simulations need to reflect
         what you will do in the real world.  Garbage In = Garbage Out

       - It seems to me that Power Compiler may add a significant number of
         gates to your design if not controlled.  You could have problems
         if you have to bump up a die size, or get timing problems as a
         result of using it.

       - Cost of these tools:  Power Compiler: $90K, Prime Power $80K for a
         permanent Synopsys license.   Probably 15% maintenance.  The price
         should be 1/3 of permanent numbers for yearly "lease."

     The presentation slides could have been more organized.  I felt that
     the flow wasn't clear and I had to piece it together.  However, the
     presenters were very enthusiastic about the products.  I believe they
     were both Power AE's.

     These tools seem like a must for anyone doing low power designs such
     as cell phones, PDAs or other portable electronics...  assuming that
     your volumes will be high enough to justify the high purchase price."

         - Ira Hart of Galt Design
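
For the script-minded, the Power Compiler flow Ira describes boils down to
something like the dc_shell-t sketch below.  All names are placeholders, the
bitwidth threshold is arbitrary, and where exactly the clock gating gets
requested (elaborate vs. compile) varies by release, so take it as a sketch
only:

    # Latch + AND gating style (glitch-safe); skip registers under 4 bits.
    set_clock_gating_style -sequential_cell latch \
                           -positive_edge_logic {and} \
                           -negative_edge_logic {or} \
                           -minimum_bitwidth 4

    # In the 2000-era flow the gating was requested when building the
    # design; check your manuals for the right switch in your release.
    elaborate chip_top -gate_clock
    link

    # Pull in switching activity from simulation (SAIF written by the
    # Verilog PLI toggle tasks or converted from a VCD, as Ira notes).
    read_saif -input chip_rtl.saif -instance_name tb/dut

    # Constrain timing and power, compile, then see what you got.
    source top_constraints.tcl
    set_max_dynamic_power 0
    compile
    report_clock_gating
    report_power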


    "What I took away from Boston SNUG is Prime Power and Power Compiler.
     I don't know if it was because they are ready or we are ready.  Mobile
     wireless communications is the hot stuff and it requires low
     power solutions.  DSP is synchronous, scalable and synthesizable, but
     done the easiest Design Compiler way it is power hungry.  Power
     Compiler does what DC arguably should have done from the beginning:
     gate the clocks instead of re-circulating back through a MUX.  I
     have suspicions this will help with timing as well.  Is Power Compiler
     going to help us meet our power goals?  Is Prime Power going to help
     us identify our power problems?  We will let you know, John, if these
     two tools help us with low power design."

         - Martin Gravenstein of TDK Semiconductor Corp.


( BSNUG 00 Item 15 ) ------------------------------------------ [ 10/13/00 ]

Subject: Physical Compiler (PhysOpt), Chip Architect, FlexRoute, FlexPlace

11 MONTHS & COUNTING:  No real "big" news on the overall Physical Synthesis
came out during the Boston SNUG.  Physical Compiler has 10 known, publicly
user-confirmed tape-outs (although Aart claims he has 23); Cadence PKS has
one failed tape-out in April from EmpowerTel (PKS was used on a 50 Kgate
MIPS core inside a 3 Mgate chip) with no other confirmed tape-outs; Magma
and Monterey have no tape-outs.  This means that Synopsys now has an 11
month lead in this market (the first 2 PhysOpt tape-outs were Matrox and
nVidia back in November'99) and has oddly become the Leader To Beat in this
niche.  (Why I say this is odd is that, given the backend nature of physical
synthesis, if you had asked me 4 years ago who I'd have predicted would
be taking the initial lead in this tool space, I would have said Cadence or
Avanti.  After all, the backend *is* their home turf, right?  Odd.)


    "Synopsys Synthesis is near "End_of_Life":

     More and more designs are being done in <= 0.18u, so static timing
     convergence using the traditional "wire_load_models" in synthesis is
     becoming obsolete.  The reason is that when you go < 0.25u
     in feature size, more of the delay is taken up in the routing which is
     almost impossible to estimate without placement.  What this is saying
     is synthesis will be used for RTL->Netlist generation but will not be
     able to perform the proper static timing checks based on routing
     estimates.  This requires the back-end or "routing" tool to do most of
     the work to ensure the design meets timing.  This also means more
     (Synthesis->P&R->Primetime) database iterations.

     Synopsys is attempting to sell everyone "Physical Synthesis" to help
     bridge the gap between front-end Synthesis and back-end PR to cut down
     the number of front-end -> backend iterations.  The tool allows a
     preliminary placement of the design that can be handed off to the
     back-end people. 

     Personal Thoughts: 

     Although this sounds like a nice solution, depending on the cost of the
     tool, it might be worth the investment in the backend tools.  If you're
     going to spend the time in placing, you might as well do it in the real
     back-end tool and iterate in house (we can target any vendor's library
     and be more flexible in fab selection).  It seems like Synopsys is
     attempting to scare customers into believing that the back-end is too
     complicated by offering what appears to be the easier solution."

         - an anon engineer


    "We expect to ramp up on Physical Compiler in the next few months for
     our next project.  Magma and PKS are too unstable to be viable for us.
     Mike Montana gave a good tutorial."

         - an anon engineer


    "Physical synthesis, physical synthesis, physical synthesis.  Get the
     picture?  That was a hot topic this year as it will continue to be.
     Most of the design community represented at SNUG is well below 0.5u
     technology and they are becoming more sensitive to the issues of
     deep-submicron design.  Physical synthesis is starting to take hold
     with more of the designers as more of them experiment with it.  Mostly
     positive reports.  Our current designs however don't come anywhere near
     needing this type of tool, at least for now.  Note: At 0.5u, we are in
     the bottom 5% of design flows.  About 40% are using 0.35u, 40% are at
     0.25u, 15% are at 0.18u and below."

         - Brian Fall of Microchip Technology, Inc.


    "Physical Compiler Workshop Trip Report, 9/19/00 - 9/22/00
     (took this immediately prior to the Boston SNUG)

     I found the single day workshop on Physical Compiler very worthwhile.
     With a strong knowledge of DC and COT flows, one day is plenty to get
     the basic concepts down and also get to play with the tools in the
     labs.  The labs were pretty straightforward, going through the
     following steps:

       o Converting LEF to plib using the lef2plib utility
       o Writing pdb using the write_lib command
       o Converting a floorplan DEF to PDEF 3.0 using the def2pdef utility
       o Reading in wireload compiled gates, PDEF file, and running PhysOpt
       o Analyzing physical views in the GUI
       o Dealing with floorplan obstructions from macros and power straps
         and creating obstructions manually
       o Running PhysOpt with various switches and comparing results.

     PhysOpt gets its floorplan information such as RAM & macro placement,
     power straps, pad cells, etc. from the PDEF file.  Placement and layer
     blockages are supported using routing obstructions.  Soft and hard
     keepouts are also supported.  Global routing is done within the tool,
     but that information is lost in the transfer to layout tools.  The
     output of PC is a legally placed design that can be written out as a db
     file, or a gate-level netlist and a PDEF 3.0 file.  There is a utility
     to convert the db to DEF, making the interface to Cadence tools pretty
     straightforward.  The Avanti conversion uses the netlist and PDEF and
     involves the use of Scheme scripts.

     The major advantage of this tool is the elimination of wireload models
     by using Steiner routes and Elmore delays.  The placement engine,
     FlexPlace, is not a quadratic placer and is therefore free to place any
     cell anywhere to meet timing and reduce wire length.  The claims are a
     10-15% Manhattan wire length reduction over traditional placers.  This
     makes the layout more routable and able to run with faster clocks and
     less power.  Wire length reduction is critical as designs move into
     smaller geometries, especially below .25u, since wire delay is by far
     the dominating factor.  Since wireload models are at best a poor
     estimate of the most dominant delay factors in the design, eliminating
     the inaccuracy they introduce is key.  Traditional methods of margining
     the timing in synthesis lead to overbuilding the design, which means
     increased area, die size, wire length, & power consumption.  Combining
     synthesis and placement allows the design to be implemented with much
     greater accuracy, while reducing the wire length, area, and power.

     Since PhysOpt is built on top of Design Compiler, a lot of the DC
     functionality is there.  PhysOpt adds physical switches to existing
     DC commands.  A DC Ultra license triggers additional optimization
     features in Physical Compiler.  I heard it said that Module Compiler in
     DC works with Physical Compiler as well as the -scan switch for
     compile.  Stitching is still done with insert_scan, and is not yet
     location based, but will be in the 2000.11 release.  Full integration
     with Power Compiler is also in development.  Unlike DC, the command
     interface is Tcl only.

     The modes of operation of the tool are gates to placed gates (G2PG)
     with the PhysOpt command, and RTL to placed gates (RTL2PG) using the
     compile_physical command.  Presently, the G2PG method can handle up to
     ~2M gates.  The RTL2PG mode can handle up to ~300K gates since only a
     top down methodology is supported.  A bottom up RTL flow is in
     development, and I later heard it said in the SNUG PhysOpt tutorial
     that simple compile mode is supported.

     The tool has various switches such as -congestion and -area_recovery.
     There is a GUI for viewing the floorplan, placement, and congestion
     maps.  It can run gates in place-only mode for congestion analysis
     with multiple floorplans.  Since clock tree synthesis (CTS) is done
     outside this tool, the -incremental switch is used after CTS.  I
     heard there will be CTS by the end of the year, in both Physical
     Compiler and Chip Architect.  There is an ECO mode to merge in
     new/changed leaf cells, as well as an incremental post_layout mode.
     This can be used after layout extraction, like Floorplan Manager on
     steroids, producing legalized placements instead of suggested
     placements from incremental PDEF.  This would be great for combining
     PhysOpt with Min/Max compile for post-layout hold fixing.

     One interesting aside is the wireload model problem has been pushed
     back into the LEF.  Having bad R and C parameters can be just as
     damaging as having bad wireload models.  Therefore, the R and C
     parameters must be correlated.


     SNUG Boston 2000 Tutorials: Physical Synthesis / Timing Closure

     I caught the very end of this tutorial since I already attended the
     PhysOpt class.  I was just in time to see the live demo in progress
     and hear some very impressive evaluation statistics, including:

       o Performance improvements up to 24%
       o Gate count reduction up to 4.5%
       o Power reduction up to 12.6%
       o Elimination of post layout iterations for timing closure
       o Back end timing closure achieved in days instead of months
       o RC correlation to detail route 0.4% pessimistic to 2.7% optimistic

     In the Physical Synthesis tutorial, I found out that RTL to GDSII was
     accomplished using a detail router Synopsys acquired from Gambit last
     year.  Interestingly enough, when I searched for Gambit on the Synopsys
     web site, I found a mention in the key installation guide for SCL (the
     new Synopsys Common Licensing).  The tool listed as being compatible
     with SCL1.1 was Encore, with Gambit in ()'s.  The individual Gambit
     tool entries are no longer shipped, and Encore is not listed among
     their regular product offerings.  Hmmmm.  With CTS in beta, it sounds
     like they are really close."

         - Bob Wiegand of NxtWave
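
Pulling Bob's lab steps together, a minimal gates-to-placed-gates run in
psyn_shell-t looks roughly like the sketch below.  Library, file, and
variable names are placeholders, and several of the command/option spellings
here are assumptions rather than checked syntax, so lean on the PhysOpt
manuals before copying any of it:

    # One-time library prep outside the shell (see the man pages for the
    # exact arguments):
    #   lef2plib  <tech + cell LEF>   ->  .plib source
    #   write_lib                     ->  compiled .pdb
    #   def2pdef  <floorplan DEF>     ->  PDEF 3.0

    set target_library   { vendor.db }
    set link_library     { * vendor.db }
    set physical_library { vendor.pdb }    ;# assumed variable name

    # Wireload-compiled gates plus the floorplan PDEF.
    read_db chip_wl.db
    current_design chip_top
    read_pdef chip_floorplan.pdef

    source top_constraints.tcl

    # Place and optimize; switches per the tutorial.
    physopt -congestion -area_recovery

    # Hand off either a db or a netlist-plus-PDEF to the layout tools.
    write -format db -hierarchy -output chip_placed.db
    write_pdef chip_placed.pdef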


    "Finally, on the subject of PhysOpt.  I attended the tutorial on Friday
     and have to say the AE from Texas, Mike Montana, was one of the most
     dynamic and interesting speakers of the whole week.  Not sure what that
     says about PhysOpt, but if you are going to try and sell something for
     those prices you better have a good spokesman and he was the right man
     for the job."

         - Martin Gravenstein of TDK Semiconductor Corp.


    "Darell Whitaker of IBM did a presentation on Physical Compiler in the
     IBM ASICs design flow for customers.  Basically, PhysOpt fits in as a
     mechanism to work with early floorplanning to synthesize a design and
     generate a detailed layout for use in the IBM backend tools.  Further
     work will still need to be done to insert clocks & scan & do further
     optimization.

     The room was packed.  He got lots of questions, including:

     o What is the estimated time/effort for someone to incorporate Physical
       Compiler into their flow?  How much time did we spend?

     We spent about 8 months.  It should take much less than that to get the
     tools up and running.  A good portion of the 8 months was spent working
     through issues of data conversion to go between Physical Compiler
     formats & data formats used by IBM Physical Design tools.  One problem
     being worked around is support for differing PDEF levels:
     PhysOpt uses PDEF 3.0, but IBM is still migrating to that.  Other time
     has been spent on analysis & correlation between the physical modeling
     of the placements between the Synopsys & IBM tools.

     o What is the flow for post-routing ECO?  Incremental mode?

     We haven't investigated this yet.

     o Did we do gates-to-gates or RTL-to-gates? What kind of runtime issues
       are there in going from RTL-to-Gates?  What about benchmarking
       Physical Compiler to the traditional DC flow?

     Most of our work has been on gates-to-gates.  We're looking at
     RTL-to-gates now.  You can do a RTL-to-gates run in PhysOpt but you
     still need to do a rough synthesis run first so that you can create a
     floorplan.  It's for a rough estimate to drive the RTL PhysOpt run;
     otherwise you'd need something like Chip Architect (which we don't use).
     Our flow is more back and forth between Synopsys and IBM's ChipBench
     tools, so that means we use IBM's floorplanner instead of Chip Arch.
     We're still benchmarking, but all the legal crap says I can't tell
     anyone about it.  Talk to the Synopsys and IBM lawyers if you want
     this data.

     o Design teams - do they need to change to think about floorplanning?

     Darell's answer was that logic designers should always be thinking
     about floorplanning.  That might be an unwelcome change for some
     people, but it's something we have to deal with all the time.  We look
     at a lot of designs where some redefinition of logic can have a major
     impact on creating a design that will meet timing & wire, like large
     multiplexers, where data & selects can be grouped based on the
     floorplanning needs.


     Physical Compiler (PhysOpt) Tutorial -

     Good presentation and demos, with examples, by Mike Montana.  What I
     liked seeing most was the new PhysOpt GUI with the capability
     to display a cone of logic in the logic viewer and then toggle to a
     display of the same cone in the physical layout.

     Mike said PhysOpt is using DesignTime and Chip Architect is using the
     PrimeTime timing engine.  Why doesn't PhysOpt use PrimeTime, too?  Why
     get more accurate, detailed info from physical floorplanning and then
     back off when you go to run PhysOpt?

     The coarse route in PhysOpt shoots for under 5% inaccuracy.

     They are talking about merging all the db files into one in the future
     to combine the logical & physical...  so combine .db & .pdb."

         - Chris Kiegle of IBM


( BSNUG 00 Item 16 ) ------------------------------------------ [ 10/13/00 ]

Subject: Linux Farms, Sun Workstations

SUN & SNUG:  If you knew the history of Sun and SNUGs, you would have done
a double take at this year's Boston SNUG.  Because Suns are the main EDA
platform of choice (March's San Jose SNUG 2000 stats below):

   I design on a:  32-bit Sun  ################################# 66%
                   64-bit Sun  ############## 29%
                    32-bit HP  ######## 16%
                    64-bit HP  ### 7%
             Windows or NT PC  ####### 15%
                     Linux PC  #### 9%
               other platform  # 3%

Sun has always been a financial supporter of SNUGs.  Sun would give a well
attended talk on their EDA usage with compute farms which SNUG attendees
actually liked because it hit so close to home for the average EDA user.
"Why the double take, John?", you ask.  Because this year Synopsys has done
a big push into Linux and the whole idea behind Linux is running UNIX on
PCs -- eliminating the need to buy *workstations* for EDA users!


    "Linux farms: seemed to be a topic getting more attention of late.
     Sun's presentation on Managing EDA Complexity & Tools Interoperability
     touched on this, as did Aart de Geus (Synopsys CEO) in his
     keynote speech.  Apparently people are seeing 2x performance going to
     Linux boxes vs UNIX platforms due to the 2x+ processor speeds of the
     Intel-based boxes.  Most of the major HDL simulators now have Linux
     flavors.  This is major bang for the buck for designs which are
     simulation intensive.  Something we might look into -- we could throw
     an awful lot of simulations at a design with that kind of processing
     power for that cheap.  Don't know what the license issues are though on
     Linux vs UNIX.
  
     Synopsys recently changed its license fee structure to a non-rated,
     subscription-based system instead of up-front payments."

         - Brian Fall of Microchip Technology, Inc.


    "In the afternoon I attended a seminar by SUN on their compute ranch.
     It was a similar presentation to last year's SNUG.  But they had some
     very interesting facts at this Boston SNUG talk, too.  For 1000
     engineers who use it, it takes 150 CAD engineers to support them.  Sun
     uses about 125 commercial tools and 250 internal tools.  All their
     code is in Verilog and they have a LOT of VCS licenses running
     functional simulations.  No backannotated VCS simulations are run, only
     STA is used for timing analysis.

     Simulations run about 25-45 cycles per sec and they run 150 Million
     cycles 3x/week.  85% of their simulations are directed tests with 15%
     randoms.  They run about 1 million batch jobs a month.  The full
     regression run is about 35 CPU years long.  (150 M cycles)  Their
     compute ranch averages 850 MB per CPU (4000 CPUs total) and is 95%
     utilized.

     Each design engineer at Sun has about 3 CPUs at his disposal for jobs.
     Their simulation rate has progressed to running 1 billion cycles per
     month.  Of course, Sun uses their own version of batching and revision
     control SW, but for software they have used ClearCase."

         - Tim Wilson of Intel


    "Linux:

     Many of the people I've talked with say their companies are switching
     over to Linux.  The term "Linux Farm" or "Linux Ranch" was used to
     describe a rack of Intel boxes running Linux.  Synopsys as well as
     other EDA vendors have ported their tools over to Linux.  Reason: Low
     cost processing platform, low cost hardware service (if it's broke,
     just replace)."

         - an anon engineer


    "Sun's compute farm presentation

     A presentation from a manager in their server design area.  Most of
     their focus is put on verification & running more & more large jobs
     that are growing in size as well.  They have about 3 CPUs for every
     employee.  They run at about 85-95% capacity. They are running Linux."

         - Chris Kiegle of IBM


( BSNUG 00 Item 17 ) ------------------------------------------ [ 10/13/00 ]

Subject: Synopsys DesignSphere Website

ONE STEP BEYOND:  Instead of just enabling EDA users to dump their large
UNIX workstations for high end PCs running Linux, Synopsys has gone all the
way in offering complete EDA tool usage via the Internet.  That is, you
no longer have to do any EDA SysAdmin work if you sign up for their
DesignSphere service.  What makes DesignSphere viable is that it offers a
*complete* set of frontend-to-backend EDA tools with Synopsys and Avanti
plus (according to the Synopsys people) you can bring in your own choice of
other 3rd party tools, too!  (If it was just a Synopsys-only zone, I'd give
it a big yawn because real chip design involves all sorts of niche 3rd party
tools.)  Neat.  I'm a chip designer.  I like the idea of not having to do
SysAdmin if I can avoid it.


    "Synopsys started their DesignSphere program, but I hear it still costs
     $50K/6 months, so depending on how many tools you use it's still
     expensive.  This is a system where you can access their tools via the
     Internet and you pay a subscription price per time period.  They did a
     demo of it over a cable-modem access point and it ran pretty well.
     This was during the DFT Compiler/TetraMAX tutorial."

         - Brian Fall of Microchip Technology, Inc.


    "Vendor Fair:

     There were a lot of interesting things at the vendor fair, but the
     thing that really got my attention was DesignSphere Access by Synopsys.
     This is an internet accessible hosted design environment.  It allows
     outsourcing of compute resources and associated IT issues,
     incremental peak increases in licenses and compute resources, and
     multi-site design collaboration.  Presently, Synopsys and Avanti tools
     are available.  This sounds ideal for startups.

     Synopsys Night:

     Synopsys had the usual presentations of various tools.  The Physical
     Compiler/Chip Architect/Flex Route booth was particularly popular.
     The really cool thing about it was all the demos were being run
     through DesignSphere Access to a cluster in Mountain View."

         - Bob Wiegand of NxtWave


( BSNUG 00 Item 18 ) ------------------------------------------ [ 10/13/00 ]

Subject: The Hotel, Red Sox, Vermont Inns

FALL FOLIAGE:  To many Boston natives it was quite a surprise to see how
many distant out-of-towners came in for Boston SNUG.  At first we felt it
was because Boston was becoming a second "Silicon Valley" to the real
Silicon Valley in California... (Yes!)  Then we remembered it was Fall
foliage season and an awful lot of engineers had brought their wives
along for a romantic weekend after the Boston SNUG had ended...  :^(


    "One of the nice perks about presenting at the Boston SNUG was getting
     to go to a Major League Baseball game: Boston Red Sox vs. Cleveland
     Indians - a game full of playoff implications for both teams.  One of
     the game highlights was a 2-run homer pounded over "the Monster" in
     left field by DH Dante Bichette that gave the Red Sox a solid 4-1
     lead in the third inning.

     In all, it's hard to imagine a better conference or a nicer trip.
     Before the conference, my wife and I enjoyed a brief sojourn in the New
     England area.  The weather was great; the scenery beautiful.  We saw a
     lot of covered bridges too.  If you are ever in Vermont, we highly
     recommend the Maple Leaf Inn.  http://www.mapleleafinn.com/"

         - Steve Start of American Microsystems, Inc.


    "The location of the SNUG hotel got mixed reviews.  10 percent loved
     it.  10 percent hated it.  Seems like it was the locals who hated it and
     everyone else loved it.  Traffic was bad in the mornings.  If we use
     the same hotel next year, we can look at starting sessions later in
     the morning."

         - Joanne Wegener of Synopsys


============================================================================
 Trying to figure out a Synopsys bug?  Want to hear how 11,000+ other users
    dealt with it?  Then join the E-Mail Synopsys Users Group (ESNUG)!
 
       !!!     "It's not a BUG,               jcooley@world.std.com
      /o o\  /  it's a FEATURE!"                 (508) 429-4357
     (  >  )
      \ - /     - John Cooley, EDA & ASIC Design Consultant in Synopsys,
      _] [_         Verilog, VHDL and numerous Design Methodologies.

      Holliston Poor Farm, P.O. Box 6222, Holliston, MA  01746-6222
    Legal Disclaimer: "As always, anything said here is only opinion."
 The complete, searchable ESNUG Archive Site is at http://www.DeepChip.com

