( ESNUG 502 Item 8 ) -------------------------------------------- [04/19/12]

Subject: Jim Hogan on 28 nm yield, SPICE memory battle, meta-simulation

    "A visitor from Mars could easily pick out the civilized nations.
     They have the best implements of war."

         - Herbert V. Prochnow (US author, 1897-1998)

Hi, John,

I wanted to give you an update on the SPICE War battles that DeepChip
has been covering, now that Synopsys has acquired Magma and its FineSim
products, plus my thoughts on where it's all headed, including the
concept of "meta-simulation" for achieving high-yield, high-performance
designs.  This is especially important given the yield headaches at
28 nm and below.

First, I have heard unofficially that Synopsys is keeping their HSPICE,
HSIM, and XA simulators along with Magma FineSim.  Synopsys is also
retaining the FineSim marketing and R&D teams.

In my experience, it is very difficult to integrate product lines with so
much overlap.  In the end it is best to choose one product platform going
forward; this presumes the simulation platform choice has the scalability
to span multiple process generations.  That is why I'm guessing that
FineSim becomes the target platform to which Synopsys migrates everything.

Customers' willingness to move affects the length of the transition.
Customers wrap these tools with a lot of custom code to support their
own flows, so the question usually comes down to what benefit they get
by moving.

(Synopsys also has a similar problem with their static timing tools, but
that discussion is for another day.)

Below is a snapshot lining up each SPICE offering with market segments,
along with my understanding of TODAY's market ranking for each:

    Std Cell/Digital:  1st- Synopsys HSPICE, HSIM, XA, and FineSim

              Memory:  1st- Synopsys HSPICE, HSIM, XA, and FineSim

            Clock/IO:  1st- Synopsys HSPICE, HSIM, XA, and FineSim
                       2nd- Cadence Spectre
                       3rd- BDA AFS
                       4th- Mentor Eldo

        Mixed Signal:  1st- Cadence AMS Designer
                       2nd- Mentor ADMS
                       3rd- BDA AFS

              Analog:  1st- Cadence Spectre, APS
                       2nd- BDA AFS
                       3rd- Agilent GoldenGate
                       4th- Synopsys HSPICE
                       5th- Mentor Eldo

                  RF:  1st- Cadence SpectreRF
                       2nd- Agilent GoldenGate
                       3rd- BDA AFS

           Microwave:  1st- National Instruments AWR Office
                       2nd- Agilent ADS

Cadence continues to dominate the Analog and RF simulation markets,
with a strong presence on the mixed-language side of the mixed-signal
market, while Berkeley Design Automation (BDA) competes with Cadence
for market share.  Synopsys' acquisition of Magma returns it to
dominance of the memory and custom digital simulation markets; the
acquisition will likely accelerate FineSim's proliferation.  Mentor's
Eldo is down to just one top-20 semiconductor customer, so it is
unlikely to be a significant factor.

Memory Is The Next SPICE Battleground:

The stage is now set for the simulation battles that will follow for
the rest of this year.  I predict the most important battleground for
EDA SPICE vendors will be the memory market, which includes designs
developed by memory IP companies, internal memory IP groups at large
semiconductor companies, memory chip companies, and SoC teams with
embedded memories.

The memory market is already $250 M+, and it has the highest annual
growth rate of all the sectors above, at >25%.  The largest drivers
are mobile devices (smartphones and tablets) and the large server
farms propelled by the increasing use of cloud computing.

We have an insatiable appetite for memory.  The more memory we have, the
more we consume.

To be competitive, memory designers face tremendous pressure to
produce designs with low power, minimum cost/die area, and maximum
performance, with a fast ramp to high yield.  These factors push them
to state-of-the-art foundry processes (i.e. 28 nm and smaller) from
TSMC, Samsung, Intel, and GlobalFoundries.  Unfortunately, the new
foundry processes have increased variability, which leads to
uncertainty in the yield versus power/performance/area tradeoff.
Over-design, and you waste power, performance, and area.  Under-design,
and you take a yield hit.

SRAMs are a classic example of what memory designers are facing.
SRAM content is increasing 2x per process node, with 50% area scaling
at each node.  Because SRAM bitcells use the smallest devices in the
design, they are the most sensitive to variation effects such as
random dopant fluctuation and line edge roughness.  Low-voltage
failures are a big concern for SRAM design.  At 28 nm, 1-sigma for
threshold voltage is 45-55 mV.  With a 1 V supply and 6-sigma design
specs, SRAM bitcells have very narrow margins.  Further, since SRAMs
are replicated in large arrays, producing a single working product
requires that millions to billions of repeated cells all work
correctly, with room for only a few defects.  One failure in a million
is about 5 sigma; one in a billion is about 6 sigma.
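
For anyone who wants to sanity-check those sigma numbers, here is a
minimal Python snippet that converts a per-cell failure probability
into its single-sided sigma equivalent.  It uses only the standard
normal distribution -- nothing simulator- or vendor-specific:

    # Convert a per-cell failure probability into its single-sided
    # normal "sigma" equivalent, and back.  Pure statistics; no
    # simulator or vendor API assumed.
    from scipy.stats import norm

    for p_fail in (1e-6, 1e-9):      # one-in-a-million, one-in-a-billion
        sigma = norm.isf(p_fail)     # inverse survival function
        print(f"P(fail) = {p_fail:.0e}  ->  {sigma:.2f} sigma")

    # The reverse: a 6-sigma bitcell spec implies roughly a
    # one-in-a-billion failure probability per cell.
    print(f"6 sigma  ->  P(fail) = {norm.sf(6.0):.1e}")

(One-in-a-million actually lands at about 4.75 sigma, which is why
designers round it to "5 sigma".)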

So memory cells need very-high-yield (5 to 6 sigma) design.  With
Monte Carlo analysis, verifying that requires millions to billions of
simulations.  You can see why this is an attractive target for the EDA
SPICE vendors.  A memory supplier can never have too much insurance
that the design is right.  In one sense you can never have enough
simulation -- but this truism is limited by time and money.
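
To put "limited by time and money" in perspective, here is a
back-of-envelope estimate of what brute-force Monte Carlo at 6 sigma
would cost.  The 1 CPU-second per SPICE run is a hypothetical round
number, not a benchmark of any particular simulator:

    # Back-of-envelope cost of brute-force Monte Carlo at 6 sigma.
    # The 1 CPU-second per simulation is a hypothetical round number.
    p_fail       = 1e-9          # ~6-sigma failure probability
    fails_needed = 10            # failures needed for a crude estimate
    n_sims       = fails_needed / p_fail      # 1e10 simulations
    sec_per_sim  = 1.0
    cpu_years    = n_sims * sec_per_sim / (3600 * 24 * 365)
    print(f"{n_sims:.0e} sims  ->  {cpu_years:,.0f} CPU-years")  # ~317

Even spread across a 1,000-core farm, that is roughly four months of
wall-clock time for a single bitcell measurement.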

This is especially true with the yield headaches of 28 nm and below.
Without the insurance of thorough SPICE coverage, you risk either a
yield loss during manufacturing or wasted power/performance/area in a
competitive market.

SNPS, CDNS, LAVA, BDA:

With its acquisition of Magma, Synopsys now basically owns the memory
simulation market.  The upside for Synopsys is the ability to raise
simulator prices, given the current vacuum of competition.  The word
on the street is that they are quoting an 18% price increase on their
simulators.  SNPS will work to secure longer-term contracts to stave
off competition for a few years.

Cadence and BDA are already moving to take advantage of the vacuum
created by Magma's exit and become the market's new alternative SPICE
suppliers.  Overall, memory customers win in a couple of ways: first,
they benefit from a technology arms race that improves the speed,
accuracy, and capacity of the simulators; second, they have
alternatives to Synopsys in contract negotiations.

However, Cadence cannot leverage its environment (ADE) to sell SPICE
simulators in the memory market (as it did in the analog market),
because memory designers typically work at the command line.  Cadence
will need to attack from a different angle - it's rumored that they
will be releasing a new simulator at DAC to compete directly with
Synopsys in the memory market.  Memory is a new market for Cadence; it
doesn't have the product overlap Synopsys has.  Even a little
penetration would be a huge win - a 10% market share grab would yield
Cadence an additional $25 M+ per year.

Okay, all that is fine but does this simulation battle get to the heart of
the designer challenge?

Remember, measuring bitcell yield takes millions or billions of Monte
Carlo samples.  That's simply too slow - or outright infeasible - even
on today's fastest simulators, and even utilizing massive compute
clusters/clouds.  Memory designers need a way to analyze
yield-performance tradeoffs quickly, such as measuring yield at a
target spec, or measuring spec at a target yield.  They want the
analysis to feel like a single simulation, and to align with the rest
of their command-line design flows.

SPICE Meta-Simulation:

The time has come for alternative methods.  EDA is full of examples
of using abstraction and partitioning to manage increasing complexity,
i.e. the tyranny of numbers.  Meta-simulators focus on the real tasks
designers are trying to get done, in an efficient way.  To the
designer, a meta-simulator feels like a single simulator, while under
the hood it drives hundreds or thousands of runs in parallel on
traditional simulation engines like HSPICE, HSIM, Spectre, etc.
Meta-simulators deliver the "meta-level" analysis the designer is
after - one that may require a large number of simulations - while
keeping the user's input to a netlist and not much more.  The output
is simple, numerical, and well-defined, just like the result of a
single simulation.
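
To make the "feels like a single simulator" point concrete, here is a
minimal sketch of that shape.  The "spice_sim" binary, its --seed
flag, and the assumption that it prints one measured value on its last
output line are all hypothetical stand-ins, not any vendor's actual
interface:

    # Minimal meta-simulator skeleton: one command-line front end
    # driving many underlying simulator jobs in parallel.  The
    # "spice_sim" command and its output format are hypothetical.
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    def run_one(args):
        netlist, seed = args
        out = subprocess.run(["spice_sim", netlist, f"--seed={seed}"],
                             capture_output=True, text=True, check=True)
        return float(out.stdout.strip().splitlines()[-1])

    def meta_simulate(netlist, n_runs, workers=200):
        """Input: a netlist.  Output: one simple, numerical result."""
        with ProcessPoolExecutor(max_workers=workers) as pool:
            vals = list(pool.map(run_one,
                                 [(netlist, s) for s in range(n_runs)]))
        return min(vals), max(vals)

    # e.g.  print(meta_simulate("bitcell.sp", n_runs=1000))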

Meta-simulators have been around for a long time - running corners or
running Monte Carlo, for example - but they have traditionally been so
simplistic that they didn't merit their own category.  Today,
meta-simulation tools can be much more powerful, addressing designer
challenges and speeding up different analysis types in
precisely-targeted ways.  For example, rather than naively running 100
or 10,000 PVT corners just to search for the worst cases, a "Fast PVT"
meta-simulation would consider all the PVT corners, but intelligently
simulate only the small subset required to identify the worst-case
corners with confidence.
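
Here is one way such a Fast PVT loop might look -- a minimal sketch,
not any vendor's actual algorithm.  It simulates a seed set of
corners, fits a cheap surrogate model (deliberately just linear
regression here), and keeps simulating only the corners the model
still predicts could beat the worst case found so far.  run_spice()
is a hypothetical corner-in, measurement-out hook to the underlying
simulator:

    # "Fast PVT" sketch: find the worst-case corner without running
    # every corner.  A production tool would use a stronger model
    # plus a statistical confidence test; run_spice() is hypothetical.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def fast_pvt(corners, run_spice, n_seed=8,
                 rng=np.random.default_rng(0)):
        """corners: (N, d) array of PVT settings.  Assumes a larger
        measured value is worse.  Returns (worst_corner, worst_value)."""
        vals = {int(i): run_spice(corners[i])
                for i in rng.choice(len(corners), n_seed, replace=False)}
        while len(vals) < len(corners):
            done  = list(vals)
            model = LinearRegression().fit(corners[done],
                                           [vals[i] for i in done])
            pred  = model.predict(corners)
            # Most-suspect corner not yet simulated:
            nxt = max((i for i in range(len(corners)) if i not in vals),
                      key=lambda i: pred[i])
            # Stop when the model says nothing unsimulated can beat
            # the worst value already measured.
            if pred[nxt] <= max(vals.values()):
                break
            vals[nxt] = run_spice(corners[nxt])
        worst = max(vals, key=vals.get)
        return corners[worst], vals[worst]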

The meta-simulator category covers a broad set of high-value analysis
capabilities: fast PVT analysis; fast extraction of statistical
corners at the 3-sigma level of circuit performances (rather than
device performances); and fast sensitivity analysis.  The ideal
meta-simulator for memory design addresses the memory designer's key
challenge: measuring yield-performance tradeoffs out at 5 or 6 sigma,
with the same accuracy as millions or billions of Monte Carlo
simulations, but at a low computational cost.  Other methods, such as
importance sampling, also "reach out" to the high-sigma regions, but
since they don't use Monte Carlo samples at the core, they have issues
with scalability, accuracy, and user trust.

To my knowledge, there is only one "meta-simulator" for memory design that
considers millions or billions of Monte Carlo samples, and focuses its
SPICE simulations on the extreme output values to provide yield-
performance tradeoffs.  That is the "High-Sigma Monte Carlo" (HSMC) tool
from Solido Design.  Let me share with you a result:

A Solido HSMC 5 billion Monte Carlo sample run can take just 15 minutes.

Solido HSMC has the attributes to be an ideal "meta-simulator" for memory
design: a command-line interface; interfaces to the leading SPICE
simulators; and parallelization to hundreds of cores/machines.
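
In broad strokes -- and this is my sketch of the idea as described
above, not Solido's actual algorithm -- a high-sigma Monte Carlo flow
looks something like this.  Drawing the Monte Carlo samples is cheap
(they are just process-parameter vectors); the trick is spending the
SPICE budget only on the samples predicted to land in the tail.
run_spice() is again a hypothetical hook:

    # High-sigma Monte Carlo, in broad strokes (NOT Solido's actual
    # algorithm): generate a huge set of Monte Carlo sample points
    # cheaply, simulate a small training subset, fit a predictor,
    # then spend the remaining simulation budget only on the samples
    # predicted to be most extreme.  A real tool would stream the
    # samples rather than hold them all in memory.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def hsmc_sketch(n_samples, n_dims, run_spice,
                    n_train=2000, n_tail=5000,
                    rng=np.random.default_rng(0)):
        # 1. All Monte Carlo samples (e.g. normalized Vt shifts).
        #    No simulation cost yet.
        X = rng.standard_normal((n_samples, n_dims))

        # 2. Simulate a small random subset to train a predictor.
        train = rng.choice(n_samples, n_train, replace=False)
        model = GradientBoostingRegressor().fit(
            X[train], [run_spice(X[i]) for i in train])

        # 3. Rank ALL samples by predicted output (larger == worse)
        #    and simulate only the predicted tail.
        tail  = np.argsort(-model.predict(X))[:n_tail]
        worst = sorted((run_spice(X[i]) for i in tail), reverse=True)

        # 4. The simulated extremes stand in for the far tail of the
        #    full n_samples-point Monte Carlo: worst[0] approximates
        #    the 1-in-n_samples worst output.
        return worst

The appeal versus importance sampling is that the verdict is still
anchored in true Monte Carlo samples -- which is exactly the "user
trust" point above.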

In the 18 months since its introduction, HSMC has gone into use at 7
of the top 20 semiconductor vendors.  As a reference, NVidia discusses
using Solido HSMC for memory design in ESNUG 492 #10.

I predict that meta-simulation will be at the center of the looming
SPICE memory simulation battle, driven by the need for high-yield,
high-performance designs.  Its time has come, and it looks like the
first SPICE killer app is Solido High-Sigma Monte Carlo (HSMC).

Game on.

   - Jim Hogan
     Vista Ventures LLC                          Mountain View, CA

  Editor's Note: Jim Hogan is a former CDNS bigwig, a veteran EDA private
  investor, and the Chairman of the Board of Directors at Solido.  - John