( ESNUG 515 Item 2 ) -------------------------------------------- [11/29/12]
From: [ Jim Hogan of Vista Ventures LLC ]
Subject: Key aspects, market drivers for the present Custom 2.0 retooling
Hi, John,
As I said in my earlier short history, Custom 1.0 started in the early 1980s
and has had a run for over 30 years. This is something we never dreamed of
at the beginning with Cadence Analog Artist. Looking at 2013 and beyond,
the semiconductor industry will be doing a major Custom 2.0 retooling,
because it must.
CONSUMERS
Chips have become so powerful and low-cost at high volumes that they have
enabled consumers to carry devices which are basically supercomputers in
their pockets. These consumers want to use video, and they don't want the
batteries going dead. The chip makers that can supply low-power,
high-performance, low-cost chips will reap the market rewards. But competition
is fierce.
THOSE PESKY ATOMS
The market pressures for gigahertz design frequencies, low cost, low power,
and fast-yield-ramp designs are driving the move to sub-28 nm processes.
However, while transistors are shrinking, atoms aren't.
Just a few out-of-place atoms can cause severe variability in a device,
which translates into severe variability in circuits, ultimately affecting
the whole chip's power, performance, and yield. The complexity is further
exacerbated by 3D FinFET transistors.
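To make the device-to-circuit amplification concrete, here is a
back-of-envelope Monte Carlo sketch. It is purely illustrative, not any
vendor's flow: the alpha-power delay model, the 30 mV threshold-voltage
sigma, and all constants are made-up assumptions standing in for what a
real flow would get from full SPICE runs on a characterized PDK.

```python
import random
import statistics

# Illustrative alpha-power-law delay model for a single gate:
# delay ~ k * Vdd / (Vdd - Vth)^alpha.  All values are assumptions
# for demonstration; real variability analysis runs SPICE decks.
def gate_delay(vth, vdd=0.9, alpha=1.3, k=1.0):
    return k * vdd / (vdd - vth) ** alpha

random.seed(42)
NOMINAL_VTH, SIGMA = 0.30, 0.03   # volts; e.g. random dopant fluctuation

# Sample Vth around nominal and propagate to delay.
delays = [gate_delay(random.gauss(NOMINAL_VTH, SIGMA))
          for _ in range(10_000)]

mean = statistics.mean(delays)
spread = statistics.stdev(delays) / mean
print(f"mean delay {mean:.3f} (arb. units), relative sigma {spread:.1%}")
```

Even this toy model shows a several-percent delay spread from a few tens
of millivolts of threshold variation; at the whole-chip level those
spreads compound into the power, performance, and yield problems above.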
Accordingly, the major market drivers for the Custom 2.0 retooling are the
power and performance of these consumer devices. Memory circuits now must
be power-sensitive, despite having to read and write increasingly large
volumes of data. Analog blocks must be power-sensitive as they take on
increasing responsibility in voltage regulation, clocking, and interfacing
to the outside world. CPUs and logic are getting faster, within power
density limits, and the increased compute load is multiplying CPU cores
faster than rabbits. Taken together, this is a *step* function change.
Figure 1 below illustrates this step function change, and starts to hint at
some of the problems. To have a chance to hit these power and performance
goals, SoCs will now need to be at 28 nm, 20 nm, and below.
Figure 1: SoC design needs driving Custom 2.0
That means somehow these SoCs must deal with variability plus all the other
headaches wrought by moving analog designs to the smaller nodes.
Insatiable consumer demand for more memory is driving up the percentage of
embedded memory on SoCs -- according to Semico, embedded memory will account
for 65% of the die area of SoCs by 2014.
Single cores become dual cores, dual cores become quad cores, and app core
count will jump to 120.
Clock speeds are hitting the true gigahertz range, adding the associated
headaches of power consumption and RF-style design issues. In addition,
power and performance are highly sensitive to variation.
ANALOG NOT GETTING ANY EASIER
According to Rachel Parker of Intel at ICCAD 2012, analog currently takes
30% of the area on modern CPU chips. Analog will increase to 50% by 2015,
and if nothing is done to prevent it, analog will go to 90% by 2020!
What's worse, analog scales badly with Moore's Law. Custom 1.0 SPICE
simulation and analysis are simply not scaling; the increased complexity
and smaller processes demand high-precision SPICE simulators with orders
of magnitude greater speed and capacity along with SPICE simulation
reduction technology.
Bringing in outside 3rd-party analog IP is an integration/verification
nightmare for even the most experienced analog design team.
Reuse of internally developed IP is increasing across global teams, too,
and with it, dependency management is becoming a daily issue for designers.
To solve all these problems simultaneously, analog R&D budgets will have to
grow considerably. Custom 2.0 will involve major retooling. It won't be
cheap to implement by any means.
HOGAN'S FIRST LOOK AT CUSTOM 2.0
So what will the Custom 2.0 retooling look like? A top-level summary of its
key aspects is in this figure below:
Figure 2: Custom 2.0 Key Aspects
I see a new generation of EDA custom tools, which have been designed to:
1. handle variability quickly and accurately, with a reasonable
number of simulations.
2. improve SPICE simulator performance and capacity by at least
one to two orders of magnitude, enabling new types of simulation
that were previously impossible.
3. use clever device models to leverage 3D process modeling.
4. bind design data management, bug tracking, and EDA tools
together for the deep dependency management required for true
3rd-party IP integration plus in-house IP reuse.
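Why does item 1 insist on a "reasonable number of simulations"? The
arithmetic below sketches the problem with naive Monte Carlo at high
sigma, assuming only Gaussian parameter statistics; the target of ~100
observed failures for a usable estimate is a common rule of thumb, not
a figure from this article.

```python
import math

def fail_prob(sigma):
    # One-sided Gaussian tail probability beyond `sigma` standard
    # deviations: Q(sigma) = 0.5 * erfc(sigma / sqrt(2)).
    return 0.5 * math.erfc(sigma / math.sqrt(2))

def runs_needed(sigma, fails_observed=100):
    # Naive Monte Carlo: to observe ~100 failures you need roughly
    # fails_observed / p samples, i.e. that many full SPICE runs.
    return fails_observed / fail_prob(sigma)

for s in (3, 4, 5, 6):
    print(f"{s}-sigma yield check: ~{runs_needed(s):.1e} SPICE runs")
```

A 4-sigma check already needs millions of runs, and 6-sigma (routine
for large embedded memories, where a bit cell is replicated millions of
times) needs on the order of 10^11. Hence the demand for faster
simulators *and* simulation-reduction technology, not one or the other.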
All this combined delivers my vision of "Custom 2.0." In the next four
sections I will detail the drivers and major market players of each of
the four areas shown above.
- Jim Hogan
Vista Ventures, LLC, Los Gatos, CA
---- ---- ---- ---- ---- ---- ----
Related Articles
Hogan on the early days of Custom 1.0 and Cadence Analog Artist
Atomic scaling problems, Variation, and the Custom 2.0 retooling
Custom 2.0 means that SPICE runs must BOTH be fast AND accurate
3D FinFETs mean lots and lots of SPICE runs even with Custom 2.0
Custom 2.0 is design data dependencies, NOT design data management
Hogan outlines the current players for the Custom 2.0 retooling