( ESNUG 480 Item 4 ) -------------------------------------------- [03/19/09]
Editor's Note: In ESNUG's entire 18 year history, this is the first
time I've ever seen a TSMC engineer write in to *publicly* confirm
anything *any* EDA CEO has ever claimed *anywhere*. Whoa! - John
Subject: TSMC user confirms Magma Titan takes mins vs. weeks for 2nd design
> Rajeev Madhavan: No, SpringSoft Laker guys provided a better layout editor
> and a better schematic editor. OK, so you have a better GUI than the
> other guy's GUI? Nobody buys based on that in EDA! No one will buy based
> on that. You have to provide value propositions, additions to the
> customers, that they can do something which can now be done. I'm telling
> you, in Cadence, you cannot capture a schematic or a layout and migrate it
> from one process node to the other, simulate it in hundreds of corners,
> validate it, none of that can be done.
>
> John Cooley: So, you are saying you have functionality that Cadence
> doesn't have?
>
> Rajeev Madhavan: Absolutely doesn't have!
>
> - from http://www.deepchip.com/gadfly/gad022709.html
From: Eric Soenen <esoenen=user domain=tsmc not mom>
Hi John,
There seems to be some confusion about what Titan's Analog Migration (TAM)
can and cannot do. As part of TSMC's internal evaluation of key design
tools provided by various EDA suppliers, we had the opportunity to use
Magma's TAM first-hand. We have actually used it twice, with good results.
The first time, we optimized a small op-amp for a first-order sigma-delta
ADC in a TSMC N40LP CMOS process. The ADC is clocked at 100 MHz and runs
off a 1.1 V supply (0.95 V worst-case). It uses core 40 nm CMOS transistors.
TAM allowed us to meet speed and open-loop gain specs in the most power-
efficient way possible, across process corners.
We used 3 to 5 corners for the initial analysis, and 17 worst-case process
corners for the final run. We have verified the final design using corner
and Monte-Carlo analysis in Cadence Spectre, and now also in silicon. The
circuit behaves as expected. (Actually, we believe it is more robust than
what our designers could have achieved by hand.)
More recently, we used Titan for a higher level of optimization.
Our team developed a scalable architecture for a high-performance (95 dB+)
Audio ADC. It's currently in a 0.18 um CMOS process but can easily be
retargeted to any other similar process. This ADC is based on a
second-order, multi-bit sigma-delta architecture. The topology involves
two op-amps, which have to be sized and optimized very carefully to meet the
speed, settling and noise requirements. The op-amps drive two sets of
switched capacitors, which are optimized in conjunction with the op-amps.
We coded all the relevant circuit equations into Titan and ran it to
synthesize the whole design hierarchically. We supplied a top-level spec
(the SNR of the ADC); TAM then optimized the circuit all the way down to
the size of every transistor and capacitor, minimizing total power
consumption (and area). Again, the optimized design has been
verified using circuit simulations. No silicon results are available yet.
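As a rough illustration of what such an equation-driven flow looks like,
here is a minimal sketch in Python, with scipy standing in for the
optimizer. This is not Titan's input language; the equations, numbers and
variable names below are generic first-order assumptions, not our
production setup. A top-level SNR spec is translated into constraints on
one op-amp and its sampling capacitor, and supply power is minimized
subject to those constraints.

    # Minimal sketch only: first-order square-law equations for one op-amp
    # driving a sampling capacitor, sized for minimum power under a GBW
    # (settling) constraint and a kT/C noise (SNR) constraint.  scipy is a
    # stand-in for the optimizer; all numbers are assumptions.
    import numpy as np
    from scipy.optimize import minimize

    K_B, T   = 1.38e-23, 300.0     # Boltzmann constant, temperature (K)
    VDD, VOV = 1.1, 0.15           # assumed supply and overdrive voltage (V)
    F_CLK    = 100e6               # sampling clock (Hz)
    SNR_SPEC = 70.0                # spec handed down from the top level (dB)
    VSIG_RMS = 0.35                # assumed rms signal level (V)

    def evaluate(x):
        """x = [bias current in uA, sampling cap in pF] (scaled for solver)."""
        i_d = x[0] * 1e-6
        c_s = x[1] * 1e-12
        gm  = 2.0 * i_d / VOV                      # square-law transconductance
        gbw = gm / (2.0 * np.pi * c_s)             # unity-gain bandwidth (Hz)
        snr = 10.0 * np.log10(VSIG_RMS**2 / (K_B * T / c_s))  # kT/C-limited SNR
        return gbw, snr, VDD * x[0]                # bandwidth, SNR, power (uW)

    constraints = [
        # settling: keep the op-amp bandwidth well above the clock rate
        {'type': 'ineq', 'fun': lambda x: evaluate(x)[0] / F_CLK - 5.0},
        # noise: sampled kT/C noise must still meet the SNR spec
        {'type': 'ineq', 'fun': lambda x: evaluate(x)[1] - SNR_SPEC},
    ]

    result = minimize(lambda x: evaluate(x)[2],    # objective: minimize power
                      x0=[100.0, 1.0],             # initial guess: 100 uA, 1 pF
                      bounds=[(1.0, 10000.0), (0.05, 50.0)],
                      constraints=constraints, method='SLSQP')

    i_d_ua, c_s_pf = result.x
    print("I_D = %.1f uA, C_S = %.2f pF, power = %.1f uW"
          % (i_d_ua, c_s_pf, VDD * i_d_ua))

The real design involves far more devices, specs and corners, but the
principle is the same: once the equations exist, sizing becomes a
constrained optimization rather than a manual simulate-and-tweak loop.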
> Magma claimed that the initial migration from a particular node can take
> several weeks, but that later migration (e.g. from 65 nm to 45 nm) will
> go much faster, on the order of 1 day, instead of the average 12 weeks.
>
> If you believe the migration will only take 1 day, I have a section of
> ocean 50 miles off the coast of Florida I'd like to sell you. Analog
> designers just cannot know what new issues they will uncover at 65 nm
> and 45 nm prior to the existence of a SPICE model or a set of design
> rules, so turnaround is simply not possible to predict.
>
> - from http://www.deepchip.com/items/0476-04.html
It's true that TAM (originally Sabio before LAVA bought it) requires a
designer to first code his circuit knowledge into the tool. This is done
using equations, which are similar to the ones used in college circuit
design textbooks or scientific publications. Writing and testing the
equations can take significant extra time. It's not uncommon for this
step to add 20 to 50% to the overall design cycle.
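For readers who have not seen the approach, the equations in question are
the same first-order hand-analysis relations found in any analog textbook.
A few generic examples, written here in Python purely for illustration
(Titan has its own template format, and none of this is Magma or TSMC
material):

    # Generic textbook relations of the kind a designer encodes; the syntax
    # is illustrative Python, not Titan's actual template language.
    import math

    K_B = 1.38e-23                     # Boltzmann constant (J/K)

    def gm(i_d, v_ov):
        """Square-law transconductance: gm = 2 * I_D / V_OV."""
        return 2.0 * i_d / v_ov

    def unity_gain_bw(gm_val, c_load):
        """Unity-gain bandwidth of a single-stage op-amp: gm / (2*pi*C_L)."""
        return gm_val / (2.0 * math.pi * c_load)

    def slew_rate(i_tail, c_load):
        """Slew rate of a differential pair into its load capacitance."""
        return i_tail / c_load

    def ktc_noise_vrms(c_sample, temp_k=300.0):
        """RMS sampled thermal noise of a switched capacitor: sqrt(kT/C)."""
        return math.sqrt(K_B * temp_k / c_sample)

The optimizer's job is then to choose currents, device sizes and capacitor
values so that all such relations meet their specs at the same time.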
When seen from the perspective of a single design, this overhead for
writing circuit equations may seem steep. It doesn't save you much on
tiny designs, but TAM shines on larger, more complex designs. This is
because once the equations are written, the optimization of all device
sizes in the design can happen very efficiently, and much more quickly
than through successive manual simulations.
  - Writing the equations took us about 1 man-week in the case of
our small 40 nm opamp. The overall equation-based approach took
about 3 weeks total (including a 50% overhead to get familiar with
the tool). A traditional manual design of it would probably have
taken 2 weeks.
- In the case of the full-blown Audio ADC, writing the equations took
about 4 man-weeks. This compares to about 20 man-weeks spent on
determining the best circuit topology! Because the equations
    subsequently sped up device sizing, the true design-time overhead of
    writing the equations was limited (on the order of 10%).
Still, the real time savings with Sabio/TAM come when the same equations are
*REUSED* in a new design (same topology but different spec and/or different
process). A new design based on the same topology in Sabio can literally
be completed in HOURS rather than WEEKS.
- In the case of our 40 nm opamp, a complete optimization run took
15 minutes over 3 corners and 45 minutes over all 17 corners.
- In the case of the Audio ADC, it took around 1 hour for a typical
corner and 13 hours for all 17 corners.
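A toy sketch may help explain why a re-run is so much faster than a
redesign. In the sketch below (illustrative Python; the process names and
parameter values are invented, not real TSMC data), the equations and the
sizing routine never change between designs; only the process parameters
and the spec passed into them do.

    # Toy closed-form sizing of one switched-capacitor stage.  The equations
    # never change from design to design; only the process parameters and
    # the spec do.  All names and values here are invented placeholders.
    import math

    K_B, T = 1.38e-23, 300.0

    PROCESSES = {
        "proc_180nm": {"vdd": 1.8, "vov": 0.25},   # placeholder parameters
        "proc_40nm":  {"vdd": 1.1, "vov": 0.15},   # placeholder parameters
    }

    def size_sc_stage(proc, snr_db, vsig_rms, f_clk, gbw_margin=5.0):
        """First-order sizing: pick C_S from kT/C noise, then I_D from GBW."""
        # smallest sampling cap whose kT/C noise still meets the SNR spec
        c_s = K_B * T * 10.0 ** (snr_db / 10.0) / vsig_rms ** 2
        # bias current giving the op-amp enough bandwidth to settle at f_clk
        gm  = 2.0 * math.pi * c_s * gbw_margin * f_clk
        i_d = gm * proc["vov"] / 2.0
        return {"C_S_pF": c_s * 1e12, "I_D_uA": i_d * 1e6,
                "power_uW": proc["vdd"] * i_d * 1e6}

    # Original design, then a "migration": same code, new process and spec.
    print(size_sc_stage(PROCESSES["proc_180nm"], snr_db=95, vsig_rms=0.5,
                        f_clk=6.4e6))
    print(size_sc_stage(PROCESSES["proc_40nm"],  snr_db=70, vsig_rms=0.35,
                        f_clk=100e6))

A real migration involves full topologies and many corners, which is where
the hours-versus-weeks difference quoted above comes from, but the mechanics
are the same: swap the inputs and re-run.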
Another advantage of the equation methodology is that it imposes a rigorous
analysis of your circuit. Once that analysis is complete, it typically
results in a better (i.e. faster, lower power, more robust, etc.) design
than what could have been achieved through blind manual circuit iterations.
Finally, I would like to point out the possibility of third parties writing
Sabio/TAM equations (or circuit "templates"), and trading or selling them.
This could even lead to a variation on the traditional IP business model.
Instead of buying a finalized IP block from a vendor, I could buy the
equations and optimize the design in TAM, to fit my particular application
in my particular process. (Or the IP company can run the tool for me, so I
don't even have to see the equations myself.)
> In reality, each new technology node will have new concerns that were not
> built into the original design.
>
> For example, at 65 nm, WPE and STI are primary, in your face, concerns
> where at 250 nm they did not exist. Building in the new parameterizations
> based on the new concerns takes more than 1 day but still definitely less
> than 12 weeks.
>
> Also, the new parameters for the "layout effects" at 45 nm and 40 nm will
> need to be built into the migration tool, taking more than one day but
> again most likely less than 12 weeks.
>
> - from http://www.deepchip.com/items/0476-04.html
It's true that advanced process technologies (65 nm and especially 45 nm and
below) do not make life any easier for analog designers. On the layout
side, "proximity effects" (poly, well and OD) create a new interactions
between devices, in addition to the more traditional RC parasitic effects.
Analyzing and designing around these proximity effects is a pain with or
without Titan tools. They typically require careful post-layout extraction
and re-simulation of completed circuits.
There are two fundamental ways to deal with proximity effects during the
design phase: one is to describe the effects and include them as part of
the optimization process. The other is to try to avoid the effects.
We can see how, long-term, the Titan AM tool could provide hooks to describe
proximity effects (which are directly related to the layout). This might
involve using the Euclid floorplanning tool (which is part of Titan AM), to
feed back preliminary layout information to the optimizer. However, that
capability does not exist today.
But until such capability is actually built in, there are simpler ways to
side-step most proximity issues. The most obvious one is to make sure
analog components are spaced a certain minimum distance apart from each
other. Leaving an extra 1/2 or 1 um space between two critical transistors
of an op-amp will not result in the type of die area or cost penalty the
same spacing would cause in a minimum-spaced NAND gate. But it will
improve robustness and ease of design a lot.
A practical method to do this is through using simplified, "analog-only"
DRC rules. The idea is to lay out analog blocks using more conservative
rules (i.e. larger spacing) than what the process allows. The spacings are
chosen large enough so that proximity effects become negligible. Layouts
that pass the "analog-only rules" would automatically pass the more
aggressive process rules, but not the other way around.
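As a concrete (and much simplified) illustration, a spacing check of this
kind is easy to script. The Python below is only a sketch: the rule values
are invented for the example, real decks check edge-to-edge spacing per
layer, and none of these numbers are actual TSMC design rules.

    # Simplified spacing check: every critical analog device pair must sit
    # at least ANALOG_SPACING_UM apart, a deliberately conservative number
    # chosen so that proximity effects (WPE, STI stress, etc.) become
    # negligible.  All values are illustrative, not real foundry rules.
    import itertools
    import math

    PROCESS_MIN_SPACING_UM = 0.14   # what the process would allow (example)
    ANALOG_SPACING_UM      = 1.0    # conservative "analog-only" rule (example)

    # (name, x, y) centers of critical matched devices, in microns
    devices = [
        ("M1", 0.0, 0.0),
        ("M2", 1.2, 0.0),
        ("M3", 1.2, 0.6),
    ]

    def spacing_violations(devs, min_spacing_um):
        """Return all device pairs closer than the given spacing rule."""
        bad = []
        for (na, xa, ya), (nb, xb, yb) in itertools.combinations(devs, 2):
            d = math.hypot(xa - xb, ya - yb)
            if d < min_spacing_um:
                bad.append((na, nb, d))
        return bad

    # A layout clean against the conservative analog rule is automatically
    # clean against the (smaller) minimum process rule -- not vice versa.
    for name, rule in [("analog-only", ANALOG_SPACING_UM),
                       ("process minimum", PROCESS_MIN_SPACING_UM)]:
        v = spacing_violations(devices, rule)
        print("%s rule (%.2f um): %d violation(s) %s" % (name, rule, len(v), v))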
By using these layout rules in conjunction with Titan Analog Migration, the
proximity issues can be largely sidestepped and robust, predictable analog
performance can be obtained. As a final check, the layout of the analog
block should be back-annotated for proximity effects and re-simulated
(which is what would be done anyway if TAM were not used).
We have used this approach somewhat informally for our 40 nm sigma-delta
ADC design and found the overall area penalty to be 10-20% at most.
> Even though the entire known world is trying to reliably produce analog
> circuits at the smallest possible technology node, there are process
> die-to-die and intra-die variations exponentially increasing with each
> smaller process node. Where digital circuits have a somewhat built-in
> immunity to variation (you can build margin into your library of cells
> to reduce the variation penalties) this is not true for "edgy" analog
> circuits. Variation could define a window of operation between 500 MHz
> and a 5 GHz peak oscillation frequency when your target was 3 GHz. The
> key is going to be identifying the yield limiting variation at each
> technology node right down to the smallest contributor, quickly, and
> then being able to correct the problem without re-building the model.
>
> - from http://www.deepchip.com/items/0476-04.html
Die-to-die variation is a problem TAM solves efficiently and reliably. It
optimizes a design (i.e. sizes the components) across multiple process
corners simultaneously. The key word here is "simultaneously" because
after such an optimization, the circuit will be guaranteed to meet all specs
across all corners. The number of corners or scenarios can be increased at
will (more corners just take a longer time to run).
If the process variation becomes such that a particular spec cannot be
guaranteed across all corners, Titan Analog Migration will return that it's
"infeasible". Such "infeasible" designation is pretty much final. No human
designer will be able to do any better using the same topology. Either the
spec will have to be relaxed, or alternative methods will have to be
considered (like the use of calibration or trimming). FWIW, this result
saves a lot of time by showing that a design is "infeasible" early on. It
can also tell you which spec is most problematic.
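In optimizer terms, "simultaneously" means every spec is enforced as a
constraint in every corner at once, and "infeasible" means no sizing can
satisfy all of those constraints. The toy sketch below (illustrative Python
with invented corner names, spec limits and a crude evaluation model; it is
not Titan's algorithm) shows the bookkeeping: evaluate each spec across the
corner list, keep the worst case, and when the worst case misses its target,
report which spec falls furthest short.

    # Toy multi-corner check: the design must meet every spec in every
    # corner simultaneously; if it cannot, report the most-violated spec.
    # Corner names, spec limits and the evaluation model are all invented.

    CORNERS = {
        # corner: (mobility factor, temperature in C) -- placeholder model
        "tt_25C":  (1.00,  25.0),
        "ss_125C": (0.80, 125.0),
        "ff_m40C": (1.20, -40.0),
    }

    SPECS = {                      # spec name: minimum acceptable value
        "gbw_MHz": 500.0,
        "gain_dB": 60.0,
    }

    def evaluate(sizing, corner):
        """Crude stand-in for a circuit evaluation at one corner."""
        mobility, temp_c = corner
        gm = 2.0 * sizing["i_d"] / sizing["v_ov"] * mobility
        gbw_mhz = gm / (2.0 * math.pi * sizing["c_l"]) / 1e6
        gain_db = sizing["gain_nom_db"] - 0.05 * (temp_c - 25.0)
        return {"gbw_MHz": gbw_mhz, "gain_dB": gain_db}

    def worst_case(sizing):
        """Worst (lowest) value of each spec over all corners."""
        return {s: min(evaluate(sizing, c)[s] for c in CORNERS.values())
                for s in SPECS}

    def feasibility_report(sizing):
        wc = worst_case(sizing)
        misses = {s: (SPECS[s] - wc[s]) / SPECS[s]
                  for s in SPECS if wc[s] < SPECS[s]}
        if not misses:
            return "feasible: all specs met in all corners"
        worst_spec = max(misses, key=misses.get)
        return ("infeasible with this sizing: '%s' falls %.1f%% short of its "
                "target in the worst corner"
                % (worst_spec, 100.0 * misses[worst_spec]))

    import math
    sizing = {"i_d": 150e-6, "v_ov": 0.15, "c_l": 0.5e-12, "gain_nom_db": 64.0}
    print(feasibility_report(sizing))

In the real tool the sizing search and the corner sweep are combined, but
the conclusion is the same: if no sizing satisfies all worst-case
constraints, either the spec or the topology has to change.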
For the record, tools like Sabio/Titan do not replace senior, knowledgeable
analog designers. They only automate certain repetitive aspects of a design
(like simulating over corners and finding the best sizing for all devices).
This in turn frees up the designer to focus on where he or she adds the most
value: the search for optimal circuit topologies and the detailed analysis
of these topologies.
- Eric Soenen
TSMC Austin, TX