( ESNUG 587 Item 1 ) ---------------------------------------------- [05/30/19]
Subject: A cheeky Sawicki invites Anirudh to Calibre 4,140 cloud CPUs lunch
We compared power, performance, area (PPA), and runtime through 5 flows
using the TSMC CLN7FF library:
  Tools                                Flow Name       Comment
  ----------------------------------   --------------  ------------------
  DC -> test ins -> ICC2 -> PT         SNPS-All        our old SNPS flow
  SNPS Fusion Compiler -> PT           SNPS-New        new SNPS flow
  DC -> test ins -> Innovus -> PT      Innovus-PT      old Innovus flow
  DC -> test ins -> Innovus -> Tempus  Innovus-Tempus  old Innovus+Tempus
  Genus -> Modus -> Innovus -> Tempus  CDNS-All        CDNS only flow
Our goal was to have "Mongo" with 3.0 M inst, a number of ARM cores and
hard macros, and some very tight power requirements reach 3.2 GHz (or
better) in TSMC CLN7FF.
  Flows           Best Frequency  TNS left      Total Power   TAT
                  Achieved        on table
  --------------  --------------  ------------  ------------  ----------
  SNPS-All        2.87 GHz         97 nsec      1,838 mW      14.7 days
  SNPS-Fusion     2.67 GHz        165 nsec      1,923 mW      12.4 days
  Innovus-PT      3.06 GHz         44 nsec      1,720 mW      11.7 days
  Innovus-Tempus  3.12 GHz         24 nsec      1,667 mW       9.8 days
  CDNS-All        3.22 GHz          0 nsec      1,586 mW       8.2 days
We found that the "CDNS-All" flow consistently gave us the best PPA in
the shortest runtime of any of the flows.
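For readers who want the headline deltas spelled out, here is a minimal Python sketch that transcribes the benchmark table above and computes the CDNS-All vs. SNPS-All percentage changes. The raw numbers are copied from the table; the percentage arithmetic is my own back-of-envelope calculation, not a figure from the original post.

```python
# Benchmark data transcribed from the ESNUG 587 #1 results table
# (units as reported: GHz, nsec of TNS, mW, days of turnaround time).
flows = {
    "SNPS-All":       {"freq_ghz": 2.87, "tns_ns": 97,  "power_mw": 1838, "tat_days": 14.7},
    "SNPS-Fusion":    {"freq_ghz": 2.67, "tns_ns": 165, "power_mw": 1923, "tat_days": 12.4},
    "Innovus-PT":     {"freq_ghz": 3.06, "tns_ns": 44,  "power_mw": 1720, "tat_days": 11.7},
    "Innovus-Tempus": {"freq_ghz": 3.12, "tns_ns": 24,  "power_mw": 1667, "tat_days": 9.8},
    "CDNS-All":       {"freq_ghz": 3.22, "tns_ns": 0,   "power_mw": 1586, "tat_days": 8.2},
}

def pct_delta(new, old):
    """Percentage change going from `old` to `new`."""
    return 100.0 * (new - old) / old

base, best = flows["SNPS-All"], flows["CDNS-All"]
print(f"Frequency: {pct_delta(best['freq_ghz'], base['freq_ghz']):+.1f}%")  # +12.2%
print(f"Power:     {pct_delta(best['power_mw'], base['power_mw']):+.1f}%")  # -13.7%
print(f"TAT:       {pct_delta(best['tat_days'], base['tat_days']):+.1f}%")  # -44.2%
```

So, taking the user's numbers at face value, the all-Cadence flow reached roughly 12% higher frequency at roughly 14% lower power in 44% less turnaround time than the old all-Synopsys flow.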
- from http://www.deepchip.com/items/0587-01.html
From: [ Joe Sawicki of Mentor/Siemens ]
Hi, John,
I wish to publicly congratulate Anirudh on his all-CDNS Innovus PnR flow's
win over Aart's old ICC2 and new Fusion Compiler PnR flows in Friday's
user benchmark in ESNUG 587 #1.
Anirudh's R&D team have done well. They deserve to be commended.
BUT, I would like to draw your attention to some user-side comments in that
exact same ESNUG post. Specifically:
Other than using Calibre as our golden DRC/LVS signoff, my group doesn't
like to do piecemeal best-in-class point tools because of the inherent
support headaches a mixed flow brings.
For example, if we saw a mismatch between what Synopsys DC-Topo says and
Cadence Tempus STA, the Synopsys guys will say "it's a Cadence Tempus
problem" while the Cadence guys will say "it's a Synopsys DC problem" and
we're helplessly stuck in between them. Our only exception to that rule
is using Calibre for golden sign-off. If Calibre finds a problem,
it must be fixed upstream from Calibre. For us, what Calibre says is
gospel, everything else is suspect.
- from http://www.deepchip.com/items/0587-01.html
Back in 2017, when Anirudh made his Pegasus (see Greek myth) attack on
Calibre (when he didn't even have fab-certified run sets for Pegasus yet),
my R&D guys were already running Calibre on 2,048 CPUs.
Calibre scales 2,048 CPUs 16nm 700mm2 full chip DRC in 3.5 hours
http://www.deepchip.com/items/0577-02.html
And this was on the Amazon AWS cloud two years before "cloud" and "AWS"
became such industry buzzwords. (We were just doing engineering, and
that's what was readily available at the time.)
At this upcoming DAC'19 in Las Vegas, I'm happy to report that Mentor will
host a luncheon from 12:00 to 1:30 on DAC Tuesday at the Westgate Hotel
Ballroom D-E -- where an AMD engineer will describe how AMD scaled Calibre
to 4,140 CPUs on Microsoft Azure's cloud to complete a full-chip DRC/LVS
physical verification run in 10 hours. This 7nm chip had 12+ billion
transistors.
And since I'll be sharing this data directly with Anirudh on the upcoming
DAC'19 Troublemakers Panel, I thought it would only be polite to invite both
Anirudh and you (and the DeepChip readers) to come see this for yourselves.
- Joe Sawicki
Mentor/Siemens Wilsonville, OR
---- ---- ---- ---- ---- ---- ----
Related Articles
Joe Sawicki smirks at Cadence Pegasus' 3 big critical DRC failings
Calibre scales 2,048 CPUs 16nm 700mm2 full chip DRC in 3.5 hours
Juan Rey -- The Most Interesting Man in EDA about the Future of DRC
Anirudh's 19 jabs at Joe Sawicki's Calibre with his Pegasus launch