DAC'14 Troublemakers Panel in San Francisco, CA
Cooley: "Dean. Speaking of Big Data. You did a thing about
Big Data on DeepChip in ESNUG 539 a few weeks before
DAC saying 'get ready for Big Data! get ready for Big
Data!' and a whole bunch of engineers replied to me
going: 'what does Big Data have to do with EDA?'"
Drako: "No one would have expected saving up logfiles from the
Amazon.com servers, and sifting through them would give
them incredible insights into the operations and the
optimizations of those servers. Basically they were
able to keep those machines up and running a heck of a
lot more than they ever were before. They were able to
reduce their costs significantly. And actually they
were able to fish out data about trends and purchases."
Sawicki: "We do a lot of business at MENT leveraging a lot of test
information. I'll give you an example, we had a 130 nm
customer who had a chip that was getting 92% yield; it
would be interesting if we got that higher.
By pulling massive amounts of data from all their
tester log files, spinning that through for
statistical analysis, and then going through and
correlating that to physical data -- in about a month's
worth of work they found out they had a CMP issue that
was causing via bridging in the lower right quadrant of
the wafer -- and they got 2 points of yield from one
engineer doing Big Data analysis for 1 month.
By grabbing massive amounts of tester data and doing
the analysis you can find amazing things. A tester does
a wonderful thing. It takes a production chip and turns
it into a test chip. Pull that stuff together, do the
analysis, and you can find amazing things. We've found
reasons for yield crashes, grabbed people 2 points of
yield, we've helped designers find out that they should
really use a better CMP tool because their current one
doesn't work very well. It's a cool trend."
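The workflow Sawicki describes — pooling per-die tester results and correlating failures to their physical location on the wafer — can be sketched roughly as follows. This is a minimal illustration, not any vendor's tool; the record layout, field names, and toy data are all assumptions:

```python
# Rough sketch of spatial yield analysis on pooled tester data.
# Hypothetical flat format: one record per tested die, with its
# (x, y) wafer coordinates and a pass/fail result.
from collections import Counter

def quadrant(x, y, center=(0.0, 0.0)):
    """Map a die's wafer coordinates to one of four quadrants."""
    cx, cy = center
    horiz = "right" if x >= cx else "left"
    vert = "upper" if y >= cy else "lower"
    return f"{vert} {horiz}"

def yield_by_quadrant(die_records):
    """Aggregate pass/fail counts per quadrant and compute yield."""
    passes, totals = Counter(), Counter()
    for rec in die_records:
        q = quadrant(rec["x"], rec["y"])
        totals[q] += 1
        if rec["passed"]:
            passes[q] += 1
    return {q: passes[q] / totals[q] for q in totals}

# Toy data: failures clustered in the lower-right quadrant,
# echoing the via-bridging CMP signature in the anecdote.
dies = (
    [{"x": 1, "y": 1, "passed": True}] * 50 +
    [{"x": -1, "y": 1, "passed": True}] * 50 +
    [{"x": -1, "y": -1, "passed": True}] * 50 +
    [{"x": 1, "y": -1, "passed": True}] * 40 +
    [{"x": 1, "y": -1, "passed": False}] * 10
)
print(yield_by_quadrant(dies))  # the lower-right quadrant stands out
```

A real analysis would bin dies far more finely and run significance tests per region, but the shape of the job is the same: join tester logs to physical coordinates, then look for spatial structure in the failures.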
Drako: "The interesting thing about what we do in EDA is we run
lots of DRCs and lots of simulations, an awful lot of
simulations. We look at the results and then throw them
all away -- because we're just trying to keep the final
thing."
Cooley: "But what you're talking about is saving Terabytes of
data -- if not more. That starts getting crazy."
Drako: "That's why it's called Big Data." [audience laughs]
"It gets big enough that you're not going to want to save
all this intermediate data in your net-outs because it's
probably going to cost you too much money.
But if you save all of this intermediate data, and then
you actually start to dig through it using a lot of the
commercial tools that have been developed for Big Data
analytics, you're going to start to get insights into your design
process.
We all know if you can get designs done more quickly,
i.e. we can shave 2 weeks or 4 weeks off of a schedule
for the same quality, that's going to be a big win.
When you have 150 engineers all working on the same chip,
you don't know exactly what's going on, and who's working
on what, and what the hold-ups are, in different places.
If you can get Big Data analytics into what is really
going on, in a real-time fashion, you'll be able to
optimize your chip's design process."