( ESNUG 470 Item 7 ) -------------------------------------------- [10/31/07]
From: Andrea Fedeli <andrea.fedeli=user domain=st not calm>
Subject: One hands-on user's first impressions of the Calypto SLEC tool
Hi, John,
The basic idea of SLEC is a Sequential Logic Equivalence Checker, i.e. a
tool intended to compare two designs that are structurally different
enough that no traditional Equivalence Checker (e.g. Formality,
Conformal, etc.) can handle the job. SLEC has two distinct verification
modes: RTLvsRTL and SystemVsRTL. Ideally it could be used for
SystemVsSystem as well, but we haven't used it that way.
I started having some fun applying SLEC on a spot basis in 2005, and
enjoyed using it more regularly during 2006. I haven't had the
opportunity to try its latest releases in 2007, so my report might be a
bit dated.
SystemVsRTL currently works well in combination with Mentor Catapult-C
and Forte's Cynthesizer, whose results are far more predictable than
those of generic manual refinement. This is not to say you must use
SLEC with them, but they surely make your life easier, mainly in the
way you specify how inputs and outputs have to be paired in time.
Sequential equivalence, in general, is not meant to hold on a clock
cycle-by-cycle basis, but rather in a more "relaxed" way: you ask the
two descriptions to preserve some event-order relation (with events
that may even occur at non-coincident points in time on the reference
and implementation sides), so that outputs match on corresponding
elements of those two event sequences.
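To make the notion concrete, here is a toy Python sketch of mine (just
an illustration of the definition, nothing from the tool itself): a
reference design and a retimed implementation whose per-cycle traces
differ, yet whose sequences of valid output events match.

    # Toy illustration of sequential equivalence: outputs must match as
    # event sequences, not on a cycle-by-cycle basis.

    def ref_design(inputs):
        """Reference: each result is valid in the same cycle as its input."""
        return [(True, x * 3 + 1) for x in inputs]  # (valid, value) per cycle

    def impl_design(inputs):
        """Retimed implementation: one extra pipeline stage, so every
        result shows up a cycle later than on the reference side."""
        stage, trace = None, []
        for x in inputs:
            trace.append((False, None) if stage is None
                         else (True, stage * 3 + 1))
            stage = x
        trace.append((True, stage * 3 + 1))         # flush the pipeline
        return trace

    def valid_sequence(trace):
        """Project a cycle trace onto its sequence of valid output events."""
        return [v for (ok, v) in trace if ok]

    stimulus = [2, 7, 0, 5]
    assert ref_design(stimulus) != impl_design(stimulus)  # traces differ
    assert valid_sequence(ref_design(stimulus)) == \
           valid_sequence(impl_design(stimulus))          # events match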
Such a correspondence is usually far easier to specify if the
refinement was done by a High Level Synthesis tool. At the time of my
trials SLEC had good integration with Mentor Catapult-C and fair,
rapidly improving support for Forte's Cynthesizer. To be fair, I was
not directly involved in the use of SystemVsRTL, so I can't say much
more than this.
Where I played my part was on RTLvsRTL, which had great appeal due to
the amount of retiming and sequential optimization we apply to many of
our designs. In this field I had some success stories, closing the
verification of descriptions that had tremendously challenged (with no
final success) other, combinational, equivalence checkers. There we
succeeded in showing the equivalence of designs with hundreds of
thousands of gates and thousands of flops on each side. The job was
never straightforward, though, once the design exceeded a few hundred
flops. The experience I collected led me to identify the following
critical points:
1) The optimizations performed usually rely on assumptions in the
   designer's mind that are often poorly documented; tuning the set of
   (possibly sequential) constraints under which the two descriptions
   are expected to be equivalent takes some time (days) if you are not
   side-by-side with the designer.
2) By default the tool tries all mapped outputs at once. When
   constraints are well defined and intermediate information is
   exploited to the fullest, this works great, but if you try to get
   there with no extra knowledge, it can take you a very long time
   (weeks).
3) Even though the tool contains a set of heuristics for organizing
   the verification, a silly constraint error can take quite some time
   to show up, simply because the elements on which the tool focuses
   its attention are not the ones that expose the issue quickly.
4) The tool greatly benefits from information about the matching of
   internal points, which lets it cut away portions of the two designs
   that are no longer needed to verify the equivalence of the remaining
   unverified parts (the sketch just below illustrates the principle).
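Point 4 is essentially the classic cut-point idea from equivalence
checking. A toy sketch of the principle, with Boolean functions and
exhaustive enumeration standing in for the prover (again my own
illustration, not SLEC's implementation):

    from itertools import product

    # Once internal nets m_ref/m_impl are proven equal, both are
    # replaced by one free variable m, and the downstream check no
    # longer depends on the upstream cones of a and b.

    def ref_internal(a, b):  return a and b                # upstream, ref
    def impl_internal(a, b): return not (not a or not b)   # same, rewritten

    def ref_out(m, c):  return m != c                          # downstream, ref
    def impl_out(m, c): return (m and not c) or (not m and c)  # downstream, impl

    # Step 1: prove the internal points equivalent.
    assert all(ref_internal(a, b) == impl_internal(a, b)
               for a, b in product([False, True], repeat=2))

    # Step 2: with m now free, the remaining check ranges over (m, c)
    # only -- the a/b cones have been cut away from the problem.
    assert all(ref_out(m, c) == impl_out(m, c)
               for m, c in product([False, True], repeat=2))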
All those points, combined with the neat Tcl-based interface exposed by
the tool, suggested the following methodology tuning, which proved
quite effective (a sketch of the overall flow follows the list):
a) I built the db representation only when it was not yet available
   (i.e. on the first run), and loaded it directly otherwise. Using
   the db proved reliable and saved quite some time in the long run
   (especially in combination with point b).
b) I usually ran the verification on one output at a time. This also
   makes it very natural to distribute the verification over a compute
   farm.
c) I executed rapid verification rounds with short timeouts, increasing
the timeout round by round (3 min, 10 min, 30 min), to separate
easy-to-check outputs from tougher ones.
d) As the tool reports information about topologically intermediate
   proven equivalences, I arranged my scripts so as to exploit the
   collected information in subsequent runs, cutting away design
   portions as much as possible (but not more than that, which
   occasionally meant backtracking from trial to trial when I had
   overcut).
e) Once a good set of constraints and internal-point equivalences had
   been collected, I tried a more-than-one-output-at-a-time run. This
   usually worked well, significantly reducing the sum of verification
   runtimes, especially towards the end of the job (I used it as a sort
   of validation of the whole verification effort).
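As a sketch of how points a) through c) fit together, here is the kind
of driver I mean, in Python. The "slec" command line, its options, the
db file and the per-output Tcl script names are all hypothetical
placeholders (the real tool is driven through its Tcl interface); only
the control flow reflects what I actually did.

    import subprocess

    ROUNDS = [3 * 60, 10 * 60, 30 * 60]     # escalating timeouts (point c)

    def run_one_output(out_name, timeout_s):
        """Launch one single-output verification; True iff it converged."""
        try:
            r = subprocess.run(
                ["slec", "-restore", "design.db",         # reuse db (point a)
                 "-script", "verify_%s.tcl" % out_name],  # one output (point b)
                timeout=timeout_s)
            return r.returncode == 0
        except subprocess.TimeoutExpired:
            return False                    # tough output, defer it

    pending = ["out_addr", "out_data", "out_irq"]  # mapped outputs (examples)
    for timeout_s in ROUNDS:
        pending = [o for o in pending
                   if not run_one_output(o, timeout_s)]
    print("still unproven after all rounds:", pending)

Each pending output is independent of the others, which is what makes
point b such a natural fit for a farm: every run_one_output call could
just as well be a separate grid job.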
Those are all quite naive ideas, and I submitted them to SLEC R&D to
have them integrated into the tool as much as possible, at least in the
form of automatically generated scripts to be run by the user. All are
facets of the same underlying issue, which I discussed with Calypto R&D
and which I knew they were seriously working on: raising the level of
tool-internal information sharing and reuse, while helping the tool
decompose the problem.
A wise combination of points d and b, in particular, would have been
extremely beneficial, even with all the challenges that distributed
execution poses in terms of process coordination. At the time of my
trials that coordination fell entirely on the user's shoulders; in my
view it is best addressed inside the tool development team, to get
maximum return from their working knowledge of the engines.
Over the time I used SLEC I saw its performance improve dramatically,
in both runtime and memory requirements.
My final comment is that Calypto's SLEC technology appeared to be
effective and open to substantial further improvement, both on
SystemVsRTL (from what I heard from colleagues using it) and RTLvsRTL
(from my direct experience). Up to Oct 2006 it was not a push-button
solution: RTLvsRTL applications on designs of tens to hundreds of
thousands of gates and thousands of flops took 2 to 4 weeks of
full-time, dedicated effort from a trained verification engineer.
- Andrea Fedeli
STMicroelectronics Milan, Italy