( ESNUG 414 Item 5 ) -------------------------------------------- [06/18/03]

Subject: ( ESNUG 405 #6 ) Five Customers Discuss Averant's Solidify Tool

> We used Averant's Solidify briefly a year ago.  Their property language
> was very powerful, but the tool failed to deliver results.  It is very
> easy to write complex properties for easier checks with the language.
> This made the tool spend days without any results.  Finally the design
> engineers gave up on using Solidify.
>
>     - from an anon Averant user in ESNUG 405 #6


From: Greg Blanck <gblanck=user  company=taunetworks spot calm>

Hi, John,

We definitely did use Averant's Solidify in production, and have produced
real live working devices with it in our flow.  My observation of this
class of tools is that some engineers by nature have a hard time thinking
and working this way -- and others find it extremely effective and usable.

    - Greg Blanck
      Tau Networks                               Scotts Valley, CA

         ----    ----    ----    ----    ----    ----   ----

From: Ed Field <ejfield=user  company=iee.org>

Hi, John,

Over the last couple of months I've been evaluating a "Static Functional
Verification" tool called Solidify.   (Static Functional Verification is
sometimes known as "Model Checking" or "Property Checking").  I don't know
if you are familiar with such tools (you may know more than me), but this is
basically a tool which performs a mathematical analysis of the HDL & checks
whether it meets the specified requirements, or properties.  Note that it
does NOT perform simulation, so there is no timing information, and no
writing of testbenches is required.  Verification is therefore typically
very fast -- I found, in a few minutes, errors which had taken hours of
testbench writing and simulation time to uncover.

I took the liberty of running a customer's UART code through Solidify.  The
good news is that it didn't throw up any huge glaring errors, though this is
not really surprising given the amount of simulation and verification that
this UART has already been through, and the amount of testing I did was
minimal and fairly basic.

What did strike me though was how easily and quickly Solidify picked up some
of the errors which I highlighted in the earlier verification exercise.
Given a good set of requirements, Solidify can very quickly eliminate coding
errors and test boundary conditions, and could therefore be a very useful
design aid as well as a verification tool.

I'm not trying to sell it, but if folk are serious about verification and
reliability, then in my opinion Solidify is well worth looking at.  It's not
the final answer by any means (and the tool is clearly in its early stages
of development and could be developed a lot further), but it's potentially
a very useful addition to the design/verification toolset.

I just wish I could afford it.  Lastly, the Solidify compiler threw some
VHDL warnings/errors when compiling the source files, which all look valid
to me - in fact it's surprising that Modelsim and (presumably) Leonardo
Spectrum didn't complain about them.

    - Ed Field
      Ixian Devices Ltd                          UK

         ----    ----    ----    ----    ----    ----   ----

From: [ Cousin It from the Addams Family ]

Hi John;

Keep me anonymous.

We have used Solidify versions 2.5, 2.6 and 2.7.  The first version took a
while to be useful until its output (VCD, failing testcase) was more
compatible with existing (DeBussy) debugging tools.  The later versions
have a good path to utilizing DeBussy in addition to their own source code
browsing for debug.

We tried Solidify to prove an ALU adds correctly.  At first it produced
false failures due to too few constraints on the inputs (reset, stall,
illegal op, etc.)  Once we constrained these interfaces we got more
realistic results -- this is the time consuming step.  If you do not have
a good constraint set on the inputs, it takes time to determine a proper
set by iterating with Solidify to remove illegal input combinations.

It then found a handful of problems with the ALU.  They ranged from unexpected
case selection values to conflicting corner case detection.  This component
had success due to a straightforward set of properties to verify and a
reasonable set of constraints needed to generate acceptable inputs.
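The constrain-then-check loop described above can be sketched in miniature
(Python standing in for the prover; the ALU encoding, signal names, and
constraints here are invented for illustration only):

```python
def alu(op, a, b, stall):
    # toy 4-bit ALU; op 0 = ADD, anything else is out of scope here
    if stall:
        return None                    # stalled: output held, not checked
    if op == 0:
        return (a + b) & 0xF
    return 0                           # other ops not modelled

failures = []
for op in (0, 1):
    for a in range(16):
        for b in range(16):
            for stall in (0, 1):
                # input constraints: only ADD, and only when not stalled --
                # without these filters we would log false failures
                if op != 0 or stall:
                    continue
                if alu(op, a, b, stall) != ((a + b) & 0xF):
                    failures.append((op, a, b))
```

With the constraints in place the check passes; drop them and the stalled
and non-ADD cases show up as spurious "failures", which is exactly the
iteration loop described above.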

We also tried Solidify on a memory interface block.  For this one the input
constraints were simpler to create; however, this didn't really help our
efforts.  Due to the large amount of random state and the fact that our
registers use clock enable signals we couldn't get Solidify to produce
good results.

Our first problem was due to the way Solidify interprets properties; later
versions supposedly have features that alleviate some of this problem.
However, at the time, they weren't available.

The solution we came up with for the clock-enable problem was to tell
Solidify our stall inputs were never asserted.  This let the registers
clock every cycle without any stalling.  The enabled-register problem
itself is still not solved: we can ignore stalls, but this functionality
is exactly where problems occur.  There needs to be a general way to show
that a register's next value appears only when the register is enabled,
and not before.

The gotcha was the enable signals on the registers.  We evaluated Solidify
on another design to find a bug we thought we had fixed.  That design,
though, used more enabled registers and proved more difficult to specify
properties for.
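The enabled-register behaviour behind this workaround can be sketched with
a toy Python model (nothing from Solidify itself); the property we could
not state is that the value changes only on a cycle where enable is high:

```python
def run(enables, inputs, init=0):
    # clock-enabled register: q takes d only on cycles where en is high,
    # otherwise it holds its previous value (this models a stall)
    q, trace = init, []
    for en, d in zip(enables, inputs):
        q = d if en else q
        trace.append(q)
    return trace

enables = [1, 0, 0, 1]
trace = run(enables, [5, 6, 7, 8])     # -> [5, 5, 5, 8]

# the property we wanted: any change in value implies enable was asserted
holds = all(trace[i] == trace[i - 1] or enables[i]
            for i in range(1, len(trace)))
```

Tying the stalls low reduces this model to `q = d` every cycle, which is
why the workaround sidesteps the problem rather than solving it.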

Overall, Solidify found additional bugs that random tests would eventually
have found, but only after much longer runtimes.

    - [ Cousin It from the Addams Family ]

         ----    ----    ----    ----    ----    ----   ----

From: Pete Johnson <pete=user  company=taunetworks spot calm>

Hi, John,

When writing RTL, our flow is that after coding you write the Solidify
properties and then run the code through Solidify.  When you go back to
modify the code, you alter the properties and rerun Solidify.  For most of
my blocks, I never ran a simulation at all -- not even to check syntax.  The
first time that VCS saw them was in a unit level simulation.  The great
thing about the Averant method is that you are checking the code that you
write when you write it.  Not sometime later when the simulation environment
has come up to speed.  I never want to design logic again without Solidify.

The unit level simulation environment was much easier for our verification
engineers to put together because they were being handed blocks which were
much more functional than normal.  Functional sims and debug actually took
less time than RTL design -- that was a first.


Last year one other logic designer and I designed a 35 M transistor network
scheduler chip.  It contained no embedded memories and 3+ million placeable
instances.  So needless to say there was a lot of logic.

Our chip was designed in Verilog, but with a lot of custom code generators
as well.  We described all of the microprocessor register blocks in XML
(there were thousands of microprocessor controlled registers), and a Perl
program generated the Verilog code and Solidify properties from it.  The
Verilog code for the scheduler unit (the heart of the chip) was generated
by a C program.  We also used a Perl based preprocessor for all hand coded
Verilog RTL, which allowed macro "for" loops in our Verilog source code.
All of this software was written by me or the other logic designer.

The role of our scheduler chip is to receive requests from 64 incoming ports
and arbitrate the use of a 64 way crossbar switch.  Basically, no 2 incoming
requests can be scheduled for an outgoing port at any one time.  Requests
come every 25 nsec and have 16 different priorities.  Requests are held in a
request queue and requests should be non-blocking.  For example if port 3
has a request of priority 1 (lower numbers are higher priority) for port 5
and port 4 has a request of priority 1 for port 5 and a request of priority
2 for port 6, then the priority 1 request in port 4 should not block the
priority 2 request.  As you can imagine, this type of chip has a lot of
priority selection, sorting and MUXing logic.
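The non-blocking rule in that example can be sketched with a toy greedy
matcher (a deliberate simplification -- the chip's real arbitration is far
more involved, and the tie-break between equal priorities here is
arbitrary):

```python
def arbitrate(requests):
    # requests: (in_port, out_port, prio); lower prio number wins.
    # Grant at most one request per input and per output each cycle;
    # a losing request must not block its port's other requests.
    out_taken, in_taken, grants = set(), set(), []
    for in_port, out_port, prio in sorted(requests, key=lambda r: r[2]):
        if out_port not in out_taken and in_port not in in_taken:
            grants.append((in_port, out_port, prio))
            out_taken.add(out_port)
            in_taken.add(in_port)
    return grants

# the example from the text: port 4's losing prio-1 request for port 5
# must not block its prio-2 request for port 6
grants = arbitrate([(3, 5, 1), (4, 5, 1), (4, 6, 2)])
```

Here port 3 wins port 5, and port 4 still gets port 6 via its priority 2
request -- the non-blocking behaviour the text describes.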


We used custom Perl scripts to extract Solidify properties.  This allowed us
to use the same parameter values as our Verilog and kept the properties
close at hand when the RTL changed.

Our verification flow consisted of using Solidify on a module and unit level
and Specman/VCS on the unit and chip level.  This worked really well.  The
kinds of problems that Solidify is particularly good at tend to be hard for
simulation, and vice versa.  Another way to think of it is that at
the module level we looked for good Solidify coverage and no simulation
coverage.  At the unit level we wanted some Solidify coverage and good
sim coverage.  We had virtually no Solidify coverage at the chip level.

Solidify is conceptually very different from simulation.  As a result it can
be hard to pick up.  If you understand things like Axiomatic Set Theory,
Predicate Calculus, or Peano Number Theory, then you will pick up Solidify
with no problem; if not, it will take some time to come up to speed on
using the tool.  To use Solidify, you really do need to understand
mathematical concepts like proof by induction and contradiction.  These
concepts aren't hard but they are very foreign to the average RTL designer.

Another issue with Solidify is runtime and memory space.  How you write your
properties has a massive impact on runtime and memory.  Fortunately Solidify
does a pretty good job up front of letting you know how long a property is
going to run.  After a bit of use I got to have a pretty good feel for what
would take a while and what was fast.  I don't have any properties that take
more than a minute to run.  I recommend using a Linux box rather than a Sun.
There is about a 4x difference in speed.

Most of my issues with Solidify fall into two categories.  First, the user
interface is kind of klunky.  The GUI is not terribly well thought out and
it is sometimes difficult to debug properties that fail.  It would be nice
to have an integrated waveform viewer or at least better hooks into an
existing tool.  When I first started using the tool the command line
interface was a joke.  It was clearly an afterthought rather than designed
in from the start.  I don't know why tool vendors like GUIs so much.  They
are fine for a novice user but impossible for real work.  There is just no
way to script a GUI tool for regressions.  The later releases had much
improved support for the command line interface.

Another problem is in the area of warnings.  They could be a lot smarter
about warnings that occur because of width mismatches.  For example, if you
have three 2-bit values a,b,c and one four bit value d and make the
assignment d = a*b+c you will get a width warning.  This is also true for
constants, so if you say d = 3*a+1 you will get a warning that 4-bits is
not large enough to hold the expression even though it is.  In our test
cases this led to a lot of warning messages that we had to wade through.

I started using Solidify at release 2.4.  It has been about 6 months since
I have done any real RTL design and I last used version 2.5.31.  There were
quite a number of performance enhancements between 2.4 and 2.5.31.

I have attached an example below of a variable priority selector and the
Solidify properties to verify it.  This module is basically a priority
encoder with the added twist that you can select the low priority input.
For example, if low_prio is set to 31 then priority is 0-31 where 0 has
highest priority.  If low_prio is 30 then the priority shifts to 31,0-30,
and in general priority is (low_prio+1)-31,0-low_prio.

Example:

   module select (low_prio, decoded, selected, valid);

     input [3-1:0] low_prio;
     input [8-1:0] decoded;
     output [3-1:0] selected;
     output valid;

     reg [3-1:0] selected;
     reg valid;

     wire [15:0] prio_vector;
     wire [7:0] prio_mask;
     assign prio_mask[0] = (0 > low_prio);
     assign prio_vector[0] = decoded[0] && prio_mask[0];
     assign prio_vector[8] = decoded[0] && ~prio_mask[0];
     assign prio_mask[1] = (1 > low_prio);
     assign prio_vector[1] = decoded[1] && prio_mask[1];
     assign prio_vector[9] = decoded[1] && ~prio_mask[1];
     assign prio_mask[2] = (2 > low_prio);
     assign prio_vector[2] = decoded[2] && prio_mask[2];
     assign prio_vector[10] = decoded[2] && ~prio_mask[2];
     assign prio_mask[3] = (3 > low_prio);
     assign prio_vector[3] = decoded[3] && prio_mask[3];
     assign prio_vector[11] = decoded[3] && ~prio_mask[3];
     assign prio_mask[4] = (4 > low_prio);
     assign prio_vector[4] = decoded[4] && prio_mask[4];
     assign prio_vector[12] = decoded[4] && ~prio_mask[4];
     assign prio_mask[5] = (5 > low_prio);
     assign prio_vector[5] = decoded[5] && prio_mask[5];
     assign prio_vector[13] = decoded[5] && ~prio_mask[5];
     assign prio_mask[6] = (6 > low_prio);
     assign prio_vector[6] = decoded[6] && prio_mask[6];
     assign prio_vector[14] = decoded[6] && ~prio_mask[6];
     assign prio_mask[7] = (7 > low_prio);
     assign prio_vector[7] = decoded[7] && prio_mask[7];
     assign prio_vector[15] = decoded[7] && ~prio_mask[7];
     reg [3:0] prio_wide_encoded;

   // BEGIN GENERATED PRIORITY ENCODER
   // priority_encoder("prio_vector", "prio_wide_encoded",
   //                  "valid", 16, "prio_");

     reg [7:0] prio_valid_1;
     reg [7:0] prio_stage_1;

   always @(prio_vector) begin
     prio_stage_1[0] = !prio_vector[0];
     prio_valid_1[0] = prio_vector[0] | prio_vector[1];
     prio_stage_1[1] = !prio_vector[2];
     prio_valid_1[1] = prio_vector[2] | prio_vector[3];
     prio_stage_1[2] = !prio_vector[4];
     prio_valid_1[2] = prio_vector[4] | prio_vector[5];
     prio_stage_1[3] = !prio_vector[6];
     prio_valid_1[3] = prio_vector[6] | prio_vector[7];
     prio_stage_1[4] = !prio_vector[8];
     prio_valid_1[4] = prio_vector[8] | prio_vector[9];
     prio_stage_1[5] = !prio_vector[10];
     prio_valid_1[5] = prio_vector[10] | prio_vector[11];
     prio_stage_1[6] = !prio_vector[12];
     prio_valid_1[6] = prio_vector[12] | prio_vector[13];
     prio_stage_1[7] = !prio_vector[14];
     prio_valid_1[7] = prio_vector[14] | prio_vector[15];
   end

     reg [3:0] prio_valid_2;
     reg [7:0] prio_stage_2;

   always @(prio_valid_1 or prio_stage_1) begin
     prio_stage_2[1:0] = {!prio_valid_1[0],
       (prio_valid_1[0] ? prio_stage_1[0:0] : prio_stage_1[1:1])};
     prio_valid_2[0] = prio_valid_1[0] | prio_valid_1[1];
     prio_stage_2[3:2] = {!prio_valid_1[2],
       (prio_valid_1[2] ? prio_stage_1[2:2] : prio_stage_1[3:3])};
     prio_valid_2[1] = prio_valid_1[2] | prio_valid_1[3];
     prio_stage_2[5:4] = {!prio_valid_1[4],
       (prio_valid_1[4] ? prio_stage_1[4:4] : prio_stage_1[5:5])};
     prio_valid_2[2] = prio_valid_1[4] | prio_valid_1[5];
     prio_stage_2[7:6] = {!prio_valid_1[6],
       (prio_valid_1[6] ? prio_stage_1[6:6] : prio_stage_1[7:7])};
     prio_valid_2[3] = prio_valid_1[6] | prio_valid_1[7];
   end

     reg [1:0] prio_valid_3;
     reg [5:0] prio_stage_3;

   always @(prio_valid_2 or prio_stage_2) begin
     prio_stage_3[2:0] = {!prio_valid_2[0],
       (prio_valid_2[0] ? prio_stage_2[1:0] : prio_stage_2[3:2])};
     prio_valid_3[0] = prio_valid_2[0] | prio_valid_2[1];
     prio_stage_3[5:3] = {!prio_valid_2[2],
       (prio_valid_2[2] ? prio_stage_2[5:4] : prio_stage_2[7:6])};
     prio_valid_3[1] = prio_valid_2[2] | prio_valid_2[3];
   end

     reg [0:0] prio_valid_4;
     reg [3:0] prio_stage_4;

   always @(prio_valid_3 or prio_stage_3) begin
     prio_stage_4[3:0] = {!prio_valid_3[0],
       (prio_valid_3[0] ? prio_stage_3[2:0] : prio_stage_3[5:3])};
     prio_valid_4[0] = prio_valid_3[0] | prio_valid_3[1];
   end

   always @(prio_stage_4) prio_wide_encoded = prio_stage_4;
   always @(prio_valid_4) valid = prio_valid_4;

   // END GENERATED PRIORITY ENCODER

   always @(prio_wide_encoded) 
     selected = (prio_wide_encoded >= 8) ? prio_wide_encoded - 8 : 
                prio_wide_encoded;

   endmodule


   select.p:

   reg [3-1:0] m;

   if (!valid)
     // If not valid then there is no input request. 
      decoded[m] == 0;
   else
     begin
      // If valid, then there was a corresponding request
      decoded[selected];
      if (selected <= low_prio)
        // Then high priority requests are lower than selected
        // or higher than low_prio. There are no higher priority
        // requests.
        (m<selected || m>low_prio) => !decoded[m];
      else
        // Then high priority requests are between selected and
        // low_prio. There are no higher priority requests.
        (m<selected && m>low_prio) => !decoded[m];
     end


The Solidify properties appear after the RTL code above.  The example
is only 8-bits wide and runs in under 1 second of CPU time.  Expanding
it to 32-bits increases CPU time to just under 1 minute.  Still very
manageable.
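As a cross-check on the description above, here is a minimal Python
reference model of the selector (not part of the original flow, purely an
illustration of the rotated priority order):

```python
def select(low_prio, decoded, width=8):
    # Variable priority selector: the scan starts at (low_prio + 1) and
    # wraps upward, so input low_prio is always the lowest priority.
    for offset in range(width):
        idx = (low_prio + 1 + offset) % width
        if (decoded >> idx) & 1:
            return idx, True           # selected input, valid
    return 0, False                    # no request pending

# low_prio = 7: plain priority encoder (input 0 is highest priority)
# low_prio = 1: priority order is 2,3,4,5,6,7,0,1
```

A model like this is handy for sanity-checking the properties themselves
before handing them to the prover.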

The properties give a good example of typical Solidify properties.  One
thing that is common is reasoning back from the output.  In this case
you see things like if the valid output is low then all input bits are
low.  This is coded as

                   if (!valid)
                      decoded[m] == 0;

Note the use of a free variable m.  This tells the proof engine that it
can choose any value of m and the property should still hold.

In the else part of the conditional (valid is true) we tell Solidify that
the request bit associated with the selected output must be set.

This is a good example of the classic reasoning in Solidify.  If some
value came out, then these input conditions must have been true.

The last conditional verifies the priority aspect.  In a nutshell it proves
that there were no higher priority requests than the one selected.

Solidify also allows reasoning over time.  So you can say things
like if this happens now then that must have happened two clocks earlier.
This makes it useful for checking out pipelines and state machines.

The power of this technique for a designer is that I now know that this
module implements the function that I want, and nobody will ever waste a
minute of time debugging a simulation problem related to it.

Overall

In all, I am quite happy with Solidify.  As a designer it lets you fix bugs
very early, which saves a lot of time later in the verification flow.
Solidify is not a replacement for simulation by any means, but I think the
converse is true also.  Simulation is not a replacement for Solidify.  I
think that Solidify significantly shortened our verification cycle and it
has contributed to our first pass silicon success.

    - Pete Johnson
      Tau Networks                               Scotts Valley, CA

         ----    ----    ----    ----    ----    ----   ----

From: [ The Artful Dodger ]

Hi, John,

I gotta be anon yet again.

Our experience from using Averant's Solidify has been much better, with
proofs typically taking seconds or minutes.  When we first evaluated the
tool we also found that the flexibility of the HPL language could allow
you to write properties that were inefficient to prove.  However, we found
big speed ups when writing most properties in the form:

                      conditions => requirements

This says that whenever the conditions hold, the requirements must be
satisfied.  The "conditions" and "requirements" can be temporal expressions,
but do not themselves contain an "=>" operator (implies).  This standard
form makes the proofs very quick and also allows Solidify to do some cool
sanity checks (e.g. it can tell you that the design can never meet the
conditions -- so you have a dud property).
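The standard form and the vacuity ("dud property") check can be
illustrated with a brute-force Python checker over a finite state space (a
sketch of the idea only, nothing like Solidify's actual engine):

```python
def check(states, conditions, requirements):
    # "conditions => requirements" over every state; report VACUOUS when
    # the conditions can never be met -- the dud-property case
    triggered = False
    for s in states:
        if conditions(s):
            triggered = True
            if not requirements(s):
                return "FAIL"
    return "PASS" if triggered else "VACUOUS"

values = range(16)                    # all states of a 4-bit value
ok  = check(values, lambda v: v > 8,  lambda v: v > 4)   # PASS
dud = check(values, lambda v: v > 20, lambda v: v > 4)   # VACUOUS
```

Keeping "=>" out of the conditions and requirements themselves is what
makes this flat, one-pass check (and the vacuity test) possible.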

Stuff we like about Averant's Solidify:

  +1) Automatic constraint generation to avoid false negatives (and proof
      of those constraints).  This part of the tool has improved a lot
      recently.
  +2) Automatic Property Checks.  It's a push-button tool that has found
      a lot of design bugs for us for very little effort.  It's also
      found problems with other EDA tools we use!
  +3) Property synthesis to Simulation Assertions.  Failing proofs can
      be false negatives, but if they fail as simulation assertions then
      there's a real design bug -- we've been successful with this
      (finding bugs when the design has passed all regressions).
  +4) Translation of OVL assertions into HPL properties
  +5) Hierarchical proof mechanism
  +6) Property coverage tool

Stuff we don't like about Solidify:

  -1) Automatic cone-cutting can be dangerous unless you know what you
      are doing (we turn it off by default)
  -2) Properties are concise in HPL, but not easy to name -- making logging
      of results difficult.  We've got around this by using HPL "propsets".
  -3) You have to rewrite the property to do sophisticated proofs, e.g. an
      inductive proof involves rewriting property P to "P => 'X(P)", which
      can get messy.  Again, we have got around this with propsets.

For us, Solidify speed is not an issue -- we've found that proofs can take
seconds when applied to blocks up to 100k gates, and Solidify has good
support for hierarchical proofs.  However, the biggest issue with any formal
property tool is false negatives (reported failure that cannot happen) which
can be either:

  a) Initialisation: proof starts in an unreachable state
  b) Interface:      block driven by illegal input sequences

If an EDA vendor tells you that their formal tool will not give you false
negatives then it's not checking what you think (either it's not doing an
exhaustive formal proof or it's not going to be able to handle anything
other than small designs).  You need to ask some probing questions (see
below) and walk away if their only answer is "trust me, I'm an EDA
salesman."

The great thing about formal property checking is that it is the only
validation technique where a pass gives a 100% exhaustive proof of
correctness.  An exhaustive pass means you don't have a design bug (and it
has also shown up bugs in other tools in our flow!)  However, you need to
be sure that the property checker you use can do this.  If it's a proof
from reset or an initialised state then it's only going to be partial
(useful for bug hunting but won't give you exhaustive proofs).
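For a combinational block the exhaustive nature of such a pass is easy to
picture: enumerate every input and the property is proved outright.  A
real checker does this symbolically rather than by enumeration, but the
guarantee is the same.  A toy Python illustration:

```python
def majority(a, b, c):
    # toy design under test: 2-of-3 majority vote
    return (a & b) | (a & c) | (b & c)

# property: output is 1 exactly when two or more inputs are 1;
# checking all 8 input combinations IS the exhaustive proof
proved = all(
    majority(a, b, c) == (1 if a + b + c >= 2 else 0)
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
)
```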

The downside is false negatives.  If you know exactly which states and
input sequences are illegal then you can tell the tool to ignore them, but
you're not going to know this for complex blocks or a block that's
part of a complex system.  Doing an iterative debug where you add such
constraints is very inefficient (and soul destroying).

The Solidify constraint-generator does a good job of eliminating some false
negatives due to unreachable states.  On top of Solidify we have built a
methodology that tries a number of proof steps to do an exhaustive proof
before going on to a partial proof from a known good state.  We have scripted
up regressions that run overnight and tell us which proofs are exhaustive
and which are partial proofs (or failures from reset).  Any partial
proofs/fails are then synthesised as simulation assertions to run on top of
billions of cycles of sim regressions; any failure here is a real bug.

A word on formal languages.  There's a lot out there and some real language
wars (with some good efforts by Accellera to try and produce a standard).
However, I don't think that the language is the most important point;
methodology is the biggest issue with formal property checking.  The most
exciting development is the current interest in simulation assertions
(e.g. OVL, OVA, 0-in, ...).  If designers start adding these to their blocks
and interfaces for simulation/emulation, then you can reuse them for formal.
OVL doesn't have all the bells and whistles of some languages/libraries, but
it's probably enough for most designers and is EDA neutral (being Verilog it
can be used by any tool, although some tools translate it into an equivalent
library in their own language to gain access to other features).

If you want to try out formal property checking then I'd suggest you start
with the automatic properties that are extracted from your design.  Several
formal EDA tools have these sort of checks -- in Solidify they are called
"Autochecks".  They can find design bugs cheaply, and can help debug other
validation work.  The exhaustive nature of the proofs can also find problems
with other non-formal tools...

One Autocheck in Solidify is called "deadcode", which gives exhaustive
proofs for RTL lines that are unreachable.  Deadcode reports should be
checked to determine whether they are:

  a) just redundant RTL, which can be safely removed
  b) RTL that should be used, indicating a design bug

The designer might be surprised that the code is redundant, but sometimes
functionality is handled by another block.  The deadcode check is also good
for explaining code coverage reports -- if you don't have 100% code coverage
then it could be that some lines are unreachable.  We've recently found that
it also works the other way: some lines that are reported as being covered
have been exhaustively proven to be unreachable!  We're currently
investigating whether it's a bug in the code coverage tools or in our usage
of them, e.g. glitches causing a transient to an unreachable state, or not
switching code coverage off during initialisation.
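The idea behind a deadcode check can be sketched by exhaustively driving a
block and recording which branches ever execute (a Python stand-in; the
decoder and branch names are invented):

```python
hit = set()

def decode(op):
    # toy decoder with three branches; op is constrained to 3 bits
    if op < 4:
        hit.add("low");  return op
    elif op < 8:
        hit.add("mid");  return op - 4
    else:
        hit.add("high"); return 0     # can never run for a 3-bit op

for op in range(8):                   # exhaustive over the 3-bit input
    decode(op)

dead = {"low", "mid", "high"} - hit   # -> {'high'}: the deadcode report
```

Whether that unreached branch is safely redundant or a design bug is
exactly the judgement call the two cases above describe.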

    - [ The Artful Dodger ]

