In reality, of course, we don't build hardware containing uninterpreted functions. What we would like to do is to verify the design for an arbitrary function f, and then ``plug in'' some specific definition of f (for example, f might represent binary addition). We can declare an interpretation for our function f as follows:
function f(x : WORD, y : WORD) : WORD := x + y;
That is, f is a function that takes two operands of type WORD and yields a WORD, and further, by definition, f(x,y) = x + y. SMV gives us the option, in proving any given property, to either use this interpretation of f, or to leave f uninterpreted, as we did above.
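For contrast, an uninterpreted declaration of f simply omits the defining expression. The exact form appears earlier in the tutorial; as a sketch, assuming the same syntax as the interpreted declaration above, it would look like this:
function f(x : WORD, y : WORD) : WORD;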
For example, in our simple pipeline, we might like to split the proof of correctness into two steps. First, we show that the ALU in our pipeline implements the function f (using f as an interpreted function). Then we show that the pipeline implements its specification, leaving f uninterpreted, as above. This allows us to completely separate the problems of correctness of the pipeline design and correctness of the ALU. Reasoning about the pipeline is simpler, because we can treat WORD as a symmetric type.
Here's how we do that in practice. We would like WORD to be a symmetric type, to simplify reasoning about the pipeline. On the other hand, we want WORD to be a bit vector type, since our machine operates on bit vectors. Fortunately, SMV allows us to treat either scalars or bit vectors as symmetric types, so let's declare WORD as follows:
scalarset WORD array 31..0 of boolean;
Now, here is the declaration of function f:
breaking(WORD) function f(x : WORD, y : WORD) : WORD := x + y;
We had to use the declaration breaking(WORD) because our interpretation of f breaks the symmetry of type WORD. This means that if we are relying on the symmetry of WORD to prove a property, we won't be able to use the interpretation of f. Now, let's make a module that implements a ripple-carry binary adder. Here is one possible version:
module adder32(x,y,z){
  input x : WORD;
  input y : WORD;
  output z : WORD;
  carry : array 32..0 of boolean;

  carry[0] := 0;
  breaking(WORD){
    forall(i = 0..31){
      z[i] := x[i] ^ y[i] ^ carry[i];
      carry[i+1] := (x[i] & y[i]) | (x[i] & carry[i]) | (y[i] & carry[i]);
    }
  }
}
Notice the breaking definition that tells SMV we are going to do something non-symmetric with type WORD. Now, let's use that module in our implementation. We replace this:
alu_output := f(stage1.opra,stage1.oprb);
with this:
alu : adder32(stage1.opra,stage1.oprb,alu_output);
Now our implementation has a bit-level adder. First, let's show that this is equivalent to using function f, as before. We can do this by writing a ``layer'' for the alu output, as follows:
layer alu_ok: alu_output := f(stage1.opra,stage1.oprb);
To use this layer to prove our original specification of alu_output in lemma2, we need to add the following declaration:
alu_ok refines lemma2;
To prove this specification of alu_output, we will need to use the interpretation of f. This in turn requires that we break the symmetry of WORD. We will also need to free the inputs of the ALU, otherwise the cone will contain the entire implementation at the bit level. Thus, we use the following declaration:
breaking(WORD) using interpreted(f), stage1//free prove alu_output//alu_ok;
The clause interpreted(f) indicates that we want to use the interpretation of function f in proving the property. By default, functions are left uninterpreted. Because we have broken the symmetry of WORD, we will get one property to prove for each bit of alu_output.
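Concretely, since the symmetry of WORD is broken, SMV generates one bit-level proof obligation per element of alu_output, named roughly as follows (an illustrative sketch of the naming scheme; the property list in the next step shows the exact names):
alu_output[0]//alu_ok
alu_output[1]//alu_ok
...
alu_output[31]//alu_ok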
Now, open this version. For the property alu_output//lemma2[a][b][c], you'll notice that we have the same six cases to prove as before. If you select one of these cases, and look in the Cone pane, you'll see that layer alu_ok has been used for alu_output. That is, SMV has used the function f for the ALU instead of our ripple-carry adder. This is the only available choice, since the ripple-carry adder breaks the symmetry of WORD. Further, by default, function f has been left uninterpreted. Thus, the verification of alu_output//lemma2[a][b][c] is exactly the same as in the previous example.
On the other hand, we now also have to prove alu_output[31..0]//alu_ok. That is, we must show that the ALU actually implements function f at the bit level. If you select one of these properties, you can observe in the cone pane that there are up to 64 combinational variables (the ALU inputs) but no state variables, since these are eliminated by the cone-of-influence reduction. Since binary addition is easily handled by BDD's, we can verify these properties in a very short time. If our ALU function were, say, multiplication instead of addition, we would face much greater difficulty. Matters would be somewhat simplified, however, by the fact that we do not have to think about the pipeline when verifying the ALU.
Try Prop|Verify All to check the proof. It should take just a few seconds.
In the rest of the tutorial, we will mostly use function symbols without interpretations. However, it should be understood that it is always possible to ``plug in'' interpretations for these functions without changing the high level proof, in the same manner that we have done here.
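To summarize the pattern, here are the declarations, collected from the steps above, that together ``plug in'' an interpretation of f (a sketch of the overall recipe rather than a complete program; the adder32 module supplies the corresponding bit-level implementation of the ALU):
scalarset WORD array 31..0 of boolean;
breaking(WORD) function f(x : WORD, y : WORD) : WORD := x + y;
layer alu_ok: alu_output := f(stage1.opra,stage1.oprb);
alu_ok refines lemma2;
breaking(WORD) using interpreted(f), stage1//free prove alu_output//alu_ok;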