Verifying Wasm Functions: Part 2 | $i64.reverse_bytes

by Rikard Hjort and Stephen Skeirik

Runtime Verification
Jan 8 · 11 min read

This blog post is the second in a series written by Runtime Verification, Inc. and sponsored by dlab, on WebAssembly (Wasm) and, more specifically, KWasm, a tool for formal verification of Wasm programs. If you missed the first post, feel free to take a look!

Today, we will be finishing our mini-series on verifying a fragment of the WRC20 contract, i.e., a Wasm implementation of ERC20, the Ethereum smart contract standard that provides an API for token exchange usable by other smart contracts (e.g., digital wallets). This is relevant because the Ethereum virtual machine (EVM) is migrating to a Wasm-based implementation dubbed Ewasm.

The first part of the mini-series was a deep dive into what formal verification is and how we can use the K Framework to help automate this process. In this final part, we will show how we can formally verify a fragment of a program (i.e., the WRC20 smart contract) written in WebAssembly (Wasm).

Our hope is that you will not only enjoy this technical adventure but also walk away knowing a bit more about Wasm and how the formal verification process proceeds in practice.

WRC20 Verification: first step

WRC20 is a Wasm version of an ERC20. The WRC20 program can be found here. It’s simpler than an ERC20: it only has a function for transferring (the caller’s) funds to another address, and a function for querying the balance of any address. Keep in mind also that Ewasm is part of Ethereum 2.0, phase 2. It’s still a work in progress, so exactly what Ewasm will look like is unclear. This is based on an early work-in-progress specification of Ewasm.

In the end, we want to verify the behavior of the two external functions, transfer and getBalance. To do so we will need to introduce K rules about the Ethereum state. That’s a topic for the next blog post. This time, we will focus on a helper function: $i64.reverse_bytes. This function does a simple thing: it takes an i64 as a parameter and returns the i64 you get by reversing the order of the bytes.

x_0 x_1 x_2 x_3 x_4 x_5 x_6 x_7

becomes

x_7 x_6 x_5 x_4 x_3 x_2 x_1 x_0

What’s the point of this? Wasm stores numbers to memory as little endian, meaning the least significant byte goes in the lowest memory slot. So, for example, storing the i64 value 1 to memory address 0 places the byte 0x01 at address 0 and 0x00 at addresses 1 through 7.
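Although the post works in Wasm and K, the little-endian layout is easy to reproduce with an illustrative Python snippet of our own (the `"<q"` format code means little-endian signed 64-bit):

```python
import struct

# Store the i64 value 1 at "address 0" of an 8-byte memory,
# the way Wasm's i64.store lays it out.
memory = bytearray(8)
struct.pack_into("<q", memory, 0, 1)

print(list(memory))  # [1, 0, 0, 0, 0, 0, 0, 0]: least significant byte first
```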

Ewasm call data is by convention interpreted as big endian, meaning the bytes are in the opposite order. So the contract must reverse all integer balances: in transfer, it must reverse the amount to be sent, and in getBalance it must reverse the result before returning.
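In other words, reversing the bytes of an i64 is exactly a little-endian-to-big-endian conversion. A quick Python sketch (our own; `reverse_bytes_i64` is an illustrative name, not the contract's) makes the equivalence explicit:

```python
def reverse_bytes_i64(x: int) -> int:
    # Serialize little-endian, reinterpret big-endian: this reverses the bytes.
    return int.from_bytes(x.to_bytes(8, "little"), "big")

assert reverse_bytes_i64(0x0123456789ABCDEF) == 0xEFCDAB8967452301
assert reverse_bytes_i64(reverse_bytes_i64(424242)) == 424242  # involution
```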

$i64.reverse_bytes Source Code

Wasm Text Format

Here’s what our function looks like when rendered in the Wasm text format.

Recall that Wasm instructions are for a stack machine. The operations in the function above are manipulating a stack and a set of local variables. The return value of the function is the last remaining value on the stack. Like other assembly languages, because Wasm is low-level, it can be quite hard to read. Let’s try to improve the situation by rewriting this program in an easier to read format.

Wasm Pseudo-code

Notice that the sub-expression (i64.const 56) (local.get 1) (i64.const 8) i64.mul i64.sub i64.shl (with added parentheses for disambiguation) actually appears twice. Unfortunately, without changing the Wasm semantics or function declaration, we cannot simplify out this expression to a local, temporary variable. However, for the purposes of understanding how this function works, we would prefer to use a pseudo-code representation anyway. The exact details of how we obtain the pseudo-code aren’t important for this article, so we just present it below.

(func $i64.reverse_bytes (i64 input)
  i64 local[1] = 0
  i64 local[2] = 0
  while True:
    if local[1] >= 8:
      break
    bits = 56 - (local[1] * 8)
    local[2] = local[2] + (((input << bits) >> 56) << bits)
    local[1] = local[1] + 1
  return local[2]
)
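The pseudo-code translates almost line for line into runnable Python, provided we apply Wasm's modulo-2^64 arithmetic explicitly (this is our own sketch; `i64_reverse_bytes` is an illustrative name):

```python
MASK = (1 << 64) - 1  # Wasm i64 arithmetic wraps modulo 2^64

def i64_reverse_bytes(inp: int) -> int:
    out = 0                      # local[2]
    for i in range(8):           # local[1]
        bits = 56 - i * 8
        # ((input << bits) >> 56) isolates byte i; shifting it back left
        # by `bits` places it at the mirrored position.
        out = (out + ((((inp << bits) & MASK) >> 56) << bits)) & MASK
    return out

assert i64_reverse_bytes(0x0123456789ABCDEF) == 0xEFCDAB8967452301
```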

This is much more readable. As usual, operations like + and * can overflow; when they do, the result wraps around modulo 2^64. The >> and << bit-shifting operations work just as in other languages, unless the shift width is greater than or equal to the operand size (here, 64), in which case the shift width modulo 64 is used. In this program, we can prove that neither situation arises: the shift widths are all provably less than 64, and the additions cannot overflow, since each extracted byte is added into a previously zeroed byte position of local[2].

In this notation, we see that we are copying the bytes from the input variable to our output local[2], one byte at a time. The bytes of input are successively extracted by the incantation ((input << bits) >> 56), which zeros out all bits of input except for the local[1]th byte (which ends up shifted all the way to the right). These extracted bytes are then added to local[2] in reverse order, by first left-shifting each one by bits.
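A worked example of the incantation, in a Python sketch of our own (the values are illustrative, not from the post):

```python
MASK = (1 << 64) - 1  # Wasm i64 arithmetic wraps modulo 2^64

x = 0x1122334455667788
i = 2                  # extract byte local[1] = 2 (little-endian: 0x66)
bits = 56 - i * 8      # 40

# Left-shifting by 40 (mod 2^64) discards every byte above byte 2;
# right-shifting by 56 drops every byte below it.
byte = ((x << bits) & MASK) >> 56
print(hex(byte))  # 0x66
```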

Finally, remember that pseudo-code is just for the reader; the formal verification process only concerns real code.

The proof obligation

Without further ado, here is what we are going to try to prove[1]:

The interesting parts are:

• the <k> cell,

• the <memInst> cell group, and

• the pre- and postconditions, requires and ensures.

The <k> cell

Here we first declare the function, which we have saved, in pre-parsed form, as a macro. After that (remember, ~> should be read as “and then”), we run a few Wasm instructions that load 8 bytes from memory, invoke the $i64.reverse_bytes function, and store the result back to the same address.

The <memInst> cell

The <memInst> cell simply states that there is a memory with address MEMADDR, the same as in <memAddrs> in <moduleInst>, which makes this the memory belonging to the module we are currently executing in. We also state that the memory gets rewritten from BM to BM’. Every part of the state that we do not explicitly rewrite is assumed to stay the same.

The pre- and postconditions

The requires and ensures sections say what we assume to be true at the outset of the proof and what we need to prove at its end. Note that some pre- and postconditions are expressed in the rewrite rules themselves, such as the value in <memAddrs> of the current module matching the <mAddr> of the <memInst> that gets changed (a precondition) and the program in the <k> cell getting consumed (a postcondition). The requires and ensures clauses are simply for stating facts that we can’t express directly in the rewrite.

The first four requirements are really boilerplate, relevant to the technicalities of the semantics. The first states that the type of the $i64.reverse_bytes function has not already been declared. The second, third, and fourth make sure that integers, whether constants or stored in the byte map, are in the allowed range. Without these assumptions, the prover treats the values as unbounded integers.

The final clause in the requires section states that our memory accesses are within bounds. This is why we need the SIZE of the memory. A separate (but less interesting) proof would show that the function causes a trap if this precondition is violated.

The ensures section is straightforward. We are simply asking the prover to ensure that the final memory has correctly flipped the bytes.

Helping the prover along

Simply stating the above proof obligation and handing it to the prover gives an inconclusive result: the prover fails without having proven or disproven the claim.

One reason for this failure is that under the hood, there is a good deal of modular arithmetic going on. This happens when we transition from the bytes in memory to integer values, and back. K does not (yet) have much support for reasoning about modular arithmetic. This presents an excellent opportunity to add that support. We will extend the set of axioms K knows, triple checking each (so that we don’t introduce unsoundness), directed by the places where the prover gets stuck.

Axiom Engineering: avoiding infinite rewrites

The new axioms need to be designed with care. Apart from making sure they are correct, we want to ensure that they are simplifications: that is, they reduce the state somehow. This can happen by making expressions or values smaller. The reason is that we want to ensure we cannot get explosive growth. The prover will look for all axioms it knows and try to apply them. Therefore, if we were to add statements like X +Int Y => Y +Int X, which does not reduce the state, we are in trouble — even though this is perfectly sound. Can you guess why?

If we were to add a rule that says X +Int Y => Y +Int X, then the prover will apply this rewrite wherever it can, over every addition it sees. But once it is done, there are still just as many additions in the state, and the prover will again apply this rewrite over all additions it sees.
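A toy term rewriter (our own Python sketch, not part of K) shows the problem concretely: a "rewrite to normal form" loop never terminates when commutativity is an axiom, because the rule always applies again.

```python
# Terms are nested tuples ("+", a, b) or atoms.
def commute(term):
    if isinstance(term, tuple) and term[0] == "+":
        _, a, b = term
        return ("+", b, a)  # X + Y => Y + X: sound, but not a simplification
    return None             # no rule applies: a normal form

term = ("+", "X", "Y")
steps = 0
while steps < 10:           # safety bound so this demo halts
    rewritten = commute(term)
    if rewritten is None:
        break               # a genuine simplification set would stop here...
    term = rewritten        # ...but commutativity always applies again
    steps += 1

print(steps)  # 10: we hit the step bound, never a normal form
```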

This kind of careful engineering is necessary for axioms we want to add to K’s reasoning capabilities for all programs. When verifying specific languages or programs, we may get away with a little less rigor. However, it is good practice to ensure that your axioms are not causing your prover to loop forever.

Adding new axioms

When we conduct the proof without adding anything to the semantics, the prover manages to run the program to termination. The contents of the <k> cell are gone, and rewrites have happened in all the expected places.

However, the memory array in <mdata>, BM, will have been replaced by a behemoth of a symbolic expression. Just look at the size; there is no real point in scrolling through it all.

The reason for the size is that the function returns the following large expression, which is the reversed i64, and repeats it 8 times, once for each byte that gets inserted into memory.

To make the proof go through, we will need to tell K how to reduce both the above i64 expression a bit, and how to reduce the resulting #set expressions.

Now, the rewrite part of our proof obligation states only that BM rewrites to something — even if it ends up being a really big expression.

<mdata> BM => BM' </mdata>

The crux comes in the ensures section. K cannot immediately see that the postconditions hold. It is not immediately obvious to it that, for example, #get(BM, ADDR +Int 1) ==Int #get(BM’, ADDR +Int 6). K then discharges the proof obligation to Z3 and asks it to prove the postcondition. But since Z3 understands neither our byte map functions, #get and #set, nor the bit shifts over integers, it doesn’t stand a chance.

So it’s time for us to start doing axiom engineering. Our job is to ask ourselves, “What are some true things that the prover doesn’t seem to get?” and then expand its knowledge base while sticking to the principles we set out for ourselves.

Let’s look at part of the resulting expression. The algorithm started with <i64> 0 in local 2. After one iteration of the loop, the value has changed to:

(((((( #getRange(BM, ADDR, 8) <<Int 56) modInt 18446744073709551616) >>Int 56) <<Int 56) modInt 18446744073709551616) +Int 0) modInt 18446744073709551616

Here it is as a syntax tree, with 18446744073709551616 rendered as 2^64.
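Before reading on, we can sanity-check with ordinary Python integers (our own sketch) that this whole chain really does collapse to "the least significant byte of the loaded value, shifted to the top":

```python
POW64 = 2 ** 64

def step1(r):
    # The symbolic value of local[2] after one loop iteration,
    # with r standing in for #getRange(BM, ADDR, 8).
    return ((((((r << 56) % POW64) >> 56) << 56) % POW64) + 0) % POW64

def simplified(r):
    # The least significant byte, moved to the most significant position.
    return (r % 256) << 56

for r in (0, 1, 0x0123456789ABCDEF, 2 ** 64 - 1):
    assert step1(r) == simplified(r)
```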

Looking at this expression, there are some obvious structural changes that we can tell K about. We don’t include proofs of our new axioms here, but formal proofs are included alongside them in our lemmas file.

First, let’s get rid of the +Int 0. Z3 of course knows that x + 0 is x, but so far we haven’t given K these reasoning capabilities directly. We are moving toward adding more reasoning to K itself, for a variety of reasons. So we open by adding the following obvious claim to our set of axioms:

rule X +Int 0 => X

Then, we have a modInt outside a modInt. They are even modulo the same number. We could say (X modInt N) modInt N => X modInt N, but let’s be a bit more general:

rule (X modInt M) modInt N => X modInt M
requires M >Int 0
andBool M <=Int N
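We can spot-check this axiom over a grid of ordinary Python integers (assuming non-negative operands, where Python's % agrees with K's modInt): X modInt M already lies in [0, M), so with M <=Int N the outer modulus is a no-op.

```python
# Brute-force check of: (X modInt M) modInt N => X modInt M
# for M > 0 and M <= N, over non-negative X.
for x in range(0, 100):
    for m in range(1, 20):
        for n in range(m, 25):
            assert (x % m) % n == x % m
print("ok")
```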

We also have a left-shift followed by a modulus. In unbounded integers, shifting left by N is the same as multiplying by 2^N. That gives us the following rule:

rule (X <<Int N) modInt POW => (X modInt (POW /Int (2 ^Int N))) <<Int N
requires N >=Int 0
andBool POW >Int 0
andBool POW modInt (2 ^Int N) ==Int 0
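This rule, too, is easy to spot-check in Python (our own sketch; non-negative operands assumed): with POW = Q * 2^N, we have (X * 2^N) mod (Q * 2^N) = (X mod Q) * 2^N.

```python
# Brute-force check of:
# (X <<Int N) modInt POW => (X modInt (POW /Int (2 ^Int N))) <<Int N
# whenever 2^N divides POW.
for x in range(0, 200):
    for n in range(0, 8):
        for q in range(1, 16):
            POW = q * 2 ** n  # guarantees POW modInt (2 ^Int N) ==Int 0
            assert (x << n) % POW == (x % (POW // 2 ** n)) << n
print("ok")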

This gives us a much smaller state:

This presents a nice opportunity to get rid of some shifts. Again, recall that these are unbounded integers, so shifting left does not discard any bits of information.

rule (X <<Int N) >>Int M => X <<Int (N -Int M) requires N >=Int M
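The same kind of Python spot-check (ours, non-negative operands assumed) confirms this rule: since no bits are lost by the left shift, shifting back right by M <= N is exact.

```python
# Brute-force check of: (X <<Int N) >>Int M => X <<Int (N -Int M)
# for N >= M, over unbounded non-negative integers.
for x in range(0, 100):
    for m in range(0, 10):
        for n in range(m, 14):
            assert (x << n) >> m == x << (n - m)
print("ok")
```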

The state is further reduced by deleting two of the shifts. This exposes yet another situation where we have a (X modInt 2 ^Int 8) modInt 2 ^Int 8 expression, which gets simplified.

We also tell K something about the way getting values from (little-endian) memory works:

rule #getRange(BM, ADDR, WIDTH) modInt 256 => #get(BM, ADDR)
requires WIDTH =/=Int 0
andBool #isByteMap(BM)
ensures 0 <=Int #get(BM, ADDR)
andBool #get(BM, ADDR) <Int 256
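The intuition behind this rule: #getRange reads WIDTH bytes little-endian, so its value modulo 256 is exactly the first byte, #get(BM, ADDR). A Python stand-in (our own `get_range`, not KWasm's definition) illustrates it:

```python
def get_range(bm: bytes, addr: int, width: int) -> int:
    # Read `width` bytes starting at `addr` as a little-endian integer,
    # mimicking KWasm's #getRange over a byte map.
    return int.from_bytes(bm[addr:addr + width], "little")

bm = bytes([0xEF, 0xCD, 0xAB, 0x89, 0x67, 0x45, 0x23, 0x01])
assert get_range(bm, 0, 8) % 256 == bm[0] == 0xEF
```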

In the end, adding these rules leaves us with our final expression (for now):

You may recognize this as getting the least significant byte of the stored value and putting it in the position of the most significant one in the resulting i64 value. A good start!

The full set of extensions

In the end, to get the proof to pass we added 40 new axioms. You can see them all in our lemmas file.

• 24 of these are general lemmas that can be upstreamed into K’s reasoning capabilities.

• 7 relate to the #get and #set operations of KWasm, and can be used in any KWasm verification.

• 8 are specific to the proof we just wrote.

Of the general lemmas, many are just trivial copies of each other. For example, we need both of the rules:

rule X +Int 0 => X
rule 0 +Int X => X

This repeats for all rules over addition. We can’t tell K directly that addition commutes, because, if you recall, the rule X +Int Y => Y +Int X would cause infinite rewrites. So when it comes to stating true, simple things that K should know about, we need to be a little repetitive.

That’s it

Hopefully we have given you some insight into the process of formal verification. As you may have noticed, it is a lot of work to verify something seemingly simple. This goes to show what our CEO expressed recently on Twitter:

1/2 “Formal verification” is now a buzzword in the blockchain, but it will not be done properly unless people understand that it takes significantly more work to formally verify a program than to write the program in the first place. Think 9x more for smart contracts!

— Grigore Rosu (@RosuGrigore) May 31, 2019

Formal verification takes time. Luckily, it doesn’t have to take this much time and effort in the future. With the new axioms we added, we can do more automatic reasoning about modular arithmetic and bit shifts in general, and about Wasm memory in particular. Note that this Wasm semantics project is still in an early phase. As we write more proofs, the set of axioms will grow, and with it the KWasm prover’s reasoning power.

To see the full set of axioms we use in KWasm, see this file in the GitHub repository. If you want to use K or KWasm for anything, feel free to open issues or contact us in the open K chat.

dlab/diary

Blockchain news, analysis, research, and opinions from the lab

Thanks to Stephen Skeirik

Runtime Verification

Written by

Runtime Verification Inc. is a technology startup providing cutting edge formal verification tools and services for aerospace, automotive, and the blockchain.
