KV-Witness

an alternative approach to block witnesses

Igor Mandrigin
Jul 14, 2020

There is currently a proposed format for block witnesses in Stateless Ethereum that has a spec in a GitHub repo. It is based on opcodes: you can imagine a tiny programming language with just a few commands that generates a Merkle Multiproof.

This post researches an alternative approach to block witness construction. It is based on a list of keys and values and has simpler semantics.

In this post I will try to answer the following questions:

  • What is the KV witness format and how is it different from the currently proposed (opcode-based) witness format?
  • What are the pros and cons of KV compared to the opcode-based format?
  • How efficient is this format in terms of network bandwidth?

Requirements

Any block witness must satisfy the following requirements:

  • Correctness. We should be able to run any block from Ethereum mainnet using this format.
  • Efficiency. We should use the minimum network footprint to transfer this witness.
  • Merkleization. The witness format must support Code Merkleization.
  • Arity-agnostic. The witness format must support both hexary and binary Merkle Tries.

Semantics

The witness format I describe in this part of the post is its semantics; I'm not talking about the exact byte layout here.

Later on, I will touch more on the exact binary format of the witness that was used for testing.

KV-Witness
witness ::= header, body, proofs  
header ::= version : byte, trie_arity : byte
body ::= array of [ { type : byte, key : []byte, value : []byte } ]
proofs ::= map { <type>: [ { prefix : []byte, hash : []byte } ] }
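
To make the grammar concrete, here is a minimal Go sketch of these semantics. The type names, the element-type constants, and keying the proofs map by type are my own illustration, not the turbo-geth implementation.

// Illustrative Go types for the KV-witness semantics above.
package kvwitness

// Element types that can appear in the witness.
const (
	TypeAccount byte = iota // key = account address, value = encoded account
	TypeStorage             // key = storage key, value = storage item
	TypeCode                // key = code key, value = code chunk
	TypeProof               // key = Merkle path prefix, value = 32-byte hash
)

// Header carries the format version and the arity of the trie the proofs target.
type Header struct {
	Version   byte
	TrieArity byte // e.g. 16 for a hexary trie, 2 for a binary trie
}

// BodyElement is a single typed key/value entry; the body is sorted by key.
type BodyElement struct {
	Type  byte
	Key   []byte
	Value []byte
}

// ProofElement pairs a trie path prefix with the hash of the subtrie it stands for.
type ProofElement struct {
	Prefix []byte
	Hash   [32]byte
}

// Witness ties it all together: header, sorted body, and per-type proofs.
type Witness struct {
	Header Header
	Body   []BodyElement
	Proofs map[byte][]ProofElement // keyed by proof type; this sketch uses the trie arity as the type
}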

Witness body

The witness body consists of 2 elements:

  1. Data.
    The key can be an account address, a storage key, or a code key, and the value is an account, a storage item, or a code chunk respectively. This part of the witness body is completely agnostic to the arity of the Merkle Trie that is used to validate its correctness. Moreover, if we use something else for the correctness check, this part won't change.
  2. Proof(s).
    The key is the Merkle path and the value is a hash. The proof depends on the trie arity and will be different for hex and bin tries. These semantics allow including multiple proofs with different types in the same witness, so we could theoretically make a witness that supports both hex and bin tries.

The body is sorted lexicographically by key, making sure that:

  • shorter keys are earlier in the list (so we are rebuilding a trie from top to bottom);
  • when keys are equal (which can happen for an account and its code), the account always goes first (see the comparator sketch below).
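
As a sketch of these ordering rules (reusing the types from the illustration above): plain lexicographic byte comparison already places shorter prefix keys first, so only the account-before-code tie-break needs special handling.

import (
	"bytes"
	"sort"
)

// sortBody orders the witness body as required: lexicographically by key
// (shorter keys, being prefixes, come first) and, for equal keys, the account
// before its code.
func sortBody(body []BodyElement) {
	sort.SliceStable(body, func(i, j int) bool {
		if c := bytes.Compare(body[i].Key, body[j].Key); c != 0 {
			return c < 0
		}
		// Equal keys can happen for an account and its code: account goes first.
		return body[i].Type == TypeAccount && body[j].Type != TypeAccount
	})
}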

Parsing Algorithm

  1. Validate the witness version and the proof arity (make sure that the witness proof arity matches the trie arity that the block requires).
  2. Validate the witness hash (if canonical witness is there).
  3. Create an empty trie of the right arity.
  4. Go through the data and insert it into this trie in the order it comes (the witness should be lexicographically sorted).
  5. Insert proofs into the trie.
  6. Validate the root hash (it should match the root hash of the previous block). A Go outline of these steps is sketched below.
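
A hedged Go outline of these steps, reusing the earlier types and with a placeholder Trie interface standing in for the real implementation:

import "fmt"

// Trie is a placeholder interface for whatever trie implementation is used to
// validate the witness; turbo-geth's actual types differ.
type Trie interface {
	Insert(typ byte, key, value []byte)
	InsertHash(prefix []byte, hash [32]byte)
	RootHash() [32]byte
}

const supportedVersion byte = 1 // assumed version number for this sketch

// buildTrieFromWitness follows the parsing steps above: check the header,
// rebuild the trie from data and proofs, and verify the root hash.
func buildTrieFromWitness(w Witness, expectedArity byte, newTrie func(arity byte) Trie, prevRoot [32]byte) (Trie, error) {
	// 1. Version and arity checks.
	if w.Header.Version != supportedVersion {
		return nil, fmt.Errorf("unsupported witness version %d", w.Header.Version)
	}
	if w.Header.TrieArity != expectedArity {
		return nil, fmt.Errorf("witness arity %d does not match the required arity %d", w.Header.TrieArity, expectedArity)
	}

	// 2. If a canonical witness hash is available, it would be checked here.

	// 3. Create an empty trie of the right arity.
	trie := newTrie(w.Header.TrieArity)

	// 4. Insert data in the order it comes; the body is already sorted,
	//    so the trie is rebuilt from top to bottom.
	for _, el := range w.Body {
		trie.Insert(el.Type, el.Key, el.Value)
	}

	// 5. Insert the proof hashes for the subtries that are not present in the data.
	for _, p := range w.Proofs[w.Header.TrieArity] {
		trie.InsertHash(p.Prefix, p.Hash)
	}

	// 6. The resulting root must match the state root of the previous block.
	if trie.RootHash() != prevRoot {
		return nil, fmt.Errorf("root hash mismatch")
	}
	return trie, nil
}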

Pros & Cons

Here is a list of pros and cons comparing the KV-Witness with the current opcode-based witness format.

Pros

  • It matches the flat DB structure and can be inserted immediately if the canonical witness hash is valid (no need to verify the Merkle root).
  • It can be used for snapshot sync.
  • The witness data is independent of the validity proof method we choose: Merkle Tries, Polynomial Commitments, or something else.

Cons

  • Potentially higher network footprint on binary tries due to byte alignment (e.g. the proof key 0b01, which is only 2 bits, will still take a full byte of storage space).
  • Potentially slower witness parsing.

Network Efficiency Research

KV Witness Example Implementation

We need to be able to prove the correctness of the format: it should be able to run on all blocks of Ethereum Mainnet.

To do that, I have implemented this witness format in a branch of turbo-geth repository: kv-witness-research.

This implementation was tested in Google Cloud on Ethereum mainnet blocks 5.000.000–8.000.000.

How to repeat the experiment?

You need at least 200 GB of free space and at least 32 GB of RAM (the code is a PoC and isn't optimized much).

1) clone the kv-witness-research branch of turbo-geth (commit aa6b4dc609b3d871c778597a71ac08601f17de53)

2) (took 1 day in my case) sync block headers and bodies for mainnet: go run ./cmd/geth --syncmode staged --datadir ~/stateless-chaindata/

3) (took 17 days in my case) run stateless prototype on this data

go run ./cmd/state stateless --blockSource ~/stateless-chaindata/geth/chaindata --statefile ~/kv_witness_statefile --witnessInterval 5000000 --statsfile ~/stats_kv_witness.csv 2>&1 | tee debug.log

That way, in stats_kv_witness.csv you should obtain the same stats as I have in this document.

Witness binary format

The witness starts with a single-byte header that contains the version.

Then, there is the witness body, which consists of the size (1–4 bytes, depending on the number of witness elements) and the elements themselves.

Each element starts with a single-byte type, followed by the key field (a byte array of arbitrary size prefixed by its size byte, just like the body itself), followed by the actual data.

The data depends on the type of the element:

  • for storage leaves it is a byte array of arbitrary size, prefixed by its size bytes;
  • for code leaves it is a byte array of arbitrary size, prefixed by its size bytes too;
  • for proofs it is a fixed-size 32-byte hash value without any size prefix.

For accounts it is more complicated, but basically it is:

  • a flags byte (a mask that tells which account fields don't have their default values);
  • the nonce, 8 bytes (if nonce != 0);
  • the balance (if balance != 0), a byte array of arbitrary size, prefixed by its size;
  • the storage root hash (if it is not the empty root hash), 32 bytes, a byte array of fixed size;
  • the code hash (if the code is not empty), 32 bytes, a byte array of fixed size.

In the end we get something like this: [<header> <witness_elements_count> <element1_type> <element1_key_size> <element1_key_byte1> ... <element1_value_size> ... <element2_type> ...]
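
For illustration, here is a rough Go sketch of serializing one element in this layout, reusing the types from the earlier sketch. The single size byte is a simplification of the 1–4 size bytes described above, and the account field names and flag bit order are assumptions rather than the exact turbo-geth encoding.

import (
	"bytes"
	"encoding/binary"
)

// Account is a sketch of the account fields carried in the witness; default
// values are omitted on the wire and signalled through the flags byte.
type Account struct {
	Nonce       uint64
	Balance     []byte   // big-endian, arbitrary length
	StorageRoot [32]byte // all-zero stands in for "empty storage root" here
	CodeHash    [32]byte // all-zero stands in for "no code" here
}

const (
	flagNonce byte = 1 << iota
	flagBalance
	flagStorageRoot
	flagCodeHash
)

// writeElement serializes one witness element: type byte, size-prefixed key,
// then the value.
func writeElement(buf *bytes.Buffer, el BodyElement) {
	buf.WriteByte(el.Type)
	buf.WriteByte(byte(len(el.Key))) // key size prefix
	buf.Write(el.Key)

	switch el.Type {
	case TypeStorage, TypeCode:
		buf.WriteByte(byte(len(el.Value))) // size-prefixed data
		buf.Write(el.Value)
	case TypeProof:
		buf.Write(el.Value[:32]) // fixed 32-byte hash, no size prefix
	case TypeAccount:
		buf.Write(el.Value) // already encoded with encodeAccount below
	}
}

// encodeAccount emits the flags byte followed by only the non-default fields,
// in the order described above: nonce, balance, storage root, code hash.
func encodeAccount(a Account) []byte {
	var out bytes.Buffer
	var flags byte
	var zero [32]byte

	if a.Nonce != 0 {
		flags |= flagNonce
	}
	if len(a.Balance) != 0 {
		flags |= flagBalance
	}
	if a.StorageRoot != zero {
		flags |= flagStorageRoot
	}
	if a.CodeHash != zero {
		flags |= flagCodeHash
	}
	out.WriteByte(flags)

	if flags&flagNonce != 0 {
		var nonce [8]byte
		binary.BigEndian.PutUint64(nonce[:], a.Nonce)
		out.Write(nonce[:]) // 8 bytes
	}
	if flags&flagBalance != 0 {
		out.WriteByte(byte(len(a.Balance))) // size prefix
		out.Write(a.Balance)
	}
	if flags&flagStorageRoot != 0 {
		out.Write(a.StorageRoot[:]) // fixed 32 bytes
	}
	if flags&flagCodeHash != 0 {
		out.Write(a.CodeHash[:]) // fixed 32 bytes
	}
	return out.Bytes()
}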


Optimization: Key prefix deduplication

The key is a Merkle path that consists of nibbles, not bytes. A single nibble is 4 bits for a hex trie and 1 bit for a binary trie. That way, a key can sometimes occupy a non-integer number of bytes (for example, 12 bits is 1.5 bytes).

The key is encoded as []byte, aligned to whole bytes (so a key with a length between 4 and 5 bytes will always take 5 bytes). Also, an additional "mask" byte is added at the beginning that shows which bits are "active" in the last byte (a sketch follows the examples below).

  • a key 0xFF (1 byte): [00001000 11111111] <== 8 significant bits
  • a key 0b11(2 bits): [00000010 11000000] <== 2 significant bits
  • a key 0b10(2 bits): [00000010 10000000] <== 2 significant bits
  • a key 0b_10101010_01010101_1101(2 bytes 4 bits): [11110000 10101010 01010101 11010000]
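
A small sketch of this byte alignment, assuming (as in the first examples above) that the leading byte stores the count of significant bits in the last byte; that convention is an assumption, not taken from the implementation.

// encodeKey byte-aligns a key of bitLen bits (packed MSB-first in data) and
// prepends one byte with the count of significant bits in the last byte.
func encodeKey(data []byte, bitLen int) []byte {
	if bitLen == 0 {
		return []byte{0} // empty key
	}
	byteLen := (bitLen + 7) / 8             // round up to whole bytes
	sigBitsInLast := bitLen - (byteLen-1)*8 // 1..8 significant bits in the last byte

	out := make([]byte, 1+byteLen)
	out[0] = byte(sigBitsInLast)
	copy(out[1:], data[:byteLen])

	// zero out the padding bits of the last byte
	if rem := uint(sigBitsInLast % 8); rem != 0 {
		out[byteLen] &^= 0xFF >> rem
	}
	return out
}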

Then, we can add a simple optimization that allows us to mitigate repeating prefixes in both data and proofs.

To increase compression efficiency, we will intermix both data and proof in the same ordered map.

Data and proof are together sorted lexicographically

The highest 4 bits of the key's header byte encode how many prefix bytes are reused from the previous key.

Since in our semantics the keys are sorted, we can use these 4 bits to describe the following cases:

  • the key is the same as the previous one; the whole key can then be reduced to 1 byte: [10000000];
  • the key doesn't share any bytes with the previous one ([0000xxxx xxxxxxxx ...]);
  • the key shares up to 14 prefix bytes with the previous one ([????xxxx xxxxxxxx ...]), where ???? is the number of shared bytes, big-endian encoded, from 1 (0001) to 14 (1110);
  • the key shares 15 or more bytes with the previous one ([1111xxxx ???????? xxxxxxxx ...]), where ???????? is the number of shared bytes in addition to 15:
    • for 15 bytes it will be [1111xxxx 00000000 ...] (15 + 0);
    • for 16 bytes it will be [1111xxxx 00000001 ...] (15 + 1);
    • for 32 bytes it will be [1111xxxx 00010001 ...] (15 + 17).
KV-Witness compression. The number in parentheses tells how many bytes are reused from the previous key. A sketch of this encoding follows.
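
A hedged sketch of emitting one key under this scheme, given the previously written key. The handling of the low nibble of the first byte is left out, and the helper names are illustrative rather than taken from the turbo-geth code.

import "bytes"

// writeKeyWithDedup emits one key, reusing the byte prefix it shares with the
// previously written key, following the cases listed above.
func writeKeyWithDedup(buf *bytes.Buffer, key, prevKey []byte) {
	shared := sharedPrefixLen(key, prevKey)

	switch {
	case shared == len(key) && len(key) == len(prevKey):
		// identical key: the whole key collapses to the single byte [10000000]
		buf.WriteByte(0x80)
		return
	case shared < 15:
		// 0..14 shared bytes fit into the high nibble of the first byte
		buf.WriteByte(byte(shared) << 4)
	default:
		// 15 or more shared bytes: high nibble 1111, then one byte with (shared - 15)
		buf.WriteByte(0xF0)
		buf.WriteByte(byte(shared - 15))
	}
	// only the non-shared tail of the key needs to be written out
	buf.Write(key[shared:])
}

// sharedPrefixLen counts the leading bytes two keys have in common.
func sharedPrefixLen(a, b []byte) int {
	n := 0
	for n < len(a) && n < len(b) && a[n] == b[n] {
		n++
	}
	return n
}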

Network Efficiency Research Results

Raw data for the results is available in this GitHub repo: https://github.com/mandrigin/ethereum-mainnet-kv-witness-data

To give a comprehensive picture in terms of network efficiency, we compare 3 types of witnesses, all of them running on hexary tries:

1) Opcode witness, the existing one (data is from my previous experiments).

2) KV Witness (uncompressed): no key prefix deduplication.

3) KV Witness (compressed): key prefix deduplication is included.

Tests were conducted on blocks 5.000.000–8.000.000.

Absolute size comparison

Absolute size comparison. Compressed KV witness behaves pretty much like the opcode witness.

Quantile analysis

Conclusion

Key prefix deduplication does provide significant improvements to the KV witness. With it enabled, the two formats show almost the same numbers.

The KV Witness has a couple of advantages.

The main one is its simplicity. It is much easier to write a spec for a data format, essentially a dictionary, than for a more complicated almost-programming-language approach.

The second advantage is that the proof is semantically separate from the data, so when we want to change the trie arity (from hexary to binary), or change the proof mechanics completely, it will require fewer changes with the KV-Witness approach.

I think it is definitely a worthy contender for the witness spec standard.


Igor Mandrigin

independent developer & security researcher with a passion for blockchain: https://ffconsulting.org