Tries, Radix Trees and Ethereum

Tries, and their space-optimised cousin the radix tree, provide a means to implement efficient, immutable key-value stores. These data structures have been around for quite some time, but recently found great application in the Ethereum blockchain, where they are used to store the state of the Ethereum Virtual Machine. This post will go over the basics of these two data structures and also provide a sample implementation in Go. The full source code can be found here.

Key-Value Stores

Key-value stores are a simple data storage paradigm that allows us to associate some arbitrary data (the value) with an identifier (the key). They provide efficient methods to:

  1. insert a new key-value pair,
  2. look up the value stored for a given key.

Data structures providing this functionality are all over the place. They are a standard feature in programming languages, such as Python’s dictionary or Go’s map. It is hard to imagine programming without these tools at one’s disposal.

Another common use for key-value stores is in NoSQL databases, such as Dynamo and Redis. While these databases generally lack the query power of SQL, they are fast and massively scalable. Because of this they have found countless applications in big data and real-time systems.


Before covering the details of some key-value store implementations we should briefly mention the importance of immutable data.

The idea of immutable data is that once data is created it doesn’t ever change. Functional programmers have been evangelical about the use of immutable data for decades as they have found it makes their programs easier to reason about, debug and parallelise.

Immutable data structures allow a program to ‘update’ large, complex data structures (such as key-value stores) efficiently while leaving the previous state of the data structure unaffected. The data structures are never actually updated in place; rather, new data is created that is similar to the old data but with some small modifications applied. Additionally, the new state will reuse the memory of the old state wherever the data has not changed. This makes such immutable data structures memory efficient when a full history of previous states needs to be retained. The following diagram demonstrates how tree-based structures allow for this efficient update and sharing of memory.

Tree-based data structures allow for sharing of memory

The tree Si is modified by changing the data stored in node G. Even if the tree contains many nodes we can get to G efficiently (provided the tree is balanced). Once G is replaced with G’, its ancestors also need to be updated to point to the correct children (D becomes D’ and F becomes F’). All the other nodes in the tree are unaffected and do not need to be copied, so the new state can refer to data from the old state (shown as red edges in the diagram). If the tree is balanced the number of ancestors will be small, even if the tree contains many nodes.
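This path-copying technique can be sketched for a simple binary tree. The following is a minimal, illustrative example only (the `tree` type and `set` function are my own, not the trie code developed later in the post): updating a leaf copies only the nodes on the path from the root, and everything else is shared.

```go
package main

import "fmt"

// tree is a tiny persistent binary tree for demonstrating path copying.
type tree struct {
	left, right *tree
	data        int
}

// set returns a NEW tree with data replaced at the position described by
// path ('L'/'R' steps). Only the nodes along the path are copied; untouched
// subtrees are shared with the old version, never duplicated.
func set(t *tree, path string, data int) *tree {
	if t == nil {
		t = &tree{}
	}
	cp := *t // shallow copy: children still point at the old subtrees
	if path == "" {
		cp.data = data
		return &cp
	}
	if path[0] == 'L' {
		cp.left = set(t.left, path[1:], data)
	} else {
		cp.right = set(t.right, path[1:], data)
	}
	return &cp
}

func main() {
	s0 := set(set(nil, "L", 1), "R", 2)
	s1 := set(s0, "L", 99)                  // a new version of the tree
	fmt.Println(s0.left.data, s1.left.data) // 1 99: the old state is unaffected
	fmt.Println(s0.right == s1.right)       // true: the right subtree is shared
}
```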

This immutability is vital to Ethereum’s state storage mechanism as previous states need to be retained so transactions can be verified.


Let’s start looking at concrete implementations of key-value stores. A trie (as in retrieval, re-trie-val) is a type of tree-based key-value store. The keys are sequences of elements from some fixed alphabet, for example strings.

A trie has the property that all descendants of a node have keys with a common prefix. This means insert and lookup operations run in O(k) time, where k is the length of the key.

Let’s walk through an example, broken into a few steps. Refer to the images below to get a sense of how the data structure works.

  1. Create an empty trie. The root of the tree is a null pointer.
  2. Insert the pair (“dog”, 1). Each letter gets a node, and the leaf node contains the value 1.
  3. Insert the pair (“cat”, 2). There is no shared prefix, so the new data branches after the root.
  4. Insert the pair (“doge”, 3). This key shares a prefix with the existing key “dog”, so the new node is added as a child of the existing leaf.
  5. Insert the pair (“canape”, 4). This shares a prefix with “cat”, so the new nodes are added as children to the existing branch.


Let’s implement this data structure. We will assume the keys are sequences of hexadecimal digits (0–15) and each value is a byte array, as in Ethereum.


Let’s start by creating a struct for the nodes.

We pre-allocate an array for the children of each node. This allows the lookup and insert functions to descend to the appropriate child without performing any comparisons.
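A sketch of the node struct follows (the field names are my own; the post’s full source may differ). Since keys are sequences of hexadecimal digits, each node pre-allocates an array of 16 child pointers, and the next key digit indexes directly into it.

```go
package main

import "fmt"

// node is a single trie node over the hexadecimal alphabet.
type node struct {
	children [16]*node // children[d] is the subtree under digit d; nil if absent
	value    []byte    // data stored at this node; nil if none
}

func main() {
	// Built by hand: a root with one child under digit 0xd.
	root := &node{}
	root.children[0xd] = &node{value: []byte{1}}
	fmt.Println(root.children[0xd].value) // [1]
}
```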

We will also need a shallow copy function:
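A minimal sketch of that helper (the method name is assumed): copying the struct value copies the array of child pointers, so the new node shares every existing subtree with the original. This is exactly what the immutable update relies on.

```go
package main

import "fmt"

type node struct {
	children [16]*node
	value    []byte
}

// copy returns a shallow copy of n: the pointer array and the value slice
// header are duplicated, but all child subtrees remain shared.
func (n *node) copy() *node {
	cp := *n
	return &cp
}

func main() {
	a := &node{value: []byte{1}}
	a.children[3] = &node{value: []byte{2}}
	b := a.copy()
	b.value = []byte{9}
	fmt.Println(a.value[0], b.value[0])         // 1 9: the original is untouched
	fmt.Println(a.children[3] == b.children[3]) // true: children are shared
}
```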


Now let’s write the lookup function. A lookup always falls into one of three cases:

  1. the trie is empty, so return no data;
  2. there is no remaining key to search, so return the data at the current node;
  3. otherwise, descend to the appropriate child and continue searching with a shortened key.
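The three cases above can be sketched like this (function and field names are my own). The key is a slice of hex digits and we recurse one digit at a time, so the whole operation is O(k) in the key length.

```go
package main

import "fmt"

type node struct {
	children [16]*node
	value    []byte
}

// lookup returns the value stored under key, or nil if there is none.
func lookup(n *node, key []byte) []byte {
	switch {
	case n == nil: // case 1: empty trie, no data
		return nil
	case len(key) == 0: // case 2: key exhausted, return the data here
		return n.value
	default: // case 3: descend on the next digit with a shortened key
		return lookup(n.children[key[0]], key[1:])
	}
}

func main() {
	root := &node{}
	root.children[0xd] = &node{value: []byte("dog")}
	fmt.Println(string(lookup(root, []byte{0xd}))) // dog
}
```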


Now let’s write the insert function. An insert also falls into one of three cases:

  1. the trie is empty, so create an empty node and insert the value there;
  2. there is no remaining key, so add the value to the current node;
  3. otherwise, descend to the appropriate child and continue.
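A sketch of an immutable insert following those cases (again, the names are my own). Every node on the path from the root is shallow-copied, so older versions of the trie remain valid and share all untouched subtrees.

```go
package main

import "fmt"

type node struct {
	children [16]*node
	value    []byte
}

// insert returns the root of a NEW trie containing the (key, value) pair;
// the trie rooted at n is left unchanged.
func insert(n *node, key, value []byte) *node {
	if n == nil { // case 1: empty trie, create a fresh node
		n = &node{}
	} else { // copy-on-write: never modify the existing node
		cp := *n
		n = &cp
	}
	if len(key) == 0 { // case 2: key exhausted, store the value here
		n.value = value
		return n
	}
	// case 3: descend, replacing only the child on the key's path
	n.children[key[0]] = insert(n.children[key[0]], key[1:], value)
	return n
}

func main() {
	var v1 *node
	v1 = insert(v1, []byte{0xd, 0x0}, []byte{1})
	v2 := insert(v1, []byte{0xd, 0x0}, []byte{2})        // a new version
	fmt.Println(v1.children[0xd].children[0x0].value[0]) // 1: v1 is unchanged
	fmt.Println(v2.children[0xd].children[0x0].value[0]) // 2
}
```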

Public Interface

Let’s hide the implementation behind a public interface.

Example Usage

Let’s write a few basic tests. We will check the basic insert and lookup functionality, and also ensure that the structure is immutable.

Radix Tree

The simple trie implementation provided above suffers from at least one glaring issue: memory usage. Every node allocates enough memory to point to 16 children, regardless of how many children it actually has.

It is possible to trade some time efficiency for space efficiency by replacing the array of children with a hash map or a list of (prefix, node) pairs. But even after doing so, a long key with no shared prefix will require each element of the key to be represented by a unique node. A common optimisation is to compress the trie by merging nodes that have only one child. Applying this idea to the example from above, we can easily see the difference.

Radix trees are space-optimised tries

Now only the branch points need to allocate memory for children, and long sequences of key data can be stored compactly. The radix tree sees its largest benefit when keys have long prefixes without any branches.


A radix tree implementation is significantly more complex than a trie. We will base this implementation on the one found in Ethereum.


We use several different structs to represent the different possible nodes. They are as follows:

You can see this data structure doesn’t map directly to the high-level example of a radix tree given above. It still benefits from the compression of shared prefixes and the reduction of null children, but each branch point branches on a single hexadecimal digit. The following diagram demonstrates how we would represent the previous example concretely. Red diamonds represent fullNodes, blue circles represent shortNodes and green circles represent valueNodes.

This implementation of a radix tree has single-digits at the branch points


The lookup function has to deal with each of the new node types:

  1. The radix tree is empty: return nil.

  2. The node is a valueNode:

     (i) the key is empty: return the value;

     (ii) otherwise return nil.

  3. The node is a fullNode:

     (i) the key is empty: return the value stored at this node;

     (ii) otherwise call lookup on the appropriate child node.

  4. The node is a shortNode:

     (i) there is no shared prefix of the keys: return nil;

     (ii) the shortNode.Key is a prefix of the key: call lookup on the child.


Insert also has to deal with the new node types:

  1. The radix tree is empty:

     (i) the key is empty: insert a valueNode;

     (ii) otherwise insert a shortNode with the key and call insert on its child.

  2. The node is a valueNode:

     (i) the key is empty: replace the value;

     (ii) otherwise create a branch with the existing data as a child and call insert on the branch.

  3. The node is a fullNode:

     (i) the key is empty: call insert on the value child;

     (ii) otherwise call insert on the appropriate child node.

  4. The node is a shortNode:

     (i) the shortNode.Key is a prefix of the key: call insert on the child;

     (ii) there is no shared prefix of the keys: create a branch, then insert the short node’s child and the new (key, value) pair on the branch;

     (iii) otherwise (a partial shared prefix): create a branch as above, but place it under a shortNode containing the shared prefix.

Public Interface

We can put it behind the same public interface as the trie. We can also reuse the tests as the expected behaviour should be identical.


To compare the time and space behaviour of these two data structures I will also present some basic benchmarks. The programs were run on my Asus ZenBook, with an i5-6200U CPU and 8 GB of RAM, running Ubuntu 16.04.

I generated some random data where each key was a random sequence of 8 hexadecimal digits and each value was a random 32-bit integer. Data sets of size 100, 1000, 10000, 100000 and 1000000 pairs were generated (3 instances of each size). Both a trie and a radix tree were built for each data set, and the time to build each data structure and the memory required to hold it were measured. The following was observed:

Execution time for many inserts
Memory usage after many inserts
Execution times as a ratio
Memory usage as a ratio

As expected, the radix trees used less memory than the tries. For the data sets of size 100, 1000, 10000 and 100000 the radix tree was also constructed in less time; for the data sets of size 1000000 the trie was constructed faster. The radix tree benefits more on smaller data sets because the path from the root to a leaf is shorter, but as the tree fills out the path length increases and the benefit seems to be outweighed by the additional complexity of the insert algorithm.


Tries and radix trees provide an immutable key-value store, exactly what is needed to represent the state of Ethereum’s virtual machine. While tries and radix trees provide the same functionality, the tries were seen to use over 4 times as much memory and perform at half the speed.

The present implementation is slightly simplified from the one found in Ethereum. For the sake of simplicity I deliberately omitted:

  1. Merkle proofs,
  2. encoding keys,
  3. paging data out of RAM.