Consistent Hashing with Clojure

Suvrat Apte
helpshift-engineering
Dec 31, 2020

In this post, let’s try to understand what Consistent Hashing is, when it is needed, and how to implement it in Clojure. Consistent hashing has many use cases; I have chosen distributed caching for this post. Another common use case is sharding in databases.

Caching

Almost all applications today use some kind of caching. Caches reduce the number of requests served directly by your database and improve latency. You start with one cache node sitting in front of your database. On read paths, you check whether the data is available on the cache node; if not, you go to the database, serve the request, and store the data on your cache node. On write paths, you update the database and then update (or invalidate) your cache. In such a setup, your cache provides fast access to items in your DB and reduces load on the DB.

But as your application usage grows, your cache node is also going to get overwhelmed and soon enough, you will need multiple cache nodes. When you have multiple nodes, you will need to decide how you are going to divide data between those nodes.

Distributed Caching

One very simple strategy to divide data between cache nodes is to take an integer hash of the cache key and then take it modulo the number of cache nodes.

For example, if the hash of the cache key comes out to be 86696499 and we have 4 nodes, then (86696499 mod 4) = 3, so the data will go to the node with index 3 (0-based indexing).
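
In Clojure, this is just the built-in mod:

(mod 86696499 4)
;; => 3, so this key goes to the node with index 3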

This is very simple to implement. For simplicity, we will use email addresses of users as cache keys.

Mod n hashing
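
Here is a minimal sketch of what this code could look like, using Clojure’s built-in hash (the helper names follow the discussion below; exact key placement depends on the hash function, so the maps you get may differ from the ones shown):

(ns mod-n-hashing.core
  (:require [clojure.pprint :as pp]))

;; A small collection of cache keys.
(def emails
  ["Bob@example.com" "Carry@example.com" "Daisy@example.com"
   "George@example.com" "Heidi@example.com" "Mark@example.com"
   "Steve@example.com" "Zack@example.com"])

;; Pick a node by hashing the key and taking the result mod n.
(defn get-node
  [num-nodes cache-key]
  (str "node-" (mod (hash cache-key) num-nodes)))

;; Group the cache keys by the node they map to.
(defn get-distribution
  [num-nodes cache-keys]
  (group-by (partial get-node num-nodes) cache-keys))

(->> emails (get-distribution 4) (into (sorted-map)) pp/pprint)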

Let’s go through the code above.

emails is just a collection of emails.

get-node tells us which key should go to which node. This is the logic we discussed earlier: we first take a hash of the key and then take it mod n.

get-distribution simply runs get-node on emails and returns a map whose keys are node names and whose values are the lists of cache keys that would reside on those nodes.

The last line prints how the emails would be divided across 4 cache nodes. It prints the following map:

{"node-0" ["Bob@example.com" "Steve@example.com"
"Zack@example.com"],
"node-1" ["Carry@example.com" "Daisy@example.com"
"George@example.com"],
"node-2" ["Mark@example.com"],
"node-3" ["Heidi@example.com"]}

Okay so this seems to be working well! Now every time we have to fetch some data, we will just check on which node that data is expected and then we will try to fetch it from that node. So far so good!

In today’s distributed infrastructure, the total number of cache nodes can easily change. There could be multiple reasons, such as needing a few extra nodes when more traffic is expected, or a node going down due to an error.

We can simulate the above two situations.

Let’s say we add another cache node in our cluster. We can simulate this by running:

(->> emails (get-distribution 5) (into (sorted-map)) pp/pprint)

We will get the following:

{"node-0" ["Bob@example.com"],  
"node-1" ["Daisy@example.com"],
"node-2" ["Steve@example.com"],
"node-3" ["Carry@example.com" "George@example.com"
"Mark@example.com"],
"node-4" ["Heidi@example.com" "Zack@example.com"]}

If we compare the distribution of keys for 4 nodes vs 5 nodes, we can see that 6 out of 8 keys are now on different nodes.

The same will happen if a node goes down. Let’s simulate this by running:

(->> emails (get-distribution 3) (into (sorted-map)) pp/pprint)

This produces:

{"node-0" ["Zack@example.com"],
"node-1" ["Bob@example.com" "Daisy@example.com"
"George@example.com"],
"node-2" ["Carry@example.com" "Heidi@example.com"
"Mark@example.com" "Steve@example.com"]}

In this case as well, 5 out of 8 keys are now on a different node.

Both of these cases create a really bad situation for our databases. Initially, our data was distributed across 4 nodes. On adding or removing nodes, almost all of the keys get relocated to other nodes. This results in a flurry of cache misses and all the missed requests go to our databases. Further, our cache nodes now have stale data with no way for us to clean it up. We have to wait for the stale data to expire.

This is clearly undesirable and in extreme situations, this could even take our entire system down.

Let’s understand why this is happening. We need to remember that our logic to select a node for a piece of data (get-node) takes the number of nodes as a parameter. So when the number of nodes changes, the output of get-node is very likely to change as well.

We need to find a strategy which will not directly depend on the number of nodes that we have.

Consistent Hashing

Consistent hashing is a simple method of hashing which does not depend on the number of nodes we have. In consistent hashing, we imagine our nodes to be placed on a ring. The ring is made up of the range of our hash function. For example, if our hash function produced hashes over the entire range of integers, then the ring would go from the minimum integer to the maximum integer.

We will generate hashes for nodes using some unique property of nodes, say IP addresses. These hashes will be the locations of our nodes on the ring.

Consistent hashing ring

To insert or retrieve data, we will hash the cache key and use the node which is closest to that hash in the clockwise direction. (Clockwise is just a convention we are using for this post; anti-clockwise would work too.)

Key distribution

What benefit has this given us?

Well, so far it does not look useful. In fact, we are doing more work to find out which data goes to which node. We will now consider the two cases that we discussed for mod n hashing.

Let’s say we add a 5th node to our ring.

If the 5th node gets placed between the 1st and the 2nd node, think about which keys will get relocated. Only the keys between the 1st and the 5th node will be relocated to the 5th node. All the keys on the rest of the ring will stay where they were.

Rebalancing on addition of a node

Similarly, if one of our nodes, say the 4th node, goes down, then only the keys between the 3rd and the 4th node will get relocated.

Rebalancing on removal of a node

When nodes are added or removed, only about count(keys) / count(nodes) keys will be relocated on average. This reduces the number of cache misses by a huge amount and saves our databases from having to serve a large number of requests!
(There is a small caveat here which is discussed later.)

Implementation in Clojure

Let’s write a simple API for consistent hashing. Here are the functions we need:

  1. create-ring, add-node, remove-node: These functions will build and manage the ring.
  2. get-node: This function plays the same role as it did in mod-n hashing.

Let’s look at create-ring first.

Create consistent hashing ring
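
A sketch of how create-ring could look, hashing nodes with Clojure’s built-in hash (the original code may differ in details):

(ns consistent-hashing.core
  (:require [clojure.pprint :as pp]))

(defn create-ring
  [nodes]
  (let [nodes       (set nodes)                                 ;; no duplicate entries
        hash->node  (into {} (map (juxt hash identity) nodes))  ;; hash -> node lookup table
        node-hashes (sort (keys hash->node))]                   ;; ascending order
    {:node-hashes node-hashes
     :hash->node  hash->node}))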

We are taking the set of nodes to make sure there are no duplicate entries. Then we get the hash values for all the nodes and sort them in ascending order. This sorted list will be used to find the closest node in the ring. We also create a lookup table hash->node which gives us the node corresponding to a given hash. Finally, we return both of these so that they can be passed to other functions in our API.

Let’s see how add-node and remove-node work.

Add node and Remove node
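
Something along these lines (again a sketch, assuming the ring map returned by create-ring above):

(defn add-node
  [ring node]
  (let [current-nodes (-> ring :hash->node vals set)]
    (create-ring (conj current-nodes node))))

(defn remove-node
  [ring node]
  (let [current-nodes (-> ring :hash->node vals set)]
    (create-ring (disj current-nodes node))))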

As you can see, these are very simple functions: they get current-nodes from the ring, add or remove a node, and call create-ring again. This works because our hash function is deterministic, so creating the ring again generates the same hash values for the existing nodes.

Let’s look at the last function in our API, get-node.

Get node for key
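
A possible version of get-node, built on the same ring map (its core expression is discussed right below):

(defn get-node
  [{:keys [node-hashes hash->node]} cache-key]
  (let [key-hash  (hash cache-key)
        ;; first node hash at or after the key's hash,
        ;; wrapping around to the first node if there is none
        node-hash (or (->> node-hashes
                           (drop-while #(< % key-hash))
                           first)
                      (first node-hashes))]
    (get hash->node node-hash)))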

The main logic in this function is to find the closest node to the cache-key, in clockwise direction.

(or (->> node-hashes
         (drop-while #(< % key-hash))
         first)
    (first node-hashes))

node-hashes is the sorted list of hashes of our nodes. We drop the nodes whose hashes are less than the hash of the cache-key, and we stop dropping once we find a value greater than or equal to key-hash. That value is the hash of the closest node in the clockwise direction. If key-hash is greater than all the values in node-hashes, we wrap around and select (first node-hashes). For simplicity, I have used a sequential search to find the closest hash. Binary search could be used to make this more efficient.
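
For instance (an alternative, not the code used in this post), keeping the node hashes in a sorted set lets subseq find the successor by walking the underlying tree instead of scanning the whole list:

(defn closest-node-hash
  [node-hash-set key-hash]                        ;; node-hash-set is a sorted-set of node hashes
  (or (first (subseq node-hash-set >= key-hash))  ;; first hash at or after key-hash
      (first node-hash-set)))                     ;; wrap around

;; (closest-node-hash (sorted-set 10 20 30) 25) ;; => 30
;; (closest-node-hash (sorted-set 10 20 30) 35) ;; => 10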

Now our API for consistent hashing is complete and ready for use!

Let’s use it for the same example scenarios that we used for mod n hashing.

Consistent Hashing
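
Putting the API together, get-distribution can be rebuilt on top of the ring functions above, reusing the emails collection from the mod-n example (a sketch; as before, the exact placement of keys depends on the hash function, so your output may differ from the maps shown):

(def nodes ["node-0" "node-1" "node-2" "node-3"])

;; Group the cache keys by the ring node that owns them.
(defn get-distribution
  [nodes cache-keys]
  (let [ring (create-ring nodes)]
    (group-by (partial get-node ring) cache-keys)))

(->> emails (get-distribution nodes) (into (sorted-map)) pp/pprint)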

Result with 4 nodes:

{"node-0" ["Daisy@example.com" "George@example.com"],
"node-1" ["Mark@example.com" "Zack@example.com"],
"node-2" ["Bob@example.com" "Heidi@example.com"],
"node-3" ["Carry@example.com" "Steve@example.com"]}

With a node added:

(->> emails
     (get-distribution (conj nodes "node-4"))
     (into (sorted-map))
     pp/pprint)
{"node-0" ["Daisy@example.com" "George@example.com"],
 "node-1" ["Mark@example.com" "Zack@example.com"],
 "node-2" ["Heidi@example.com"],
 "node-3" ["Carry@example.com" "Steve@example.com"],
 "node-4" ["Bob@example.com"]}

Only 1 out of 8 keys got relocated!

With node-3 removed:

(->> emails
     (get-distribution (drop-last nodes))
     (into (sorted-map))
     pp/pprint)
{"node-0" ["Daisy@example.com" "George@example.com"],
 "node-1" ["Mark@example.com" "Zack@example.com"],
 "node-2" ["Bob@example.com" "Carry@example.com" "Heidi@example.com"
           "Steve@example.com"]}

Only 2 out of 8 keys got relocated. We can see that node-3’s keys are now on node-2. The keys on node-0 and node-1 have not changed at all.

Caveats

Our implementation above is not something that can be used in production as-is. The problem is that when we have only a few nodes, we could end up in a situation like this:

Poorly distributed nodes

In the above situation, most of the keys will go to Node 1.

The solution is simple. Instead of having just one hash per node, we could have multiple hashes which map to the same node. This will ensure more randomness and a better distribution of keys. It will look like this:

Virtual nodes
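
One way to sketch this (an assumption, not code from the post) is to give every physical node a fixed number of virtual labels and hash those labels instead; the lookup table then maps every virtual hash back to its physical node:

(defn create-ring-with-virtual-nodes
  [nodes replicas]                        ;; replicas = virtual nodes per physical node
  (let [hash->node  (into {}
                          (for [node (set nodes)
                                i    (range replicas)]
                            [(hash (str node "#" i)) node]))
        node-hashes (sort (keys hash->node))]
    {:node-hashes node-hashes
     :hash->node  hash->node}))

get-node does not need to change at all, since it only looks at node-hashes and the hash->node lookup table.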

One more advantage of using virtual nodes is that you can have a heterogeneous cluster where some nodes are more powerful than others. In such clusters, you can assign more virtual nodes to the powerful nodes for better utilization of resources.

Conclusion

  • Consistent hashing is a simple and effective caching strategy to make sure your databases are protected from hotspots in a distributed environment.
  • Consistent hashing achieves this by removing the number of nodes as a parameter to the hashing logic.
  • When nodes are added or removed, only about count(keys) / count(nodes) keys need to be relocated on average.
  • Persistent databases like Cassandra and DynamoDB use consistent hashing for sharding data.
  • Many in-memory datastores today use consistent hashing.
