Read this before building a Distributed System

In this post, we walk through the most widely used peer-to-peer (P2P) protocols for blockchains and other distributed systems, all of which we have used on our journey so far.

What is a DHT

Hashing is a technique used to uniquely identify a specific object within a group of similar objects. A distributed hash table (DHT) is a class of decentralised distributed system that stores (key, value) pairs such that any participating node can efficiently retrieve the value associated with a given key. This mirrors an ordinary hash table, the data structure that implements an associative array. Responsibility for the key-to-value mapping is spread across the nodes in a way that causes minimal disruption when the set of participants changes. This allows a DHT to scale to a very large number of nodes and to tolerate continual arrivals and departures as well as intermittent node failures.
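The key-to-node mapping can be sketched in a few lines of Python. This is a minimal illustration of consistent hashing on a ring, not any specific DHT protocol; the helper names and the 16-bit ID space are our own choices:

```python
import hashlib

def key_id(key: str, bits: int = 16) -> int:
    """Map an arbitrary key into a fixed circular ID space via hashing."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** bits)

def responsible_node(key: str, node_ids: list[int], bits: int = 16) -> int:
    """The node responsible for a key is the first node whose ID is at or
    after the key's ID on the ring (wrapping around at the top)."""
    kid = key_id(key, bits)
    for nid in sorted(node_ids):
        if nid >= kid:
            return nid
    return min(node_ids)  # wrap around the ring

nodes = [4096, 20000, 45000, 61000]
print(responsible_node("user:42", nodes))  # one of the four node IDs
```

Because each node owns only the arc of the ring between its predecessor and itself, adding or removing a node remaps only that arc's keys, which is the "minimal disruption" property described above.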

DHTs characteristically emphasize the following properties:

  • Autonomy and Decentralization: the nodes collectively form the system without any central coordination.
  • Fault Tolerance: the system should be reliable (in some sense) even with nodes continuously joining, leaving, and failing.
  • Scalability: the system should function efficiently even with thousands or millions of nodes.
Distributed Hash Tables (courtesy: Wikipedia)

Notable distributed networks that use DHTs include BitTorrent’s distributed tracker, the Coral Content Distribution Network, the Kad network, the Storm botnet, the Tox instant messenger, Freenet and the YaCy search engine.


Chord

Chord is one of the four original DHT protocols, along with CAN, Tapestry and Pastry, and was developed at MIT. It specifies how keys are assigned to nodes, and how a node can discover the value for a given key by first locating the node responsible for that key.

A client (usually itself a node) queries Chord for a key by asking for successor(k). If the key cannot be resolved locally, the query is passed on to the node’s successor, which leads to O(N) query time, where N is the number of nodes in the ring.

The linear search above is avoided by a faster search method that requires each node to keep a finger table containing up to m entries, where m is the number of bits in the key. With such a finger table, the number of nodes that must be contacted in an N-node network to find a successor drops to O(log N).
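Entry i of node n’s finger table points at successor((n + 2^i) mod 2^m). A minimal sketch, using the example ring of ten nodes on a 6-bit identifier space from the Chord paper (the function names are ours):

```python
M = 6                 # m-bit identifiers -> a ring of 2**6 = 64 positions
RING = 2 ** M

def successor(k, nodes):
    """First live node at or after identifier k on the ring."""
    for n in sorted(nodes):
        if n >= k:
            return n
    return min(nodes)  # wrap around

def finger_table(n, nodes):
    """Entry i points at successor((n + 2**i) mod 2**m), i = 0 .. m-1."""
    return [successor((n + 2 ** i) % RING, nodes) for i in range(M)]

nodes = {1, 8, 14, 21, 32, 38, 42, 48, 51, 56}
print(finger_table(8, nodes))  # -> [14, 14, 14, 21, 32, 42]
```

Each entry roughly halves the remaining distance to any target identifier, which is where the O(log N) hop count comes from.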

What happens when a new node joins?

On the joining of a new node, three invariants must be maintained:

Chord Basic

1. Each node’s successor pointer correctly points to its immediate successor.
2. Each key k is stored at successor(k).
3. Each node’s finger table is correct.

Of these invariants, the first two ensure correctness, while the third keeps queries fast.
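The second invariant can be illustrated on a toy ring: when a node joins, only the keys that now fall under its responsibility change hands, and everything else stays put. The helper names and the 64-identifier ring below are illustrative:

```python
RING = 64

def successor(k, nodes):
    """First live node at or after identifier k on the ring."""
    for n in sorted(nodes):
        if n >= k:
            return n
    return min(nodes)  # wrap around

def owners(keys, nodes):
    """Map each key to the node that stores it: successor(k)."""
    return {k: successor(k, nodes) for k in keys}

nodes = {8, 21, 42}
keys = [5, 10, 25, 40, 60]
before = owners(keys, nodes)
after = owners(keys, nodes | {32})       # node 32 joins the ring
moved = [k for k in keys if before[k] != after[k]]
print(moved)  # -> [25]: only keys in (21, 32] change hands
```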

Potential uses of the Chord protocol:

  1. Cooperative Mirroring: a load-balancing mechanism in which a local network hosts information that it makes available to computers external to the network.
  2. Distributed Indices: as the name implies, these allow retrieval of files over the network through a searchable database.
  3. Large-scale combinatorial searches: keys are candidate solutions to a problem, and each key maps to the node responsible for evaluating that candidate.
Finger Tables in the Chord protocol


Pastry

Pastry, like Chord, is an overlay and routing network implementing a DHT. It uses a redundant P2P network of connected Internet hosts to store key-value pairs. Because of its redundant, decentralised nature there is no single point of failure, and any node can leave the network at any time without warning, with little or no risk of data loss.

The routing overlay network built on top of the DHT concept allows Pastry to realise the scalability and fault tolerance of other networks while reducing the overall cost of routing a packet from one node to another by avoiding the need to flood packets. An added advantage in Pastry is the ability to use a routing metric provided by an external program such as ping or traceroute to determine the best routes to store in its routing table.

Pastry’s hash table has a circular key-space, just like Chord’s. Node IDs are 128-bit unsigned integers representing positions in this circular key-space, and they are chosen randomly, so adjacent node IDs usually belong to geographically diverse peers.
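Pastry routes by matching progressively longer prefixes of the key at each hop. The toy sketch below uses short 4-digit hex IDs instead of 128-bit ones, and `next_hop` is a deliberate simplification of Pastry’s routing-table plus leaf-set lookup:

```python
def shared_prefix_len(a: str, b: str) -> int:
    """Number of leading hex digits two IDs have in common."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(current: str, key: str, peers: list[str]) -> str:
    # Forward to whichever known node shares the longest ID prefix with
    # the key, breaking ties by numeric closeness on the ring.
    return max(peers + [current],
               key=lambda p: (shared_prefix_len(p, key),
                              -abs(int(p, 16) - int(key, 16))))

peers = ["a1f0", "a1c2", "b400", "37e1"]
print(next_hop("37e1", "a1c9", peers))  # -> a1c2 (shares 3 digits with key)
```

Because each hop extends the matched prefix by at least one digit, a lookup over b-bit IDs in base 2^k completes in O(log N) hops.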

Potential uses of the Pastry protocol:

Routing in Pastry upon coordinator node failure

Pastry specifies how keys are distributed and how the node responsible for holding a key can be found. This is used as an underlying layer for higher-level protocols implementing functionality such as a publish/subscribe system or a distributed file system. Any system that needs to store values and retrieve them later can use Pastry very efficiently.


Tapestry

Tapestry is also a P2P overlay network that provides a DHT, routing, and multicasting infrastructure for distributed applications.

Tapestry’s distinguishing feature is its efficient, scalable, location-aware and self-repairing routing to nearby resources, which is essential in a distributed network of nodes. Unlike early P2P applications such as Gnutella or Napster, a P2P overlay network such as Tapestry implements a basic key-based routing mechanism. In this it is similar to Pastry, as both adopt the same routing algorithm (Plaxton et al.). This kind of routing allows deterministic routing of messages along with adaptation to node failures in the overlay network.

The Tapestry protocol

Tapestry implements an extensible infrastructure for providing decentralised object location and routing that focuses on efficiency and minimising message latency. It also allows applications to implement multicasting in the overlay network.

Potential uses of the Tapestry protocol:

Tapestry provides an overlay routing network that stays stable and efficient under a variety of network conditions, which makes it an ideal infrastructure for distributed applications.

Some sample applications based on Tapestry are:

  1. Spamwatch: This is an implementation of a decentralised spam filter.
  2. Mnemosyne: This is a steganographic file system.
  3. Bayeux: Bayeux is a self-organising multicasting application.


Kademlia

Kademlia, designed by Petar Maymounkov and David Mazières in 2002, is another distributed hash table for decentralised P2P computer networks. Kademlia uses UDP for communication between its nodes and specifies both the structure of the network and the exchange of information through node lookups. As in Pastry, each node is identified by a node ID, which the Kademlia algorithm uses to locate values, usually file hashes or keywords. The participating nodes form an overlay network. To find a value, a node must know its associated key; the network is then explored in several steps, with each step finding nodes closer to the key, until the contacted node returns the value being searched for or no closer nodes are found.

In a network of n nodes, a Kademlia lookup contacts only O(log n) of them, which is very efficient. Unlike first- and second-generation P2P file-sharing networks such as Napster or Gnutella, Kademlia uses a DHT to look up files in the network. A DHT, as discussed above, stores resource locations throughout the network, and a major criterion for these protocols is locating the desired nodes quickly.

An example of lookup in the Kademlia Search (node 0011 searching for node 1110)

Kademlia computes the ‘distance’ between two nodes as the XOR of their node IDs, with the result interpreted as an integer. Because node IDs are chosen essentially at random, two nodes that are neighbours in the ID space may sit in entirely different parts of the world.
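The XOR metric fits in a few lines. A minimal sketch (the `closest` helper is our own name for the core step of a node lookup, shown here on tiny 4-bit IDs):

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia distance: bitwise XOR of the two IDs, read as an integer.
    It is symmetric and satisfies the triangle inequality, which is what
    lets iterative lookups converge on the target."""
    return a ^ b

def closest(target: int, nodes: list[int], k: int = 3) -> list[int]:
    """Return the k known nodes closest to target under the XOR metric."""
    return sorted(nodes, key=lambda n: xor_distance(n, target))[:k]

nodes = [0b0011, 0b0101, 0b1000, 0b1110, 0b1111]
print(closest(0b1110, nodes))  # -> [14, 15, 8]
```

At each lookup step a node queries the closest peers it knows about, learns of peers still closer to the target, and repeats until no closer nodes are found.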

Potential uses of Kademlia:

Kademlia’s decentralised structure increases its resistance to denial-of-service attacks: even if multiple nodes are flooded, the effect on the network’s availability is limited, as the network heals itself by routing around the attacked nodes.

Some implementation instances of the Kademlia algorithm are the following public networks:

IPFS, BitTorrent, Kad Network, Retroshare, Tox, Gnutella DHT

Note that these public networks are incompatible with one another.

In conclusion, while developing Karachain we tried out all of the above P2P protocols for distributed data sharing, since our focus has been a key-value system in which finding the value associated with a key across a pool of distributed nodes is the central goal.

Join our Telegram group to learn more about P2P protocols:

Please visit us to know more about how we are bringing developers and the blockchain closer: