The P2P Bucketing System in Witnet

Gorka Irazoqui Apecechea
The Witnet Oracle Blog
13 min read · Jul 15, 2019

Research conducted over the last few years has shown us the consequences of a poorly designed P2P system, especially within a blockchain-based decentralized network.

For example, imagine a node whose connections are entirely monopolized by an attacker. The attacker could:

  • Perform a double-spending attack, by sending one transaction to the victim peer and another to the network.
  • Eliminate the effect of the mining power of the victim nodes in the network.

Eclipsing implies monopolizing all the connections made by a peer.

In this case, the attacker has launched an Eclipse Attack. For an introductory explanation of how eclipse attacks are performed, we recommend taking a look at this article. Usually, in order to perform selfish-mining or double-spend attacks on an eclipsed node, an attacker must monopolize all of its connections (incoming and outgoing).

In response, networks such as Bitcoin and Ethereum have changed the way peers connect to each other in order to become less susceptible to such attacks [1][2].

In Witnet, the Eclipse Attack scenarios also require that an attacker monopolizes all the connections of the victim. However, as Witnet does not implement a Proof of Work (PoW) or Proof of Stake (PoS) consensus mechanism, any malicious actor can easily pre-construct a chain longer than the one already in the network and convince an eclipsed node that it is the real chain. We observe two scenarios where this might happen:

  • Bootstrapping, i.e., when the nodes sync for the first time. In this case, the node will likely start establishing outgoing connections through the DNS seeders.
  • Disconnections due to reboots, power failures, DoS attacks, etc.

So, the first necessary precondition when designing the Witnet P2P network is clear: incoming connections SHOULD NOT play a role during synchronization, or we would be handing a huge advantage to the attacker.

In the above cases, the blockchain tip in Witnet is decided by consensus. Still, in the case of pure majority consensus, an attacker only needs to monopolize half plus one of the outbound connections made by the peer. Thus, for the remainder of this article, we will refer to eclipsing as the action of monopolizing enough connections to convince a node to synchronize to a wrong chain tip.

There is a further reflection we can make at this point. At bootstrapping, it is expected that the IP addresses to which the outgoing connections will be made are obtained through a seeding process. In consequence, we expect the connections made at this point to be honest, and thus, that a new node synchronizes with the correct chain tip. If not, this would most likely imply that the attacker has polluted the bootstrap nodes from which we are getting addresses.

We focus on the case whereby an already synchronized node reboots (for whatever reason) and has to synchronize again. In this case, in order to avoid performing the seeding process again, the node should utilize the information it gathered from previous connections. The node uses tables that act like caches to store information about peers it has previously established successful connections with.

Remember that, upon reboot, the node will start syncing to the blockchain tip that the majority of the outgoing connections dictates. If an attacker is trying to eclipse a rebooted node, it would need to control at least n/2+1 of its outgoing connections. Our design will focus on this latter case, i.e. making it costly for an attacker to monopolize n/2+1 outgoing connections of a victim. Note that, by extension, monopolizing all n outgoing connections (which is necessary to perform double-spend or selfish-mining attacks) becomes increasingly difficult.

Using buckets to store network information

Each node stores addresses it has already seen, serving as a cache for network information. We base our analysis on the same structure as in [1], where network state is stored in two tables: tried and new. The tried table stores peer addresses the node has already connected to, while the new table stores those of potential peer candidates.

Bucketing system example for tried and new tables

Similarly to how memory chunks are stored in caches, tables can be divided into buckets, each containing 64 slots. There are two main advantages of using bucketing systems for address storage:

  • Checking the existence of an address in a table divided into buckets is more efficient, as we do not have to go over the whole table, but only the bucket the address maps to.
  • A bucketing system prevents an adversary from polluting all the buckets with addresses from the same IP network range.

The aforementioned advantages are achieved thanks to the way the bucket to which each IP maps is selected. For instance, in the tried table, an IP group (a /16 prefix) can only map to 4 buckets, while each specific IP maps to a single bucket. The following picture shows how the bucket to which an IP maps is selected.

Bucket mapping for tried table as implemented in Bitcoin core

In contrast, the bucket mapping in the new table is substantially different, as it also takes into account the IP group of the source that notifies the addresses to be inserted. In this case, each (source group, IP group) pair maps to up to 32 buckets, while each (source group, IP) pair maps to a single bucket.

Bucket mapping for new table as implemented in Bitcoin core
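
To make the mapping rules above concrete, here is a minimal Python sketch of both mappings. It is modeled on Bitcoin Core's addrman as described in [1]; the hash function, the per-node secret key and all identifiers are our own illustrative choices, not the actual implementation.

```python
import hashlib

TRIED_BUCKETS = 64               # buckets in the tried table
NEW_BUCKETS = 256                # buckets in the new table
TRIED_BUCKETS_PER_GROUP = 4      # an IP group maps to at most 4 tried buckets
NEW_BUCKETS_PER_SRC_GROUP = 32   # a (source group, IP group) pair maps to at most 32 new buckets

def _h(*parts: str) -> int:
    """Illustrative stand-in for the keyed hash used in the real mapping."""
    return int.from_bytes(hashlib.sha256("|".join(parts).encode()).digest()[:8], "big")

def group(ip: str) -> str:
    """The /16 group of an IPv4 address, e.g. '1.2.3.4' -> '1.2'."""
    return ".".join(ip.split(".")[:2])

def tried_bucket(key: str, ip: str) -> int:
    # Each IP deterministically picks one of the 4 buckets reserved for its group.
    pos_in_group = _h(key, ip) % TRIED_BUCKETS_PER_GROUP
    return _h(key, group(ip), str(pos_in_group)) % TRIED_BUCKETS

def new_bucket(key: str, ip: str, source_ip: str) -> int:
    # Each (source group, IP group) pair deterministically picks one of 32 buckets.
    pos_in_group = _h(key, group(source_ip), group(ip)) % NEW_BUCKETS_PER_SRC_GROUP
    return _h(key, group(source_ip), str(pos_in_group)) % NEW_BUCKETS
```

Because the outer hash only depends on the group (or source group) and a small position index, all addresses from a given range collide into a handful of buckets, which is exactly what limits an attacker owning many IPs in the same range.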

When the node needs additional outgoing connections, addresses are selected from one of these tables. Initially, our analysis will focus on the tried table. Note that, when addresses are chosen at random from both tables, extrapolating the results to both tables means, in the best case, increasing the number of buckets the attacker needs to fill.

Difficulty of eclipsing connections from the tried table

To start, let’s adopt the first two countermeasures proposed by [1], i.e. addresses are randomly evicted from and selected from the tried table. Assuming all outgoing connections come from the tried table, we start by analyzing the portion of the tried table the attacker would need to fill, for different maximum numbers of outgoing connections. Let’s model the number of outgoing connections monopolized by the attacker with the random variable X, which follows a binomial distribution with probability T_a, i.e. the proportion of the tried table the attacker owns.

X refers to the number of outgoing connections monopolized by the attacker
Probability of an attacker eclipsing a node at synchronization time
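
In LaTeX form (our reconstruction of the pictured formula, assuming each of the n outgoing connections is an independent draw from tried):

$$
P[\text{eclipse}] \;=\; P\!\left[X \ge \left\lfloor \tfrac{n}{2} \right\rfloor + 1\right] \;=\; \sum_{k=\lfloor n/2 \rfloor + 1}^{n} \binom{n}{k}\, T_a^{k}\, (1 - T_a)^{n-k}, \qquad X \sim \mathrm{Binomial}(n, T_a)
$$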

The following graph represents the success rate an attacker would obtain for different proportions of the tried table controlled and different numbers of outgoing connections.

Success rate of an attacker with random eviction/selection for different proportions of the tried table controlled and different number of outgoing connections
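
A few lines of Python suffice to compute such a curve; the helper name is ours and the exact values plotted above may differ slightly from this sketch.

```python
from math import comb

def eclipse_probability(n: int, t_a: float, needed: int) -> float:
    """P[X >= needed] for X ~ Binomial(n, t_a)."""
    return sum(comb(n, k) * t_a**k * (1 - t_a)**(n - k) for k in range(needed, n + 1))

for n in (8, 10, 12, 16):
    # With pure majority consensus the attacker needs n/2 + 1 connections.
    print(n, [round(eclipse_probability(n, t_a, n // 2 + 1), 3) for t_a in (0.3, 0.5, 0.7)])
```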

As expected, the attacker gains a big advantage when it controls more than 50% of the table. In fact, the more outgoing connections we put in place, the bigger that advantage becomes. Moreover, an attacker can easily pollute the tried table through the incoming connections.

But how many addresses would an attacker need to own to fill, for example, 50% of the tried table? We adopt the model specified in [1], whereby the number of attacker addresses in the tried table is modeled by the random variable Y, the number of addresses the attacker possesses is t, and table_size is the total size of the table.

Expected number of addresses in tried when t addresses are trying to be inserted. Equation taken from [1].

The following graph presents the number of adversarial addresses the attacker needs to own in order to fill different percentages of the tried table, for various table sizes. In this case, we assume the tried table has m buckets, each of which has 64 available slots.

Number of adversarial addresses needed to fill the table for different table sizes
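
As a sanity check of these numbers, one can run a quick Monte Carlo under a simplified model of our own (each attacker address hashes to an essentially uniform slot of the table, and eviction is random); this is not the exact equation from [1], but it gives figures of the same order.

```python
import random

def simulated_fill(num_buckets: int, slots_per_bucket: int, t: int, trials: int = 200) -> float:
    """Average fraction of the tried table owned after inserting t attacker addresses,
    assuming each address maps to a uniformly random slot (simplified model)."""
    table_size = num_buckets * slots_per_bucket
    fractions = []
    for _ in range(trials):
        owned = {random.randrange(table_size) for _ in range(t)}
        fractions.append(len(owned) / table_size)
    return sum(fractions) / trials

# With 64 buckets of 64 slots, around 3750 addresses fill roughly 60% of the table.
print(simulated_fill(64, 64, 3750))
```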

As seen, filling 60% of a tried table of considerable size (e.g. 64 buckets) would take around 3750 addresses. As this number of addresses seems rather low for a powerful attacker, we need to adopt further countermeasures to prevent an Eclipse Attack from happening. In consequence, we introduce the third countermeasure proposed by [1]: the test-before-evict procedure. Before evicting an address from the tried table, we attempt a connection to it. If the connection succeeds, the address is not evicted.
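
A minimal sketch of the test-before-evict rule (try_connect is a hypothetical helper standing in for an actual connection attempt):

```python
def insert_with_test_before_evict(bucket: dict, slot: int, candidate: str, try_connect) -> bool:
    """Try to place candidate in bucket[slot]; returns True if it was inserted."""
    occupant = bucket.get(slot)
    if occupant is None:
        bucket[slot] = candidate      # free slot: insert directly
        return True
    if try_connect(occupant):
        return False                  # occupant is responsive: keep it, discard candidate
    bucket[slot] = candidate          # occupant unresponsive: evict it and insert
    return True
```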

One could think that increasing the size of the tried table requires the attacker to own a bigger number of addresses. Although this statement is true, with test-before-evict it seems almost more important that the table is just big enough to fit the number of honest peers, as this is the worst case for the attacker. If the table is substantially bigger than the number of honest peers, the attacker effectively gains the remaining slots for itself (those not occupied by honest peers), biasing the probability in its favor. In consequence, with test-before-evict, increasing the number of buckets might indeed benefit the attacker.

One of the most important properties this countermeasure brings to P2P networks is the bound it imposes on the attacker's success probability. As analyzed in [1], assuming h honest addresses exist in the tried table with a responsiveness of p, the attacker is bounded by:

The probability X is bounded by the number of honest addresses this time
Probability of eclipsing with test-before-evict
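
Our reading of this bound (a reconstruction, so the exact expression in the original figure may differ): since roughly p·h addresses in tried are responsive and therefore unevictable, the attacker can own at most a fraction 1 − p·h/table_size of the table, which yields

$$
P[\text{eclipse}] \;\le\; \sum_{k=\lfloor n/2 \rfloor + 1}^{n} \binom{n}{k} \left(1 - \frac{p\,h}{\mathrm{table\_size}}\right)^{k} \left(\frac{p\,h}{\mathrm{table\_size}}\right)^{n-k}
$$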

We tested, for different numbers of outgoing connections and target probability bounds, the proportion of the table (p*h/table_size) that on average needs to be unevictable by an attacker (even with an infinite number of addresses under the attacker's control). The results are as follows:

Attacker probability bound for different honest address proportions and maximum outgoing peers

As expected, the lower we want to bound the attacker's probability of success (upper line), the more responsive addresses we need to already have in the table. According to [1], a 28% responsiveness is a good estimate for fairly recent addresses. Taking that into account, we can already say the results are not very optimistic: even when the table is full of honest addresses, the low responsiveness only bounds the attacker to a success probability of 80% with 10 outgoing peers. In order to protect against an attacker with such a loose bound, we incorporate the following countermeasure: parameterizing the consensus percentage needed to synchronize to a chain. Let's call this parameter c. This time, an attacker needs to own at least a fraction c of the outgoing connections to convince a node to synchronize with the wrong tip.

Probability of eclipsing when c*n monopolized connections are needed with the test-before-evict countermeasure
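
The only change with respect to the previous bound is the lower limit of the sum: the attacker now needs at least ⌈c·n⌉ connections instead of a simple majority (again, our reconstruction of the pictured formula):

$$
P[\text{eclipse}] \;\le\; \sum_{k=\lceil c\,n \rceil}^{n} \binom{n}{k} \left(1 - \frac{p\,h}{\mathrm{table\_size}}\right)^{k} \left(\frac{p\,h}{\mathrm{table\_size}}\right)^{n-k}
$$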

The following graph represents the bound on the attacker's probability for different proportions of unevictable addresses when 80% consensus is required to synchronize with the correct chain tip.

Attacker bounded probability for different number of outgoing peers requiring at least 80%.

Although the results look better, we still need further countermeasures to lower the attacker's probability even more. Aiming at providing better protection, we add the anchor connections countermeasure: a table that stores the currently established outgoing connections. Upon reboot, the node reserves some of its outgoing connections for addresses in the anchor table. These are expected to be honest, assuming the previous outgoing connections were legitimate. We test the same scenario as before, but adding 20% more connections that come from the anchor table.
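
Under our reading, with a of the n outgoing connections reserved for anchors (here a ≈ 0.2·n) and assuming the anchors are honest, only n − a connections are drawn from tried while the attacker still needs ⌈c·n⌉ of the total, so the bound becomes:

$$
P[\text{eclipse}] \;\le\; \sum_{k=\lceil c\,n \rceil}^{n-a} \binom{n-a}{k} \left(1 - \frac{p\,h}{\mathrm{table\_size}}\right)^{k} \left(\frac{p\,h}{\mathrm{table\_size}}\right)^{n-a-k}
$$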

Attacker bounded probability for different number of outgoing peers requiring at least 80% consensus with 20% anchor connections

This time we observe much nicer results. For instance, with 15 outgoing connections (thus 3 anchors), a tried table whose slots are 45% occupied by honest addresses with a 28% responsiveness already bounds the attacker to a 10% success probability. This is already better than what was reported in [1], where a 67% filled table was needed to bound an attacker to a 10% probability. However, we can make the attacker's life even harder by protecting the new table.

Considering the new table

We just spoke about the tried table, arguing that if addresses are selected from tried and new at random, the only effect this would have is to increase the number of buckets to fill (thus requiring more attacker addresses). This is true as long as the new table behaves similarly to the tried table. We can restrict an attacker even more with two additional countermeasures in the new table:

  • Incoming connections are not allowed to send addresses to be inserted into the new table (ADDR messages). That is, it is the peer itself who requests these messages (whenever it realizes the new table is sufficiently empty), and only from its outgoing connections. This prevents the pollution of the new table as long as the outgoing connections are honest.
  • A feeler connection is in charge of establishing short-lived connections with addresses in the new table. If such a connection succeeds, the address is inserted in tried (if test-before-evict permits).

While the first countermeasure prevents an incoming malicious connection from polluting the new table with its own addresses, the second one validates the liveness of the addresses stored in new. These two countermeasures have an impact on the approach the attacker must take. Under the assumption that addresses in new are legitimate, an attacker not only needs to pollute the tried table, but also needs to be lucky enough that at least a fraction c of the outgoing connections is selected from tried, since c is the consensus percentage needed to synchronize. Modeling the number of connections selected from the tried table as a binomial Z with probability 0.5, the success probability for an attacker is as follows:

Number of connections selected from tried
Probability of eclipsing when addresses are selected from tried and new randomly and addresses in new are honest
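
Written out under the same assumptions (anchors and addresses coming from new are honest, and each non-anchor connection is drawn from tried with probability 0.5), our reconstruction of this bound reads:

$$
P[\text{eclipse}] \;\le\; \sum_{j=\lceil c\,n \rceil}^{n-a} \binom{n-a}{j} \left(\tfrac{1}{2}\right)^{n-a} \sum_{k=\lceil c\,n \rceil}^{j} \binom{j}{k}\, T_a^{k}\, (1 - T_a)^{j-k}, \qquad T_a \le 1 - \frac{p\,h}{\mathrm{table\_size}}
$$

where j is the number of connections that happen to be selected from tried.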

As before, we plot the bounded probability for the attacker in such a scenario for different numbers of outgoing peers, c=0.8 and an additional 20% of anchor connections.

Attacker bounded probability for different number of outgoing peers requiring at least 80% consensus with 20% anchor connections and addresses randomly selected from tried and new

We observe a much nicer probability bound here. Indeed, with 10 outgoing connections (plus 2 anchors), an attacker can be bounded to a 0.1% success probability with as few as 3% of the table filled with honest addresses.
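
Putting the pieces together, a short script along the following lines computes this combined bound under the modeling assumptions stated above; the exact figures plotted in the graphs may differ slightly depending on rounding choices, and all identifiers are ours.

```python
from math import comb, ceil

def binomial_tail(n: int, p: float, k_min: int) -> float:
    """P[Binomial(n, p) >= k_min]."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

def eclipse_bound(n_out: int, anchors: int, c: float, honest_fraction: float) -> float:
    """Upper bound on the attacker's success probability when addresses are picked
    at random from tried and new, anchors and new are honest, and a fraction
    honest_fraction = p*h/table_size of tried is unevictable."""
    needed = ceil(c * n_out)        # attacker-controlled connections required
    free = n_out - anchors          # connections actually drawn from tried or new
    t_a = 1 - honest_fraction       # maximum share of tried the attacker can own
    bound = 0.0
    for j in range(needed, free + 1):           # j connections drawn from tried
        p_from_tried = comb(free, j) * 0.5**free
        bound += p_from_tried * binomial_tail(j, t_a, needed)
    return bound

for frac in (0.03, 0.10, 0.45):
    print(frac, eclipse_bound(n_out=12, anchors=2, c=0.8, honest_fraction=frac))
```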

The aforementioned analysis was made assuming addresses coming from new are honest. We argue that this is likely to be true for the following reasons:

  • Addresses in new are only inserted upon request, and this solicitation is only made to the outgoing connections. It is thus likely that, although the tried table can be easily polluted, the new table contains a significant number of legitimate addresses.
  • If an attacker wants to execute a multi-round attack (spanning several reboots), it will face two major challenges. First, the new table needs to be empty enough for the node to ask its outgoing connections for addresses to insert. This already requires that the attacker monopolizes a big portion of the outgoing connections, even in the case that honest peers report repeated addresses, as the bucket selection in new also takes into account the source IP addresses. Second, if the new table is empty enough, it is because the feeler connection has succeeded in inserting sufficiently many responsive honest addresses into the tried table. Thus, honest addresses now dominate the tried table.
  • The cost, time and number of IP addresses needed to fill a majority of both tables with attacker addresses should already discourage the attacker from even trying.

For a system requiring 80% consensus with 20% anchor connections (assuming these are honest), an attacker would need to monopolize every single non-anchor outgoing connection: with 12 outgoing connections, 2 of which are anchors, reaching 80% means controlling 10 connections, i.e. all of the remaining ones.

Table 1 shows the bound on the attack probability when the unevictable honest proportion of the table (p*h/table_size) is just 0.01, for different consensus percentage parameters.

Consensus required vs probability of an attacker being able to make eclipsed node fork

As the consensus is parametric, we recommend a minimum consensus percentage of 60%.

Takeaways — Witnet bucketing parameters

Taking into account the former discussions, Witnet is considering adopting the following bucketing system (summarized in the sketch after this list):

  • Three tables are implemented: new, tried and anchor.
  • The new table contains addresses that have not been tried yet (or that have previously been evicted from tried). The tried table contains addresses to which the peer has recently established a connection. The anchor table contains references to the currently established outgoing connections.
  • The new table can only be populated upon requests from the peer to its outgoing connections, which will only happen if the new table is sufficiently empty. The number of addresses that are sent is specified in [1].
  • 64 buckets will be featured in tried, while 256 buckets will be featured in new. Each will contain 64 slots, and the IP-bucket mapping will be made as in [1]. The anchor table will only contain the outgoing addresses.
  • 12 outgoing connections will be featured, 2 of which come from the anchor table.
  • Eviction and selection of addresses, both from tables and buckets, are chosen at random.
  • Before evicting an address from tried, the potentially evicted address is tested. If the test succeeds, the address is not evicted.
  • A single feeler connection will establish short-lived connections with addresses in new. Upon success, these will be inserted in tried.
  • The number of incoming connections will remain as in Bitcoin.
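
For reference, the parameters above can be summarized as a small configuration block (the identifiers are ours, not the ones used in the Witnet codebase):

```python
# Candidate Witnet bucketing parameters discussed above (illustrative names only).
TRIED_BUCKETS = 64         # buckets in the tried table
NEW_BUCKETS = 256          # buckets in the new table
SLOTS_PER_BUCKET = 64      # slots per bucket, in both tables
OUTGOING_CONNECTIONS = 12  # total outgoing connections
ANCHOR_CONNECTIONS = 2     # of which are taken from the anchor table
FEELER_CONNECTIONS = 1     # short-lived connections probing addresses in new
MIN_CONSENSUS = 0.60       # recommended minimum consensus percentage
```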

Conclusions

The analysis above helped us understand the countermeasures that need to be adopted to improve the security of our P2P system. One of the first conclusions we can draw is that if our design protects against an attacker trying to eclipse the synchronization process, it should also protect our P2P system against an attacker trying to monopolize all the outgoing connections, e.g. to perform a double-spend attack. Note, however, that the consensus requirement will be parameterizable, i.e. nodes that would like to initialize faster can reduce it at the expense of being less protected.

Another fact we would like to remark on is that our table sizes (especially tried) are designed with the intention that sufficient honest addresses will exist to fill them easily. As of today, we take the 10,000 nodes Bitcoin has as a good reference. In any case, we expect to attract more nodes due to the low requirements for running a Witnet node.

If the aforementioned countermeasures are not sufficient to deter a malicious attacker, we can limit the rate at which addresses are inserted into tried. This reduces the rate at which the tried table is populated, but offers higher security guarantees.

We hope our analysis clarifies the reasons behind the decisions made with respect to our P2P bucketing system.

References

[1] E. Heilman, A. Kendler, A. Zohar, S. Goldberg. Eclipse Attacks on Bitcoin's Peer-to-Peer Network. USENIX Security, 2015.
[2] Y. Marcus, E. Heilman, S. Goldberg. Low-Resource Eclipse Attacks on Ethereum's Peer-to-Peer Network. 2018.

Thank you for taking the time to read!

This post is a joint effort with Mario Cao and Adán Sánchez de Pedro. Please let us know if you have any questions or comments below. You can follow Witnet or me on Twitter and stay up to date on our blog.
