What makes blockchains secure? (1/5)

Tarun Chitra
Gauntlet

--

I recently found myself stuck in a rabbit hole that involved academic work on blockchains. I’ve been interested in this topic because I never felt at ease with the security and performance guarantees of blockchain protocols — after all, even though these protocols are relatively easy to describe and implement, they appear to have far more attack vectors than a standard distributed system, such as a distributed database or file system. Moreover, the probabilistic nature of blockchains suggests that there might be some deeper phenomena that underlie the empirical success and stability of Bitcoin and Ethereum. As blockchains combine probability, game theory, cryptography, and distributed systems, it is natural for a probabilist to ask questions about phase transitions, stable Nash equilibria, and random walks on miner graphs. For instance, one might ask questions like, “Does the law for the length of the Bitcoin chain obey a Galton-Watson process? Is this process far from criticality?” The answers to these questions are likely to reveal some hidden secrets about security, transaction rate, and quality of service within blockchains and will help us engineer more complicated distributed block data structures. This series of blog posts will explore the following questions:

  1. What is the relationship between blockchain security analysis and the traditional analysis of distributed systems?
  2. How do a blockchain’s security parameters control transaction rate, quality of service, and other practical metrics for distributed algorithms?
  3. How do we define centralization in a way that respects all of the different attack vectors of a blockchain?
  4. Can we connect the likelihood of grinding attacks in Proof of Stake systems to a traditional probabilistic object (e.g. a mixing time of some Markov Chain)?

I aim to use a combination of the academic literature on blockchains and the theory of discrete random walks as a starting point for developing a precise relationship between probability and blockchains. This post will focus on the first question and will be less technical. The upcoming posts will focus more on technical details and posit probabilistic conjectures about blockchains. I should note that I initially wrote a much longer post that aimed to answer all of these questions in one post, but I soon realized that there was too much to say in one sitting!

Warning: We are likely a long way off from having a great tome such as Probability on Trees and Networks by Lyons and Peres for blockchains. I just wanted to get some ideas that I’ve been sitting on out in the open 🤓

A Galton-Watson tree near criticality; the color represents distance from the root (picture: Igor Kortchemski)
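As a toy illustration of the criticality pictured above, here is a minimal simulation (an illustrative sketch, not tied to any real chain data) of a Galton-Watson process in which each node independently has two children with probability p and none otherwise, so the mean offspring count is 2p and the process is critical at p = 1/2. Subcritical trees die out after a handful of generations, while supercritical ones routinely survive to the generation cap.

```python
import random

def survival_generations(p, max_gen=30, rng=random):
    """One Galton-Watson tree in which each node independently has two
    children with probability p and none otherwise (mean offspring 2p,
    so the process is critical at p = 0.5). Returns the number of
    generations the tree survives, capped at max_gen."""
    population = 1
    for gen in range(max_gen):
        if population == 0:
            return gen
        # Each current individual contributes 2 children with probability p.
        population = sum(2 for _ in range(population) if rng.random() < p)
    return max_gen

def mean_survival(p, trials=2000, seed=0):
    rng = random.Random(seed)
    return sum(survival_generations(p, rng=rng) for _ in range(trials)) / trials

# Subcritical trees (mean offspring 0.8) die out after a few generations;
# supercritical ones (mean offspring 1.2) often hit the generation cap.
print(mean_survival(0.4), mean_survival(0.6))
```

The sharp qualitative change as p crosses 1/2 is exactly the kind of phase transition alluded to above.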

In Bitcoin, trust comes in the form of a cryptographic proof that all parties involved in a transaction willingly participated and that most Bitcoin miners deem said transaction valid. In many ways, this is equivalent to having a check that is signed by all relevant parties and endorsed by their respective banks, where the banks are analogous to miners. Just as we can repeatedly interact in good faith with a new client or customer via a trusted financial entity that provides us security and/or insurance, Bitcoin’s cryptographic trust lets us interact with previously unknown entities without worrying about double spending and/or other forms of fraud. The key innovation of Bitcoin that helped achieve this was to add the game-theoretic notion of incentive compatibility to a standard distributed transaction database. More precisely, Bitcoin introduced the concept of designing incentives such that the probability that a member node of a database would act in an adversarial, destructive way — one that harms the overall security or trustworthiness of results provided by the database — is small or even negligible. This means that even if someone attempts an adversarial attack on the Bitcoin network, they are statistically unlikely to succeed. This guarantee stems from the rest of the network being disincentivized to collude with the adversary, as doing so will often lower their chances of mining new Bitcoin. Contrast this with the situation in the modern-day financial system — there is a great deal of identity-driven trust built into the system, whether it comes from ‘Know Your Customer’ rules or from credit scores built up over many years of interactions between consumers and financial entities. Moreover, many of banks’ incentives to prevent fraud and malicious activity are tied to the legal enforcement of various laws, as well as to their close relationship with central banks.
But what exactly is incentive compatibility in Bitcoin, and how does it provide trust comparable to that of a bank? How do we know that this trust is secure, even if a single entity owns a large fraction of the mining power, or if there are so many miners that it takes a long time to agree on a transaction?

The most straightforward form of insecurity in Bitcoin corresponds to the famous 51% attack. We first postulate that there exists a miner with more than 50% of the overall mining power who, as such, gets to collect mining rewards for verifying more than 50% of the blocks (i.e. batches of verified transactions) accepted by the network. If an organization had such power, it could double spend: pay another entity, then mine a competing branch of the chain in which that payment never happened, with a large chance that the network accepts the rewritten history, since this entity has majority control of the acceptance mechanism. (Note that a majority miner cannot forge signatures and thus cannot spend coins it does not own; its power lies in deciding which valid transactions make it into the chain and which branches survive.) The way that the Bitcoin protocol decides which transactions are accepted is via a system in which miners compete to endorse transactions. If a miner is the first one to successfully endorse a transaction, then they publish their endorsement to everyone else in the network. The percentage of the mining power that a single entity has is related to the probability that they successfully endorse a transaction: if you have more mining power, then you will endorse more transactions. Moreover, if you have more than 50% of the mining capacity, you can also successfully block other people from endorsing transactions. In banking terms, this would be as if Citi handled more than 50% of US consumer deposits and would occasionally say, “hey, we need a billion dollars, let’s take that payment we sent to Morgan Stanley and strike it from the ledger — they most likely can’t stop us!” When Morgan Stanley attempts to contest this, Citi makes sure that all of its clearing power endorses the rewritten ledger. In the modern financial system, this wouldn’t happen because central banks administer and manage large interbank transfers and serve as a trusted third-party intermediary that would not let such a transaction go through. However, in Bitcoin, having a large amount of mining power is statistically tantamount to serving as a central bank, which more or less gets the final say in transaction verification.
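To see why the 50% threshold is so decisive, here is a minimal sketch (under the simplifying assumption that exactly one block is mined per time step, by the attacker with probability equal to its power share q) of the race between an attacker trying to catch up from several blocks behind and the honest chain — a gambler's-ruin random walk:

```python
import random

def attacker_catches_up(q, deficit=6, max_steps=5000, rng=random):
    """Each step, one block is mined: by the attacker with probability q,
    by the honest network otherwise. The attacker starts `deficit` blocks
    behind and wins if its branch ever pulls even (gambler's ruin)."""
    gap = deficit
    for _ in range(max_steps):
        gap += -1 if rng.random() < q else 1
        if gap <= 0:
            return True
    return False

def success_rate(q, trials=500, seed=0):
    rng = random.Random(seed)
    return sum(attacker_catches_up(q, rng=rng) for _ in range(trials)) / trials

# Below 50% of the power, catching up from six blocks behind is rare;
# above 50%, the drift of the walk makes it essentially certain.
print(success_rate(0.3), success_rate(0.55))
```

The drift of the walk flips sign exactly at q = 1/2, which is why a hair above half the power buys qualitatively different capabilities than a hair below.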
In an abstract setting where Bitcoin is the dominant payment methodology, all miners are rational and purely selfish, and there is a 51% miner, this attack seems inevitable. However, the Bitcoin network has seen mining pools with more than 50% of the power, and yet we have never observed a successful double spending attack. This suggests that security is more complicated than the fraction of mining power held by a single entity.

Another way to view the interaction between security and competition within Bitcoin is as a methodology for endorsing transactions that utilizes a randomized voting system that elects a ‘leader’ to mine a block. Within this setting, miners are constantly flipping biased coins until they land on a heads. The bias of each miner’s flip — the probability that a coin lands on heads — corresponds to the miner’s capacity or stake. In other words, if you have the largest percentage of the network mining power, then you will be most likely to flip a heads on any single instance. The first miner to land on a heads is ‘elected leader’ — that is, they get to publish the block and claim the reward. Note that miners, unlike voters, do not need to identify themselves; one need only participate in a single transaction or mine a block to be known to the rest of the network. Moreover, the number of miners can fluctuate wildly, and the precise identities of entities participating in an election need not be revealed. When viewed through this lens, blockchains correspond to append-only databases that ensure security via repeated, random, and identity-less elections. In this setup, the 51% attack corresponds to having more than 50% of the vote in most elections. You can double spend in this scenario by transacting with someone else, while repeatedly eliding the transaction from a block of transactions or adding another transaction that reverts it. I have always found that the voting analogy also makes it clear that miners and participants within the network do not all have to have the same copy of the chain and/or agree on every transaction. This is a large shift from traditional distributed databases, where gedanken such as Leslie Lamport’s Byzantine Generals Problem attempt to mark the line between deterministic consensus and an inability to reach consensus.
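The coin-flipping election above is easy to simulate. In this sketch (an illustrative model, not the actual protocol), each miner's per-flip heads probability is its share of total power scaled down by a constant so that simultaneous heads — short-lived forks — are rare, and any remaining ties are broken uniformly at random:

```python
import random

SCALE = 0.1  # keep per-flip probabilities small so simultaneous heads are rare

def elect_leader(powers, rng):
    """One election: each miner flips a biased coin per round; the first
    heads wins the right to publish a block. Simultaneous heads (a fork)
    are broken uniformly at random here, purely for simplicity."""
    total = sum(powers)
    while True:
        heads = [i for i, p in enumerate(powers) if rng.random() < SCALE * p / total]
        if heads:
            return rng.choice(heads)

def election_frequencies(powers, rounds=20_000, seed=0):
    rng = random.Random(seed)
    wins = [0] * len(powers)
    for _ in range(rounds):
        wins[elect_leader(powers, rng)] += 1
    return [w / rounds for w in wins]

# Win frequencies closely track each miner's share of the total power.
print(election_frequencies([5, 3, 2]))
```

Note that nothing in the election identifies the miners: only the coin biases matter, which is the identity-less property discussed above.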
Normally, the analysis of distributed systems treats security in an all-or-nothing fashion and requires that every node agree on identities, transaction ordering, and validity — i.e. achieve deterministic consensus. In these systems, one also performs leader election — the Byzantine generals solution performs a recursive leader selection — but there is usually a single, non-random election that assumes that all identities are known at the beginning of the election [0]. In blockchains, we know neither the number of dishonest miners nor the identities of the miners who will be participating in a ‘vote’. This is similar to human systems such as currencies, which often operate in a probabilistic manner, as a society can agree on and abide by rules that a significant (but not uniquely identifiable) subpopulation disagrees with. In this setting, one can see that the voting-based component of Bitcoin security can be quantified as the maximum size of a disagreeing subpopulation such that the network can continue to accept transactions, even if this subpopulation disagrees with the rest of the network and adversarially adds fraudulent transactions (which could be thought of as a form of voter fraud).

While Bitcoin’s novel usage of incentive compatibility has led to a burst of academic analysis of game theoretic issues related to voting, other security issues that show up in traditional distributed databases can still cause problems. One of the biggest problems in keeping distributed databases consistent is network delays and asynchrony — it takes time for nodes to receive requisite transactions, and they might receive these transactions in different orders. In the database world, one usually attempts to design systems that are invariant to the order in which transactions arrive. However, in the blockchain world, having a correct ordering of blocks is crucial to security and consistency. Most blockchains and peer-to-peer protocols communicate via gossip protocols — you tell your neighbors about a new block, then they tell their neighbors, and so on and so forth. What happens if two people publish the same block of transactions at the same time? In Bitcoin, the protocol forks and a single chain is split into a tree that has two terminal blocks. As the blocks traverse the network, miners pick whichever block they received first to grow from. In theory, the blockchain could fork repeatedly and look like the random Galton-Watson tree in the picture above. However, this would be really bad — blockchains that look like trees end up spending more computational power repeatedly revalidating transactions in branches that terminate. The original analysis of Satoshi Nakamoto shows that the probability that both branches stay the same height decreases exponentially in the number of blocks mined by the network [1]. In order to recover from a fork, the network has to wait until one of the branches grows significantly and becomes the longest chain for most miners. The temporal penalty that the network pays is related to how much confidence a user needs in order to believe that their transaction was accepted by the network.
For instance, the original Bitcoin paper suggests that one should wait for six blocks to be mined on top of the block that contains one’s transaction before deeming the transaction accepted by the network. Since the average Bitcoin block takes 10 minutes to mine, one would have to wait for an hour before deeming one’s transaction valid! This trade-off between security level (in this case, the choice of the security parameter six) and time suggests that network delays can affect more than just security; here are a few examples of other issues:

1. Quality of Service: Even if the expected time for your transaction to be accepted by a large portion of the network is low, the variance in transaction rate can be deadly. In IPFS, which is “Dropbox on the Blockchain”, a block corresponds to a set of files; if you have a high variance in acceptance rate between blocks, then you will find that it takes a while to upload a folder with a large number of files [2]. This can be an unacceptable penalty for users who are accustomed to centralized systems like Dropbox that provide low variance and low expected time.

2. Transaction Costs: Distributed exchanges aim to remove centralized market makers from an exchange by using a mining reward to incentivize miners (which function as market makers) to keep order books consistent. For instance, miners get rewards for providing liquidity to uncross an order book [3]. However, since it can be very profitable for market makers to keep the order book unsynchronized (e.g. this article), one has to design incentives such that market makers cannot collect both mining rewards and multi-tick spreads due to asynchrony.

3. “Rich-get-Richer” Effects: Many blockchains use DHT-like gossip algorithms, such as Kademlia. In these algorithms, nodes keep track of their ‘nearest neighbors’ based on who sends them the largest number of blocks when a request is made. A high stake and/or high capacity miner could take advantage of how these algorithms work to ensure that they are always the ‘closest’ neighbor to most other miners. This would ensure that their blocks reach the network before those of other miners and this can help them gradually gain a persistent edge over miners with similar mining capacity. We will explore this further in a future blog post on routing attacks.
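Returning to the six-block rule: the Bitcoin whitepaper makes the security/time trade-off explicit by computing the probability that an attacker with hash-power fraction q ever catches up after a merchant waits for z confirmations, modeling the attacker's progress as a Poisson race followed by a gambler's-ruin catch-up. A Python transcription of that calculation (the whitepaper gives it in C):

```python
from math import exp, factorial

def attacker_success_probability(q, z):
    """Probability that an attacker controlling fraction q of the mining
    power ever overtakes the honest chain once a merchant has waited for
    z confirmations, per the calculation in the Bitcoin whitepaper."""
    p = 1.0 - q
    lam = z * (q / p)  # expected attacker progress while honest chain mines z
    s = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        # Subtract the probability the attacker mined k blocks so far and
        # still fails to close the remaining (z - k)-block gap.
        s -= poisson * (1.0 - (q / p) ** (z - k))
    return s

# With 10% of the hash power, six confirmations push the attacker's success
# probability below 0.1%; with 50%, the attack always succeeds.
print(attacker_success_probability(0.1, 6), attacker_success_probability(0.5, 6))
```

As footnote [1] notes, this analysis ignores the actual network graph, so it should be read as an idealized baseline rather than a sharp bound.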

Network delays and asynchrony represent the communication costs of having a truly decentralized system. To illustrate this more clearly, let us look at how the variance of retrieval times for a single file differs between Dropbox and IPFS. Let’s assume that we are in an idealized world where Dropbox is a single server and there is precisely one route from your computer to Dropbox. In this case, all of the variance in retrieval time comes from the variance within this single path. Conversely, consider an IPFS setup with N nodes, K of which have a copy of your file. In this situation, we have two additional sources of variance:

  1. Path Selection/Routing: Since gossip protocols attempt to create expander-like graphs, it is unlikely that there is a unique path (especially when K ~ N/2) [4]
  2. Node Uptime: If a node dies and we attempt to retrieve a file from it, we will end up having to retry our retrieval request, which can double our retrieval time

Furthermore, note that these additional sources of variance scale with N and K, whereas the variance within a single path doesn’t vary with N and K. From this idealized gedanken, we see that there is an inherent reduction of quality of service in blockchain systems, which suggests that any system that is secure and performant will need some amount of centralization. Can we connect this observation to traditional database theory and get some hints about how to analyze this trade-off?
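To make the gedanken concrete, here is a toy simulation of the two regimes. The latency distribution, hop counts, and failure rate below are illustrative assumptions, not measurements: the "Dropbox" retrieval pays one hop of jitter, while the "IPFS" retrieval pays roughly log2(N/K) hops of jitter (path-selection variance) and retries from scratch when its chosen replica is down (uptime variance).

```python
import math
import random
import statistics

HOP_MS = 20.0        # assumed mean one-hop latency (illustrative)
FAILURE_RATE = 0.1   # assumed probability that a chosen replica is offline

def dropbox_time(rng):
    """Idealized Dropbox: a single server, a single route, one hop of jitter."""
    return rng.expovariate(1.0 / HOP_MS)

def ipfs_time(rng, n_nodes=1024, replicas=8):
    """Idealized IPFS: route over ~log2(N/K) overlay hops to a random replica,
    retrying from scratch whenever the chosen node turns out to be down."""
    mean_hops = math.log2(n_nodes / replicas)
    total = 0.0
    while True:
        hops = max(1, round(rng.gauss(mean_hops, 1.0)))  # path-selection variance
        total += sum(rng.expovariate(1.0 / HOP_MS) for _ in range(hops))
        if rng.random() > FAILURE_RATE:
            return total  # replica was up; done
        # dead node: pay for another full routing attempt (uptime variance)

def compare(trials=5000, seed=0):
    rng = random.Random(seed)
    dropbox = [dropbox_time(rng) for _ in range(trials)]
    ipfs = [ipfs_time(rng) for _ in range(trials)]
    return statistics.pvariance(dropbox), statistics.pvariance(ipfs)

print(compare())
```

Even in this crude model, the multi-hop and retry terms dominate the variance, and both grow with N and K while the single-path variance stays fixed.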

Interestingly enough, Lamport, Shostak, and Pease cover the trade-off between centralization and quality of service in their seminal paper, The Byzantine Generals Problem. In the last section, they analyze communication graphs that have the property of being d-regular [5]. At a high level, this property corresponds to having no pair of nodes v, w such that all paths from v to another node k have to go through either w or a neighbor of w. They show that one can still be resistant to Byzantine adversaries, although one has to pay a penalty proportional to the diameter of the communication graph. This suggests that if one can construct a randomized version of this property [6], then we can use it to analyze blockchain security. Such analysis has been done for randomized binary Byzantine protocols [7] and is likely useful to the practical blockchain probabilist. Moreover, studying the worst-case behavior of random walks on DHT graphs should help us recover standard distributed systems guarantees for blockchains, albeit with a spectral gap or diameter-based penalty.

I hope that this post has illustrated how the formal probabilistic analysis of blockchains can benefit from the nearly forty years of work on distributed systems. Even though much of the analysis of Byzantine errors in distributed systems is deterministic, much of it can be brought into the probabilistic world via a careful observation of concentration phenomena and random walks on graphs. In the next post, we will discuss how various authors in the cryptography world have performed probabilistic analyses of various simplified blockchain models with idealized adversaries. That post will also attempt to illustrate (via example) that randomized versions of deterministic algorithms can avoid seemingly insurmountable pitfalls like Arrow’s Impossibility Theorem.

Please let me know if you have any comments or feedback!

Thanks to Jayanth Garlapati and Yi Sun for helpful comments and suggestions.

[0] For instance, if we know that there are m dishonest miners in a deterministic Byzantine scheme, then Lamport’s OM(m) algorithm achieves honest agreement in m + 1 rounds, assuming that we can determine the identities of all voters.

[1] This analysis turns out to be flawed, as the precise graph that describes the network is important to quantifying this probability. In a future post of this series, we will dive into an example of this type computation and how it affected the algorithmic design choices within Ethereum.

[2] Note that this is not unique to blockchains — distributed file systems like NFS and Amazon S3 also have issues with large numbers of files. However, these systems have controlled network topologies and can mitigate a lot of the problems that arise from the random routing of blockchain networks.

[3] Interestingly enough, the US equities market more or less enforces this via Regulation NMS

[4] While the graphs that algorithms like Kademlia construct are not proven to be expanders, they often have properties that ensure that the diameter of the graph is O(log N).

[5] This is distinct from the standard graph theory definition of regularity, where a graph is d-regular if every vertex has d edges!

[6] A first attempt at a randomized version: Consider two nodes v and k and let ε > 0. For any node w ∈ ∂v and any pair of paths 𝛾, 𝜁 of length at most εN that start at w and end at k, Pr[|𝛾 ∩ 𝜁| > 2] = O(f(N, ε)) for some quickly decaying function f.

[7] See Ben-Or, Another Advantage of Free Choice: Completely Asynchronous Agreement Protocols and Micali, Byzantine Agreement, Made Trivial
