A lot of us are jerks: Adversarial environments
Exploring the Byzantine Generals Problem
Spend enough time in the blockchain space and one begins to pick up on how people discuss attacks, attack surfaces, weaknesses, and vulnerabilities, and on the general assumption that someone might screw you over (see Vitalik’s ‘Problem of Trust’).
This is distinct from the general security concerns (e.g., data privacy breaches, also discussed briefly here) we’ve all had pre-crypto. I’d also distinguish it from some of the outright scams we’ve seen in the ICO space (unfortunately, a growing list).
What’s referenced here is the general assumption, in cryptoland, of an adversarial environment that is the context for public blockchain.
Where do these assumptions come from?
It turns out, this is a fundamental problem in computer science; specifically, in distributed systems. One of the papers in a16z’s crypto canon, the classic, ‘The Byzantine Generals Problem’ (Lamport, Shostak & Pease, 1982), is foundational for understanding these assumptions.
The Byzantine Generals Problem (BGP) is a metaphor for how a reliable distributed computer system should work. If a node in a distributed system fails, the rest of the system must react to prevent the whole system from shutting down. The other nodes need to reach consensus on whether that component has failed and thus needs to be removed (a system that can do this is Byzantine Fault Tolerant).
In a previous post, we explored the distributed nature of nodes that must coordinate (and cooperate) to secure the blockchain (i.e., the proof-of-work consensus mechanism) without knowing each other’s identities or necessarily trusting each other. This is essentially the Byzantine Generals Problem that every consensus-seeking blockchain must address.
This paper describes why this is challenging and why Bitcoin, which practically solves this problem, represents a breakthrough innovation.
Many descriptions of how BGP relates to blockchain have been provided (here, here), so I’d like to focus on a specific part of the paper and highlight how Satoshi Nakamoto, in solving BGP, birthed cryptoeconomics — a topic we will be exploring in much greater depth in subsequent posts.
Cryptoeconomics is born
The BGP describes a fictitious situation in which Byzantine generals are laying siege to an enemy city. They have to coordinate among several camps to either attack or retreat, all at once; a half-hearted effort would mean certain defeat. The commander has to send messages (e.g., “attack at 10am” or “retreat at 3pm”) to his generals, all in different camps surrounding the city.
If all the generals were honest, this would be a simple coordination exercise. But in this scenario (and here is the application to distributed computing), there are dishonest, traitorous generals (and even spies) who may choose to sabotage the mission by sending faulty messages to confuse the troops on the ground.
Essentially, all generals (nodes) have to come to consensus on the message from the commander, in spite of potential traitors/spies who actively send faulty messages (see here, here for more details).
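To make the coordination problem concrete, here is a toy sketch in Python. This is not the paper’s actual OM(m) algorithm — the camp names, orders, and the assumption that lieutenants relay honestly are all illustrative simplifications. It shows one commander sending orders to three lieutenants, who then pool what was heard and take a majority vote; with enough honest participants, a single conflicting sender cannot break agreement.

```python
# Toy sketch of the generals' coordination problem (NOT the actual
# OM(m) algorithm from Lamport, Shostak & Pease). One commander sends
# orders to lieutenants A, B, C; lieutenants relay honestly and take a
# majority vote over everything heard.

from collections import Counter

def commander_orders(traitor: bool) -> dict:
    """Orders the commander sends to lieutenants A, B, and C."""
    if traitor:
        # A traitorous commander sends conflicting orders.
        return {"A": "attack", "B": "retreat", "C": "attack"}
    return {"A": "attack", "B": "attack", "C": "attack"}

def exchange_and_vote(orders: dict) -> dict:
    """Each lieutenant relays what it heard, then takes a majority vote."""
    decisions = {}
    for name in orders:
        heard = [orders[other] for other in orders]  # all relayed claims
        decisions[name] = Counter(heard).most_common(1)[0][0]
    return decisions

print(exchange_and_vote(commander_orders(traitor=False)))
# honest commander: every lieutenant decides "attack"
print(exchange_and_vote(commander_orders(traitor=True)))
# traitorous commander: the relayed majority (2 "attack" vs 1 "retreat")
# still produces agreement -- but only because most participants are honest
```

The point of the sketch: naive trust in one’s own message would leave the lieutenants split, while exchanging and voting restores agreement — provided the honest participants outnumber the traitors by a wide enough margin.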
As Lamport, Shostak & Pease (1982) describe BGP solutions for computing systems, several assumptions caught my attention:
- There is an assumption that nonfaulty nodes (honest generals) that are able to communicate will deliver (non-forged) messages. In Bitcoin, preventing the forgery of signed messages is achieved through digital signatures.
- A faulty node (dishonest general) cannot impersonate a nonfaulty one (honest general). Public-key cryptography helps us meet this condition (i.e., no one can impersonate you, if you do not give out your private keys).
- The absence of a message can be detected by its failure to arrive within a fixed length of time. Nakamoto addresses this through the time-stamping system (see here for how this time-stamping works within proof-of-work).
Nakamoto used cryptography to address the conditions for solving BGP and went a step further by providing incentives to discourage dishonest (future) behavior. Proof-of-work not only addresses the time-synchronization issue but, more importantly, intentionally forces people to expend tremendous computing power and energy, discouraging any behavior that would deviate from consensus (e.g., the absence of a message due to starting a new chain).
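The asymmetry at the heart of proof-of-work can be shown in a few lines. The sketch below is a minimal illustration, not Bitcoin’s real difficulty/target encoding: a miner must grind through nonces until the SHA-256 hash of the block data meets an arbitrary difficulty condition, while anyone can verify the result with a single hash.

```python
# Minimal proof-of-work sketch (illustrative difficulty rule, not
# Bitcoin's actual target encoding): find a nonce so the SHA-256 hash
# of (block_data + nonce) starts with `difficulty` zero hex digits.

import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search nonces until the hash meets the difficulty condition."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

def check(block_data: str, nonce: int, difficulty: int) -> bool:
    """Verifying a claimed solution costs exactly one hash."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("attack at 10am", difficulty=4)  # expensive: many attempts
print(check("attack at 10am", nonce, 4))      # cheap: one hash -> True
```

Producing a valid block is expensive by design; checking one is nearly free. That cost is what makes deviating from consensus (say, quietly starting a rival chain) economically painful.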
This discussion assumes a mixture of honest and dishonest nodes in the network. Lamport et al. (1982) suggested that ‘at least two thirds’ of the nodes had to be honest; others have pushed back on this honest-majority assumption, which itself offers insight into the space’s assumption of an adversarial environment.
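The two-thirds figure comes from the paper’s bound for oral (unsigned) messages: with n generals, consensus can tolerate at most f traitors where n ≥ 3f + 1. The arithmetic is a one-liner:

```python
# The classic bound from Lamport, Shostak & Pease (1982) for oral
# messages: n generals tolerate at most f traitors where n >= 3f + 1,
# i.e. strictly more than two-thirds of participants must be honest.

def max_traitors(n: int) -> int:
    """Largest f such that n >= 3f + 1."""
    return (n - 1) // 3

for n in (3, 4, 7, 10):
    print(f"{n} generals tolerate {max_traitors(n)} traitor(s)")
# 3 generals tolerate 0; 4 tolerate 1; 7 tolerate 2; 10 tolerate 3
```

Note that three generals cannot tolerate even a single traitor — which is exactly why the problem is hard and why the honesty threshold matters.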
Personally, I don’t think people in this space believe humans are fundamentally corrupt; rather, when designing large-scale systems, it’s better to “minimize the need to rely on the goodwill of strangers”. (see also)
Finally, what matters is not only that Nakamoto solved the Byzantine Generals Problem, but how he solved it: with a combination of cryptography and economic incentives that would set the stage for all subsequent innovation in this space.