What makes blockchains secure? (5/5)

Tarun Chitra
Gauntlet
25 min read · May 19, 2019


Medium, Medium Staff: Please add LaTeX support 🙏

If you want to read the math natively written in LaTeX, please follow the link at the bottom of the post to the Dropbox Paper version.

In the past four posts, we have broadly covered the interaction between statistics and consensus. The first two posts aimed to give a bit of background about how to think about the analyses of consensus protocols from a probabilistic mindset. On the other hand, the latter two posts focused on trying to evince when there exist probabilistic equivalences between seemingly different protocols. In particular, the fourth post covered how one can classify protocols in a manner similar to how physical systems are classified. We argued that when two protocols are deemed not to be equivalent, there must exist a phase transition that makes a macroscopic observable (such as the expected Gini coefficient or average branching factor) have a discontinuous jump when we try to smoothly move from one protocol to the other [0]. For instance, Bitcoin will have a phase transition from block trees whose height is Ω(N), where N is the number of blocks produced, to block trees whose height is Ω(log N) (e.g. balanced trees), as we make the ratio of the block production rate to the network delay diameter smaller [1].

On the other hand, when blockchain protocols are in the wild, they appear to avoid crossing phase transition points. At a very coarse level, much of this empirical success is due to the formal Byzantine-resistance guarantees of the consensus protocol. However, reality has far more actors than those that are purely Byzantine and those that are purely honest, which means that the formal guarantees of a consensus protocol cannot fully describe practical implementations of consensus. This was most recently seen in Bitcoin with the Binance hack, where Binance’s CEO openly discussed bribing miners to revert a theft. The economics of a protocol — the block rewards, inflation rate, transaction fees, ease of speculation, access to leverage, existing liquidity — can dramatically affect the security of the protocol. This post will aim to illustrate that the game theory of the entire system — coupling both the consensus layer and the economic layer — works to keep the system away from phase transitions. Moving to a view that incorporates both layers dramatically changes the security and threat models and involves incorporating game theoretic considerations that can exponentially blow up the size of an environment [2].

As protocols increase in complexity and add in governance, maintenance of this probabilistic stability becomes increasingly important. For context, governance in cryptocurrencies refers to how the balance of power is split between the different users of a cryptocurrency. The simplest version of this power to change protocol behavior is split between developers (e.g. people contributing to the open source code base) and investors (e.g. people buying the tokens on an exchange). One of the main criticisms of Bitcoin and Ethereum is that they don’t admit governance. In Bitcoin and Ethereum, the authors of the code in the Bitcoin Core and Ethereum client repositories completely dictate the economic and security decisions for the network. For Bitcoin, this has worked out well because Bitcoin aims to maximize security and minimize transaction complexity. In particular, Bitcoin is designed to not need governance, because its economics are meant to be non-malleable.

On the other hand, Ethereum has had a lot of issues with governance, such as the DAO hack and the Parity wallet hack. These issues affect people building on these platforms too — the controversy at the stable lending platform MakerDAO has also led to questions about how much governance is needed within these systems. In MakerDAO, certain token holders vote on an analogue of the interest rate of the system, which naturally affects token value. Some participants might want a larger interest rate (they are net lenders), whereas others might want a lower interest rate (they are net borrowers). Governance allows the system to continuously vote on these types of parameters and reward people for voting / positive participation. These votes, however, can cause dramatic changes to stability and agent composition, which adds to protocol turmoil. A lot of governance issues come from the increased complexity of trying to build the ‘world computer’ as opposed to a simple transaction mechanism. Still, the idea that a community can develop an economic system that forces a protocol to avoid phase transitions and to stay within a universality class under perturbations of agent types is fascinating in its own right.

In order to understand how to correctly model the entire ecosystem around a protocol or contract, we will need to figure out which statistical and empirical tools best apply to crypto networks. Our goal is to find a natural definition of economic equilibrium that meets the computational, adversarial, and statistical constraints that exist within crypto networks. We will begin by exploring other attempts to quantify adversarial behavior in distributed and multi-agent systems. First, we will briefly look at the history of algorithmic game theory and describe the different equilibria that are studied in this field. These equilibria will turn out to be too difficult for realistic users of crypto networks to compute, which will lead us to explore easier-to-compute equilibria that are studied in the field of Systems Engineering. Unfortunately, these equilibria will turn out to have other flaws regarding their interpretability and robustness. We will conclude by exploring Agent-Based Modeling, which aims to provide empirically computable and interpretable equilibrium estimates. I hope that this voyage through computation, game theory, and statistics will convince you of both the beauty and necessity of simulation within the study of crypto networks.

Multi-agent systems have been quantitatively studied in earnest for the past 75 years. The first attempts at quantitatively describing such systems began with Daniel Bernoulli’s 18th century explorations into games of chance. He was the first to describe the probabilistic concept of a martingale, which expressed how to compute conditional expectations in games of chance, such as the coin-flipping of the St. Petersburg Paradox. These games of chance represented simple betting games that were common in Europe (and in particular, France) at that time. However, this inquiry into the nature of purely random games ignores what happens when agents can learn and adapt their strategies. Strategies, like much of early economics, were defined in terms of an agent’s preferences for the different outcomes that arose from executing a strategy on a certain state. For instance, a poker-playing agent might strongly prefer to fold when their maximum card is a 5 and they have no repeated cards. In order to express strategies via an agent’s preferences over a set of possible actions, we need a way of mathematically expressing an agent’s preferences. There are two ways that people have historically done this:

  1. Ordinality: There is a non-random ordering or preference among actions that a player can take when the game is in a certain state
  2. Cardinality: There are strict relative preferences between every pair of actions that a player can take

Ordinal preferences are easier to study, as one can reduce the problem of picking an optimal strategy to elementary combinatorics. However, ordinality doesn’t allow a game analyst to compare the difference in payoff magnitudes between two competing strategies. For instance, suppose that I know that one poker hand is better than another, but I don’t know how much it increases my chance of winning. Would this really be useful for poker strategy development? On the other hand, cardinal preferences convey information about payoff magnitudes but introduce a variety of analytic considerations and non-convexities that make analyses difficult. Cardinal preferences are represented by a utility function, which maps states to the real line and thus provides both an ordering and a magnitude.
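To make the distinction concrete, here is a toy sketch (my own illustration, with made-up actions and utilities) of what an ordinal agent versus a cardinal agent can actually compute:

```python
# Ordinal vs. cardinal preferences over the same toy action set.
ACTIONS = ["fold", "call", "raise"]

# Ordinal: a preference ordering, best first. We can pick the top action,
# but we cannot say *how much* better "call" is than "fold".
ordinal_preference = ["call", "raise", "fold"]
best_ordinal = ordinal_preference[0]

# Cardinal: a utility function mapping each action (in a given state) to the
# real line, which gives both an ordering and a magnitude.
utility = {"fold": -1.0, "call": 2.5, "raise": 1.0}
best_cardinal = max(ACTIONS, key=utility.get)
edge_over_second_best = utility[best_cardinal] - sorted(utility.values())[-2]

print(best_ordinal, best_cardinal, edge_over_second_best)  # call call 1.5
```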

Behavioral studies have shown that humans tend to reason about the world via ordinal preferences — after all, you don’t choose between two items that are priced the same via price alone. It was only in the 20th century that mathematicians invented general formalisms that could convert strategies dependent on ordinal preferences to ones dependent on cardinal preferences. This formalism allowed for rigorous strategy development for games. This formalism’s main results from the 1940s, the von Neumann-Morgenstern theorem and the minimax theorem, set off an explosion of study into the mathematical foundations of what we now call game theory. By the end of that decade, economists had a formal definition of equilibrium for a system of agents — when an agent, conditional on all other agents’ preferences, cannot improve their utility by switching to a more preferred state — and John Nash had proved the existence of this equilibrium for all finite player games with a finite number of pure strategies [3]. At this point, mathematicians (especially real analysts) began to flood into economics and flesh out the expanse of game theory.

However, most of the improvements from the ’50s to the ’80s did not come from a deeper theoretical understanding of the agent-based model that Nash and von Neumann worked with. Instead, many of the developments within game theory, such as the existence of Bayes-Nash equilibrium, the description of coordination games and Schelling points, and various ‘impossibility of equilibrium’ results, came from better models and an improved understanding of where impossibility theorems would arise. The Stanford Encyclopedia of Philosophy has an amazing article on the history of game theory that chronicles the epistemological evolution of game theory and the theoretical hurdles that were met as game theorists tried to make models more realistic. An understanding of why there were so many impossibility results only came in the ’90s, when theoretical computer scientists began to ask the question, ‘how complex is it to compute a Nash equilibrium?’

Gödel prize winner Christos Papadimitriou proved that a new complexity class, PPAD, was necessary to describe the complexity of computing Nash Equilibria. This was done by extending a known pivoting algorithm called the Lemke-Howson algorithm [4] to a more generic class of algorithms for finding fixed points [5]. These complexity results, coupled with a spate of examples of empirically observed market equilibria that did not fit into the mold of Nash Equilibria, suggested that humans interacting with games and institutions find different, less complicated equilibria. The empirical market success of the second price auction (and its generalizations) at Google and Facebook hinted that simplicity is preferred to precision when it comes to observable equilibria. As such, the tools of algorithmic game theory alone will not suffice for describing crypto network equilibria, and we will explore another avenue: Systems Engineering.

In tandem with the highbrow game theoretic results of the late 20th century was the development of the mathematically pedestrian yet pragmatic field of Systems Engineering (SE). This field, whose genesis was spurred by the complex systems pariahs of electrical engineering, mathematics, mechanical engineering, and physics departments, aimed to find the simplest explanations for complex phenomena. At its best, the field produces novel, falsifiable syntheses of heuristics and insights from many fields, exemplified by research at places like the Santa Fe Institute. On the other hand, the field is also notable for producing a number of non-falsifiable, “not even wrong” results that get used in production to the tune of disaster (this paper details a number of examples). The extreme simplicity of the models used (think: simple harmonic oscillators) can often get some qualitative phenomena correct, but it misses the critical points and phase transitions where adversarial conditions lie. An explicit example of this can be found in the following Wired article, which goes through an incident where GM saw an order of magnitude increase in the failure rate of a critical component because they used a simple model to estimate the melting point of a complex material. The simple model badly misestimated when a phase transition due to overheating would occur, which caused a dramatic spike in the failure rate of their part. That model, when compared to the Vextec finite-element model described in the article, provides an example of what happens when models that need to accurately represent phase transitions are underspecified.

An SE perspective on crypto networks (in the guise of differential games [6]) would likely start by taking a simple model of agent behavior, such as representing an agent’s utility function by an oscillator, and asking what happens if we deterministically combine N of these simple models. This provides a natural evolution operator, the Hamiltonian, for evolving the state of all actors in the system. The SE literature usually refers to the combination of a state evolution operator and (potentially non-convex) constraints as a “generalized state space model.” The upshot of doing this analysis is that natural correlation lengths and time scales can be analytically computed or estimated to high precision. In this context, a “length” scale can refer to a physical length — a physical or graph distance between validators — or an abstract distance, such as a non-Euclidean distance between oscillator parameters [7]. These length and time scales help parametrize how stable the system is as N varies and give intuition as to whether a protocol configuration is “at the top of a hill” (unstable equilibrium) or “at the bottom of a valley” (stable equilibrium). This notion of equilibrium is easy to compute, and the associated length and time scales provide intuitive conditions for when the system is at equilibrium. For instance, Bitcoin is at a stable equilibrium when the ratio of block production rate to delay diameter is greater than a constant, which couples a time scale (block production rate) to a distance measure (delay diameter) [8].
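To make the flavor of this approach concrete, here is a minimal sketch under purely illustrative assumptions (a ring of nearest-neighbor coupled harmonic oscillators standing in for agents, with arbitrary coupling and time step) of how one might estimate a crude correlation time in such a state space model:

```python
# Toy "generalized state space model": N coupled oscillators on a ring, evolved with
# symplectic Euler. We perturb one "agent" and measure how long the perturbation takes
# to reach the far side of the ring, a crude proxy for a correlation time.
import numpy as np

N, k, dt, steps = 50, 0.5, 0.01, 20_000
q = np.zeros(N); p = np.zeros(N)
q[0] = 1.0                        # perturb a single agent

def accel(q):
    # Nearest-neighbour coupling: a_i = -q_i + k (q_{i-1} - 2 q_i + q_{i+1})
    return -q + k * (np.roll(q, 1) - 2 * q + np.roll(q, -1))

reach_time = np.full(N, np.inf)   # first time each oscillator's |q_i| exceeds a threshold
for t in range(steps):
    p += dt * accel(q)            # symplectic Euler keeps the energy roughly conserved
    q += dt * p
    hit = (np.abs(q) > 1e-3) & np.isinf(reach_time)
    reach_time[hit] = t * dt

print("time for the perturbation to reach the antipodal agent:", reach_time[N // 2])
```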

Unfortunately, these equilibrium scales and other scaling exponents (such as Lyapunov exponents) are usually presented in a way that is hard to correlate with real data. Moreover, they are statistically difficult to measure as a realistic crypto network is unlikely to cleanly fit into the mold of such state space models because of two qualities: adversarial agents and heteroskedastic, non-stationary data. In [9], I give a precise mathematical example of how adversarial agents can exponentially decay the statistical quality of a correlation length or exponent in an SE model by using random adversaries. An adversary that has any capacity to learn would do much better than this example, which illustrates how weak these types of models are for describing realistic crypto network agents. We note that there has been some work in the blockchain space that attempts to apply these techniques without considering these pitfalls, rendering their results statistically useless.

This dialectic story of game theory and SE naturally raises the following question: How can we get the best of both worlds? Can we get the rigor and statistical fortitude of game theoretic results while preserving the simplicity and intuition of methods from Systems Engineering? I believe that the only way to do this is via Agent-Based Modeling.

Agent-Based Modeling (ABM) aims to construct all of the different actors in a complex system via models that make statistics transparent and convergence rates interpretable. ABM is most useful when one is dealing with a system where it is hard to understand the effect of a certain policy and the cost of experimentation is high. By imbuing an ABM framework with realistic models of adversaries and semi-adversaries, and by utilizing real-world data, we argue that the results from ABM should reflect reality. It is most commonly used when backtesting algorithmic trading strategies via simulation — one models their strategy as an agent alongside the other agents in the market (e.g. TWAP engines) and has them play a game against historical market data. The output of this simulation is then used to construct an objective function that can be used for optimizing a strategy. The usage of simulation allows a strategy developer to avoid costly losses in live trading by stress testing against historical data and other actors in the system. Moreover, it lets the developer test against hard-to-realize edge cases (e.g. the flash crash) and search a larger portion of the strategy space. Agent-based modeling has also been used in a variety of other contexts, including but not limited to artificial intelligence, autonomous vehicles, cybersecurity, and economics, and the field has recently seen a variety of prominent empirical successes.

The theoretical and empirical study of agent-based models resembles the construction of environments from the second blog post in this series, where we create an idealized simulation (sketched in code after the list) that does the following:

  • Instantiate N agents from T different agent classes
  • Instantiate an oracle that governs how they are allowed to interact with one another
  • For some number of rounds:
      • The oracle randomly selects a subset of agents
      • The selected agents are allowed to utilize bounded compute resources [10a] to evaluate a utility or value function based on global/public and local/private data
      • This allows them to take an action
      • The oracle propagates that action to the rest of the players
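A minimal, generic sketch of this loop might look like the following; the agent classes, the oracle’s selection rule, and the placeholder “utility” are all assumptions standing in for whatever a concrete protocol study would supply:

```python
# A generic agent/oracle simulation loop mirroring the list above.
import random

class Agent:
    def __init__(self, agent_class):
        self.agent_class = agent_class
        self.private_state = {}

    def act(self, public_state):
        # Evaluate a (bounded-compute) utility function over public + private data;
        # here the "utility" is a trivial placeholder returning a random action.
        return random.choice(["cooperate", "defect"])

class Oracle:
    def __init__(self, agents):
        self.agents = agents
        self.public_state = {"history": []}

    def round(self):
        selected = random.sample(self.agents, k=max(1, len(self.agents) // 4))
        for agent in selected:
            action = agent.act(self.public_state)
            self.public_state["history"].append((agent.agent_class, action))  # propagate

# Instantiate N agents from T agent classes and run some number of rounds.
N, T, rounds = 20, 3, 100
agents = [Agent(agent_class=i % T) for i in range(N)]
oracle = Oracle(agents)
for _ in range(rounds):
    oracle.round()
print(len(oracle.public_state["history"]), "actions propagated")
```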

Figuring out how to model agents and/or estimate stable equilibria is the main difficulty that one has to handle. In all of the following ABM examples, we aim to straddle the line between modeling outcomes for a fixed distribution of agents, which would neglect arbitrary adversarial behavior, and solving for a Bayesian Nash Equilibrium, which is intractable. We do this by taking a game, a distribution over agent types, and an evolution/learning algorithm and aiming to analyze the stable states of this system. This is the closest that one can get to answering the question, “given some known agents, who are locally rational but can perform some random moves, and an initial condition, what is the closest Nash equilibrium that they can reach?” [10b]

Let’s evince this by trying to describe an idealized simulation for online Texas Hold’em (a short code sketch follows the list):

  • Instantiate 6 agents with the same initial wealth from 3 agent classes:
      • Aggressive: Willing to go all in with >50% confidence
      • Neutral: Willing to go all in with >99.7% confidence
      • Conservative: Never goes all in
  • Instantiate an oracle: The dealer
  • Play until at least one person has run out of money:
      • The dealer puts out community cards and calls for bets / flops
      • The participants evaluate a reward function based on the community cards (global, public state) and their hands (local, private state)
      • They take an action — fold, call, raise, etc.
      • The oracle evaluates who wins and redistributes the bets
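A rough sketch of just the agent classes above, with the thresholds from the list and everything else (hand evaluation, betting structure) elided, might look like this:

```python
# Threshold-based agent classes for the toy Hold'em simulation.
import random

def choose_action(agent_class, win_confidence, stack):
    """Return one of 'all-in', 'call', or 'fold' for a given confidence estimate."""
    if agent_class == "aggressive" and win_confidence > 0.50:
        return "all-in"
    if agent_class == "neutral" and win_confidence > 0.997:
        return "all-in"
    # Conservative agents never go all in; all classes otherwise call or fold.
    return "call" if win_confidence > 0.5 else "fold"

# Toy round: 6 agents, equal stacks, 2 of each class, with simulated confidence estimates.
classes = ["aggressive", "neutral", "conservative"] * 2
stacks = {i: 100 for i in range(6)}
for i, agent_class in enumerate(classes):
    confidence = random.random()   # stand-in for evaluating hand vs. community cards
    print(i, agent_class, choose_action(agent_class, confidence, stacks[i]))
```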

These idealized simulations can be studied theoretically, in a manner similar to how environments are used in Algorand or Avalanche, or empirically via a Monte Carlo method. The main difference is that we now add in an explicit utility function (with computational constraints) and provide a way for an agent to execute policies, e.g. select an action based on this utility function. Thus, we need to figure out constraints on designing utility functions and policies. Common choices for constraints on the model design space are the following (a rough sketch of one such agent appears after the list):

  • Expected utility maximization: This is the most traditional type of agent and represents a completely rational and short-term agent. [11]
  • Rank-dependent utility optimization: This is the agent model invented by Daniel Kahneman [12] and Amos Tversky that aimed to replicate empirical paradoxes, such as the Allais Paradox and the Ellsberg Paradox, by allowing for human preferences to bias ordinality relative to cardinality
  • Reinforcement Learning: Adjust the best unbiased expected utility function by computing a value function, which is a decayed, conditional expected future utility. These agents need to learn their utility function at the same time that they make decisions (policies) that depend on it. One can use Bellman iteration (if the search space is small), Monte Carlo methods (e.g. REINFORCE), genetic algorithms, or deep learning to try to learn this dynamic utility function.
  • Black Box methods: Treat the agents as players in a multi-armed bandit problem and use a combination of Monte Carlo Tree Search and Upper-Confidence Bound estimators to find local minima that minimize regret. The main difference between these methods and reinforcement learning methodologies is that they do not utilize temporally extended data (e.g. minibatches and windows).
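As an example of the last category, here is a minimal sketch, with made-up payoff probabilities, of a UCB1 bandit agent that picks among a handful of discrete actions using only running means and upper confidence bounds (no temporally extended data):

```python
# UCB1 bandit agent: a simple regret-minimizing "black box" agent.
import math, random

class UCB1Agent:
    def __init__(self, n_actions):
        self.counts = [0] * n_actions       # times each action was tried
        self.values = [0.0] * n_actions     # running mean reward per action

    def act(self):
        # Try every action once, then pick the one with the best upper confidence bound.
        for a, c in enumerate(self.counts):
            if c == 0:
                return a
        t = sum(self.counts)
        ucb = [self.values[a] + math.sqrt(2 * math.log(t) / self.counts[a])
               for a in range(len(self.counts))]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, action, reward):
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (reward - self.values[action]) / n  # incremental mean

# Toy usage: three actions with hidden payoff probabilities (illustrative numbers).
payoff = [0.2, 0.5, 0.8]
agent = UCB1Agent(len(payoff))
for _ in range(5_000):
    a = agent.act()
    agent.update(a, 1.0 if random.random() < payoff[a] else 0.0)
print("estimated values:", [round(v, 2) for v in agent.values])
```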

In order to create a single environment that combines these methods with those of cryptographic simulation proofs, one needs to provide all data sources that an agent needs to reason about the action space:

  • Private: Agent’s local utility/value function, Agent’s ranked preferences, Agent’s shared preferences (e.g. if they are in a pool/cartel), Agent’s local Blockchain view (which can differ up to K Blocks from the public chain, by prefix consistency) [13]
  • Public: Blockchain data, Prices of cryptocurrencies / derivatives

A realistic, rational agent in a crypto network will be much more complicated than a Byzantine adversary in a consensus protocol — this agent might be honest for a number of rounds, but then, when there is a large deviation in the price of the underlying token and/or a future, she will suddenly become Byzantine. Rational agents are also more complex than adaptive Byzantine agents, because they can have arbitrarily long periods of honest behavior followed by bursts of Byzantine and off-chain behavior [14]. Finally, encoding computational constraints within these agents allows us to emulate the behavioral economics of other online marketplaces (such as ads), where the enforcement of computational limits can dramatically affect mechanism design. Understanding this trade-off between computational complexity and statistical accuracy has been a major part of the advancement of deep learning and very likely plays a role in crypto network development.
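Here is a minimal, hypothetical sketch of such a “rational until provoked” agent; the deviation threshold and the actions are illustrative assumptions, not a calibrated model:

```python
# An agent that follows the protocol until the token price deviates far enough from its
# entry price (e.g. because an off-chain short position becomes very profitable), at
# which point it starts behaving Byzantine.
class RationalAgent:
    def __init__(self, entry_price, deviation_threshold=0.3):
        self.entry_price = entry_price
        self.deviation_threshold = deviation_threshold

    def act(self, price, honest_action, byzantine_action):
        deviation = abs(price - self.entry_price) / self.entry_price
        if deviation > self.deviation_threshold:
            return byzantine_action   # e.g. withhold blocks, vote to revert, censor
        return honest_action          # follow the protocol as specified

agent = RationalAgent(entry_price=100.0)
print(agent.act(price=105.0, honest_action="extend", byzantine_action="withhold"))  # extend
print(agent.act(price=160.0, honest_action="extend", byzantine_action="withhold"))  # withhold
```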

The two other desiderata that SE-based methods fail to deliver on are interpretability and statistical accuracy. Why and how does ABM provide these? Interpretability is a bit more direct: a user specifies the agent classes of the system and how prevalent each of them is. For instance, in MakerDAO, there are four major agent types:

  • CDP holders
  • MKR holders
  • Keepers
  • DAI arbitrageurs

If we define a statistical model of “risky” and “risk-averse” for each of these agent types, then we have 8 agent types. Concretely, a risky CDP holder might be someone whose probability of closing their CDP doesn’t decay until the stability fee (a coarse analogue of an interest rate) is >10%, and a risk-averse agent might be one who only keeps their CDP open while the stability fee is less than 2x the fee on Compound Finance. We can then run a simulation with 12.5% of each agent type and see if the expected rewards (measured over many simulations) that they accrue are fair relative to the resources they contribute. This is directly interpretable, as we can attribute “PNL gain” — getting more rewards than one’s fair share — to each agent type. In fact, this is precisely why backtesting in algorithmic trading performs simulations of this form — it is hard to fix an economic system whose erratic behavior doesn’t have an attribution mechanism.
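A minimal sketch of this attribution experiment, with a stand-in reward rule rather than a real MakerDAO model, might look like the following; the point is only to show how the reward share per agent type can be compared against its 12.5% resource share:

```python
# 8 agent types (4 MakerDAO roles x {risky, risk-averse}), each 12.5% of the population,
# run through many simulated rounds so average reward per type can be compared to its
# resource share. The reward rule below is an illustrative placeholder.
import random
from collections import defaultdict

ROLES = ["CDP holder", "MKR holder", "Keeper", "DAI arbitrageur"]
TYPES = [(role, risk) for role in ROLES for risk in ("risky", "risk-averse")]

def step_reward(role, risk, stability_fee):
    # Illustrative rule: risky agents keep positions open at higher fees and therefore
    # earn more per active round; risk-averse agents sit out sooner.
    cutoff = 0.10 if risk == "risky" else 0.04
    active = stability_fee < cutoff
    base = {"CDP holder": 1.0, "MKR holder": 0.5, "Keeper": 0.8, "DAI arbitrageur": 0.6}[role]
    return base * (1.5 if risk == "risky" else 1.0) if active else 0.0

rewards = defaultdict(float)
N_ROUNDS = 10_000
for _ in range(N_ROUNDS):
    fee = random.uniform(0.0, 0.15)            # a random stability-fee environment
    for agent_type in TYPES:
        rewards[agent_type] += step_reward(*agent_type, fee)

total = sum(rewards.values())
for agent_type in TYPES:
    share = rewards[agent_type] / total
    print(f"{agent_type}: reward share {share:.1%} vs. resource share 12.5%")
```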

Human interpretability aside, one might still question whether the statistics provided by ABM converge in any meaningful manner and whether these results reflect reality. As defined above, ABMs rely on a few things outside of the protocol definition:

  • The types of agents, represented by a utility or value function
  • The quantity of each type of agent
      • e.g. 10% Byzantine, 20% risk taking, 30% risk averse, 40% honest and altruistic
  • The initial resource distribution
      • e.g. does one agent type own 50% of the hash power, staking token, or governance token?

Conditional on these parameters, we can compute meaningful expectations that can be compared to real data. Let’s zoom out for a second — what type of equilibria are we looking for? Given that we’re coupling two systems — the consensus mechanism and an external market — it is clear that Nash equilibria and correlated equilibria are too simplistic [15]. Moreover, conditioning on real data, which is heteroskedastic, is very likely to only admit local equilibria. An explicit example of this is the Bitcoin transaction fee market — there is surge pricing when Bitcoin prices increase rapidly, when the mempool is clogged, and around halvenings. We will use this example to illustrate how ABM provides meaningful predictions:

  • Suppose that we have the following hypothesis:
      • Most Bitcoin miners are relatively altruistic with respect to surge pricing (they don’t optimize minimum transaction fees) because they’ve entered into income smoothing instruments (e.g. sold futures or entered bilateral FX accumulator contracts)
  • We can imagine a system with three utility functions:
      • U₁: Honest, altruistic miners that accept any transaction fees provided that their cost basis is positive (e.g. energy costs are covered by the block rewards and transaction fees)
      • U₂: Rational miners who only accept transactions with fee > k
      • U₃: Rational miners who perform a first-price auction and only accept the top fee transactions (there can be multiple of these). If such a miner is big enough, they implicitly provide the market with an iterated auction mechanism
  • We have the following parameters:
      • p₁: Percentage of miners using U₁
      • p₂: Percentage of miners using U₂
      • p₃: Percentage of miners using U₃
      • k: Minimum fee in U₂
  • Since p₁ + p₂ + p₃ = 1, we actually only have three free parameters.
  • Next, we try to find p₁, p₂, k such that the empirically observed fee distribution is most closely replicated. Call this point (p₁*, p₂*, k*)
  • Depending on the loss function used to define “most closely replicated,” there may be degenerate minima. These minima correspond to two distinct tuples (p₁, p₂, k), (q₁, q₂, k′) ∈ [0,1]² × $$\mathbb{N}$$ that achieve the same loss. We can try to find multiple minima by sampling p ~ Dirichlet(1/3, 1/3, 1/3) and using these samples to do something akin to a simulated tempering step
  • Finally, compute the level set expectations, f(α) = E[fee | p₁*, p₂*, k*, fee > α], where the expectation is over protocol randomness [16]

These expectations f(α) can be used to assess how fees are likely to respond for the mixture of miners that most closely replicates observed data. Moreover, we can search for a phase transition in α, p₁, p₂, k to understand when edge conditions occur such that the market is unstable and/or the ROI for miners is too low for the network to survive. The nice thing about ABMs is that they make the connection between theoretical results and live data explicit. For instance, the expectation in f(α) depends on Bitcoin price, mempool size, and the transaction size distribution. With ABM, one can define theoretical models that specify these features (e.g. making the Bitcoin price a sample path from Geometric Brownian Motion) and compute phase transition boundaries or use historical data for these features to compute an empirical estimation of phase transition boundaries. This ability to connect predictions to historical data is one of the reasons ABM provides statistically meaningful results.
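To make the calibration loop concrete, here is a minimal sketch under a toy fee model; the simulator, the loss, and the synthetic “observed” data are all illustrative assumptions, and a real version would replay historical mempool and fee data instead:

```python
# Fit (p1, p2, k) so that a toy miner-mixture simulator best replicates a target fee
# distribution, then compute a level-set expectation f(alpha).
import numpy as np

rng = np.random.default_rng(0)
observed_fees = rng.gamma(shape=2.0, scale=20.0, size=5_000)   # stand-in for real fee data

def simulate_fees(p1, p2, k, n=5_000):
    """Fees of transactions accepted by a (p1, p2, p3) mixture of U1/U2/U3 miners."""
    offered = rng.gamma(shape=2.0, scale=20.0, size=n)          # fees users offer
    miner = rng.choice(3, size=n, p=[p1, p2, 1.0 - p1 - p2])
    accepted = np.where(miner == 0, True,                        # U1: accept anything
               np.where(miner == 1, offered > k,                 # U2: accept fee > k
                        offered > np.quantile(offered, 0.8)))    # U3: top-of-book only
    return offered[accepted]

def loss(p1, p2, k):
    sim = simulate_fees(p1, p2, k)
    qs = np.linspace(0.05, 0.95, 19)  # compare empirical quantiles, sim vs. observed
    return np.mean((np.quantile(sim, qs) - np.quantile(observed_fees, qs)) ** 2)

# Crude random search over (p1, p2, k), seeded with Dirichlet(1/3, 1/3, 1/3) samples
# as suggested above to probe for degenerate minima.
best = None
for _ in range(500):
    p1, p2, _ = rng.dirichlet([1/3, 1/3, 1/3])
    k = rng.integers(1, 100)
    l = loss(p1, p2, k)
    if best is None or l < best[0]:
        best = (l, p1, p2, k)

_, p1_star, p2_star, k_star = best
alpha = 30.0
fees = simulate_fees(p1_star, p2_star, k_star)
f_alpha = fees[fees > alpha].mean()          # level-set expectation f(alpha)
print(f"(p1*, p2*, k*) = ({p1_star:.2f}, {p2_star:.2f}, {k_star}), f({alpha}) = {f_alpha:.1f}")
```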

While not as good as a formal proof, statistical models of economic security in protocols and mechanisms serve as an important stress test for smart contracts and consensus protocols. I wanted to end this series on blockchain security with a dive into the economic aspects of security, as this is where I believe all of the interesting unsolved questions lie. I was so intrigued by this question that I left my job to try to solve this puzzle. A lot of the probabilistic phenomena that were mentioned in this series — phase transitions, percolation, random walks on graphs, financial mathematics, and stochastic calculus — are usually found in isolation. However, I hope that I have convinced you that in order to properly study crypto networks, one is forced to use all of these tools to generate a statistically meaningful picture.

The next generation of blockchains aim to do more — they want to be a decentralized, censorship-resistant version of both banks and cloud providers — and there is an even larger surface area for struggles between different classes of users. For these systems, financial holders, users, maintainers, and developers need much more confidence when they vote on changes to the system that can affect their protocol’s value. But these systems, just like financial systems, are very hard to design to be stable, and this is where Gauntlet’s simulation technology comes into play. When equities exchanges design their markets, developers run millions of simulations to ensure that the market they’ve designed is fair and equitable, encourages market makers and provides them tangible ROI, and is frictionless to use. We’re trying to take these tools to the crypto space to bring statistical certainty to a field that has had anything but conviction.

I want to acknowledge the amazing reviews from Haseeb Qureshi, Uthsav Chitra, Adam Lerer, Yi Sun, John Morrow, Niraj Pant, and Hasu.

[0] “smoothly move from protocol A to protocol B” means “$$\exists$$ a continuous path $$\gamma : [0,1] \rightarrow \mathcal{P}$$, $$\gamma(0) = A, \gamma(1) = B$$”, where $$\mathcal{P}$$ is the space of protocols. I am implicitly assuming that you can represent $$\mathcal{P}$$ as a topological space so that the notion ‘continuous’ is well-defined. My (non-formally proved) belief, based on all papers that I’ve seen, is that there exists some countable set $$\mathcal{A}$$ that contains the set of universality classes of protocols and that each element $$A \in \mathcal{A}$$ is the Cartesian product of:

a. A manifold parametrized by the continuous parameters of a protocol (e.g. block production rate)

b. Discrete parameters (e.g. difficulty interval in Bitcoin)

I haven’t seen any description of a blockchain protocol in which there was a topologically non-trivial element (e.g. some continuous parameter that represents a winding number or a Chern class). The only place that this might occur is in the zk-STARK world, where approximate FFT algorithms (for Fast Reed-Solomon Interactive proofs) might provide twiddle factors $$e^{2\pi i k}, e^{2\pi i \ell} \in S¹ \subset \mathbb{C}$$ that make two protocols topologically equivalent iff $$k | \ell \vee \ell | k$$. So far, these have been orthogonal to consensus, so I haven’t included those.

[1] This was implicitly proved in the GHOST paper and formalized in Pass, Shelat, and Seeman.

[2] Please read the first two posts to understand what an environment is and how it is used to construct the security and threat models for a decentralized protocol

[3] Technically, Nash’s theorem works if a) we allow for mixed strategies (e.g. probability distributions over strategies) and b) if the set of choices is a compact set with continuous payoff. In the case of the discrete topology, any function is continuous (as singletons are open sets), so any finite, discrete model is included.

[4] Akin to the Simplex Algorithm for linear programming

[5] Recall that an endomorphism f : X → X has a fixed point at x if f(x) = x. Brouwer’s fixed point theorem guarantees that such a point exists when X is compact and convex and f is continuous.

[6] Differential games do not admit the same existence and uniqueness theorems that their discrete counterparts have. This is because there is a hidden asymmetry in how Nash Equilibria are defined for such games, as there exist feedback loops that affect the computation of Nash. The asymmetry leads to two types of Nash Equilibria, open loop and closed loop equilibria, which need not always exist nor be unique. These loop equilibria are sometimes equivalent, but the choice of state variables needed for equivalence makes it very difficult to connect an adversarial strategy to reality. Let me try to illustrate this with a simple example. Suppose that I have a differential game on a state space X that is a $$C^{\infty}$$-manifold with connection $$\nabla : TX \rightarrow TX$$ and game evolution operator $$D_t : TX \rightarrow TX$$. The game evolution operator can be stochastic, but needs to be a bounded operator on a locally compact Hilbert space, such as $$L^2([0,1])$$. Assume that we have a state evolution of $$(u(t), v(t))$$ for our two players. Theorems such as the one cited prove that there exists a diffeomorphism $$\varphi : X \rightarrow X$$ such that open and closed loop equilibria are equivalent for $$(u_{*\varphi}(t), v_{*\varphi}(t))$$, the pushforwards of u, v under $$\varphi$$. The problem is that we cannot connect the equilibrium state evolution to real data without knowing the explicit form of $$\varphi$$ (which, as specified in the literature, can easily be non-computable), and this can dramatically affect the variance of an empirical estimator $$\hat{u}(t)$$ that aims to connect to real data u. In the discrete world, this is less prevalent, as we are constructing our states and actions to directly reflect empirical data. This is one of the main reasons that differential games have not had the empirical success of ABMs.

[7] What exactly do I mean by correlation lengths and times? Suppose that we have N agents on a graph G, where each agent’s state is described by a Gaussian and two agents are correlated if there is an edge between them. The natural correlation length is the distance d (in the graph metric) such that any two agents who are more than distance d apart have correlation less than ε. A correlation time is similar: if I take a random walk starting at one agent and propagate that agent’s state, how long does it take until (1-ε)N nodes have been reached? These definitions come from an analogy to how length and time scales are computed in probabilistic graphical models such as the Ising model or mean-field spin glasses.
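As a concrete toy version of this definition, here is a sketch under an assumed Gaussian model where agents sit on a path graph and correlation decays geometrically with graph distance; rho, epsilon, and N are arbitrary choices:

```python
# Estimate a correlation length for Gaussian agents on a path graph: correlation between
# agents i and j is rho**|i - j| by construction, and we report the smallest graph
# distance d at which the empirical correlation falls below epsilon.
import numpy as np

rng = np.random.default_rng(1)
N, rho, eps, samples = 60, 0.6, 0.05, 20_000

# Generate sample "world states" with an AR(1)-style construction along the path.
X = np.empty((samples, N))
X[:, 0] = rng.standard_normal(samples)
for i in range(1, N):
    X[:, i] = rho * X[:, i - 1] + np.sqrt(1 - rho**2) * rng.standard_normal(samples)

emp_corr = np.corrcoef(X, rowvar=False)                       # empirical agent correlations
dist = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])  # graph metric on the path

# Correlation length: smallest d such that all agents more than d apart correlate below eps.
for d in range(1, N):
    if np.all(np.abs(emp_corr[dist > d]) < eps):
        print("estimated correlation length:", d)
        break
```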

[8] This was first studied rigorously in the GHOST paper and is still being improved (especially as Ethereum 2.0 moves to LMD-GHOST)

[9] Imagine that we have a discrete representation of the state evolution operator as a matrix $$A \in \mathbb{C}^{n\times n}$$, so that if our initial state was $$X_0 \in \mathbb{C}^n$$, then the state at time $$t$$ is $$X_t = A^t X_0$$. Complex, chaotic dynamics in such systems arises because different choices of $$X_0$$ can lead to dramatically different dynamics, analogous to how double pendulums can have chaotic dynamics. However, once we stop eliding adversarial agents, we are forced to deal with a state evolution operator that mutates as time evolves, so that $$A_t \neq A^t$$. Let’s consider a very simple model of a random adversary: $$A_t(\alpha) = (1-\alpha) A_{t-1}(\alpha) + \alpha R_t$$, where $$A_0 \in \mathbb{C}^{n \times n}$$ is a Markov evolution operator with $$\mathsf{Gap}(A_0) = \Omega(n)$$, $$\alpha$$ is a moving average parameter, and $$R_t$$ is a random doubly stochastic matrix. Recall that for Markov operators $$P$$ with stationary distribution $$\pi$$, the spectral gap controls how fast the operator reaches the stationary distribution from any initial condition:

$$\sup_x \lVert P^t x - \pi \rVert_{TV} \leq K e^{-\mathsf{Gap}(P)t}$$

Thus, $$A_t(0)$$ converges exponentially quickly (in $$n$$ and $$t$$) to its stationary distribution. However, when $$\alpha \neq 0$$, $$A_t(\alpha)$$ can actually have $$\mathsf{E}[\mathsf{Gap}(A_t(\alpha))] = O(\log n)$$ so that we converge to a stationary distribution polynomially, $$O(\frac{1}{n^t})$$. This is justified by utilizing a result from a recent paper in the Annals of Probability which showed that random doubly stochastic matrices obey a circular law. This can be combined with a Poincaré inequality for mixtures of probability distributions to get the decayed spectral gap. Intuitively, we can think of each random element $$R_t$$ as increasing the incoherence of $$A_t$$ and $$A_{t-1}$$ by a polynomial in $$\alpha$$ and $$\mathsf{Var}(R_t)$$, which ensures that the expected gap shrinks monotonically in $$t$$. Thanks to Yi Sun for some clarification on the doubly stochastic law.
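As a quick numerical sanity check of the displayed spectral gap bound only (not of the adversarial decay argument), here is a sketch with a small random Markov chain; the chain and constants are arbitrary:

```python
# Check that total variation distance to the stationary distribution decays at a rate
# controlled by the spectral gap, sup_x ||P^t x - pi||_TV <= K exp(-Gap(P) t).
import numpy as np

rng = np.random.default_rng(3)
n = 20
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)            # a random row-stochastic Markov matrix

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

gap = 1.0 - np.sort(np.abs(w))[::-1][1]      # 1 - |second largest eigenvalue|

x = np.zeros(n); x[0] = 1.0                  # start from a point mass
for t in range(1, 11):
    x = x @ P                                # one step of the chain
    tv = 0.5 * np.abs(x - pi).sum()
    print(f"t={t:2d}  TV={tv:.2e}  exp(-gap*t)={np.exp(-gap * t):.2e}")
```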

[10a] I use the phrase ‘bounded compute resources’ to try to mimic the polynomial computation constraint in verifiable delay functions. This constraint is natural for any system utilizing cryptography, including crypto networks, as “cryptography exists when you find an exponential gap” — so adversaries with exponentially more compute power will likely always win.

[10b] I want to thank Adam Lerer for an excellent discussion about this point.

[11] We assume that the utility is integrable and can be defined relative to some risk-neutral (Martingale) measure. Moreover, we assume that every agent’s pushforward measure (induced by a choice of utility function) is absolutely continuous with regard to a single martingale measure. This is a somewhat strong condition, but it allows for us to make relative utility comparisons.

[12] You might find Kahneman’s name to be familiar — he is the author of the best-selling book Thinking Fast and Slow and he won the Nobel Memorial Prize in Economics for his work on rank-biased agents

[13] See the first blog post of this series for an example of this

[14] For instance, derivatives that are settled using miner hash power or modified versions of FX accumulators can lead to odd incentives that enforce the aphorism, ‘market can stay irrational longer than you can stay solvent’

[15] The simplest example that shows that real markets are not well-described by Nash Equilibria is the Myerson-Satterthwaite Theorem. This theorem, like the CAP theorem, is a ‘k out of n’ theorem, in that one cannot have the following four properties hold simultaneously:

a. ε-Nash

b. Pareto Optimality — the bidder with the highest internal valuation wins the auction

c. Auction / Exchange doesn’t need to spend money to enable transactions

d. Rationality — all participants have positive expected utility

This suggests that realistic markets have a different notion of stability that is needed to encourage liquidity.

[16] The protocol has a variety of sources of randomness that we average over (think of these as elements of the environment; see the first post for more details):
