Your Token Model is a Beautiful Water Wheel

Wayne Chang
Published in Alpine Intel · 7 min read · Dec 26, 2018

The mechanism design purist’s view, which holds that the only consistent approach is to develop theoretically “optimal” mechanisms, is not useful in practice. Even if we could incorporate all the features described above, our models of human behavior are not nearly accurate enough for use in optimization. Behavior is neither perfectly stable over time, nor the same across individuals, nor completely predictable for any single individual. Useful analyses must be cognizant of these realities.

Milgrom, Paul. Putting Auction Theory to Work. Cambridge University Press.

Token models are the rules of the road for interacting with global digital assets (i.e., “tokens”) and their related systems. Many teams that spawn tokens into the world give significant thought to how the tokens should be used, who should use them, and broad strokes of token supply management. This is good. Focusing on customers, their needs, and how to satisfy them is undoubtedly the surest way to build thriving ecosystems, markets, and businesses.

However, the token model rabbit hole is deep, and those in freefall may conjure cryptic Cyrillic scribblings: differential equations, Nash equilibria, agent-based models, control systems, and, of course, mathematical proofs. While such tools are undoubtedly useful for modeling certain systems and risks, their usefulness begins and ends with certain guaranteed behaviors or constraints that restrict models to small and limited sets of moves.

Math can be used to demonstrate validity given a specific set of assumptions. These assumptions must hold true in order for the results to also hold true. In business, economics, and other social domains, assumptions shift and gyrate wildly in unpredictable patterns. This chaos is especially rampant in the process of discovering new patterns of specialization and trade, such as in entrepreneurship or innovation.

Spinning Your Wheels

Many models of token ecosystems produced by professional economists stipulate both that “sufficient demand” is critical to the model and that the “demand function is unknown.” In other words, there is often a big predicate that amounts to “we’ll have a lot of users and traffic.” Provided that condition is met, the models typically check out and perform as expected across simulations.
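To see how heavy that predicate is, consider a toy equation-of-exchange valuation, not any specific published model: network value is the assumed transaction volume divided by velocity, and the implied token price is network value divided by supply. Every number below is invented; the point is only that the output is dominated by whichever demand figure gets plugged in.

```python
# Toy illustration (not any project's model) of how much a token valuation
# hinges on the assumed demand input. Equation-of-exchange framing:
# network value = transaction volume / velocity; price = network value / supply.

def implied_token_price(annual_tx_volume_usd, velocity, circulating_supply):
    """Implied price of one token given assumed demand and velocity."""
    network_value = annual_tx_volume_usd / velocity
    return network_value / circulating_supply

SUPPLY = 100_000_000   # hypothetical circulating supply
VELOCITY = 10          # hypothetical annual token velocity

for demand in (5_000_000, 50_000_000, 500_000_000):  # assumed annual USD volume
    price = implied_token_price(demand, VELOCITY, SUPPLY)
    print(f"assumed demand ${demand:>11,} -> implied price ${price:.4f}")
```

A 100x swing in the assumed demand produces a 100x swing in the conclusion, which is exactly why “we’ll have a lot of users and traffic” is doing all the work.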

However, the creation of huge volumes of users and traffic is the singular objective, sine qua non #1, and Big Hairy Audacious Goal of virtually all Internet entrepreneurs today. Taking this output for granted as a model input is an insult to highly skilled product teams everywhere and incongruent with how high-growth businesses and ecosystems are typically formed. It is an extremely difficult problem worth billions of dollars when solved well. No one can promise product-market fit just because they have a generalized equation.

As statisticians say, “all models are wrong (but some are useful).” It is likely that most published token models sporting “mathematical rigor” and pages upon pages of Greek are wrong and also useless when it comes to delivering profit or ecosystem growth: a throwback to naive founders approaching venture capitalists with spreadsheets that detail wild success should they capture “but 0.5% of the $1.5T automotive industry.” Even worse, during the 2017 “ICO boom,” some teams attempted to cram as much mathematical notation into their whitepapers as possible to signal technical sophistication to unsuspecting investors.

This is an example of a utility function found on the Internet as part of a token usage model. Note the plug-ins for “expected demand” and “expected number of experts”:

A genuine attempt at modeling with big scary assumptions

Here is an example of a creature gone wild, probably created to impress investors:

A not-so-genuine attempt at modeling with even bigger and scarier assumptions

Can some of these models actually improve systems? Absolutely, but only if the exact economic engine that was ideated, synthesized, and simulated in a vacuum is the same one that is used in the final product. In the pivot-laden land of startups, this almost never happens. It is the norm to throw out functioning products wholesale and begin anew in the precarious pursuit of product-market fit.

Therefore, eschewing thorough investigation of customer needs while designing a mythical engine that works comprehensively in theory is akin to the special case of the spherical cow or the assumed can opener. It is like designing and building a provably efficient water wheel without even a cursory check for nearby running streams. It is at best self-gratification and at worst hubris.

Bitcoin’s Model

In contrast to the equation soup above, this is all the math that the Bitcoin whitepaper used:

Bitcoin’s Simple Use of Probability Theory

Many first-year engineering students will understand why the well-known Poisson distribution was chosen to model the solving of blocks via brute-force guessing. A Poisson distribution models the number of independent events occurring in a fixed interval, which also makes it a good model for, say, how many patients are admitted to a hospital over 24 hours or how many trains arrive at a metro platform in an hour.

This example is a clear and specific application of math to illustrate one prominent risk measure that is meaningful to the security of the network. There is even a small executable simulation that is able to produce concrete numbers for approximating risk. This component-wise approach may be among the best ways to incorporate quantitative analysis today.
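For the curious, here is a rough Python port of that calculation: the whitepaper’s Poisson-weighted estimate of the chance that an attacker controlling a fraction q of the hash power ever catches up from z blocks behind. The structure follows the small C routine in the paper, and the numbers are only as meaningful as that narrow model.

```python
import math

def attacker_success_probability(q, z):
    """Probability an attacker with fraction q of the hash power ever catches
    up from z blocks behind; the attacker's potential progress while the
    honest chain finds z blocks is modeled with a Poisson distribution."""
    p = 1.0 - q
    lam = z * (q / p)  # expected attacker progress over the same period
    total = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        total -= poisson * (1 - (q / p) ** (z - k))
    return total

for z in (0, 1, 2, 4, 6, 10):
    print(f"q=0.10, z={z:2d} -> P = {attacker_success_probability(0.10, z):.7f}")
```

Run with q = 0.10, the probability falls off exponentially as z grows, which is precisely the narrow, concrete claim the model is meant to support.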

Promising Usage

Of course, modeling is far from useless; otherwise, statisticians and economists would not be so prized in lucrative fields across finance, marketing, insurance, and healthcare. While generalized models that claim to describe complete system behaviors will likely prove extremely fragile (e.g., profits are an overall system output), what models might be most useful for token economies?

Existing traffic volumes. Ethereum’s gas is a good candidate for modeling because the system is simple (users bid for gas on the market, block producers accept bids) and there are plenty of live transactions to test and tune the accuracy of models. Gas calculators (software models) are usually accurate; a minimal sketch of one appears after this paragraph. Changes to the model can also be meaningfully expressed using variables, such as in a proposal to institute decaying minimum fees and block waiting times.
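As an illustration, here is a minimal sketch (not any production tool) of the kind of model behind a gas calculator: collect the gas prices that recent blocks actually accepted and recommend percentiles for different inclusion speeds. The sample prices below are made up; a real calculator would pull them from a node.

```python
import statistics

# Made-up sample of gas prices (in gwei) accepted in recent blocks.
recent_gas_prices_gwei = [3.1, 4.0, 4.2, 5.0, 5.5, 6.0, 6.3, 8.0, 12.0, 20.0]

def recommend(prices):
    """Recommend bids at rough percentiles of recently accepted prices."""
    deciles = statistics.quantiles(prices, n=10)  # nine cut points
    return {
        "slow":     deciles[2],   # ~30th percentile: cheap, slower inclusion
        "standard": deciles[5],   # ~60th percentile
        "fast":     deciles[8],   # ~90th percentile: outbids most recent prices
    }

print(recommend(recent_gas_prices_gwei))
```

Because the live fee market supplies a constant stream of ground truth, a model this crude can be checked and re-tuned every few minutes, which is exactly what makes gas a friendly target for modeling.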

Component-wise analysis. It may be useful to model components that have clear performance specifications. For example, Proofs of Space aim to provide guarantees around dedicated disk space rather than computation (as in Proofs of Work). The mandate is clear: describe a rule system such that it is far cheaper to actually store the generated data than to fake it. In this model, there are no grand assumptions about product usage or asset appreciation. It only aims to establish, through the selection of algorithms, a low ratio of the cost of storage to the cost of spoofing (a back-of-the-envelope illustration follows).
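The target quantity is concrete enough to sketch. Every figure below is an illustrative guess rather than a measurement of any particular construction; the point is only that the ratio being minimized is checkable without any assumption about demand.

```python
# Back-of-the-envelope comparison of the cost of honestly storing plotted data
# versus re-deriving (spoofing) it on demand. All numbers are illustrative.

TB = 10**12  # bytes per terabyte

def honest_cost_per_challenge(plot_bytes, usd_per_tb_month, challenges_per_month):
    """Amortized storage cost attributed to answering one challenge."""
    monthly_storage_cost = (plot_bytes / TB) * usd_per_tb_month
    return monthly_storage_cost / challenges_per_month

def spoof_cost_per_challenge(recompute_seconds, usd_per_cpu_hour):
    """Cost of regenerating the needed portion of the plot on the fly."""
    return (recompute_seconds / 3600) * usd_per_cpu_hour

honest = honest_cost_per_challenge(plot_bytes=1 * TB, usd_per_tb_month=5.0,
                                   challenges_per_month=30 * 24 * 60)
spoof = spoof_cost_per_challenge(recompute_seconds=600, usd_per_cpu_hour=0.05)

print(f"honest ~ ${honest:.6f} per challenge, spoof ~ ${spoof:.6f} per challenge")
print(f"storage/spoof cost ratio ~ {honest / spoof:.3f} (the design wants this well below 1)")
```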

High stakes and a small set of moves. In the MakerDAO project, control theory is used to model the stability of Dai, which is a good candidate for this type of modeling because it is essentially a negative feedback loop with few inputs and outputs: the target price of Dai, the market price of Dai, the deflation rate, and market noise. The primary forces are simply the supply of and demand for Dai. Notable limitations include assumptions around linearity in price movement, Gaussian noise, and CDP creation rate as a function of price action. The analysis is further simplified by omitting adjacent systems such as the MKR payment and governance token. A toy version of the feedback structure appears below.
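To make the structure concrete, here is a toy negative-feedback loop in the same spirit: a controller sets a rate in proportion to the gap between target and market price, and the noisy market price drifts back toward the target. The gain, response, and noise values are arbitrary; this is an illustration of the feedback structure, not MakerDAO’s actual mechanism.

```python
import random

random.seed(42)

TARGET_PRICE = 1.00
GAIN = 0.3          # proportional sensitivity of the feedback
RESPONSE = 0.5      # how strongly the market reacts to the rate each step
NOISE_STD = 0.01    # standard deviation of Gaussian market noise

price = 1.05        # start with Dai trading above its target

for step in range(10):
    error = TARGET_PRICE - price          # negative when Dai trades above target
    rate = GAIN * error                   # feedback term pushes against the gap
    price += RESPONSE * rate + random.gauss(0, NOISE_STD)
    print(f"step {step}: price = {price:.4f}, rate = {rate:+.4f}")
```

Even this toy version shows why the problem is tractable: with only a handful of variables and a clear target, the question becomes tuning gains and characterizing noise rather than guessing at user behavior.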

Asset distribution. Historically, high stakes auctions have leveraged mathematical modeling and mechanism design. For example, economists and game theorists around the world regularly make appearances to help design radio spectrum auctions, which are important and infrequent events with billions of dollars on the line. This same approach can also be used to manage token sales to ensure fair and equal access, but with less on the line, it is possible to take more experimental and iterative approaches.
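For a sense of how simple such a mechanism can be, here is a sketch of one common sealed-bid, uniform clearing-price design: bids are filled from the highest price down until supply runs out, and every winner pays the price of the last bid filled. This is one illustrative design among many, not a description of any specific sale.

```python
def uniform_price_auction(bids, supply):
    """bids: list of (price, quantity) tuples. Returns (clearing_price, allocations)."""
    remaining = supply
    clearing_price = 0.0
    filled_quantities = []
    for price, qty in sorted(bids, key=lambda b: b[0], reverse=True):
        if remaining <= 0:
            break
        filled = min(qty, remaining)
        filled_quantities.append(filled)
        remaining -= filled
        clearing_price = price            # price of the marginal (last) filled bid
    # every winner pays the same clearing price, regardless of what they bid
    return clearing_price, [(qty, clearing_price) for qty in filled_quantities]

bids = [(1.20, 400), (1.00, 500), (0.90, 300), (0.70, 1000)]
print(uniform_price_auction(bids, supply=1000))
# -> (0.9, [(400, 0.9), (500, 0.9), (100, 0.9)])
```

Because everyone pays the clearing price, bidders have less incentive to shade their bids, which is one reason variants of this design keep reappearing in both spectrum and token sales.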

Tools and Iteration

In summary, modeling is a great tool for exploring and mitigating risks. All tools have limitations, and proficiency in their use requires that those limitations be well understood. For the modeling of token economies, it is fruitless to search for an all-encompassing equation, as most such equations are riddled with assumptions that deviate dramatically from reality, especially around user behavior and engagement. Promising use cases tend to exhibit some combination of existing traffic volumes, component-wise analysis, small sets of moves, and high stakes.

Consider that many formal constructs are discovered only long after active usage. During the 1940s, medical school residency matching was broken, resulting in few happy placements. In 1952, Hardy Hendren and a group of fellow students at Harvard Medical School proposed a new method for a clearinghouse, which resulted in stable matchings.

However, it wasn’t until ten years later, in 1962, that the idea of a stable match was actually formalized: David Gale and Lloyd Shapley published an article titled “College Admissions and the Stability of Marriage,” introducing the deferred acceptance algorithm. Twenty years after that, in 1982, Alvin Roth found that the residency matching system adopted in 1952 was an equivalent algorithm. Thirty years passed between the first implementation of the system and its first formalization. In the same vein of discovery and exploration, the creation of meaningful token-powered ecosystems today depends more on iteration, good engineering, and empathy for user concerns than on the practice of central planning.
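For reference, the algorithm Gale and Shapley formalized is compact enough to sketch in a few lines. The version below is the applicant-proposing form with toy, made-up preference lists and one seat per hospital.

```python
def deferred_acceptance(applicant_prefs, hospital_prefs):
    """Applicant-proposing deferred acceptance; one seat per hospital."""
    rank = {h: {a: i for i, a in enumerate(prefs)} for h, prefs in hospital_prefs.items()}
    free = list(applicant_prefs)              # applicants with no tentative match
    next_choice = {a: 0 for a in applicant_prefs}
    match = {}                                # hospital -> tentatively held applicant
    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                          # preference list exhausted; stays unmatched
        h = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        held = match.get(h)
        if held is None:
            match[h] = a                      # hospital holds its first offer
        elif rank[h][a] < rank[h][held]:
            match[h] = a                      # hospital prefers the new proposer
            free.append(held)                 # previously held applicant is freed
        else:
            free.append(a)                    # proposal rejected; try next choice later
    return {a: h for h, a in match.items()}

applicant_prefs = {
    "ann": ["city", "mercy", "general"],
    "bob": ["city", "general", "mercy"],
    "cal": ["mercy", "city", "general"],
}
hospital_prefs = {
    "city": ["bob", "ann", "cal"],
    "mercy": ["ann", "cal", "bob"],
    "general": ["cal", "bob", "ann"],
}
print(deferred_acceptance(applicant_prefs, hospital_prefs))
# -> {'ann': 'mercy', 'bob': 'city', 'cal': 'general'} (no blocking pairs)
```

The mechanism is simple enough to run by hand, yet it took a decade to be formalized and two more to be recognized in a system already in production, which is the point of the story.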

***

Special thanks to Matt Condon, Francis Tseng, and folks at ConsenSys for their reviews of the draft.

Nothing in this article should be taken as legal or investment advice.
