Radical Markets for Elephants

Robin Hanson’s LMSR for Prediction Markets

Gian Balsamo
GnosisDAO
Jul 13, 2018


Speaking the Truth

I owe the pachyderm in my title, “Radical Markets for Elephants,” to Robin Hanson. We are masters at deception, Hanson and Kevin Simler write in their recent book, The Elephant in the Brain. Some of the hidden motives driving our social behavior — hidden, on occasion, to ourselves! — are fraudulent, manipulative, and self-serving. “And they aren’t mere mouse-sized motives… They are elephant-sized motives large enough to leave footprints in national economic data.”

There you go: the “elephants” in the title are the billowy smoke signals whereby we divert attention, even our own, from our hopes and fears.

Yet Robin Hanson also tells us that money incentives may trick us into speaking, or signaling, the truth of our expectations. Hanson’s incentives are a sort of Invisible Hand redux, if you think about it — except that Adam Smith didn’t have the right math at his disposal, nor had he heard of elephants in the brain.

Logarithmic Market Scoring Rule, LMSR for short, is the name of Hanson’s distinctive solution to the following problem: how to use financial incentives to elicit a bunch of people’s truthful and jealously guarded opinions about a future event. Gnosis uses the LMSR market maker for prediction markets.

My goal today is to translate that mouthful of an acronym, LMSR, into plain English: how does it work, why does it work better than any alternative in prediction markets, and what are its drawbacks?

What’s a Scoring Rule?

It’s difficult to make honest predictions. Predictions are based on expectations, but expectations can hardly be disentangled from preferences. If you wish for a tax cut, and presidential candidate A is promising a tax cut, while candidate B is menacing a tax increase, it is difficult for you to objectively forecast the election’s result. Your values impinge on your beliefs.

Betting on horse racing is different. You can be enamored with the elegance of American Pharoah, but when it comes to placing your bet on the 2018 Triple Crown winner, it’s with unapologetic relief that you hear yourself utter to the teller: “Win on Justify.” Your disincentive to waste money made you set aside preference and side with expectation. Greed made you honest.

Indeed, betting markets are quite good at revealing the probability estimates of people playing the ponies. The problem with betting markets is that they are zero-sum games, where my gain is your loss: either the participants are irrational, or they have privileged information about the race. In the latter case, a rational person would not gamble against them.

Ordinary speculative markets, such as the stock market, are excellent at aggregating the dominant relevant opinions and information into market prices, which express, in turn, collective probability estimates. But the moment your market grows thin, as is the case, say, with the market for rotten mushrooms — where most people wouldn’t be caught dead trading — there is a liquidity problem: too few people engage in trading, so market prices do not represent significant collective probability estimates.

Scoring rules are very successful at eliciting individual assessments of event-related probabilities. A scoring rule is based on a mercenary principle: the better I forecast a future event, the higher my score, and the larger the monetary compensation I receive for it. A patron pays my reward in return for the elicited information. However, scoring rules are affected by the thick market problem: aggregating or pooling different people’s estimates into a single consensus is remarkably tricky, and the results are only partially reliable.

What Does the Logarithmic Scoring Rule Look Like?

I might be in the position of the tax-cut-loving voter mentioned above, unable to disentangle preference from expectation. The scoring rule is proper if it constrains me to maximize my expected score by reporting my probabilistic assessment truthfully, regardless of my unsolved dilemma between values and beliefs.

Hanson’s scoring rule of choice, and the best one for our purposes, is the logarithmic scoring rule, because it can be used both to reward my truthful report and to gauge the statistical likelihood of my prediction. It looks like this:
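(In the notation used below, and taking the logarithm to be natural:)

s_i = a_i + b \log(r_i) \qquad (1)

where r_i is the probability I report for event i, and a_i and b are free parameters of the rule.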

When it comes to rewards, equation (1) works this way: if, out of several mutually exclusive events, event i happened, and if I had previously assigned probability ri to event i, my compensation will be si. Math details aside, what matters is that if I nailed my prediction 100% (ri = 1, so the log term in equation (1) vanishes), my compensation amounts to ai. (We will discuss the factor b in a bit.)
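To make the properness claim above concrete, here is a minimal Python sketch, with the purely illustrative simplification ai = 0 and b = 1, that computes my expected score for a two-outcome report: whatever my true belief, the expectation peaks exactly when I report that belief.

```python
import math

def log_score(reported_prob, a=0.0, b=1.0):
    """Logarithmic scoring rule payout when the reported-on event occurs."""
    return a + b * math.log(reported_prob)

def expected_score(belief, report, a=0.0, b=1.0):
    """My expected payout for a two-outcome report, given my true belief."""
    return (belief * log_score(report, a, b)
            + (1 - belief) * log_score(1 - report, a, b))

belief = 0.7  # my honest probability for the event (illustrative value)
for report in (0.5, 0.6, 0.7, 0.8, 0.9):
    print(f"report {report:.1f} -> expected score {expected_score(belief, report):.4f}")
# The expected score is maximized at report == belief (0.7): the rule is proper.
```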

As regards the likelihood of my prediction: I may be as honest as the next guy, but if the next guy is more likely to make a correct prediction than I am, the LMSR will enable the patron to give his views more weight than mine.

The term “market” appears in LMSR as a signpost, to point out that Hanson designed the Logarithmic Market Scoring Rule as a market maker mechanism. Which brings us to the no-frills question: what’s a market maker?

A market maker is the institution (or the human, or, in the case of Gnosis’ automated market maker, the application) that sets prices for buy (bid) and sell (ask) orders, bears the risk of each trade — since all transactions occur with the market maker as buyer or seller — and may incur losses from trades. By its very presence, the market maker turns the market into a positive-sum game, thereby incentivizing rational traders to participate. And since any participant can trade with the market maker whenever s/he finds the current price attractive, the thin-market obstacle is overcome.

The Nitty-Gritty of How LMSR Works

This market trades in assets of the form “pay $1 if event i occurs.” To each asset i corresponds an event i. There are (in our simple descriptive model) n exhaustive and mutually exclusive assets and corresponding events. Final settlements occur after market closure, when a certain event i has occurred, at which point each share of asset i is worth $1 and each share of the other (n-1) assets is worth $0.

The market maker provides the initial liquidity in return for a fee on each transaction. Math details aside, the logarithmic scoring rule function (1) entails a maximum loss of b log(n) for the market maker, where n is the number of exhaustive and mutually exclusive events. The market maker can therefore increase or decrease its maximum loss by changing the factor b.
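For concreteness (assuming natural logarithms and a purely illustrative liquidity parameter): a binary market with n = 2 and b = 100 exposes the market maker to a worst-case loss of 100 · ln 2 ≈ 69.3 units of collateral, and doubling b to 200 doubles that worst case to about 138.6.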

The greater the sum risked by the market maker, the deeper the market, i.e. the larger the number and size of feasible trades (the initial liquidity sets an implicit upper limit on order sizes). We’ll presently see that the deeper the market, the less relevant the price slippage caused by any sizable trade.

The opposite is true, alas, for markets with small initial liquidity.

When deciding how much seed funding to put up, the market maker must therefore solve a two-horned problem: its goal is a market lively enough to bring in (more than) sufficient fees to compensate for the money put at risk.

This market has a continuous, inventory-based internal state; in other words, it is constantly characterized by a “net sales so far” vector of n elements, where each element counts the number of shares sold so far of the corresponding asset. At market closure, the prices derived from this state vector report the final consensus probability estimates, that is, the market’s probabilistic prediction of the eventual turn of events.

For each asset i in the LMSR automated market maker, the price of the pertinent share is a function of the factor b from reward equation (1) and of the outstanding quantities of all assets, combined through exponentials.

Let’s model, for instance, a market with just two assets, asset 1 and asset 2, and assume that the market maker has sold q1 shares of asset 1 and q2 shares of asset 2 (with thanks to David Pennock’s math). The price function is as simple as:
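(Following Pennock’s formulation:)

p_1 = \frac{e^{q_1/b}}{e^{q_1/b} + e^{q_2/b}} \qquad (2)

with the price of asset 2 defined symmetrically, so that the two prices always sum to 1.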

For infinitesimal trades, the bid-ask spread is zero. For any concrete, non-infinitesimal trade, the slippage between expected and paid price will be, roughly speaking, inversely proportional to the market’s depth.

This price slippage depends on the cost of the trade, which depends, in turn, on the trade size as well as on the price equation (2). In our two-asset-market model, the cost function looks like this:
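(Again in Pennock’s notation:)

C(q_1, q_2) = b \log\left(e^{q_1/b} + e^{q_2/b}\right)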

If a trader wants to buy x shares of asset 1, this transaction will cost them:
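C(q_1 + x, q_2) - C(q_1, q_2)

that is, the increase in the cost function caused by moving the state from (q1, q2) to (q1 + x, q2).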

After this trade has changed the outstanding quantity of asset 1, the price function (2) yields asset 1’s new current price for infinitesimal trades. A price change applies to asset 2 as well, of course, since a bundle holding one share of each asset always costs exactly $1. For each asset, the price slippage owing to the latest transaction amounts to the difference between the previous and the current price.
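Here is a minimal Python sketch of this two-asset mechanics, using the price and cost functions above and purely illustrative values for the liquidity factor b and the trade size:

```python
import math

def lmsr_cost(q1, q2, b):
    """LMSR cost function: C(q1, q2) = b * ln(exp(q1/b) + exp(q2/b))."""
    return b * math.log(math.exp(q1 / b) + math.exp(q2 / b))

def lmsr_price(q1, q2, b):
    """Instantaneous price of asset 1; asset 2's price is the complement."""
    e1, e2 = math.exp(q1 / b), math.exp(q2 / b)
    return e1 / (e1 + e2)

b = 100            # liquidity factor chosen by the market maker (illustrative)
q1, q2 = 0.0, 0.0  # net shares sold so far of asset 1 and asset 2
x = 40             # a trader buys 40 shares of asset 1

price_before = lmsr_price(q1, q2, b)                          # 0.500
trade_cost = lmsr_cost(q1 + x, q2, b) - lmsr_cost(q1, q2, b)  # ~22.0, i.e. ~$0.55 per share
price_after = lmsr_price(q1 + x, q2, b)                       # ~0.599

print(price_before, trade_cost, price_after)
```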

Notice that each price, expressed in American cents, is a point estimate of a probability. (Prices are therefore a direct readout of the internal state vector.)

Whenever my trade causes a price slippage, it is because I paid the scoring rule payment associated with the prevailing probability estimate of some event i and, at once, implicitly declared myself ready to auction off any of my portfolio’s shares to whoever is willing to offer the scoring rule payments associated with the new probability estimates. In doing so, I affected the current overall consensus on probability estimates.

After the market is closed and, out of n possible, exhaustive and mutually exclusive events, event i has demonstrably happened, it’s time for the final settlements. All traders whose portfolio holds shares of asset i will get $1 per share from the market maker. All other shares are worth $0. The market maker’s final balance is its total revenue from transactions and fees minus $1 for each outstanding share of asset i.

We are just left with a tiny yet hugely important detail. In the case of Gnosis’ LMSR automated market maker, anyone can start their own prediction market and mint the pertinent set of assets, as long as the latter are collateralized with seed funding in the event contract. Anyone can act as market maker, in sum. Gnosis’ application automates the process of minting and trading.

Elephants in a Radical Market

Time to wrap things up.

I neglected to mention that I owe the root of my title, “Radical Markets for Elephants,” to Posner & Weyl’s new book, Radical Markets.

Posner and Weyl advocate the auction as the quintessential Radical Market: by having people bid against one another for a given asset, this asset will end up in the hands of the person who is most likely to valorize it to the fullest — which is their favorite instance of optimal allocation. And when it comes to the optimal expression of individual preferences, Posner and Weyl advocate “quadratic voting”: a vote whose cost grows quadratically with the number of votes cast is a better source of valuations than the one-person, one-vote rule.

In the context of Robin Hanson’s LMSR market maker, as discussed, the marginal cost of the next share grows with the number of outstanding shares of that asset, which entails that casting one more (long) vote on the future of a certain state of affairs is an increasingly expensive expression, moneywise, of my (positive) expectations regarding that state of affairs. This leads to an optimal allocation of shares, or, which is the same, a most reliable expression of expectations.

The mathematical convergence between Hanson’s and Posner and Weyl‘s respective approaches to the intensity of preference and its optimal expression is striking.

They both tame the elephant in the brain, I dare say.

Remember the movie Spartacus?

I am the Invisible Hand. You are the Invisible Hand. We all are the Invisible Hand!

With thanks to Friederike Ernst, Alan Lu, and Nadja Beneš.
