Monty Hall, Algorithms, and Community

Information-crunching machines might be the future of collaboration.

Ezra Weller
Ezra’s Wellspring
12 min read · May 31, 2018


A game show riddle suggests that our future could be governed by algorithms. Problems like environmental destruction and world hunger, which we’ve hoped to address by educating millions or rallying them to the cause, may only be solvable by machines that can handle the volumes of data those problems contain.

The game show riddle is the Monty Hall Dilemma:

You’re a contestant on Monty’s Let’s Make a Deal way back in 1964. Monty presents you with three doors. Behind one is a new car, and behind the other two are goats, mysteriously. You get to choose one door: pick the car door and you win the car; pick a goat and you get nada. Before you open your chosen door, there’s a twist. Monty opens one of the unchosen doors, revealing a goat, and offers you the option of switching your choice to the final closed door. The 40 thousand dollar question is: to maximize your chance of winning the car, should you switch doors?

The mathematical answer is that you should switch. When you first pick a door, you’ve got a ⅓ chance of choosing the car, and there’s a ⅔ chance the car is behind one of the remaining doors. Monty eliminates one of the doors in that group, leaving a single door with a ⅔ chance of hiding the car, to which you may switch. Switching doubles your chance of winning the car.
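
If the ⅓ versus ⅔ argument still feels slippery, simulation settles it. Here’s a minimal Python sketch (mine, not anything from the show or a formal source) that plays the game many thousands of times with each strategy; switching wins roughly twice as often.

```python
import random

def play_round(switch: bool) -> bool:
    """Play one game; return True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    first_pick = random.choice(doors)
    # Monty opens a door that hides a goat and isn't the contestant's pick.
    monty_opens = random.choice([d for d in doors if d != first_pick and d != car])
    if switch:
        # The contestant switches to the one remaining closed door.
        final_pick = next(d for d in doors if d not in (first_pick, monty_opens))
    else:
        final_pick = first_pick
    return final_pick == car

trials = 100_000
stay_rate = sum(play_round(switch=False) for _ in range(trials)) / trials
switch_rate = sum(play_round(switch=True) for _ in range(trials)) / trials
print(f"win rate when staying:   {stay_rate:.3f}")   # ~0.33
print(f"win rate when switching: {switch_rate:.3f}")  # ~0.67
```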

If that’s not the answer you expected, you’re not alone. Evidently, the Dilemma, which we’ll call Monty, fools almost everyone at first. This piece steers the problem in a democratic direction, but many explanations of the math have been written and many subtleties explored; for the curious, they’re well worth seeking out.

We’ll go through a series of Monty Hall Dilemmas that parallel the information issues confronting our democracies today. The problem is inherently slippery, and missing or changed context transforms it completely. Explode it out into a group decision, and Monty sheds light on the difficulties of forging individual perspectives into smart collective choices. If combining our knowledge doesn’t elevate it, can democracy solve our trickiest problems? I think it can, but only with a higher bandwidth for processing information.

Monty the Illusionist

The Dilemma is a probabilistic illusion akin to the optical ones we grew up with and are still discovering. People get it wrong unreasonably often. Marilyn vos Savant’s original column apparently prompted “thousands” of letters insisting she’d made a mistake. Responders remained adamant despite additional explanation columns by vos Savant. Formal studies show a strong cross-cultural tendency to stick with the original door on the first try.¹

Our tendency to get this wrong is so powerful that “The Monty Hall Illusion” is an appropriate name. Some part of the problem abuses our probability instincts like a virtual reality headset fools our sense of space.

Monty gives us everything we need to know — we ought to apply a few grade school probability lessons and be done with it, but that’s not what happens. So, Monty’s first lesson: sometimes we’re bad at solving problems despite having all the right information and skills.

The Half Monty

Even with complete context, Monty is deceptive. With puzzle pieces missing, it’s impossible. Imagine your friend walks in halfway through the game: she sees two closed doors and one door opened to a goat. Monty explains that one of the doors hides a car, that you’ve chosen one of the closed doors, and that you now have the opportunity to switch. He doesn’t tell your friend when you chose your door or when he opened the goat door. Now he asks her to decide: should you switch your choice?

Without knowing whether you chose before or after the goat door was opened, she can’t solve the problem. If she believes you chose after the goat door was opened, the odds really are fifty-fifty from her perspective. Monty reminds us of a second lesson: lacking even a small amount of context can put a problem’s solution out of reach.
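
To see her predicament in code, here’s a hedged sketch of the game your friend might reasonably believe she’s watching: the contestant picks a door only after Monty has opened a goat door. In that version, neither closed door has an edge.

```python
import random

def late_pick_round() -> bool:
    """The contestant picks only after Monty has already opened a goat door."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    monty_opens = random.choice([d for d in doors if d != car])   # any goat door
    pick = random.choice([d for d in doors if d != monty_opens])  # one of the two closed doors
    return pick == car

trials = 100_000
win_rate = sum(late_pick_round() for _ in range(trials)) / trials
print(f"win rate with a late pick: {win_rate:.3f}")  # ~0.50
```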

The Full Monty

Now replace your friend with the entire audience: the decision of whether or not to switch doors will be put to a vote. Let’s assume the audience can come and go as they please, that the problem is new to them, and that the experiment will be run just one time. In this democratic variation, Monty’s two previous lessons imply that getting a majority “switch doors” vote from the audience is a tall order.

Sometimes we’re bad at solving problems despite having all the right information and skills. Since the audience members don’t all arrive on time, only some of them witness the beginning of the game. Others arrive early but don’t pay attention. Some do listen carefully from the beginning, but as we’ve seen, a likely majority of these still won’t vote to switch.

Lacking even a small amount of context can put a problem’s solution out of reach. Audience members that arrive late or don’t scrutinize the process may lack the information necessary to discover a “switch” vote’s advantage.

For audience members in isolation, then, it’s reasonable to expect less than half will vote to switch doors. But they need not be isolated: audience members can talk to each other freely, yell things out, maybe even throw together some signage. If some audience members know switching is best, could the minority convince a majority to vote “switch”? With an easier problem, maybe they could, but Monty’s tricky. In her original columns, Marilyn vos Savant’s thoughtful explanations failed to convince much of her audience. Her counterparts in our experiment would have the same challenge but none of vos Savant’s intellectual reputation or public platform. They might persuade some to switch to switching, but unless they convince a majority, the whole effort is moot.

I don’t have an empirical study to cite, but given Monty’s uncanny difficulty, I doubt that a voting audience can solve the riddle.
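
For what it’s worth, here’s a toy model of the vote with entirely made-up numbers: the audience size, the share who watch from the start, and the share of attentive viewers who see through the illusion are my own assumptions, not results from any study. Under anything like these assumptions, “switch” almost never carries a majority.

```python
import random

def audience_votes_switch(n=1000, saw_whole_game=0.6, sees_through_illusion=0.15):
    """Return True if a majority of one simulated audience votes 'switch'."""
    switch_votes = 0
    for _ in range(n):
        if random.random() < saw_whole_game:
            # Informed members still mostly fall for the illusion and vote "stay".
            votes_switch = random.random() < sees_through_illusion
        else:
            # Members lacking context see two closed doors and flip a coin.
            votes_switch = random.random() < 0.5
        switch_votes += votes_switch
    return switch_votes > n / 2

elections = 1000
wins = sum(audience_votes_switch() for _ in range(elections))
print(f"elections where 'switch' wins: {wins / elections:.1%}")  # essentially never
```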

The Monty Commodity

Markets are supposed to be wise, right? Another iteration of democratic Monty might be market-based. Instead of giving each audience member a vote for or against switching, we create two commodities they can buy: switch vouchers and stay vouchers. If you own a stay voucher and the original door choice is ultimately correct, you get $1, and vice versa for the switch vouchers. Both types of voucher are sold in unlimited quantity by Monty Hall himself, who adjusts their prices dynamically according to demand from the audience members.

A market like this should preserve more information about the audience’s preferences than a vote. Instead of a binary choice, each audience member can buy any number of either voucher type for whatever prices they’re willing to pay. When time’s up, the final price of each voucher is a distillation of all that preference information. The most common mistake in Monty’s problem is to think the odds are fifty-fifty, and if you’ve made that mistake, you shouldn’t be willing to pay more than ~50¢ for either voucher (50¢ being the expected payout from a 0.5 chance to win $1). If you’ve realized the answer, though, you’ll be willing to pay more for the switch vouchers (up to ~67¢, since they pay out ⅔ of the time). If audience members fall into only these two groups, the final price of switch vouchers should always be higher than that of stay vouchers. In this toy government, democracy-by-market might be smarter than democracy-by-vote.
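
Here’s a back-of-the-envelope version of that pricing logic, as a sketch rather than a real market model; the dollar figures just restate the probabilities above.

```python
# Each voucher pays $1.00 if its side of the bet wins, so a buyer's maximum
# fair price is the probability they assign to that outcome.
p_switch_wins = 2 / 3          # true chance the switched door hides the car
p_stay_wins = 1 - p_switch_wins

fair_switch_price = 1.00 * p_switch_wins   # ~$0.67
fair_stay_price = 1.00 * p_stay_wins       # ~$0.33
fooled_price = 1.00 * 0.5                  # a fifty-fifty believer's cap on either voucher

print(f"an informed buyer pays up to ${fair_switch_price:.2f} for a switch voucher")
print(f"and up to ${fair_stay_price:.2f} for a stay voucher")
print(f"a fifty-fifty believer won't pay more than ${fooled_price:.2f} for either")
```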

This idea comes from Robin Hanson’s “futarchy”, which suggests prediction markets as a potential replacement for voting in democratic governance.

Monty Gets Generalized

Admitting significant differences, I think the parallels between these thought experiments and real life democracies reveal the key role information processing plays in smart group decisions. When decisions akin to the Monty Hall Problem confront our societies — decisions where the best answer defies instinct — arriving at good solutions via voting may be unrealistic. To some degree, every group problem in a millions-strong society fits the mold: designing good policies for healthcare, the environment, education, or nearly anything else meant to serve millions is beyond the grasp of a voting public. Just like Monty, these problems require knowledge that many voters inevitably lack, and they fool our intuitions even when we spend time educating ourselves.

Our governments deal with this deficiency now by granting legislators full power over policy design, and we hope that elections somehow bind them to infuse laws with our values. Representative democracies do create functional policies, but the information loss is tremendous. For each citizen, there is a hypothetical approach to healthcare that leads to their best health outcomes, and that approach is hidden inside the citizen’s life story. Sum that info for the whole population, and it’s a gigantic data load. Much of it is destroyed when it gets jammed through the bottleneck of a few hundred congress members hired via majority votes in binary elections. The resulting policies are disappointing in a time when we expect higher-capacity social information processors.

We have at least two paths toward solving this governance information problem.

First, we can attack it directly with more effective communication and education. So in Monty, audience members who know switching is the answer get better at convincing the rest of the crowd, or a higher percentage of audience members start out better at probability. If that sounds simple, it isn’t. For all we know, the Monty Hall Dilemma’s difficulty might be tied to our unchanging human natures — maybe we can’t just “get better at it.” Even if we can learn, our culture is bad at agreeing on new facts, not just in Monty but all over. Call it an epistemic crisis, a post-truth era, or a fake news epidemic. If there’s a set of magic words that will convince Monty’s audience to switch doors, there’s no sign of it. Improving school education is a chicken-and-egg problem: we need better schools to create a smarter public, but we need a smarter public to create a better school system. We’ve no shortage of proposals to improve education, but actual practices and results have been changing incrementally at best.

A more likely path to better information processing might be a larger role for algorithms, which don’t need us pitiful humans to work. Algorithms can externalize social information processing, making us smarter without requiring human nature to change. Hanson’s futarchy and markets in general work this way. The price mechanism processes more information than voting without a noticeably heavier burden on individuals. You don’t have to work harder or do diligent research for your grocery store to give you a fair price on broccoli — it just does. Since the algorithm doesn’t depend on anyone in particular, it resists corruption as well.²

Futarchy is not the only such proposal:

  • DAOstack’s holographic consensus aims to enable huge groups of people to focus their attention on the few proposals that best fit their complex preferences.
  • I have suggested a social network stack, a governance-tinged system based on the most promising information tools already flourishing in the internet ecosystem.
  • Delfy is a community management toy I built that can refine and filter tons of verbal feedback into a short list of simple ideas.

The goal of these algorithms is to maximize the ratio of information reduced to meaning lost: to sift through the information sand dunes and pick up the few special grains that can represent all the rest. This is a generalized approach to Matan Field’s resilience and scalability governance problem: take in as much data as possible and output dramatically less, preserving the input’s essence as best you can, sort of like an extreme version of data compression or a kind of cultural machine learning.

Your brain does this all the time. Your muscles, organs, skin, and senses send constant drips of information up your nerves and into your skull, where your brain refines that ocean of data into the single set of instructions that guides your actions. The basement layer of your metabolism is mind-boggling. Each cell manages its own little energy economy, but your nervous system takes all that in and makes something simple: hunger. It tells you to go eat something, and it tells you at the right times, at the right frequency. That’s a remarkable feat of information wizardry.

Applications for this approach abound anytime a large group of people, who produce waves of information, need smaller bits of information, rules or instructions, to help regulate the group or coordinate a collective action:

  • Towns, cities, or states: Everyone has opinions about the rules our communities should adopt — on zoning, parks, roads, all that. This information is mostly lost and then poorly compressed by a system that depends solely on binary votes every few years. There’s lots of potential here for information algorithms to help us make better policies.
  • Companies: They already have information algorithms for their customers, and they work well. Facebook collects so much data on you because it knows how to use it. They might start thinking about their workers’ information the same way, if they don’t already. A company of thousands is quite like a town of thousands, after all.
  • Education: Scientific consensus is notoriously hard to nail down, and doing so is exactly this kind of information problem. Hundreds or thousands of papers on a topic get published in a decade, and when the subject is urgent, we need a way to reduce that mountain of papers fairly into a message that can spread among those the science affects.
  • Culture: Anti-rival networks like reddit and Youtube are the best working examples of this kind of algorithm. Legions of users submit a breathtaking flow of content, and these websites transform that flow, created by millions, into a feed that is easily understood by a single person, while still reflecting the essential pulse of the initial torrent. You can see culture changing if you visit Youtube’s homepage daily. The videos themselves might not always be great, but the mechanism that puts them there is significant. Only Youtube’s engineers know its equations for ordering and recommending videos, but they certainly take in more variables than a binary vote, including watch time, topic tags, and other videos viewed by the same users (a hypothetical sketch of this kind of scoring follows this list).
  • News: The so-called democratization of media means we’re flooded with internet news sources. That’s the kind of information glut asking for an algorithm to make it more useful. Building an algorithm to sort the news noise by honesty, evidence, and impact is hard, but I think crowdsourcing the “verifiability” of claims has potential.
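
To show the shape of that kind of compression, here’s a purely hypothetical ranking function. The feature names and weights are invented for illustration and aren’t anyone’s actual formula; the point is simply that many signals per item go in and one ordered feed comes out.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    avg_watch_minutes: float  # how long viewers actually watch
    like_rate: float          # fraction of viewers who like the video
    topic_match: float        # 0-1 overlap with this viewer's watch history

def score(v: Video) -> float:
    # Hand-picked weights for illustration; a real system would learn them.
    return 0.5 * v.avg_watch_minutes + 3.0 * v.like_rate + 2.0 * v.topic_match

feed = [
    Video("goat care basics", 4.2, 0.61, 0.9),
    Video("probability puzzles explained", 7.5, 0.48, 0.4),
    Video("vintage car restoration", 2.1, 0.72, 0.1),
]
for v in sorted(feed, key=score, reverse=True):
    print(f"{score(v):5.2f}  {v.title}")
```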

Each use-case calls for a specialized algorithm that can reduce input information according to case-appropriate values. If the news is about finding facts, its ideal algorithm compresses information differently than Youtube, which values watch time, or a scientific consensus algorithm, which might use both fact-finding and popularity in combination.

You got nerves, kid. Use em.

Futarchy, holographic consensus, social network stacks, and sites like Youtube or reddit are starting points for designing these algorithms, but we’re going to see endless variation. Creating successful systems to manage so much information is a massive task, but I think the task will be undertaken, because the upside is big, and the infrastructure is built. Global internet access means data is zipping around at unprecedented rates, and this rocketing information growth is the genesis of these algorithms’ potential. It’s because of the internet that Facebook’s business model works. Because of the internet, more words have been written in the past 10 years than in the previous 10,000. The internet is what makes electoral politics feel so slow and distant by comparison.

We’ve built society a new nervous system — but it’s mostly firing at random right now, flooding our screens with endless lists of addictive stimuli, entertaining but not empowering. Civilization’s muscles are only twitching, and we’re building the software that will really get them moving… Apart, we’ll probably be getting stumped by the Monty Hall Dilemma for a long time, but together, maybe we can get a little smarter.

Footnotes

¹ A mystery in itself, since you might expect half to swap doors if they believe the odds are fifty-fifty.

² But it isn’t incorruptible. Much research has been done on market manipulation, and ambiguous incentives can motivate bad actions instead of good, both in people and machines.

³ Recapping Monty, the voting version takes in a boolean input from each voter, while the commodified version takes in how many of each commodity each person bought, and at what price. With a larger input (greater information reduction) and the added nuance of pricing (less information loss), I’d expect the commodified version to better represent the audience’s views. This fits the earlier analysis pretty well.

If you know of any research relevant to this topic or you’ve got any kind of feedback, please post below! Did I make any mistakes? Any and all comments appreciated. 🙂

Ezra Weller
Ezra’s Wellspring

co-founder of Groupmuse, communicator at DAOstack, M0ZRAT sometimes