Prediction Machines: The Simple Economics of Artificial Intelligence

Exploring how machines and humans are working together better than expected, and how society might change tomorrow.

Mark Namkoong Life+Times
6 min read · Dec 11, 2018
A beautiful day in the neighborhood, perhaps, somewhere in Southern California? I wonder if there is a restaurant selling delightful hamburgers and hot dogs somewhere down there. Also, whose tree house is that? Lucky guy. But look closer, and you will also notice something special about that bus. What's so special about it?

Consider the disruptive impact of the Model T; Ford's revolutionary car didn't just deliver a new vision of the automobile as a product, it also changed forever how motor vehicles are produced, how they are used, and the very nature of the automotive industry. Today a vehicle has arrived with similar potential:

Olli.

— IBM press release July 20, 2016.

Passengers relax along the busy, industrial corridor of town. I wonder how their day is going. Where are they going next? The motorcycle rider observes the needles of a lone pine tree, surrounded by concrete jungle. Unbeknownst to him, IBM and Local Motors' Olli bus is communicating with the outside world too.

Humans and machines both have failings.

Without knowing what they are, we cannot assess how machines and humans should work together to generate predictions. Why? Because of an idea that dates back to Adam Smith's eighteenth-century economic thinking on the division of labor: allocate roles based on relative strengths.

Prediction Machines by Ajay Agrawal, Joshua Gans, and Avi Goldfarb.

A fleet of Olli buses in action!

Prediction Machines: The Simple Economics of Artificial Intelligence gives us a game to play. Are you ready for it?

Consider this sequence.

O XX O X O X O X O XX OO XX O X O XXX O XX

See if you can write the next line in the sequence.

You'll notice by eye that the line contains a few more Xs than Os, giving us an intuition that our next line should be somewhere between two-thirds X and one-third O. A statistician might count and figure that 60 percent of the letters are X and 40 percent O. Our likely answer is to write down a series of Xs and Os accordingly, using these clues.

Figuring each letter has a 60 percent chance of being X and a 40 percent chance of being O, it feels natural to go off that intuition and guess letter by letter.

“The last two were X, so this is probably O…”

“The last one was O, so this is probably an X, then O, then two Xs…”

The human solution.

But a better solution?

Just pick X every time, and you'll be correct 60 percent of the time. Human nature says to puzzle over what each letter might be, but a machine learning system would simply see that matching the frequencies yields an expected accuracy of 0.6² + 0.4² = 0.52.

If you always choose X, you'll be right 60 percent of the time.

If you always choose O, you'll be right 40 percent of the time.

Lots of hugs and kisses.

If you randomize 60/40, as most participants do, your prediction ends up being correct 52 percent of the time, only slightly better than if you had not bothered to assess relative frequencies of Xs and Os and instead just guessed one or the other (50/50). What such experiments tell us is that humans are poor statisticians, even in situations when they are not too bad at assessing probabilities.

— Prediction Machines: The Simple Economics of Artificial Intelligence
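The gap between the two strategies is easy to verify for yourself. Here is a small simulation (the sequence is modeled as independent letters with the 60/40 frequencies the book describes) comparing the "always pick X" rule with the frequency-matching guessing most people do:

```python
import random

random.seed(0)

P_X = 0.6          # observed frequency of X in the sequence
TRIALS = 100_000   # number of simulated guesses

def accuracy(strategy):
    """Fraction of trials where the guessed letter matches the actual letter."""
    correct = 0
    for _ in range(TRIALS):
        actual = "X" if random.random() < P_X else "O"
        correct += (strategy() == actual)
    return correct / TRIALS

# Machine-style strategy: always guess the majority letter.
def always_x():
    return "X"

# Human-style strategy: randomize guesses to match the 60/40 frequencies.
def match_frequencies():
    return "X" if random.random() < P_X else "O"

print(f"always X:    {accuracy(always_x):.3f}")          # about 0.60
print(f"match 60/40: {accuracy(match_frequencies):.3f}")  # about 0.52
```

The second number lands near 0.6 × 0.6 + 0.4 × 0.4 = 0.52, exactly the expected accuracy the book quotes.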

“Powered by IBM’s Watson Natural Language API and Internet of Things (IoT) for Automotive, Olli can take passengers to requested destinations, provide recommendations on where to go and answer questions about the vehicle, the journey and the surrounding environment. The system is context aware, sensing the environment to adjust how it interacts with passengers. For instance, on a hot day, Olli might suggest going for some ice cream — and then transport the passengers to the closest Ben & Jerry’s. ” — IBM July 20, 2016.

Humans are gifted with biological software, accomplishing a myriad of tasks without needing much experience. Children can recognize their elementary school classmates when they later see them as adults. Social interactions resume easily even if we haven't seen someone in years. Faces are remembered after only one glance, even from a different angle. Of course, names are harder to come by. Humans excel with very little data. Our automated systems struggle with sparse data, or with environments whose inputs change abruptly over time. While predicting rainfall is easy, predicting earthquakes or volcanic eruptions is incredibly hard (despite better technology). Political campaigns are unpredictable far in the future, but we can reasonably estimate population demographics in the 2020s and 2030s. Conversely, we do poorly with a mass influx of information. Our game from above, silly and rather pointless, shows how humans struggle to harness raw processing power. Let's explore more applicable examples of data automation.

“It’s hard to overstate the potential level of disruption this event could cause to the global automotive industry. The addition of self-driving capabilities and a layer of artificial intelligence to make the autonomous car experience acceptable and appealing to consumers requires a level of expertise that not all automakers equally possess.” — IBM July 20, 2016.

Our greatest fear about automation lies in the displacement of the jobs that secure families and livelihoods. Vacuums that hum along by themselves while asking how your day went are not the big one. So what is? Perhaps it is a company like Chisel, founded in 2015 by Ron Glozman. Chisel is a start-up that essentially automates the redacting of legal documents. Before, there was no way to tell what constituted sensitive and non-sensitive content in papers without a person reading line by line. The redacting process was time-consuming, as is the vast majority of what lawyers do: drafting documents and contracts. In our lifetime, there will never be an automaton standing in court to address a judge. But the busywork of law firms, once done by interns and young associates, has been displaced by electronic databases and start-ups like Chisel. The redacting process was clumsy at first; sometimes the AI blacked out lines that shouldn't have been. Early tests had Chisel guide humans on which lines to redact, similar to how Microsoft Word offers suggestions when it thinks a word is misspelled. Chisel's AI could be modulated to be aggressive or lenient in its redaction. If someone felt a document had highly sensitive info that shouldn't be disclosed, Chisel's AI could be turned up. If a document more or less didn't have sensitive material, or the majority of its content needed to be disclosed, you turned the knob down. That was the company's first service; you may imagine how much better these tools are now.
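That "knob" is just a decision threshold on a sensitivity score. Here is a minimal sketch of the idea — the keyword list and scoring are invented for illustration, and Chisel's actual model is of course far more sophisticated and not public:

```python
# Hypothetical sensitivity scores per term; a real system would use a
# trained classifier, not a keyword table.
SENSITIVE_TERMS = {"ssn": 1.0, "salary": 0.8, "address": 0.6, "meeting": 0.2}

def sensitivity(line):
    """Crude sensitivity score: the highest-scoring term the line contains."""
    words = line.lower().split()
    return max((SENSITIVE_TERMS.get(w, 0.0) for w in words), default=0.0)

def redact(lines, threshold):
    """Black out any line scoring at or above the threshold.
    A lower threshold means more aggressive redaction (the knob turned up)."""
    return ["█" * len(l) if sensitivity(l) >= threshold else l for l in lines]

doc = ["Client ssn 123-45-6789", "Quarterly meeting notes", "Base salary details"]
print(redact(doc, threshold=0.5))  # aggressive: blacks out the ssn and salary lines
print(redact(doc, threshold=0.9))  # lenient: blacks out only the ssn line
```

Turning the knob trades false positives (over-redaction) against false negatives (leaks), which is exactly the judgment call the early human-in-the-loop tests were making.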

“By using new technologies like 3-D printing, crowdsourced development, cloud-based APIs and advanced artificial intelligence, Local Motors has been able to move faster than the giant carmakers — and develop more products that are more innovative in many regards. The arrival of such technologies could have a democratizing impact on the car industry, allowing tiny startups to compete and even move beyond the industry’s established behemoths.” — IBM July 20, 2016.

Daniel Kahneman and Amos Tversky published a paper outlining cognitive biases and human heuristics. The duo gave participants one trivia question: Hospital A delivers forty-five births per day. Hospital B delivers just fifteen births per day. So, which hospital will have more days when 60 percent of the births (or more) are boys? The answer is… well, let us come back to it in a second. The studies were done to highlight our weakness in pure math, which matters especially in medicine. Amos Tversky told physicians who specialized in lung cancer about two treatments. According to five-year survival rates, surgery is a better option than radiation, but surgery is riskier up front. Tversky told one group of physicians "the one-month survival rate of surgery is 90 percent," and 84 percent of them recommended the surgery. When the framing flipped to "there is 10 percent mortality in the first month," only 50 percent of the physicians recommended surgery. The same data. Machines run by algorithms don't have these biases. An algorithm created by Harvard and MIT researchers in 2016 successfully detected metastatic breast cancer in biopsy slides. The study found the algorithm detected metastatic breast cancer 92.5 percent of the time, compared with 96.6 percent for a human pathologist.

When the algorithm and human pathologist worked together, metastatic breast cancer was correctly detected 99.5% of the time.

Each one complemented the other's bias.

The algorithm was more often correct in identifying when cancer wasn't there. The pathologist, however, was almost always right when saying the cancer actually was there. Obviously, the algorithm will continue to get better.
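In fact, the 99.5 percent combined figure is roughly what you would expect if the two readers' misses were independent — a back-of-the-envelope check (independence is my assumption here, not something the study states):

```python
# Reported individual detection rates from the 2016 study.
human = 0.966
algorithm = 0.925

# If their misses are roughly independent, a cancer escapes detection
# only when BOTH the pathologist and the algorithm miss it.
both_miss = (1 - human) * (1 - algorithm)
combined = 1 - both_miss
print(f"{combined:.3f}")  # about 0.997, close to the reported 99.5 percent
```

The combination beats either reader alone precisely because their errors don't overlap much, which is the book's division-of-labor point in miniature.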

As for the answer to the original question?

Hospital B.

The smaller hospital is correct because the larger the number of events (in this case, births), the likelier each daily outcome will be close to the average (in this case, 50 percent). To see how this works, imagine flipping a coin. You are more likely to get heads every time if you flip five coins than if you flip fifty. Thus the smaller hospital — precisely because it has fewer births — is more likely to have more extreme outcomes away from the average.

— Prediction Machines: The Simple Economics of Artificial Intelligence.

By Ajay Agrawal, Joshua Gans, and Avi Goldfarb.
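The coin-flip intuition is easy to confirm with a simulation: model each birth as a fair coin and count, over many simulated days, how often a hospital's daily share of boys reaches 60 percent. The hospital sizes are the ones from the question; the day count is arbitrary.

```python
import random

random.seed(1)

DAYS = 10_000  # simulated days per hospital

def extreme_days(births_per_day):
    """Count days on which 60 percent or more of the births were boys."""
    extreme = 0
    for _ in range(DAYS):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day >= 0.6:
            extreme += 1
    return extreme

print("Hospital A (45 births/day):", extreme_days(45))
print("Hospital B (15 births/day):", extreme_days(15))  # far more extreme days
```

Hospital B's count comes out several times higher: with only fifteen births, a lopsided day takes just a few extra boys, while forty-five births pull each day strongly toward the 50 percent average.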

Read IBM's official press release here.
