What We Can’t Know in Advance

Chris Barber
Published in Sightlines View
Mar 16, 2016

Humans are good at creating outcomes no one wants, from polluted water bodies to clogged healthcare systems. We are also good at creating surprising and beautiful results that no one could have predicted, from ancient cities that last millennia to the near-eradication of diseases like polio.

At Sightlines View, we are interested in why one happens and not the other. When does organized human activity lead to breakdown, and when does it lead to extraordinary outcomes?

We look at epic failures, epic successes, and interesting stories that fall somewhere in-between.

Because our view is inherently a systems view, one of the first themes we’ll touch on is the counter-intuitive or emergent nature of the systems we humans create. Few stories drive this point home more than Stanley Milgram’s efforts to understand how so many people could be complicit in the Holocaust, through his famous Obedience studies conducted at Yale in the 1960s. Milgram himself was fond of the following quote, even if he often had to be reminded of the wording:

Life can only be understood backwards; but it must be lived forwards. — Søren Kierkegaard

Looking ahead to his study, Milgram assembled 40 renowned psychologists and asked them what they thought would happen if a researcher asked a study participant to harm another person, shocking him until he stopped being able to communicate, in the name of science. Everyone, including Milgram, was confident that most participants would simply object and that would be the end of that. They made this judgment even after the destruction of two World Wars and the Holocaust. It seemed like an obvious and intuitive judgment given what we know about human nature.

However, with uncanny consistency, across ethnicities and geographies, Milgram found that about 65% of the participants would proceed to the shocking conclusion of the experiment, in which they believed they were applying 450 volts of electricity to a “learner” who had failed to memorize words and who minutes before had been crying out in pain but had since gone silent. Why? Because, Milgram writes, “They were politely told to.”

If you have not seen the original Obedience film, the trailer for Experimenter, the 2015 feature film about the study, brings the point home in 60 seconds.

How could 40 psychologists, not to mention most people hearing about the study setup for the first time today, make such an error in judgment, believing that almost no one would go all the way? Most people believe they are pretty good judges of human nature, and if you know how humans work, you ought to be able to predict how people will act in a given circumstance, right?

Strangely, this isn’t the case. Apart from what this experiment teaches us about authority and obedience (it’s shown to every first-year cadet at West Point), it also teaches us something important about the limits of knowledge.

In a systems view, even a modestly complex system such as this, with a researcher, a participant, and some rules, will have emergent behavior. That means you are unlikely to know what it will do until you run some kind of simulation (like Milgram's experiment) or let the system itself run forward (take any number of historical examples where people obeyed authority when they wish they had not). Traditional ways of exploring problems in business and society, at least in the West, have favored a non-systems view in which it is assumed you can predict outcomes by bringing in experts who understand the different parts of the system. For example, in a later debrief, one participant described to her husband (hopefully an expert in her behavior) what she had done, and he looked at her like she was crazy. The results defied everything she knew about herself, and everything others knew about her too. “I don’t like hurting anyone,” she said, “and I can’t understand myself going all the way.”
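
To make the point concrete, here is a toy Monte Carlo sketch in Python. It is not Milgram's data; the per-step probability is a made-up number chosen purely for illustration. It shows how locally reasonable behavior ("I'll obey just one more small increment") can compound into an aggregate outcome that is hard to eyeball in advance:

```python
# Toy Monte Carlo sketch: small, locally reasonable steps compound.
# The per-step probability is hypothetical, not Milgram's data.
import random

STEPS = 30          # 15-volt increments, from 15V up to 450V
P_CONTINUE = 0.986  # assumed chance of obeying at each increment

def run_participant():
    """Simulate one participant stepping through the shock levels."""
    for _ in range(STEPS):
        if random.random() > P_CONTINUE:
            return False  # refused and stopped the experiment
    return True           # threw the final 450-volt switch

trials = 100_000
completions = sum(run_participant() for _ in range(trials))
print(f"{completions / trials:.1%} went all the way")  # roughly 65%
```

A 98.6% chance of taking any single step does not sound like 65% of people reaching 450 volts, yet that is what falls out when you run the system forward. The only reliable way to see the aggregate figure is to simulate it.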

Various forms of simulation-thinking (visualizing, modeling, experimenting) have been around forever, and are finally going mainstream. It's not impossible to predict outcomes in a complex system; in fact, some people and organizations are very good at it. What sets these people apart is that they look both backwards and forwards.

For example, imagine Milgram had conducted a meta-experiment on prediction. For the control group, he gathered 40 experts from psychology, business, and government and asked them to predict how many people would go all the way in his experiment. Imagine he directed them to draw only on past research and the assumptions they had developed over their lifetimes. For the experimental group, he gathered together 10 freshmen at his university, asked them the same question, and directed them to suspend judgment until they had run the experiment forward with 50 of their friends, allowing it to unfold in a messy fashion due to their lack of experience and skill.

For a fraction of the cost and ego, we can imagine Milgram obtaining slightly more accurate results, or at least some startlingly useful hypotheses, from the less-expert group. This is a silly thought experiment, but when you consider what is at stake when we try to predict and shape behavior in complex systems, the conclusion is anything but silly.

At Sightlines View we will highlight stories where people find success with various forms of simulation-thinking. We look at how these methods get built into routine workflows, from the Monday morning staff meeting to years-long strategic initiatives. We also look at current events through a systems view, especially when the results are surprising, and at new advances in machine and human learning, and what each can tell us about the other.

Recently, Lee Sedol, one of the world's foremost experts in the game of Go, was beaten by a computer program named AlphaGo. Go is a game vastly more complex and emergent than chess, with more possible positions than there are atoms in the universe. The way AlphaGo's engineers trained it followed this same pattern. They didn't feed the program a book of Go best practices. Instead, they had AlphaGo look backward by studying “30 million moves from games played by human experts, until it could predict the human move 57% of the time.” Then, they had AlphaGo look forward by “playing thousands of games between its neural networks, and gradually improving them using a trial-and-error process known as reinforcement learning.”
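
As a rough sketch of that two-phase recipe, here is a toy version in Python applied to the far simpler game of Nim (take one to three stones from a pile; whoever takes the last stone wins). Everything here is illustrative: a small tabular softmax policy stands in for AlphaGo's neural networks, the "expert" is the known optimal Nim strategy, and the learning rates and game counts are arbitrary. It shows the pattern (imitate experts, then improve by self-play reinforcement learning), not DeepMind's actual methods:

```python
# Toy two-phase training on Nim: imitate an expert, then improve via
# self-play. A tabular softmax policy stands in for neural networks.
import math
import random
from collections import defaultdict

PILE, MOVES = 21, (1, 2, 3)
prefs = defaultdict(lambda: {m: 0.0 for m in MOVES})  # policy "logits"

def legal(state):
    return [m for m in MOVES if m <= state]

def probs(state):
    ms = legal(state)
    weights = [math.exp(prefs[state][m]) for m in ms]
    total = sum(weights)
    return {m: w / total for m, w in zip(ms, weights)}

def sample_move(state):
    p = probs(state)
    return random.choices(list(p), weights=list(p.values()))[0]

def nudge(state, move, step):
    # Softmax policy-gradient step toward (step > 0) or away from `move`.
    p = probs(state)
    for m in p:
        prefs[state][m] += step * ((m == move) - p[m])

# Phase 1: look backward -- imitate an "expert" playing the known optimal
# strategy (leave your opponent a multiple of 4 whenever possible).
def expert_move(state):
    return state % 4 if state % 4 in legal(state) else random.choice(legal(state))

for _ in range(5_000):
    s = random.randint(1, PILE)
    nudge(s, expert_move(s), 0.1)

# Phase 2: look forward -- self-play with a REINFORCE-style update:
# reinforce the winner's moves, penalize the loser's.
for _ in range(20_000):
    state, player, history = PILE, 0, []
    while state > 0:
        move = sample_move(state)
        history.append((player, state, move))
        state -= move
        player ^= 1
    winner = player ^ 1  # whoever just took the last stone
    for who, s, m in history:
        nudge(s, m, 0.05 if who == winner else -0.05)

# The policy should now mostly prefer moves that leave a multiple of 4.
print({s: max(probs(s), key=probs(s).get) for s in range(1, 9)})
```

Even at toy scale the shape is the same: phase one looks backward at expert play to get a reasonable starting policy, and phase two looks forward, letting the system play itself and learn from how the games actually turn out.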

It is tempting to think we can apply all that we know from studying the past to predict and shape what will happen in the future. But if we seek to understand things only by looking backward, as Kierkegaard implies we must, we will continue to produce outcomes that confound us all.

To connect and read more, please leave a comment or visit http://sightlinesgroup.com.
