We Don’t Know How You Make a Decision

A new study shows we cannot tell how you’re using the evidence before you

Mark Humphries
The Spike
9 min read · May 21, 2020


Lumberjack making a decision. Credit: Photo by CIRA/.CA

We like to think of ourselves as rational beings. Even if we’re patently not. When we make difficult decisions, we like to think we add up the evidence before us. This accumulation may be conscious, the reading of endless online reviews of the smorgasbord of smartphones, accumulating endless pros and cons about the cameras and screens and ports and chips and tiny bezel differences of seemingly identical black rectangles before taking the plunge. This accumulation may be subconscious, as your mind notes a rustle of grass, then the flash of striped fur, then the barely perceptible breathing of a predator and shouts “tiger!” at you — you sitting reading in a suburban English garden as your ginger cat nonchalantly strolls out of the undergrowth. Or, more sadly, the accumulation may be of the absence of evidence, when the passing days of silence feed the growing realisation that the job is not yours.

The big theories of how we make decisions also like to believe we sample and accumulate the evidence before us. From these theories stem big research programmes into how people make bad decisions, how they might make better ones, and how the brain represents and uses that evidence to make decisions. All assuming: we add up evidence in our heads. But do we?

In a new paper, Gabriel Stine and colleagues in Mike Shadlen’s lab have shown a simple but terrifying thing: using our standard tasks and measurements we cannot tell if someone is adding up evidence to make a decision. Potentially hundreds — thousands? — of studies of decision making could be based on a faulty basic idea. So how do we make up our minds?

The big theories really are all about adding up evidence. For each option that we have to choose between, these theories say there is a counter which keeps track of the total evidence for that option. Evidence accumulates until one of the counters crosses a threshold. In some theories these counters are independent, simply racing each other. Other theories say that evidence in favour of one option counts as evidence against all the others, so counters will go up and down with each new bit of evidence. What they all have in common is that evidence over time is used, is added up — let’s call these the history theories. They think your brain is like a juror listening patiently to the cases of both the prosecution and defence, imbibing the parade of evidence and testimony, to slowly but surely arrive at a final, cumulative verdict.
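This counters-and-threshold machinery is easy to sketch in code. Below is a toy race model in Python — my illustration, not the formal models from the paper — with one counter per option, each fed by pieces of evidence, and the first counter past a threshold winning. The threshold and the evidence stream are invented numbers.

```python
import random

def race_to_threshold(evidence_stream, threshold=10.0):
    """Toy race model: one counter per option; each piece of evidence
    adds to its option's counter; the first counter past threshold wins."""
    totals = {"left": 0.0, "right": 0.0}
    for t, (option, strength) in enumerate(evidence_stream, start=1):
        totals[option] += strength
        if totals[option] >= threshold:
            return option, t               # the choice, and how long it took
    # no counter crossed threshold: go with the current leader
    return max(totals, key=totals.get), len(evidence_stream)

# A noisy stream that slightly favours "right" (made-up numbers).
rng = random.Random(1)
stream = [("right" if rng.random() < 0.6 else "left", rng.uniform(0.5, 1.5))
          for _ in range(100)]
choice, steps = race_to_threshold(stream)
```

In the competing-counters variants, evidence for one option would also subtract from the others; the race version above keeps the counters independent.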

But, say Stine and co, consider these two other ways of making a decision. In one, let’s call it “extreme”, you just wait until a bit of evidence arrives that is so big it makes you choose that option. No adding up of evidence over time, no interest in its history. Just checking each bit as it arrives and waiting to pounce. Like a juror passively watching the case unfold until seeing the prosecutor brandishing the blood-stained knife from the defendant’s kitchen drawer, and instantly deciding “guilty!”

In the other way, let’s call it “snapshot”, you just randomly pay attention to a single bit of evidence, whenever you fancy, and decide based solely on that single bit of evidence. Like a juror snoozing soundly throughout the case, stirring only briefly during the defence’s character witness giving an impassioned plea for the defendant’s good nature and charity work, and deciding “innocent!” before returning to a gentle slumber.

Both these ways of making a decision bear little resemblance to the careful adding up of evidence. But Stine and co show that, weirdly, they make many of the same predictions about how people should behave when they do make decisions. Which means that we cannot easily tell how they are making a decision.
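To make the contrast concrete, here is a sketch (in Python, with invented thresholds) of all three rules applied to the same stream of signed evidence samples, where positive values favour “right”:

```python
import random

def decide_history(samples, threshold=8.0):
    """Add up every sample; decide when the running total is big enough."""
    total = 0.0
    for t, s in enumerate(samples, start=1):
        total += s
        if abs(total) >= threshold:
            return ("right" if total > 0 else "left"), t
    return ("right" if total >= 0 else "left"), len(samples)

def decide_extreme(samples, big=2.5):
    """Ignore the running total; pounce on the first sample big enough."""
    for t, s in enumerate(samples, start=1):
        if abs(s) >= big:
            return ("right" if s > 0 else "left"), t
    return ("right" if samples[-1] >= 0 else "left"), len(samples)

def decide_snapshot(samples, rng):
    """Look once, at a random moment, and decide on that sample alone."""
    t = rng.randrange(len(samples))
    return ("right" if samples[t] >= 0 else "left"), t + 1

rng = random.Random(0)
samples = [rng.gauss(0.4, 1.0) for _ in range(200)]   # mean drifts rightward
verdicts = {
    "history": decide_history(samples),
    "extreme": decide_extreme(samples, big=2.5),
    "snapshot": decide_snapshot(samples, rng),
}
```

On evidence that genuinely favours one option, all three rules will tend to agree on the choice — which is exactly the problem the paper points out.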

Even when we set them a task that, surely, they can only do by adding up evidence. One classic task in the lab is watching some randomly moving dots, and deciding — are the dots moving left or right? Some small fraction of the dots are all moving in one of those directions; the more dots that cohesively move in the same direction, the easier the decision. This is the gold standard task for decisions based on evidence: watch the screen, see the dots, and slowly accumulate evidence for the direction in which the dots are moving.

Two snapshot examples of the dot motion task. On the left is an easy version: half of the dots are moving to the right, the rest randomly. On the right is the impossible version: none of the dots are moving consistently in the same direction, so the viewer has to guess (unknown to them). Dots are moving from the open to the filled circles.

But Stine and co show that even in this kind of task we cannot tell if people are adding up evidence, because those other ways of making a decision — the extremes, the snapshots — predict the same behaviour.

They all predict that the harder the task is for you (say, the fewer dots moving in the same direction), the more errors you will make. Up to the point where, unbeknownst to you, all the dots are moving randomly, so the evidence is totally ambiguous, and any decision you make is a total guess. The exact relationship between difficulty and errors is a very particular S-shape:

The S-shaped curve of difficulty. The easier the task, the fewer errors are made. The horizontal axis gives the proportion of dots moving in one of the two choice directions (left or right). The vertical axis gives the proportion of times the viewer chose “right” for that proportion of dots. When the dots are clearly moving left, viewers rarely make the error of choosing “right”. When no dots are moving consistently (0%), the result is guessing.

And all three theories of making a decision give the exact same relationship. The history theories do so because the more the dots move in the same direction, the easier it is to add up evidence for that direction, and fewer errors result. For the extreme theories, the more the dots move in the same direction, the more likely it is that a big enough piece of evidence will turn up for the correct decision. And for the snapshot theories, the more the dots move in the same direction, the more likely it is they will be obviously moving in the correct direction when you happen to pay attention to them. In short, a task designed to test how we make decisions by adding up evidence, but which people can seemingly solve without doing so. Oops.
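You can check this with a quick simulation. The sketch below (Python, with invented parameters) runs many trials of each rule across a range of signed “coherences” and tallies the proportion of “right” choices; for every rule, that proportion climbs from near 0 to near 1 as the evidence swings rightward:

```python
import random

def choose_right(samples, rule, rng):
    """Return True if the given rule would choose 'right' on this trial."""
    if rule == "history":                       # add up everything
        return sum(samples) >= 0
    if rule == "extreme":                       # first big-enough sample
        for s in samples:
            if abs(s) >= 2.0:
                return s > 0
        return samples[-1] >= 0
    return rng.choice(samples) >= 0             # snapshot: one random sample

rng = random.Random(42)
coherences = [-0.4, -0.2, 0.0, 0.2, 0.4]        # signed "motion strength"
curves = {}
for rule in ("history", "extreme", "snapshot"):
    curves[rule] = []
    for c in coherences:
        trials = 400
        rightward = sum(
            choose_right([rng.gauss(c, 1.0) for _ in range(50)], rule, rng)
            for _ in range(trials))
        curves[rule].append(rightward / trials)
```

The exact steepness of each curve differs, but each traces the same qualitative S-shape — so the shape of the choice curve alone cannot tell the rules apart.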

Stine and co then asked: what if we give people as much time as they want to make a decision, can we rule out the other theories that way? We already know that when people can take as long as they like, then they naturally think longer for harder decisions; we know, for example, that they think for much longer when a tiny handful of dots are moving right than when half of them are. Like this:

As stimulus strength increases, the viewer’s reaction time falls exponentially. Stimulus strength could be, for example, the proportion of dots moving in one direction.

This immediately rules out the snapshot theories: there can be no relationship between how hard the decision is and the time it takes to make it when you’re choosing a time at random. And this curve naturally supports the history theories, for they predict that the weaker the evidence the more evidence people will want to add up before making a decision, and so the longer they take.

But, you guessed it: the “extreme” theories can give exactly the same relationship between difficulty and time to make a decision. Why? Because when evidence is harder to discern, you have to wait longer for a big enough bit of evidence to turn up. Whether we add up evidence or just wait for a big bit of evidence, waiting is longer for harder choices.
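A small simulation shows the overlap. In the sketch below (Python, made-up thresholds), both a running-total rule and a wait-for-a-big-sample rule take longer, on average, when the evidence is weaker:

```python
import random

def history_time(drift, rng, threshold=10.0):
    """Steps until a running sum of noisy samples crosses +/- threshold."""
    total, t = 0.0, 0
    while abs(total) < threshold:
        total += rng.gauss(drift, 1.0)
        t += 1
    return t

def extreme_time(drift, rng, big=3.0):
    """Steps until a single sample at least as big as `big` turns up."""
    t = 1
    while abs(rng.gauss(drift, 1.0)) < big:
        t += 1
    return t

rng = random.Random(7)
strengths = (0.1, 0.3, 0.6)                     # weak -> strong evidence
mean_time = {
    rule: [sum(fn(d, rng) for _ in range(300)) / 300 for d in strengths]
    for rule, fn in (("history", history_time), ("extreme", extreme_time))
}
```

Both rules produce mean waiting times that fall steeply as the evidence strengthens, so the shape of the reaction-time curve alone cannot separate them either.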

Wow. This means we can give someone a task that we think needs the adding up of evidence to make a decision, but when we measure their behaviour we cannot tell if someone is doing that. This undermines hundreds, maybe thousands, of studies that all assume we make decisions based on adding up evidence. And from animals doing these kinds of tasks we have tons of recordings of apparent decision-making neurons, neurons whose firing scales with the accumulated evidence, jurors of your brain’s decision-making court. But if we cannot prove the animals were deciding by accumulating evidence, then these neurons are ephemera. Is all lost?

No. Stine and co finish on a ray of hope — we can think harder about how we design the task, and then we can be (more) sure.

They came up with a solution for telling the history and extreme theories apart. First, we give you as long as you like on the random dots task — that immediately rules out the snapshot theories. Second, we measure how much time your brain gives to processing everything that is not about the decision.

This extra time is how long it takes for your brain to do the rest of the stuff it needs to do. Especially the bit where it turns the decision into a response, like moving your arm and fingers to press the button for “left” or press the one for “right”. By making the decision ridiculously easy — by making all the dots move in the same direction — we can tell how much of your reaction time is not about the decision. This extra time is predicted to be much smaller for adding up evidence than for simply waiting for a large bit of evidence. So, we can measure your apparent “non-decision” time, plug it into the history and extreme theories, and see which better predicts your reaction times as the task gets harder.
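The logic of that comparison can be sketched with toy numbers (all invented, purely to illustrate): estimate the non-decision time from the easiest trials, add each model’s predicted decision times on top, and see which sum tracks the measured reaction times better.

```python
# All numbers below are invented, purely to illustrate the logic.
easy_rts = [412, 398, 430, 405, 421]       # ms: RTs when all dots move together
t_nd = min(easy_rts)                       # crude non-decision time estimate

observed_rt = [820, 640, 510]              # measured RTs, hard -> easy trials
history_dt  = [430, 250, 120]              # a history model's decision times
extreme_dt  = [560, 360, 180]              # an extreme model's decision times

def fit_error(decision_times):
    """Sum of squared gaps between (t_nd + predicted) and observed RTs."""
    return sum((t_nd + dt - obs) ** 2
               for dt, obs in zip(decision_times, observed_rt))

better = "history" if fit_error(history_dt) < fit_error(extreme_dt) else "extreme"
```

With these made-up numbers the history model’s predictions land closer to the observed reaction times, so `better` comes out as `"history"` — the same kind of comparison, with real data and real model fits, is what the paper performs.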

When Stine and co tried this new design on 6 subjects they had, in one respect, total success: they could rule out the extreme theories for all 6 subjects. Those theories cannot account for the way the subjects made their choices over both forms of the task. Instead, the history theories gave a much better account of how the subjects decided. Phew.

We learn much from this study. For one thing, it’s a paramount example of how to science. This work was done in the lab of Michael Shadlen, who has built a sterling reputation through decades of wonderful research on how the brain accumulates evidence — the very assumption this paper questions. Publishing work that raises difficulties about the basis of much of your own research is brave, honest stuff.

Of course, it may turn out that all decisions we assume use the adding up of evidence really are made that way, and so the swathes of published work making this assumption are fine; but thanks to Stine and co, now we know that, when we read a paper on making a decision using evidence, we have to keep in mind whether the assumption of accumulation is key to the published results, and judge accordingly.

We also learn that working out how people make decisions is hard. For the work is not finished here. Even with just 6 subjects, all Stine and co could do was show that the history theories were better than the extreme ones; but better does not mean good. For they hit a snag: while they could rule out the extreme theories, they couldn’t rule in the history theories for 3 of the subjects. The history theories’ predictions didn’t exactly match their behaviour either.

By throwing even more complex versions of history theories at their subjects, they could show that two of the three hold-outs were losing information as they added up evidence. Their decisions made sense if Stine and co assumed those subjects were forgetting the evidence gathered first. Even so, one subject could still not be explained properly.

The biggest lesson then is that people don’t all make decisions the same way. Even with this well-studied, much-replicated, ultra-simple moving dots task, people do not make decisions the same way.

And that’s why we write theories down as mathematical models, models that make predictions — here, about patterns of errors and reaction times. Because the mere process of writing them down forces us to make plain our assumptions, and then test them. For these theories of decision making, writing them down has now revealed that our assumption of accumulating evidence was faulty, that other theories could predict the same behaviours without needing to add up evidence. Take heart, for this is how science progresses: we develop ideas, make assumptions, check them carefully, find our errors, and from those errors spring corrections that make our science more robust and more ideas flow — and we repeat ad infinitum. It all adds up.

Want more? Follow us at The Spike

Twitter: @markdhumphries


Mark Humphries
The Spike

Theorist & neuroscientist. Writing at the intersection of neurons, data science, and AI. Author of “The Spike: An Epic Journey Through the Brain in 2.1 Seconds”