It’s all about trust: quantum physics, cognition and trust-based decision making

The theories of quantum physics are so complex, philosophical and abstract that it's unsurprising they find application outside the realm of particle physics.

Lauren Fell, a PhD student at QUT, specialises in quantum cognition: the application of quantum theory to our understanding of human cognition and psychology.

Fell's research focuses on the decision-making processes surrounding trust, approached through quantum concepts rather than the more traditional models grounded in Bayesian ideas of cognition and classical probability theory.

When quantum physics meets psychology

At its core, quantum physics is a theory of how things work at the smallest scales: it describes how photons, electrons and other particles behave, and how those behaviours and interactions shape the world around us.

It can be confusing and counter-intuitive, and most likely calls up images of a cat in a box that is either dead or alive (but, in the thought experiment, is a superposition of both, at least until we look under the lid and see).

Fell's research doesn't claim that quantum effects are physically identifiable at the macro scale of the brain; rather, quantum theory offers another mathematical framework for cognition, and one that is producing illuminating results.

“A lot of classical theories of cognition fall short of explaining certain anomalies in decision making and rationality, so quantum theory allows us to relax some of the assumptions of the classical models,” explained Fell.
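To see what "relaxing the assumptions" means in practice, here is a minimal numerical sketch (not Fell's model; the state and angles are invented purely for illustration) of one classical assumption quantum probability gives up: the law of total probability. When a question is left unresolved, belief amplitudes can interfere, and the classical identity no longer holds.

```python
import numpy as np

# A toy two-outcome belief space spanned by |win> and |lose>.
# The initial belief state is an equal superposition of the two.
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# "Decide to act" represented as a unit vector in the same space
# (the angle 0.3 is arbitrary, chosen only for illustration).
act = np.array([np.cos(0.3), np.sin(0.3)])

# Case 1: the win/lose question is resolved first, then the action
# is judged -- classical mixing, i.e. the law of total probability.
p_win, p_lose = psi[0] ** 2, psi[1] ** 2
p_act_given_win = act[0] ** 2           # |<act|win>|^2
p_act_given_lose = act[1] ** 2          # |<act|lose>|^2
p_classical = p_win * p_act_given_win + p_lose * p_act_given_lose

# Case 2: win/lose stays unresolved, so amplitudes add *before*
# squaring, which introduces an interference term.
p_quantum = (act @ psi) ** 2

print(f"resolved first: {p_classical:.3f}")  # 0.500
print(f"unresolved:     {p_quantum:.3f}")    # 0.782 -- they disagree
```

The gap between the two numbers is the interference term, and it is exactly this extra degree of freedom that quantum models of cognition use to fit decision data that classical probability cannot.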


The research

“We’re investigating how trust develops, and also how our cognition is affected by trust, particularly in uncertain environments — for example, how our actions and interactions in an unknown or volatile environment will be impacted by that environment itself,” Fell explained.

“Trust is ubiquitous in all of cognition, so it’s key that we understand how the mechanics of it work.

“The end goal is to understand how trust works so that we can better understand what happens when trust breaks down.”

Quantum physics is driven by experimental protocols: procedures designed to demonstrate or verify quantum effects. In her PhD research, Fell hypothesises that analogous effects can be demonstrated in trust judgements, and focuses on developing protocols to test this.

"It's difficult to develop specific protocols for cognition, because there's so much else going on in our cognitive processes; pinpointing an exact effect in humans can be really hard," said Fell.

It’s all in the context

Traditional models of trust factor biases into the decision: racial bias, gender bias, ageism, classism and other social constructs, as well as confirmation bias and framing effects.

Fell’s research is investigating the impact of contextuality on decision making and trust.

“Classically, contextuality means something in the context directly affects the thing we want to measure — for example, a face in a dark alley will be judged differently if you see the same face in a brightly lit office,” Fell explained.

“In quantum physics, contextuality is not tied to causation. Applying this type of contextuality to cognition means that a context doesn’t directly cause a different outcome — but instead describes a complex relationship with other decisions that we are making at the same moment we’re deciding whether to trust or not.

“In one experiment we do, participants decide whether to trust a face at the same time they decide whether that face is dominant or not.

“Now, the outcome of that dominance decision is not important, and you’re not more or less likely to decide to trust because you think the face is dominant — it’s the mere fact that you’re cognitively measuring trust and dominance at the same time, and are aware of the two concepts simultaneously, that will affect your decision.”
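One way to make this concrete is the standard quantum-cognition device of modelling the two judgements as incompatible (non-commuting) measurements on a single belief state. The sketch below assumes that device; the angles are toy values, not fitted to Fell's experiments. Merely resolving the dominance question shifts the trust statistics, whichever way it is resolved.

```python
import numpy as np

def basis(theta):
    """Orthonormal yes/no measurement basis rotated by angle theta."""
    yes = np.array([np.cos(theta), np.sin(theta)])
    no = np.array([-np.sin(theta), np.cos(theta)])
    return yes, no

psi = np.array([1.0, 0.0])      # initial impression of the face
trust_yes, _ = basis(0.6)       # "trustworthy" direction (toy angle)
dom_yes, dom_no = basis(1.1)    # "dominant" direction (toy angle)

# Trust judged on its own:
p_trust_alone = (trust_yes @ psi) ** 2

# Dominance resolved first: the state collapses onto dom_yes or dom_no
# with Born-rule weights, and trust is judged from the collapsed state.
# The dominance *outcome* is ignored; only the act of judging matters.
p_dom_yes = (dom_yes @ psi) ** 2
p_dom_no = (dom_no @ psi) ** 2
p_trust_after = (p_dom_yes * (trust_yes @ dom_yes) ** 2
                 + p_dom_no * (trust_yes @ dom_no) ** 2)

print(f"trust alone:          {p_trust_alone:.3f}")  # 0.681
print(f"trust with dominance: {p_trust_after:.3f}")  # 0.341
```

Because the two measurement bases are incompatible, no single joint probability distribution reproduces both cases; detecting that signature in behavioural data is what contextuality protocols are built for.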

The trustworthiness of faces is impacted by the conditions in which we make the decision to trust. Image: Dimitri Otis via Getty

If this concept seems hard to grasp, Fell paraphrases physicist Ian Durham to give a more relatable example.

“Say you have a puzzle and you don’t have the box, so you don’t know what it’s going to be. To start with, you construct it edges-first, and it turns out to be a picture of a cat. If you then break it all up and decide to make it again from the middle out, it turns out to be a picture of a dog.

“Essentially the picture didn’t exist until you actually constructed it, and whether it was a cat or dog depended on the way in which you constructed it. It wasn’t because you’d seen bits of a cat and so decided it must be a cat — there’s no context or bias — it’s purely based on how you’ve constructed reality: that’s what determines what reality it forms.”

Crowd-sourcing trust solutions

While this all seems firmly theoretical, Fell’s interest in trust and decision making has concrete applications in the real world.

In 2020, Fell was awarded a grant through the crowd-sourcing InnoCentive initiative, for which she designed an interface for autonomous systems to help improve trust among the humans who interact with them.

“A commonly cited key for trusting autonomous systems is explainability — usually when people get a recommendation or a direction, they want to know the reasons behind the directive, and autonomous systems are bad at explaining that.

“If we can improve the transparency and make the reasoning clear, trust will improve.

“For example, if there’s a security threat at a shipping yard — imagine a camera has picked up something strange and there’s feedback across multiple monitoring systems that the autonomous system has interpreted as a threat. The system may recommend deploying people to look into the issue, but you need to be able to trust that it’s not wasting your time, or, worse, sending you into danger.”

But explainability isn’t the only piece of the puzzle. The complex effects of context and contextuality — precisely what Fell’s PhD is exploring — are also important in understanding the role of trust in human-machine interactions.

Fell developed a dashboard that presents a system's decision within a context involving both the human and machine agents. It breaks the decision down to show the evidence that triggered it, similar decisions the system has made before and their results, and the system's prediction of risk versus reward for different action pathways.
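As a rough illustration of the kind of breakdown involved, here is a hypothetical sketch of the record such a dashboard might surface for a single decision; all field names and values are invented for this example, not taken from Fell's design.

```python
from dataclasses import dataclass

@dataclass
class PastCase:
    summary: str     # a previous, similar decision the system made
    outcome: str     # the result when that decision was acted on

@dataclass
class ActionPathway:
    action: str      # a possible response, e.g. "deploy a team"
    risk: float      # system's estimated downside, 0 to 1
    reward: float    # system's estimated upside, 0 to 1

@dataclass
class DecisionCard:
    recommendation: str            # what the system advises
    evidence: list[str]            # catalysts: sensor feeds, alerts
    precedents: list[PastCase]     # similar past decisions + results
    pathways: list[ActionPathway]  # risk vs reward per option

card = DecisionCard(
    recommendation="Send a team to inspect bay 7",
    evidence=["camera 12: unidentified movement", "gate log mismatch"],
    precedents=[PastCase("similar alert, loading dock", "confirmed intrusion")],
    pathways=[ActionPathway("deploy team", risk=0.2, reward=0.8),
              ActionPathway("dismiss alert", risk=0.6, reward=0.1)],
)
```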

“In the absence of a fully explainable artificial intelligence system, the extra information the dashboard presents, in addition to its accounting for context effects, helps build that trust between user and computer,” said Fell.

“There’s a general distrust of artificial intelligence — the less we understand about how something works, the less predictable its actions are, and that’s where trust is eroded.

“We’re constantly integrating more artificial intelligence into our world, and understanding how to create meaningful trust-based interactions between humans and computers will be the key to our success as a society.”

More information

Explore research at QUT’s Science and Engineering Faculty

Contact Lauren Fell

Learning and Big Solutions (The LABS)