Bridging the Gap Between Human and Machine Decision-Making

Novella Martin
Published in The LABS
Oct 18, 2017

If you’ve ever found yourself pussyfooting about, trying to decide whether Schrödinger’s Cat is alive or dead, spare a thought for our machine counterparts trying to predict our likely choice.

Moreover, spare a thought for the people trying to teach autonomous systems how to think like us.

Professor Peter Bruza from QUT will develop and test quantum theory-based models that better explain and predict decisions on trust, and improve autonomous system decision-making. Image: Mopic | Shutterstock

Professor Peter Bruza from QUT’s School of Information Systems has accepted the challenge to develop and test quantum theory-based models that better explain and predict decisions on trust.

The “Contextual models of information fusion” project received US$241,000 in funding from the Tokyo-based Asian Office of Aerospace Research and Development.

“Trust in decision-making is an increasingly important factor as humans are expected to work alongside autonomous systems,” says Professor Bruza.

“Humans and robots may need to collaboratively make decisions under extreme and uncertain conditions, such as on the battlefield or in the wake of a disaster.

“Combining the plethora of sources that need to be processed in order to make a decision is known as ‘information fusion’.

“As humans, we are often comfortable with a decision if we think all sources combined are collectively trustworthy.

“However, our decision-making can defy the laws of probability used by machines to make decisions, especially when information sources are unreliable and we’re unable to reach a global assessment of trust.

“According to the standard laws of probability, the order in which information is received doesn’t matter. The decision is the same whether we receive information source A before B, or the other way around.

“Humans don’t always think that way. The order in which we receive information, the inferences we draw, and the context in which we make a decision can all sway our thinking.

“This potential for misunderstanding between humans and machines can quickly erode our trust in them.

“We’re irrational according to probability theory because our decisions don’t adhere to its laws.”
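To see the kind of order effect Professor Bruza describes, here is a minimal numerical sketch in the standard quantum cognition style (an illustration only, not the project’s actual model): questions are represented as projectors acting on a belief-state vector, and when the projectors do not commute, the probability of a pair of answers depends on which question is asked first.

```python
import numpy as np

# Belief state: a unit vector in a toy 2-D "decision space".
psi = np.array([1.0, 0.0])

# Projector for answering "yes" to question A (the first basis axis).
P_A = np.array([[1.0, 0.0],
                [0.0, 0.0]])

# Projector for "yes" to question B, built from an axis rotated 45 degrees,
# so that it does not commute with P_A.
b = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
P_B = np.outer(b, b)

# Probability of "yes to A, then yes to B": apply P_A first, then P_B.
p_A_then_B = np.linalg.norm(P_B @ P_A @ psi) ** 2

# Probability of "yes to B, then yes to A": reverse the order.
p_B_then_A = np.linalg.norm(P_A @ P_B @ psi) ** 2

print(p_A_then_B)  # ≈ 0.5
print(p_B_then_A)  # ≈ 0.25
```

A classical model has no room for this gap: the probability of “A and B” equals the probability of “B and A” by definition, so the difference between the two numbers above is exactly the kind of order effect that only a contextual model can represent.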

Quantum cognition models provide a better account of human thinking than traditional probabilistic models because they capture ‘contextuality’, says Professor Bruza.

“Our decisions are not always irrational; we just need a better model to explain them.

“Quantum cognition explains context — the interference a first judgement can have on subsequent judgements.

“Machines must understand what a human would do in context and explain that rationale before taking action.”
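In the standard quantum cognition formulation (again, the textbook form rather than anything specific to this project), that interference appears as an extra cross term when the probability of a judgement B is split over a prior judgement A:

```latex
P(B) = \|P_B P_A \psi\|^2 + \|P_B P_{A^\perp}\psi\|^2
       + 2\,\mathrm{Re}\,\langle P_B P_A \psi,\; P_B P_{A^\perp}\psi \rangle
```

When the cross term is zero, this collapses to the classical law of total probability, P(B) = P(B and A) + P(B and not-A); when it is not, the earlier judgement about A interferes with the later judgement about B, which is exactly the contextuality these models are designed to capture.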

Explaining human decision-making rationale is the missing link to developing greater human trust in machines, says Professor Bruza.

“Trust in our machines will quickly erode if we can’t rely on them to make suggestions, recommendations or decisions we deem useful.

“We already experience this through online shopping, for example.

“Amazon spends a lot on improving recommendation algorithms because even the slightest increase in accuracy translates to huge gains in profit.

“In the future, we’re not going to trust our driverless car if we ask it to park while we run to a meeting and later find it in a tow-away zone.

“If it explained first that it would park in that zone if the hours permitted, we’d be more likely to trust and rely on its decision-making capability.

“That outcome aligns more closely with quantum theory models than with the laws of probability theory.”

Peter Bruza is a professor of Information Systems and Information Science, specialising in Artificial Intelligence and Image Processing, Cognitive Sciences, and Information Systems.

His project team will identify quantum cognition models suitable for autonomous systems, using online platforms such as Amazon Mechanical Turk to survey thousands of people about their decision-making rationale.

The two-year “Contextual models of information fusion” research project commences in November 2017.
