The Art of Decision Making: Machines vs Humans

How do you make decisions?
How often do you flip a coin or roll a die to decide what your next action will be? Or do you leave it less to chance and instead rely on your intuition, emotions, and logical reasoning?
While developing the neural network for Mirador Health's prescriptive analytics engine, I was struck with an epiphany. "Epiphany" may not be the right word here, but I started to ask myself whether machine learning could help us make better decisions than our own human thought process (or lack thereof) would. So I embarked on a journey to discover the human and machine limitations in decision making, guided by the following questions:
- How do we (humans and machines) make decisions?
- What is the goal of decision making, and what is stopping us from making better (or the best) decisions?
- What can a human do that a machine can’t (and vice versa)?
Genesis / Root
As human beings, we are interested in progress, constantly searching for ways to improve and perfect our lives (if you disagree with this statement, would you have it any other way?).
The study of decision making is relatively new within social science; it began in the mid-1990s with research into how organizations make decisions. Despite its young beginnings, prominent academics such as Daniel Kahneman and Richard Thaler have won Nobel Prizes for their research on decision making and behavioral economics. For the uninitiated, this field of study is primarily interested in understanding how and why we make certain decisions, and how to improve our decision making processes.
The Starting Point
Decision making rests on the availability of information and on how we experience and understand it. For the purposes of this article, 'information' includes our past experience, intuition, knowledge, and self-awareness.
We can't make "good" decisions without information, because then we have to deal with unknowns and face uncertainty, which leads us to make wild guesses, flip coins, or roll dice. Having knowledge, experience, or core values relevant to a given situation helps us form a clear vision of what the outcomes could be and how we can achieve or avoid them. However, making decisions based on knowledge and experience from similar situations can be dangerous, as outlined in Daniel Kahneman's Thinking, Fast and Slow, which we will discuss later.
Since information helps us be better decision makers, does increasing the information available to us necessarily help us make better decisions?

Big Data = Better Decisions? Does Size Matter?
Companies are very much into big data (or so they seem to be), collecting as much information as they can about their customers with the goal of understanding and predicting customer behavior to effectively achieve their business goals. More information does help us make better decisions, but only up to a certain point. For information to be useful in our decision making, it also has to be relevant and reveal relationships and insights. The quality of information is just as important as the quantity.
But in reality, the biggest barrier to originality is not idea generation — it’s idea selection.
The quote above is from the book Originals by Adam Grant, a non-fiction bestseller on how to generate, identify, and promote original ideas. Replace the word ‘idea’ with ‘data’ or ‘information’, and we can see how selecting and analyzing the right kind of data leads to insights.
It is important to have the right information for the context in which we are making our decisions, because as interconnected as the world is, not every connection is correlated or produces a (statistically) significant relationship for a given situation. It would be a waste of time and energy to look at data that has no effect on the outcome of our decisions. Even with neural networks that can learn and detect patterns beyond the mental capabilities of the human brain, there are bound to be data points that produce no relationship at all. We also have to keep in mind that even a simple correlation between two data points does not imply causation. This brings us to the selection of data points.
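As a toy illustration of why correlation alone can mislead, here is a minimal sketch (assuming NumPy is available); the two series are generated independently, so any correlation between them is pure coincidence:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two independent random walks: by construction, neither influences the other.
x = np.cumsum(rng.normal(size=1000))
y = np.cumsum(rng.normal(size=1000))

# Trending series like these often show a sizable correlation anyway,
# which is exactly why correlation alone cannot establish causation.
r = np.corrcoef(x, y)[0, 1]
print(f"correlation between two unrelated series: {r:+.2f}")
```

Running this a few times with different seeds makes the point: the number jumps around, sometimes strongly positive, sometimes strongly negative, while the underlying causal link stays firmly at zero.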

How Do We Pick the Right Information?
At the time of writing, no AI system is able to select its own data points for decision making. At least not yet. It only processes the data its human creators have chosen and fed to it.
When selecting data points, we need to consider how much variability there is in the context of the decision. Is there a pattern, or do situations and outcomes happen randomly? Can we clearly establish the cause-and-effect link of a decision? Some situations have a wide variance, while others are somewhat rigid, constant, and predictable. When a situation doesn't change much from one time to the next, we can use assumptions to control for certain factors/data points and train our machine learning models more efficiently.
I briefly mentioned patterns earlier: for the human creators to pick the right information and data points to feed into the AI, they need to recognize patterns that are sometimes beyond their own comprehension. Most people think only in first-order consequences, as highlighted in Ray Dalio's book Principles: by default, we consider only an action's immediate effects. That is a limitation of the human mind. It is difficult for us to visualize how an action creates further impact, especially intangible impacts that are undetectable to our senses (read: unpredictable).
A way to overcome this human limitation is to leverage computers' processing power and memory, which allow us to work through large amounts of data. It is costly and time consuming, but you can dump every single data point out there, set the coefficients, and let machine learning do its magic. Results are not guaranteed, but that capacity is the advantage of computers over humans. People may argue that we don't understand how an AI arrived at a decision, but are humans any better? I suppose it is human behavior to look for logic and reasoning, even though we ourselves rely on biases and assumptions to arrive at a decision.
The Goal of Decision Making: Satisfaction

The goal of decision making is to maximize our satisfaction (not to be confused with happiness), or utility, as economists would call it. I am borrowing the Utility Maximization Model from economics, which states:
Consumers decide to allocate their money incomes [choices/free will] so that the last dollar [decision] spent on each product purchased [option] yields the same amount of extra marginal utility.
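Stated a bit more formally (a standard textbook rendering of the same idea, not a quote from the model itself), the condition is that the marginal utility per dollar spent is equal across all products, within the budget:

```latex
\frac{MU_1}{P_1} = \frac{MU_2}{P_2} = \dots = \frac{MU_n}{P_n},
\qquad \text{subject to} \quad \sum_{i=1}^{n} P_i Q_i \le I
```

Here MU_i is the extra (marginal) utility of one more unit of product i, P_i its price, Q_i the quantity bought, and I the available income.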
If humans didn't care about getting the best outcome from their decisions, everyone would just flip a coin and accept whatever utility they were dealt. Imagine a world where no one is responsible for any of their actions. Victory would taste less sweet, and failure extremely bitter (depending on how you see it). However, like all models, this one rests on a few assumptions. We assume that:
a. Humans act based on rational behavior
b. All our preferences [wants/needs] are known and measurable
c. We know the prices [the costs of each option]
d. We have a budget constraint [limited number of tries]
For most of our lives, these four assumptions rarely apply when we are making decisions.
a. Humans are known to make irrational decisions based on emotions and cognitive biases.
b. We don't always know what we want, and when we do, we are bad at judging how badly we want it and how much satisfaction/utility we'll get from it (hedonic adaptation).
c. We don’t always know the true cost of our actions until after the fact.
d. One thing is for sure, though: we do have a limited number of tries in the game of life.
Why We Are Not Making the Best Decisions
Can you guess how many decisions you make in a day?
.
.
.
The Google consensus (this should be coined as a legit term when more than 10 sources quote the same fact) says that the average human makes 35,000 decisions a day.
Out of that number, many decisions are made quickly through intuition or the subconscious, built up over many hours of practice or exposure, while the remaining few require careful and concentrated thought. Daniel Kahneman categorized these two "modes" of thinking as System 1 and System 2.
In his book, Thinking, Fast and Slow, Kahneman describes the role of System 1 and its relationship with System 2:
System 1 [is] effortlessly originating impressions and feelings that are the main sources of the explicit beliefs and deliberate choices of System 2. The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps.

Our System 1 is the reason why we are not making the best decisions.
Nature always seeks the path of least resistance. We take mental shortcuts through biases and heuristics (as defined in behavioral economics and psychology) whenever possible and tend to let our emotions overrule our rationality. This is an inherited biological trait that helped our ancestors survive and escape danger, but it is now a liability in today's world, where decisions require time-consuming rational thinking.
How did we end up with these cognitive biases, all 188 of them? These biases develop from our experience and understanding of the world (typically shaped by our parents, peers, the media, and educational institutions) and are also rooted in our biology (the release of neurochemicals such as endorphins, serotonin, and dopamine in response to stimuli).
By that logic, shouldn’t we build machines that are devoid of cognitive biases and System 1 to make better decisions and increase our utility (satisfaction)?
Perhaps. Based on the four assumptions of the Utility Maximization Model, machines can strip out irrational behavior and emotion, be clear about what they are trying to achieve, have a practically unlimited number of tries through simulation, and even learn the true cost of each decision. However, this all depends on the availability and quality of the data used to train/build these machines.
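To make the "unlimited number of tries through simulation" point concrete, here is a minimal sketch (the option names and payoff odds are invented purely for illustration): a machine can replay a decision thousands of times and compare average utility, something no human lifespan allows.

```python
import random

def expected_utility(payoff, trials=100_000):
    """Estimate the average utility of an option by simulating it many times."""
    return sum(payoff() for _ in range(trials)) / trials

# Hypothetical options: a guaranteed small payoff vs. a gamble.
safe = lambda: 1.0
risky = lambda: 3.0 if random.random() < 0.4 else 0.0

print("safe :", round(expected_utility(safe), 2))   # always 1.0
print("risky:", round(expected_utility(risky), 2))  # roughly 1.2
```

Of course, the answer is only as good as the payoff model we feed it, which brings us back to the availability and quality of data.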
The Showdown: Humans vs Machines
Now that we have covered the basics of decision making, let's explore and compare the limitations and capabilities of humans and machines in making decisions.
What Makes Us Human
The question of what makes us human inevitably comes up when we compare ourselves against other species or machines.
In my humblest opinion, what makes us uniquely human is the resilient spirit to adapt in the face of uncertainty and adversity. As much as we’d like to think of ourselves as “experts” in forecasting the future (weather, economy, etc.), the truth is this:
We don’t know what we don’t know. — Donald Rumsfeld
The only way we survived, and continue to survive, in this uncertain world, where everything changes at an ever-increasing pace, is by being quick to adapt. It is not survival of the fittest that applies in the modern world, but survival of the quickest to adapt.

Machines typically need more than a single event to learn and change their decisions. There is a Chinese saying, "一朝被蛇咬,十年怕草绳" (literally, "once bitten by a snake, afraid of a straw rope for ten years"), which captures how humans learn to avoid snakes and anything that resembles a snake (a cognitive bias) after just one snake bite. Humans don't need (or have the luxury) to repeat a mistake multiple times to learn and make better decisions. We need to adapt quickly.
It is not the strongest of the species that survives but the most adaptable — Leon C. Megginson
Humans also have the intellectual capacity to develop ethics, morals, and values, which machines don't (or at least not yet; never rule out the impossible).
The majority of machine learning algorithms are currently programmed to make decisions based on consequence, not ethics or values. Philosophers and psychologists (humans) are still needed to design ethical AI. In Originals, Adam Grant found that originals (creatives who reject the status quo) tend to make decisions based on the logic of appropriateness rather than the logic of consequence. Will machines be able to do what is right/appropriate and take the risk of standing up against an unjust authority?
This raises the question: how do we re-frame our values and principles as consequences so machines can process them? (I use the word 'process' because machines can't understand or derive meaning.) Is it even possible, or necessary, to do so?
The Machine (Dis)Advantage
Could machine learning help us make better decisions? Possibly, for any outcome that is normally distributed, such as height, weight, or the number of miles run. Machine learning can predict (non-random) outcomes and prescribe solutions, but the question to ask is: how predictable is the world? We did not predict 9/11 or the dot-com boom. Would algorithms be able to predict the next life-changing event and make a decision that changes the course of history?
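As a small sketch of the kind of well-behaved, roughly normally distributed outcome where prediction does work, here is a toy example (assuming NumPy; the heights, weights, and coefficients are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Made-up data: weight loosely related to height, with normally distributed noise.
height = rng.normal(170, 10, size=500)                    # cm
weight = 0.9 * height - 90 + rng.normal(0, 8, size=500)   # kg

# A one-line model captures the stable pattern well...
slope, intercept = np.polyfit(height, weight, 1)
print(f"predicted weight at 180 cm: {slope * 180 + intercept:.1f} kg")

# ...but no amount of fitting lets it foresee a one-off, history-changing event.
```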
There are factors beyond our control after a decision is made, i.e. what happens between cause and effect, and these will affect the outcome. Cause and effect is not a linear relationship unless we carry out experiments in a controlled environment, and the reality is that the universe is random and chaotic (see: entropy). I'm willing to go out on a limb and say that machine learning algorithms may not be able to predict the next random event any more accurately than a human could. However, machines beat us humans hands down in speed and accuracy when it comes to non-random, repetitive events.
Additionally, machines are better (and faster) at repetitive games. Google's DeepMind was able to beat the top Go player by analyzing hundreds of thousands of games of Go, a strategy board game, to learn what move to make in any given situation. A human would need years of study and practice (possibly around 10,000 hours, according to Malcolm Gladwell in his book Outliers) to get through that much material.
This is by no means a comprehensive comparison, and you're welcome to add more in the comments.
What Did I Learn?
This article is not meant to conclude whether humans or machines are superior at decision making, but rather to inquire into the different approaches to it.
Writing this article, I encountered more questions than answers and enjoyed the whole process of discovering how the brain works. I can't say that I will make good decisions after writing it, but I have found a path toward making better ones: frequently asking difficult questions that challenge my existing decision making framework.
Challenging your thinking helps you adapt to different situations and suppress your cognitive biases (especially confirmation bias). Of course, don't challenge your thinking while being chased by a critically endangered Sumatran rhinoceros. Get to safety first, then reflect on your decision making process.
Here’s how I plan to challenge my own thinking: be open to conflict and disagreements, seek out randomness and discomfort, be willing to be proven wrong, and find satisfaction from the process of decision making and not the outcome.
Here are some books that I have read and would highly recommend to anyone looking to improve their decision making:
- Thinking, Fast and Slow by Daniel Kahneman
- Factfulness by Hans Rosling
- The Black Swan by Nassim Nicholas Taleb
- The Art of Thinking Clearly by Rolf Dobelli
Thank you for reading all the way to the end! If you learned anything new, have a challenging question, or would like to leave your two cents, please share them with me in the comments.
Always improving,
Min Xiang Lee
