Can Artificial Intelligence Help Us Make Better Decisions During A Crisis?

Andrew Serazin
Mar 24, 2020

New research indicates that AI may help people cooperate and make ethical decisions faster and with fewer errors.

We are facing an unprecedented public health crisis: leaders are rationing critical supplies, and doctors are increasingly forced to choose who will live and who will die. With a limited number of ventilators, who gets one and who goes without? Should this patient be admitted or sent home?

A simple answer to these questions may be: whichever patient is most urgently in need. Yet a closer look reveals a thicket of conflicting ethical considerations. Some patients may need a ventilator sooner because of the particulars of their conditions, while others may need to continue supporting young children. Why should the rich and famous get faster access to testing? Are younger patients more deserving of a ventilator than older patients? What priority should the disabled and vulnerable have?

Even in more normal times, doctors and hospital administrators are called upon to make decisions quickly while keeping all of these ethical considerations, and more, in mind simultaneously. After many hours of high-intensity work, a doctor's own cognitive resources may be impaired by sleep deprivation and fatigue. Such ethical problems are persistent in fields like medicine, and hospital ethics review boards, made up of medical professionals and expert ethicists, struggle with these dilemmas even without added factors such as time pressure and sleep deprivation.

Yet we may be on the brink of a revolution in ethics. New research and analysis from leading computer scientists, ethicists, and psychologists indicates that artificial intelligence tools, if created with the proper parameters from the outset, could prove instrumental in improving people's ethical decision-making, particularly in complex or high-pressure situations.

It may seem counterintuitive to argue that AI may help us become better ethical decision-makers. In popular culture, the annals of science fiction, and indeed in the real world today, AI tends to be seen as either a tool of villains or a force that inevitably and implacably turns against humanity. After all, in the Terminator films, it’s an AI called Skynet that rains down nuclear destruction on the world and seeks robotic domination. In The Matrix, the AI seeks to enslave people’s minds. And perhaps the most famous AI of them all, 2001: A Space Odyssey’s HAL 9000, is bent on destroying his human operators.

Recently, we have seen electronic surveillance deployed by Chinese authorities to control the spread of coronavirus, to the consternation of democratic countries. Even Elon Musk, a leading proponent of and innovator in self-driving cars, has argued that unregulated AI may be more dangerous than nuclear weapons. But that is only one vision of artificial intelligence's place in society. For every way AI can be misused for personal gain or to society's detriment, there is a way it could have a positive impact on our lives.

In a Templeton World Charity Foundation (TWCF) sponsored project, Duke University ethicist Walter Sinnott-Armstrong and neuroscientist Jana Schaich Borg have teamed up with psychologists and computer scientists at the university to investigate ways in which AI can be used to aid ethical decision-making.

Sinnott-Armstrong argues that artificial intelligence, if trained with the right kind of data, could be a valuable aid in making complicated ethical decisions. Rather than being given control of the decision, the AI is able to learn the patterns of ethical thinking that humans engage in and replicate those with new data sets, only without the interference of outside distractions, sleep deprivation, or complicated emotions that might cloud human decisions.

Machine-learning systems can counter fatigue, bias, and confusion. In short, these tools may help doctors and hospitals better live up to their own expressed ethical standards.

Theoretically, an AI could take the same information a doctor has about her patients and generate a series of suggestions that could then inform human decision-makers. Instead of ‘outsourcing’ moral decisions to machines, these new tools serve to enhance our innate capacity for moral decision-making.

Researchers have also found that AI may help groups of humans make better collective decisions. In 2017, researchers published the results of an intriguing experiment involving collaborative gameplay and AI. In their experiment, 4,000 human players were asked to play a collaborative game in which, to win within the time limit, players had to cooperate with one another without knowing the full game board, and at times make moves detrimental to their own position for the sake of the greater good.

The researchers then introduced several AI players into the game and instructed some of those AIs to make random moves up to 30 percent of the time. When they analyzed the data, the researchers discovered that in games where AIs were included and made random decisions 10 percent of the time, the win rate was 85 percent, compared to just 67 percent with human-only pools of players. Not only that, but the games were also won in less than half the time on average. Interestingly, while results improved dramatically when AIs that made random decisions 10 percent of the time were included, this improvement disappeared with AIs that never made random decisions or made them more often than 10 percent of the time.
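The intuition behind the finding can be sketched in a toy simulation. The code below is an illustrative analogue, not the researchers' actual setup: agents on a small ring network each try to pick a color different from their neighbors, and designated "bot" nodes occasionally make a random move instead of a greedy one. The graph shape, colors, and the 10 percent noise parameter are illustrative assumptions; the idea is that a little randomness can shake a group out of locally stuck configurations.

```python
import random

def conflicted(graph, coloring, node):
    """True if this node currently shares a color with any neighbor."""
    return any(coloring[n] == coloring[node] for n in graph[node])

def play(graph, colors, bot_nodes=frozenset(), noise=0.1,
         max_steps=2000, seed=0):
    """Run asynchronous color coordination until no conflicts remain.

    Returns (steps_taken, final_coloring); steps == max_steps means timeout.
    """
    rng = random.Random(seed)
    coloring = {v: rng.choice(colors) for v in graph}
    nodes = list(graph)
    for step in range(max_steps):
        if not any(conflicted(graph, coloring, v) for v in graph):
            return step, coloring              # everyone coordinated: a "win"
        v = rng.choice(nodes)
        if v in bot_nodes and rng.random() < noise:
            coloring[v] = rng.choice(colors)   # bot's occasional random move
        else:
            taken = {coloring[n] for n in graph[v]}
            free = [c for c in colors if c not in taken]
            if free:                           # greedy: avoid neighbors' colors
                coloring[v] = rng.choice(free)
    return max_steps, coloring

# A 10-node ring: each node must end up differing from its two neighbors.
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
steps, result = play(ring, ["red", "green", "blue"], bot_nodes={0, 5})
```

Averaging steps-to-solve over many seeds, with bot noise set to 0, 10, and 30 percent, mirrors the comparison the researchers drew between those conditions.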

At a time when social cohesion is critically needed to save lives, can we deploy such bots to increase cooperation in our society?

So why is most of the commercial focus on AI directed at uses such as law enforcement, stock trading, or selling you ads on social media, rather than improving medical outcomes or helping groups of people achieve cooperative goals? TWCF aims to help fill this gap through our Diverse Intelligences initiative, which focuses on funding research and innovation in the machine age. By funding incisive research, not just in computer science but also in psychology and philosophy, and by building cross-disciplinary bridges, we hope to help usher in a new era in which artificial intelligence can be a force for social good.
