Making AI Better by Making it Slower

Marianne Bellotti
Jun 19 · 8 min read
Technology vector created by freepik

The difference between beneficial and destructive AI may be whether making fast thinking faster has any utility for the user.

A couple of months ago I left my job at Auth0 to join a defense and national security company founded by some friends of mine. It was a risky decision both because the company is new and because when one is working with the military, one is inevitably building technology that will either directly or indirectly kill people. It is an environment rich with ethical dilemmas and most technologists prefer to keep their hands clean by simply opting out of any involvement at all.

There were lots of reasons why I decided to take the risk, but the most relevant one to this blog post is that everyone in the tech community lately is talking about building “ethical” products, yet no one can really define how a software development process that produces ethical products differs from one that produces normal products. “Don’t be evil” may no longer be Google’s motto, but it is definitely still how Silicon Valley thinks about things.

I’m not the kind of person who believes that outcomes are determined by the quality of the people. The best engineers sometimes build shitty technology together. Teams are not the sum of their parts. Simply assembling a collection of thoughtful people (and Rebellion has employed A LOT of lifelong pacifists) does not mean they will build ethical technology.

But teams are the sum of their interactions, and interactions are governed by formal and informal processes. I am the type of person who likes designing effective processes, and by the time I finally accepted my friend’s offer I had come to believe that AI in defense is a when situation, not an if situation. The opportunity for an outright ban has come and gone; these tools will enter the battle space, and their impact will largely be determined by who participates in bringing them there.

Escalation -vs- De-escalation

Every month at work we have a standing organization wide meeting to discuss our ethics. In one early version of this meeting we decided that the distinction between offensive tools and defensive tools was not a useful one for exploring the impact of what we might build. The difference between defensive and offensive is really who’s holding the tool and what they are pointing it at. That’s not something easy to design software around.

Instead we ended up focusing on the idea of escalating and de-escalating conflict. Responsible technology in the defense space is technology that helps people think more deeply and critically about the choices in front of them. Irresponsible technology encourages them to jump to conclusions, or leaves them so far removed from the on-the-ground reality that it dehumanizes the people who are negatively affected when the technology is deployed.

But how does one design AI that de-escalates?

Human in the Loop

Ethical AI people love to talk about “Keeping the human in the loop.” In an earlier blog post I discussed this concept using the framework of System Safety, an existing scientific field that studies the counter-intuitive ways safety policies either benefit or sabotage safety outcomes.

Human in the loop is an effective guiding principle when designing policy, but it is a little more difficult to apply when designing technology, because ALL technology redistributes how human labor is applied in a given process. When new technology is introduced to an existing task, some steps are automated away and other new steps become necessary. How does the product team determine whether moving the human’s position in the process takes them out of the loop?

Type 1 -vs- Type 2 Thinking

The answer may come from how human and computer thought combine and play off one another. One model of how humans think, popularized by the book Thinking, Fast and Slow, is called Type 1 -vs- Type 2. Type 1 is intuitive (fast) thinking. It’s low effort for humans: instinctual, based mainly on pattern matching and on how closely a given piece of knowledge resembles another piece of historic knowledge. Type 2 is analytical (slow) thinking: calculating, often statistical in nature. It’s high effort for humans and therefore needs to be budgeted appropriately, but it often corrects mistakes made by Type 1 thinking.

Funnily enough, the early days of AI research documented something called Moravec’s Paradox, which observes that computers have the complete opposite relationship to Type 1 and Type 2 thinking. For a computer, Type 2 thinking is easy; Type 1 thinking is hard and resource-intensive. Nearly all of machine learning and AI is Type 1 thinking. AI products therefore tend to focus on accelerating Type 1 thinking for human operators.

But Type 1 thinking is already fast, and I’m beginning to suspect that the line between beneficial AI products and the ones that create problems comes down to how much utility the user really gets out of making fast thinking faster. Web developers understand the law of diminishing returns almost as well as economists; it’s what governs the development of Service Level Objectives. There is a point where simply making a website faster doesn’t improve the user’s experience at all; it just spends money.

Similarly there is a point where making Type 1 thinking faster doesn’t actually offer the user any added benefit, but it does dramatically increase the odds of a critical error.
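To make the diminishing-returns point concrete, here is a toy sketch. The threshold and the benefit curve are my own invented illustration, not real SLO data: once a response already feels instant, further speed-ups buy the user nothing they can perceive, while the cost keeps accruing.

```python
# Toy illustration of diminishing returns on speed. The ~100 ms threshold and
# the benefit curve are invented for illustration, not drawn from real SLO data.

def perceived_benefit(latency_ms: float) -> float:
    """Rough stand-in for user-perceived responsiveness on a 0-1 scale.
    Below roughly 100 ms, most interactions already feel instantaneous."""
    if latency_ms <= 100:
        return 1.0
    return 100 / latency_ms

if __name__ == "__main__":
    for latency in (2000, 800, 300, 100, 50, 10):
        print(f"{latency:>5} ms -> perceived benefit {perceived_benefit(latency):.2f}")
    # The last three lines all print 1.00: paying to go from 100 ms to 10 ms
    # buys nothing the user can feel.
```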

If humans struggle with Type 2 thinking and excel at Type 1, and computers struggle with Type 1 thinking and excel at Type 2, and good decision making involves using Type 2 thinking to error-check Type 1 thinking… why are we building machines to do Type 1 thinking for us? Isn’t there much more utility in using computers to make slow thinking more resource-efficient than in making fast thinking faster?

Problem Selection

The more I explore the question of AI and ethics, the more I understand how critical problem selection is. The exact same technology can have dramatically different ramifications depending on how the problem it is solving is framed.

Consider two scenarios:

1. A police officer is trying to identify people in a photograph. AI isolates their faces and looks for matches in a facial recognition database.

2. A police officer has seized a hard drive with thousands of files on it. AI searches the files and prioritizes them based on faces of significance that might be present in them.

In the first scenario the computer attempts to do the Type 1 thinking for the human operator. Although some operators will examine the match carefully and critically to confirm the AI’s results, most will not. Most will do no thinking at all and simply assume that a match is a definite match.

The second scenario unblocks the human from doing the Type 1 thinking. In real life, the backlog of digital forensic evidence to be processed is often years long — so long that much forensic evidence never gets looked at at all, and the case just moves on without it. For all the power of modern computers, this work still involves a lot of manual searching by a human operator. Using AI to increase the efficiency of the process increases the critical thinking being done by the human in the loop rather than replacing it. Even if some important files are missed, you still get more than you would have without it.
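To make the contrast concrete, here is a minimal sketch of the two interface designs. It is my own illustration, not code from any real system: the file names and similarity scores are invented, and a real pipeline would get those scores from a face-recognition model. The point is only that one design hands the operator a conclusion while the other hands them a reading order.

```python
# Hypothetical sketch contrasting the two scenarios. Paths and scores are
# invented; a real system would compute similarity with a face-recognition model.

from dataclasses import dataclass

@dataclass
class EvidenceFile:
    path: str
    best_similarity: float  # best face-similarity score found in the file (0-1)

# Scenario 1: the system asserts an identity, automating the Type 1 judgment.
def assert_match(similarity: float, threshold: float = 0.8) -> str:
    """Returns a flat verdict the operator is likely to accept uncritically."""
    return "MATCH" if similarity >= threshold else "NO MATCH"

# Scenario 2: the system only reorders the backlog; the human still judges.
def prioritize_for_review(files: list[EvidenceFile]) -> list[EvidenceFile]:
    """Puts the most promising files first so the operator's time goes further."""
    return sorted(files, key=lambda f: f.best_similarity, reverse=True)

if __name__ == "__main__":
    backlog = [
        EvidenceFile("drive/img_0001.jpg", 0.41),
        EvidenceFile("drive/vid_0002.mp4", 0.63),
        EvidenceFile("drive/img_0003.jpg", 0.87),
    ]
    print(assert_match(0.87))                 # a conclusion: "MATCH"
    for f in prioritize_for_review(backlog):  # a reading order, not a verdict
        print(f.path, f.best_similarity)
```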

Here’s a non-hypothetical example: compare the notorious sentencing recommendation application COMPAS to a similar system called ESAS. On the surface, both technologies seem to solve the same problem: making recommendations on sentencing based on historical data. COMPAS attempts to distill lots of data down to simple conclusions that the user can disregard but cannot dig into or challenge. It considers everything from your parents’ criminal history, to the lifestyles of your friends, to your answers to personality questions. ESAS, on the other hand, focuses on just the case information. It looks for similar cases and lets the user easily find and explore the context around the sentences that resulted. What made one case worth a long prison sentence and another case with the same charge a shorter one?

COMPAS attempts to do the Type 1 thinking for the user, and because both the algorithms and the data used to create the recommendation are hidden, the Type 2 thinking that would check for Type 1 errors is blocked. Worse, COMPAS assigns a numerical value to its recommendations. Someone wasn’t just “high risk”; they were high risk on a numerical scale. One of the things we know about Type 1 thinking is that it is susceptible to anchoring. Give someone a high number, and even if they think that high number is wrong, the number they replace it with will be higher than the one they would otherwise have estimated.
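To illustrate the difference in what each style of interface hands the user, here is a rough sketch. It is not code from COMPAS or ESAS; the case data and the weighting are invented. One function collapses everything into a single number that cannot be interrogated; the other returns comparable cases whose context the user can actually read and challenge.

```python
# Hypothetical sketch of the two interface styles. The cases, weights and
# scores are invented; neither function reflects the real COMPAS or ESAS logic.

from dataclasses import dataclass

@dataclass
class Case:
    charge: str
    priors: int
    sentence_months: int
    context: str  # the narrative a human can interrogate

CASES = [
    Case("burglary", 0, 6,  "first offense, restitution paid"),
    Case("burglary", 2, 18, "repeat offense, weapon present"),
    Case("burglary", 1, 9,  "plea deal, cooperated with investigation"),
]

# COMPAS-style: distill everything into one number; the "why" stays hidden.
def opaque_risk_score(priors: int) -> int:
    return min(10, 3 + 2 * priors)  # hidden weighting, shown only as a score

# ESAS-style: surface comparable cases so the user can do the Type 2 checking.
def comparable_cases(charge: str, priors: int, k: int = 2) -> list[Case]:
    matches = [c for c in CASES if c.charge == charge]
    return sorted(matches, key=lambda c: abs(c.priors - priors))[:k]

if __name__ == "__main__":
    print(opaque_risk_score(priors=1))         # -> 5, with no way to ask why
    for c in comparable_cases("burglary", 1):  # -> full cases the user can weigh
        print(f"{c.sentence_months} months: {c.context}")
```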

Much has been made of the biases in the data that backs COMPAS, but in truth even if the data had been perfect, COMPAS would still have created bad outcomes. It automates error-prone Type 1 thinking, poisons the user’s judgment with an arbitrary anchoring value, and prevents Type 2 thinking from spotting problems.

Buried in COMPAS’s definition of the problem is also one monster of an unchallenged assumption: that a person at high risk of reoffending will be made less likely to reoffend by giving them a longer prison sentence. That the cause of reoffending is some character flaw that the prison system corrects. COMPAS does not consider that the relationship might actually be reversed: that people who spend more time in prison become disconnected from social support networks and are more likely to reoffend in order to survive.

This is the danger of replacing Type 1 thinking done by humans with Type 1 thinking done by computers. Computers can calculate a correlation, but they cannot construct a narrative around it that turns that correlation into actionable insight. Therefore even the best algorithms need human beings to consider the context of their results. AI that removes that context lives or dies by the accuracy of its model. AI that removes that context over several layers of abstraction carves a large blast radius into the Earth when it goes wrong.

On the other hand, AI that increases the speed and the number of opportunities with which human beings can apply both Type 1 and Type 2 thinking fares much better. Early trials with ESAS in Florida have shown that matching a case to a range of comparable cases and allowing users to explore their context reduced the overall length of sentences, sometimes considerably. The ESAS team estimated that just five criminal cases saved the state of Florida $1 million in the daily costs of incarceration.

Designing AI by Redistribution

The narrative around advancements in technology is usually about what gets replaced, but technology doesn’t actually replace; it redistributes. The time, energy, and money spent on one part of a process shift to another part of the same process. Adding a computer to something might remove a human being performing a manual process, but it replaces them with multiple human beings who build, deploy, and maintain the computer doing the work.

The impact and ultimate effectiveness of any product that uses AI, therefore, is determined not by which algorithms it uses but by how it redistributes human effort. Is it creating more opportunities for critical thinking, or encouraging more action with less thought and discussion? Software engineers who build AI need to pay attention to human-computer interaction even more than other programmers do. AI that does Type 1 thinking for the user and blocks Type 2 thinking typically leads to disastrous outcomes. AI that increases the opportunities for Type 1 thinking and encourages the user to apply Type 2 error checking to the machine’s Type 1 thinking tends to increase utility.
