The Human Side of Artificial Intelligence: Q&A with Osonde Osoba

Osoba explores our growing reliance on autonomous machines, and what that means for security and the future of work.

RAND
4 min read · Jun 21, 2018

Osonde Osoba in a RAND panel discussion in Pittsburgh, Pennsylvania, February 20, 2018. Photo by Lauren Skrabala/RAND Corporation

Osonde Osoba works at the leading edge of artificial intelligence (AI). His recent work explores society’s growing reliance on autonomous machines, and what it means for security and the future of work. He has also highlighted concerns that algorithms too often rely on historically biased data, preserving and extending those biases in everything from criminal sentencing to mortgage approvals to credit access. He was introduced as the “engineer of fairness” at a recent TEDx talk. Osoba grew up in Nigeria, earned his Ph.D. in electrical engineering at the University of Southern California, and came to RAND first as a summer associate in 2012 and then as an engineer in 2014.

You created an AI algorithm when you were 15?

It was a terrible idea. I was bored, and my mom sent me to this computer course at a local university. I learned to program for the first time; I learned logic for the first time. I figured maybe I could create a program that could parse arguments and get a sense of when they are cogent, valid, and persuasive. I underestimated how hard it is to do natural-language processing.

What was it about AI that caught your imagination?

It’s less about the intelligence and more about being able to capture how humans think. I wasn’t trying to create general intelligence; I was trying to better understand how people think about argument, what makes an argument. Artificial intelligence is a way of understanding what it means to be human.

What are the conversations we should be having now, as a society, that we’re not?

We need to think about AI in terms of value alignment; I think that’s a better framework than, say, fairness and bias. You create this algorithmic decisionmaking agent, and to what extent does it align with what the general population, or the culture, deems valuable and important? At the moment, most of the work on AI comes from a technical point of view, so it’s focused on making systems as precise and accurate as possible. But that’s not the primary objective of most human interactions with the world. There are countless other things to which we need to pay attention.

When you look 5–10 years out, what are we going to be talking about in terms of AI?

I think the concept of privacy will continue to change to reflect whatever technology we have. That doesn’t mean I think privacy is irrelevant or that it’s necessarily going to degrade over time. But we’re going to have to be more sophisticated in how we talk about privacy. Maybe we talk more about variety in the types of privacy that apply to different contexts.

Artificial intelligence and algorithms are only going to get more sophisticated, and so will their application to social media and information dissemination to achieve strategic ends. I think when we look back at what the U.S. intelligence community has concluded were Russian attempts to intervene in the 2016 presidential election, we’ll probably think those were child’s play. I would bet money that there’s going to be escalation on that front.

What level of intelligence would be required before we start thinking of autonomous systems less as this ‘other’ and more as parts of our society?

People are already having such conversations about, say, liability: If we treat these systems as entities unto themselves, that allows us to better consider how issues of liability play out. A fellow researcher tells a story about how, several years ago, Sony created an artificial pet called AIBO that people actually bought and kept as pets. At some point, Sony had to discontinue support for AIBO, and as these systems started dying or degrading, people actually began mourning the death of their AIBO. So you can imagine what happens when you extend that to even more intelligent systems.

What are you working on now?

I have two strands of work. I have the technical strand, trying to develop artificial intelligence for different types of scenarios, different problems. Right now, we’re trying to use artificial intelligence to improve planning. Then there’s the interface of artificial intelligence and society. We’re trying to understand what fairness means in different contexts. We’re looking at equity in algorithmic decisionmaking in insurance pricing, criminal justice, and vetting.

Like you said, bringing you to a better understanding of human beings.

We serve as sort of the proof of what’s possible, and so understanding how we act is going to be useful in understanding how to create better artificial intelligence systems. That’s the technical line of reasoning. But another reason to focus on human behavior is that we create these things as tools. They exist to be used within human society. And so we need to know how they interact with human society, with human norms, and that requires us to study human behavior and how we relate to AI.

This originally appeared on The RAND Blog on May 1, 2018.
