Dr. Anca Dragan and Dr. Fei-Fei Li at SAILORS 2015 / credit: Lauren Yang

The future of AI needs to have more people in it

Guest post by Dr. Anca Dragan, Assistant Professor at UC Berkeley and Lead for AI4ALL’s UC Berkeley program, BAIR Camp

AI systems need to help humans and humanity. I believe that for them to do that well, we need a new definition of AI — one that takes humans and humanity into account explicitly. That’s why I’m excited to share that I’m leading the upcoming AI4ALL education program at UC Berkeley, BAIR Camp, where high school students will explore human-centered — or humanistic — AI.

Creating AI with Humans in Mind

So far, much of the work done in artificial intelligence has been about creating an agent (such as a robot) that acts in the world to optimize a given reward function (which tells the agent how well it’s doing). This definition of AI presents a few problems.
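This standard formulation can be sketched in a few lines of Python: a designer writes down a reward function ahead of time, and the agent simply picks whatever action scores best under it. The grid world, goal, and reward below are toy illustrations of the formulation, not any real system.

```python
# Minimal sketch of the classic AI formulation: an agent acting to
# maximize a designer-supplied reward function. The world, actions,
# and reward are illustrative toys.

GOAL = (2, 2)  # the designer hard-codes what "doing well" means

def reward(state):
    """Designer-written reward: closer to the goal is better."""
    x, y = state
    return -(abs(x - GOAL[0]) + abs(y - GOAL[1]))

ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def step(state, action):
    dx, dy = ACTIONS[action]
    return (state[0] + dx, state[1] + dy)

def greedy_policy(state):
    """Pick the action whose next state maximizes the given reward."""
    return max(ACTIONS, key=lambda a: reward(step(state, a)))

# Run the agent from the origin until it reaches the goal.
state = (0, 0)
trajectory = [state]
while state != GOAL:
    state = step(state, greedy_policy(state))
    trajectory.append(state)
```

Note that nothing in this loop knows or cares whether the reward function actually captures what people want; the agent just optimizes whatever it was given, which is exactly the problem the next two points raise.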

First: where are the people?

Though we’re building these agents to help and support humans, we haven’t been very good at telling these agents how humans actually factor in. We make them treat people like any other part of the world. For instance, autonomous cars treat pedestrians, human-driven vehicles, rolling balls, and plastic bags blowing down the street as moving obstacles to be avoided. But people are not just a regular part of the world. They are people! And as people (unlike balls or plastic bags), they act according to decisions that they make. AI agents need to explicitly understand and account for these decisions in order to actually do well.

When autonomous cars (orange) account for human-driven vehicles (white), they start being able to coordinate with them. They back up at an intersection to encourage the human to go through, or inch forward to test whether the human is going to let them go.

Second: how do we figure out what an agent’s reward function should be?

In other words, how do we tell a robot what it should strive to achieve? As researchers, we assume we’ll just be able to write a suitable reward function for a given problem. This leads to unexpected side effects, though, as the agent gets better at optimizing for the reward function, especially if the reward function doesn’t fully account for the needs of the people the robot is helping. What we really want is for these agents to optimize for whatever is best for people. To do this, we can’t have a single AI researcher designate a reward function ahead of time and take that for granted. Instead, the agent needs to work interactively with people to figure out what the right reward function is.

We need the next generation of AI researchers to think in this new people-focused mindset.

This is where AI4ALL comes in. AI4ALL is a fantastic vessel for teaching the next generation of students about AI in a human-centered way from the get-go. When students first learn about AI, they should already be learning about it in the context of the people that AI is trying to support. Supporting people is not an afterthought for AI; it is the goal.

To do this well, I believe this next generation should be more diverse than the current one. I actually wonder to what extent it was the lack of diversity in mindsets and backgrounds that got us on a non-human-centered track for AI in the first place.

AI4ALL + Berkeley AI Research Camp

I am working with AI4ALL to create an educational program at UC Berkeley, as part of the Berkeley Artificial Intelligence Research (BAIR) Lab, the Center for Human-Compatible AI, and the CITRIS Center for People and Robots. The BAIR Camp program will give a diverse group of high school students exposure to Computer Science and AI, introducing these areas with a humanistic focus from the start. BAIR Camp is designed for early high school students, particularly those from low-income households. At BAIR Camp, we’re working to create an environment where students come away not just with an idea of what AI is and an interest in pursuing it as a career, but also a clear idea of what steps to take now in order to get there.

Please stay tuned for more information about BAIR Camp and how you can apply.

Anca at SAILORS 2015 / credit: Lauren Yang

About Anca

Anca Dragan is an assistant professor in the EECS Department at UC Berkeley, where she runs the InterACT Lab, helps steer the Berkeley AI Research (BAIR) Lab, and is co-PI of the Center for Human-Compatible AI. Her goal is to enable robots to work with, around, and in support of people. After teaching at SAILORS, Anca is now the lead for AI4ALL’s UC Berkeley education program, which will work to introduce high-potential, low-income high schoolers to humanistic AI.