Role Models in AI: Ece Kamar

AI4ALL Team
Published in AI4ALL
7 min read · Apr 25, 2018

Meet Ece Kamar, a senior researcher at Microsoft who works on human-machine collaboration, AI systems in the real world, and issues around bias, robustness, reliability, and transparency in AI. Ece also co-authored the first report in a 100-year study of artificial intelligence, intended to provide a set of reflections about the field as it progresses. The report offers insights on where AI is headed, policy recommendations, and the importance of reflecting on fairness and transparency in the field.

Ece believes it’s unlikely that important tasks will ever be fully automated, because human-AI partnerships will be complementary rather than a matter of replacement. Read on to see how Ece envisions the future of AI, how her academic exploration in college helped shape her career, and why she sees diversity as key to moving the field in a positive direction.

We interviewed Ece as part of AI4ALL’s Role Models in AI series, where we feature the perspectives of people working in AI in a variety of ways. Check back here weekly for new interviews.

As told to Nicole Halmi of AI4ALL by Ece Kamar; edited by Panchami Bhat

NH: Can you describe what you do as a senior researcher in the Adaptive Systems and Interaction Group at Microsoft? What does a typical day look like for you? What kind of projects are you working on right now?

EK: In my group at Microsoft, I have complete freedom in terms of what I want to focus my research on. I’m very interested in how AI works in the open world. So far, we in the field have been getting a lot of great numbers in laboratory settings about the accuracy of different AI systems for different tasks, but getting AI to work the same way in the open world with real people is a very different ball game.

I think we’re coming to an inflection point in AI.

It is not a question of “can we build AI,” as we have a lot of successful examples of AI systems. Instead, there are questions of, “how should these systems be built and deployed? How should they be partnering with people?”

You’ve also done a lot of work looking at how humans and machines can collaborate.

My work is influenced by the fact that people and machines have complementary abilities. AI systems work well when you have lots of data, the task is well-defined and repetitive, and the system is in a closed environment without a lot of unknowns. However, when you move beyond this and either try to adapt a system to a new domain or try to use it in the open world, these algorithms face a lot of challenges.

People don’t do much statistical reasoning, we aren’t always consistent, and we have biases. However, we have common sense reasoning, counterfactual reasoning, and creativity, and we’re good at adapting to new settings and combining our knowledge from different domains.

If our focus is only on automating what already exists in the world, and not on the complementarity of tasks, we won’t be utilizing the true power of human-machine partnership.

The combination of humans and machines will be more effective, efficient, and reliable than either is on its own.

What are some of the important things people should be doing to create a positive, inclusive, and ethical future for AI?

One problem that I work on is what I call the “blind spots” of artificial intelligence. There’s a common assumption that the training data is perfect, but this isn’t true. For example, face recognition systems fail to recognize dark-skinned people because the training datasets didn’t include enough representation of certain groups, and speech recognizers don’t work as well for the elderly or for children.

These blind spots are due to representation problems in a model’s training data: situations the model never sees at training time but that do occur in the real world. This creates a big reliability problem.

When blind spot errors are concentrated around subgroups of people, they create biases that AI systems can learn and amplify. We have to work really hard to address these issues, not only with technical tools but also with our value judgments.
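The interview doesn’t include code, but to make the blind-spot mechanism concrete, here is a minimal, purely illustrative sketch: a synthetic classification task in which one group is underrepresented in the training data, so the learned model does well on the majority group and poorly on the minority group. The groups, features, and model choice (scikit-learn’s LogisticRegression) are assumptions for illustration only, not anything from Ece’s work.

```python
# Illustrative sketch only (not from the interview): how underrepresentation
# of a subgroup in training data can concentrate errors on that subgroup.
# Assumes numpy and scikit-learn; the groups, features, and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Simple two-feature task whose true decision boundary depends on the
    group's `shift`, mimicking a distribution difference between groups."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training set: the majority group dominates; the minority group is underrepresented.
X_maj, y_maj = make_group(5000, shift=0.0)
X_min, y_min = make_group(100, shift=2.0)
model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

# Fresh evaluation data per group: aggregate accuracy can look fine while the
# underrepresented group sits in a "blind spot" with much lower accuracy.
for name, shift in [("majority", 0.0), ("minority", 2.0)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.3f}")
```

Running this typically shows high accuracy for the majority group and near-chance accuracy for the minority group, which is the kind of subgroup-concentrated error the interview describes.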

How did you decide to get a bachelor’s in computer science and engineering? Were you interested in the field at a young age or did you discover it in college? And how did you come to focus on artificial intelligence for your Ph.D. research?

I was born and raised in Turkey. In Turkey, we have a school system where you take a nationwide exam at the end of high school and choose your university and your major at the same time. That was a really hard decision for me because I didn’t know which fields I would be successful in. I didn’t think I would be good at computer science because I thought it was for people who really liked games, and I didn’t even have a computer at the time.

I decided to go to a new school in Turkey that allowed you to declare your major later. I took my first programming class there and enjoyed it because I love puzzles and solving problems from scratch. Our class was 50% girls and 50% boys. Though a few people had a bit more experience, it was mostly a level playing field. I was never intimidated by the field because of that atmosphere.

I was always very interested in language and one of my professors in college was studying computational linguistics. Because I was doing well in my classes, he offered me a summer research project. I started my AI journey working with him on this language project.

It sounds like your undergrad computer science experience was very different from many undergrad CS classes in the US. Do you think there was a reason for the gender balance in your classes?

Having the freedom to try different classes at my college really helped people explore subjects without any limitations or stigma.

A lot of computer science problems are practical and could have real-world impact, and I see that women care about the impact of technology on the world. I cannot imagine girls not enjoying these beautiful problems if they are given the opportunity and support to explore these areas.

When I got to grad school in the United States, I realized just how large the gender disparity was. It was an interesting cultural shift for me.

Who were your role models growing up? Do you have any role models now?

My biggest mentor and role model is my Ph.D. advisor, Barbara Grosz. She’s one of the pioneering women in computer science and has worked so hard both to advance the field of AI and to promote women in the sciences.

She’s always going for the problem that she thinks she can make the biggest impact on, even if it’s very hard. Working with her inspired me to become more involved in the field of human-machine collaboration and decision-making. I feel very lucky to have her as a mentor.

What has been the proudest or most exciting moment in your work so far?

I was recently invited to give big talks in Turkey where I got to meet a lot of people and tell them about my research and perspective on AI. Having the opportunity to speak in my home country about what I’ve accomplished was a lot of fun.

I’m also very proud to see students I’ve mentored become professors at great places and get recognized for their work.

What advice do you have for young people who are interested in AI who might just be starting their career journeys?

I think that our field has a lot of big, open problems that we’re just starting to dig into. These problems are going to have really big consequences for the way we live and for the society we live in. There will be a lot of opportunities for our work to make an impact, and I would recommend that people explore these issues without any hesitation.

Some of the issues we’re seeing in AI systems today have deep connections to the diversity problems in the field. Having diversity in our field is not only a matter of culture. It is also a matter of the reliability, robustness, and fairness of the systems we build. It is of practical importance to us that everybody’s voice gets represented in the systems we build.

About Ece Kamar

Ece Kamar is a Senior Researcher in the Adaptive Systems and Interaction Group at Microsoft Research. Ece received her Ph.D. in computer science from Harvard University in 2010. Her research is inspired by real-world applications that can benefit from the complementary abilities of people and AI. Since many real-world problems require interdisciplinary solutions, her work spans several subfields of AI, including planning, machine learning, multi-agent systems, and human-computer teamwork. She is passionate about investigating the impact of AI on society and studying ways to develop AI systems that are reliable, unbiased, and trustworthy. She has over 40 peer-reviewed publications at top AI and HCI venues and served on the first Study Panel of Stanford’s 100 Year Study of AI (AI100).

Follow along with AI4ALL’s Role Models in AI series on Twitter and Facebook at #rolemodelsinAI. We’ll be publishing a new interview with an AI expert every week this winter. The experts we feature are working in AI in a variety of roles and have taken a variety of paths to get there. They bring to life the importance of including a diversity of voices in the development and use of AI.


AI4ALL is a US nonprofit working to increase diversity and inclusion in artificial intelligence.