Meet Ayanna Howard, a professor, the director of the HumAnS lab, and the chair of the School of Interactive Computing at Georgia Tech. She spends her time mentoring students, researching ways to improve human-robot interactions, and speaking publicly about AI. As CTO of Zyrobotics, she’s committed to helping young students gain and develop confidence in STEM.
Her work in AI and robotics is vast and inspiring. Discover how her journey in robotics started, and learn about the steps she believes we must all take together to ensure a positive, inclusive future for AI.
We interviewed Ayanna as part of AI4ALL’s Role Models in AI series, where we feature the perspectives of people working in AI in a variety of ways. Check back here weekly for new interviews.
As told to Nicole Halmi of AI4ALL by Ayanna Howard; edited by Panchami Bhat
NH: You are a professor, the director of the HumAnS lab, and the chair of the School of Interactive Computing at Georgia Tech. What does a typical week look like for you? What kind of research or projects are you working on at the moment?
AH: A typical week falls into three different buckets — people, research, and education. Interacting with people involves talking with students, faculty, and alums about interactive computing and computing in general.
My research is focused on robotics, intelligence, autonomy, and getting robots to interact with humans in various ways. I’m primarily focused on rehabilitation, but also on the trust humans need to feel in order to be comfortable interacting with robots. I also work on issues associated with humans being over-reliant on robots.
The education and strategy part of my week involves my role in moving the community forward, which I do through educational and global speaking initiatives at conferences and with school groups.
You founded Zyrobotics, where you’re currently the CTO. Can you talk about what Zyrobotics does and why you decided to found the company?
Zyrobotics develops AI-powered STEM tools and learning games focused on early childhood education. As an academic, I know that there are a lot of students who could be trained in computer science and engineering but aren't. When you meet someone who says "I don't want to do engineering, I don't want to do computer science," you can usually trace it back to middle school, to a class where they didn't feel confident.
I wanted to make the language of computer science and STEM part of students' DNA from very early on. That way, a moment where a child loses confidence is just a bump they move past, not a turning point.
How did you decide to get degrees in engineering and electrical engineering? Were you interested in the field at a young age, or did you discover it in college? And how did you come to specialize in robotics?
As a child, I wanted to build a bionic woman. I didn’t realize you couldn’t actually do that at the time. I identified robotics as the one field that could allow me to do that, in terms of the components and the integration of the robot. It was the idea of using robotics to save the world, while still being a human, that interested me.
At that time, there wasn't a robotics degree as such. It seemed to me like if you wanted to do robotics, you went into engineering. The reason I chose Brown University [for my undergraduate degree] was because you didn't have to declare what type of engineer you were until junior or senior year, so you could just explore. Every single engineering student took exactly the same engineering courses for the first two years. The folks who taught me a robotics course and a computer vision course were electrical engineers, so I decided to declare an electrical engineering major.
By the time I went to grad school, robotics had started to evolve. Computer science was starting to “claim” robotics as a subfield.
The human side of my interest came about when I was doing courses in case-based reasoning. These were the early AI courses, which discussed data that was primarily coming from expert humans — humans who don’t necessarily know how to jot down their expertise.
You can’t think about the human as just a black box that gives you information. I think of the human as an essential component of the system itself.
What are some of the things people should be doing now to create a positive future for AI?
I think there are three elements to this. One is at the society level, one is at the researcher level, and one is at the consumer/user level. As a society, we should demand certain things from our systems. We demand things from our government officials, and we demand good, moral acts from corporations. I think that we also need to do the same thing as a society for AI. Some of this conversation is happening with AI and robotics, and we should maintain that.
As researchers, we need to start thinking beyond what we can do and start thinking about what we should do.
I know a lot of the time, we as roboticists just want to build cool stuff. When we were keeping our work within the lab, it was fine. Now, the whole world is looking at us as the experts. We need to be more responsible with what we do with our algorithms, and think more globally about it. We need to be thinking deliberately about our technology, be responsible about what we put out there, and make sure that it doesn’t exclude certain classes of people.
As consumers, we shouldn't blindly trust companies. It's the same with robots. If something a robot says doesn't sound quite right, meaning that if a human said it, we would question them, then we need to have the same response to that robot as we would to a human.
Who were your role models growing up? Do you have any role models now?
I had two teachers who were really influential. My fourth-grade teacher pulled my parents aside and said: “I think that your daughter can do more.” I sat in the back of the classroom and she would just feed me stuff to learn, and I was gobbling it up. Similarly, my physics teacher in high school pushed me to help teach others in my class how to program.
Now, I look at people like Grace Hopper, and I’m reading stories about Katherine Johnson, who I didn’t even know about growing up. I look at them and their history and I think, we can do so much now. If they accomplished so much in tech, and they were the only ones at the time, there’s no reason why we can’t do the same now.
You’ve said elsewhere that mentoring students is a priority for you. Why is this important to you?
Our words have power. As an adult or a researcher, we can turn a student down a wrong path by saying something that we think is really small but that, to them, is life-changing. To counter that, I feel it's my responsibility to mentor students and share my experiences with them. I'm honest about difficulties I've faced, and I reassure them that everything has a brighter day.
What advice do you have for young people who are interested in AI who might just be starting their career journeys?
Just pursue. Take online courses, and find opportunities in your communities or in your schools. If someone says no, just go find someone else — because eventually, someone’s going to say yes. Have confidence in yourself, even if you have to fake it. If you don’t feel confident about being in a class or pursuing AI, if it’s really something that you want to do, just do it and believe you can do it. At some point, your mindset will change.
What has been the proudest or most exciting moment in your work so far?
The proudest moments are when someone I don’t know is using something that I developed. Zyrobotics has a coding app for young kids and someone just tweeted to say, “oh we love this, thank you.” Those moments are exciting.
Ayanna Howard, Ph.D. is the Linda J. and Mark C. Smith Professor and Chair of the School of Interactive Computing in the College of Computing at the Georgia Institute of Technology. She also holds a faculty appointment in the School of Electrical and Computer Engineering. Dr. Howard's career focus is on intelligent technologies that must adapt to and function within a human-centered world. Her work, which encompasses advancements in artificial intelligence (AI), assistive technologies, and robotics, has resulted in over 200 peer-reviewed publications across a number of projects, from healthcare robots in the home to AI-powered STEM apps for children with diverse learning needs. Dr. Howard received her B.S. in Engineering from Brown University, and her M.S. and Ph.D. in Electrical Engineering from the University of Southern California. To date, her unique accomplishments have been highlighted through a number of awards and articles, including highlights in USA Today, Upscale, and TIME Magazine, as well as being recognized as one of the 23 most powerful women engineers in the world by Business Insider.
In 2013, she founded Zyrobotics, which is currently licensing technology derived from her research and has released their first suite of STEM educational products to engage children of all abilities. Prior to Georgia Tech, Dr. Howard was a senior robotics researcher at NASA’s Jet Propulsion Laboratory. She has also served as the Associate Director of Research for the Institute for Robotics and Intelligent Machines, Chair of the Robotics Ph.D. program, and the Associate Chair for Faculty Development in the School of Electrical and Computer Engineering at Georgia Tech.
Learn more about Zyrobotics here.
Follow along with AI4ALL’s Role Models in AI series on Twitter and Facebook at #rolemodelsinAI. We’ll be publishing a new interview with an AI expert every week. The experts we feature are working in AI in a variety of roles and have taken a variety of paths to get there. They bring to life the importance of including a diversity of voices in the development and use of AI.