Following is an interview with Tess Posner, Executive Director of AI4ALL. Tess will be discussing the importance of incorporating diverse and inclusive perspectives as the future of artificial intelligence is developed at the upcoming Future Labs AI Summit.
Some obvious cases of discrimination in building AI technology are in the form of implicit gender, racial, and other biases. What are other aspects of the technology in which discrimination is prevalent but maybe not as widely apparent to most people?
Our focus is on inclusion and diversity in artificial intelligence (AI). Right now, AI is really being created by a homogenous group of technologists. When AI applications are being created, they can reflect and sometimes amplify the bias of those people building them, as recent studies have shown.
At AI4ALL, we’re trying to increase access to the tools, building blocks, policy efforts, and research to try to prevent bias as this technology develops. Often, media can portray AI in frightening ways: limiting, obtuse, threatening, and exclusive. We’re trying to change that.
What are the societal ramifications when diversity isn’t prioritized in technologies like AI? For example, companies are hiring hundreds of thousands of people to train AI software. Is anyone tracking who the trainers are? Does it make a difference if it’s not a diverse pool?
Overall, I’d say there’s less awareness of this issue. I think you see this happening on the company level with initiatives like implicit bias training, for example.
It’s important to help individual developers recognize biases in their training sets and see how those biases may reflect their own implicit assumptions. For example, there’s a clear track record of giving AI personal assistants female names (Siri, Alexa) and giving expert AI systems male names (Watson).
AI4ALL has emphasized the importance of building AI technologies with humans in mind. What are some tactical ways this can be done?
We think there needs to be much more of a focus on humanistic AI, or AI that’s relevant to societal good. This means a paradigm shift toward how AI can be applied to solve human problems and better serve society broadly.
One way to do this is to ensure that diverse individuals are designing and implementing AI applications for a more complex society and that they’re able to anticipate and solve for blind spots in the models they’re building. This also requires that organizations, schools, and policymakers dedicate resources to develop systems that will evaluate any unintended consequences or harm brought on by AI.
Rather than perpetuate some of the widespread fear-mongering about AI, we think there needs to be a focus on how the technology can complement and augment human capabilities with more of an emphasis on humanistic applications.
Can you touch on the importance of reaching underrepresented kids in high school versus more of a focus on university students or even students in junior high?
Recent research out of Microsoft showed that girls lose interest in STEM curricula around age 15. A key barrier for girls and for underrepresented students more broadly is that they lack both a peer community and role models that look like them. So if you’re losing interest or feeling left out by 15, it will impact your ability to get involved in the field later on in life. That said, it’s absolutely never too late to start — you can get involved in the space or teach yourself new skills any time — but our focus is on intervening and building a support network early on.
By focusing on high school students, we’re reaching them at a crucial time when they may be starting to lose interest in STEM. We’re also able to show how AI can be applied to real-world situations rather than teaching abstract skills.
What does this relationship with students mean for the future of jobs in AI?
There is a talent crisis in the number of STEM graduates in the U.S. We have an average of 50,000 computer science bachelor’s graduates per year, but there will be demand for well over 1 million computer science jobs by 2020. It’s clear that we need to get more people into the pipeline to fill these jobs and, importantly, to build our economy and compete on a global scale. Plus, STEM jobs typically pay 50% more than the average professional career.
It’s also important to remember that the types of skills that are in demand today will change. We know that we can’t train people for a static set of jobs because we know those jobs and the skills needed to master them will evolve. These skills — which we refer to as “future-proof” — are areas like problem solving, creativity, and collaboration, which are all important for any type of job regardless of how technology changes.
How do partner universities work with AI4ALL on your initiatives?
One of the biggest initiatives we partner on with universities is a summer camp where we bring students into their AI labs and expose them to everything from basic coding to more humanistic AI projects focused on solving real-world, societal problems.
It’s so important to provide these high school students with access to the labs, graduate students, and resources within the universities. They get exposure to what it’s like being on a college campus and become immersed in college life. One of our partners, UC Berkeley, works with us on bringing kids into their robotics lab, where cutting-edge research is happening right now. This is all very real, practical stuff.
Are there any industries using AI in particular that have the most work to do in terms of increasing diversity and overcoming bias?
This is a broad problem at the intersection of so many areas, not just one industry. Our mission is not just getting people into AI development but also into policymaking that will shape and govern AI development from a national and international perspective. We choose to take a broad approach by fostering those who will be teaching the next generation of technology and AI developers.
Tell us about some of the most important initiatives at AI4ALL.
The aforementioned summer camps with our university partners are definitely one of our most hands-on initiatives for the students. We also have a very strong alumni program, where we support our summer camp alumni on their career trajectory by connecting them with role models, mentors, peers, and current professionals working directly in AI. They get a chance to work on projects and learn a lot through these connections.
We also take our students and alumni on many field trips to companies using AI so they can see what it’s like to work in specific roles and have a better sense of the opportunities out there.
Where do you see the state of diversity in AI moving over the next year?
I’m an optimist, and I believe if we are proactive and if we address issues in diversity now, we can make a real impact. There is definitely an urgency around this now, given the ubiquity of AI in our lives. With AI having more direct impacts on everyone’s lives, we cannot afford to have such a lack of diversity among those creating it.
How important is access to technology generally when we think about solving the diversity gap here?
This is something we’re really thinking about: access to education and closing the digital divide is going to be crucial. Only 40% of U.S. high schools teach programming, and the quality of the courses varies greatly. Many people may not realize that only about half of U.S. households have internet access. In addition, low-bandwidth connections prevent people from taking online courses or engaging in the kind of training that requires more sophisticated internet capacity. Lack of access keeps you from being an active consumer of AI and its associated technologies, and it keeps you from participating in the discussion.