Intel’s Anna Bethke: “Why you need to make sure that everyone is heard in a meeting”

Published in Authority Magazine · 13 min read · Mar 18, 2019


As part of my series about the women leading the Artificial Intelligence industry, I had the pleasure of interviewing Anna Bethke, the Head of AI for Social Good at Intel, where she is establishing partnerships with social impact organizations, enabling their missions with Intel’s technologies and AI expertise. She is also actively involved in the AI Ethics discussion, collaborating on research surrounding the design of fair, transparent, ethical, and accessible AI systems. In her previous role as a deep learning data scientist she was a member of the Intel AI Lab, developing deep learning NLP algorithms as part of the NLP Architect open source repository.

Thank you so much for joining us Anna. Can you share with us the “backstory” of how you decided to pursue this career path?

I have always been interested in how things work. As a child, I would take apart our old rotary phones or cassette players, then try to put them back together — we had a box of old hardware for me to play with. My dad is a climate scientist, studying the earth’s atmosphere via satellite measurements — so I was surrounded by math and science growing up, even getting to see a shuttle launch when I was in second grade. When I was choosing what to major in during college, I took my love of engineering, math and science and combined that with my love of space and chose aerospace engineering. At MIT, my specialization became human factors engineering — the study of how people interact with technology. It taught me a lot about human-computer interaction, statistics and programming. These building blocks really helped me launch into data analytics and data science. My most recent career change to heading our AI (artificial intelligence) for Social Good program came from this technical background and a deep desire to use my skills in a positive, impactful way. I’ve been a volunteer data scientist with Delta Analytics (a local San Francisco group that connects volunteers with social impact organizations), and wanted to transform my role into one that gives back. There are so many small and large things we can do that can make a very real and large impact on individuals, groups or our planet. I’ve been grateful for the support of my managers and co-workers in allowing me to build out this program.

What lessons can others learn from your story?

One of the biggest lessons I have learned is that roles, goals and methods are constantly evolving. I’ve found that I constantly need to learn and adapt to stay engaged and up-to-date. It is important to do something that you are passionate about, be it a small part of your daily tasks or your larger goals. I had heard stories that you can make your own role, but had never seen it done in practice, so I was floored when I received unanimous support to create the AI for Social Good program at Intel. Asking for a role that I created was (and still is) scary, as there is no template or precedent, but it has been rewarding.

Can you tell our readers about the most interesting projects you are working on now?

One of my largest tasks over the last few months has been learning about the projects we already have established at Intel, and creating new connections with organizations or individuals. I’ve learned about Intel’s accessibility group, the AI Academy, the Software Innovator programs that help support researchers and entrepreneurs, our AI Builders group that supports start-up companies, as well as the many projects we have done over the years. One of the projects I’m reviving is identifying and fostering healthy online conversations. This is a project that we did in the past, called Hack Harassment, with Vox and the Lady Gaga Foundation. We are working with researchers at the University of California Santa Barbara to continue the research and suggest strategies to defuse hateful speech.

None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful towards who helped get you to where you are? Can you share a story about that?

There are so many people that I am grateful to. I think I will always remember and admire my first mentor and project lead, Laura Kennedy. She taught me so much about SQL, Java, data science and creative problem-solving by giving me tasks that were challenges, yet manageable, and always encouraging my ideas and successes, as well as helping me troubleshoot any issues that came along. No matter how busy she was, she always had time for me. I’ve had so many wonderful role models, colleagues and friends through the years who are there for me, but Laura certainly has played a huge role in my career.

What are the 5 things that most excite you about the AI industry? Why?

1. The span of areas that it can be applied to. We are seeing artificial intelligence being used in an increasing variety of tasks and areas — it has really transitioned from just being in a lab to being increasingly useful.

2. Its accessibility. I think it is getting easier and easier to learn and create AI systems. There are a lot of groups and individuals who have been working tirelessly to make sure algorithms and tools are placed in the open domain, and teach everyone about AI, regardless of where they are located. Some of the most inspiring applications are coming from 13- to 17-year-old students like the Timeless app by Emma Yang (a phone app to help her grandmother who is struggling with Alzheimer’s).

3. The number of people interested in the topic. There is such an enthusiasm for the field, its research and its application. It is really interesting to hear what everyone is doing with AI and what they are striving to do. Conferences like NeurIPS have exploded in size and scope.

4. Its complexity. There is a lot more room for growth in this area in terms of accuracy, complexity, speed and the number of things we can apply it to. We have just scratched the surface of determining how various AI algorithms work, and how we can make them better. It is a very interesting and rewarding time to be in AI.

5. Its positive impact. AI can be used to help park rangers identify poachers, allow doctors to discover new disease-fighting drugs, researchers to study the health and diversity of our plants and animals, and more. It can be used as such a helpful tool to aid and accelerate people’s jobs as they try to improve the overall well-being of the world.

What are the 5 things that concern you about the AI industry? Why?

1. Diversity in opinions. The AI workforce is not very diverse, particularly in our backgrounds. It would be useful to work alongside more social scientists, policymakers, teachers and others to get a large diversity of perspectives. There is certainly a diversity divide in terms of gender and race as well, but we don’t speak about perspective diversity as often, and it should also be considered. The more diverse our teams, the better our AI systems will be able to positively impact a diverse set of individuals and use cases.

2. Harassment. I’ve seen too many women (and some men) online and offline shot down for their views or their role in the industry in very negative and inappropriate ways. It is an issue that spans beyond the AI industry too, of course, but it does make being a more influential female in this area a very scary thing sometimes.

3. Trying to solve everything with technology. It is a tendency that many of us, including me, can fall into. In essence, we want to write an algorithm to solve the problem, but in many cases an algorithm isn’t the only aspect needed. For instance, in the healthy conversation example I mentioned above, we can certainly create an AI to flag inappropriate content and potentially even suggest statements to make to defuse a harmful conversation. But at the end of the day what is most helpful is to raise the norm in the online communities that this type of language is not OK and that it is quite harmful. It is really a combination of AI and discussions that will provide the largest long-term benefit.

4. Isolation/silos. Being a former data scientist, I have seen how easy it is to stay in your own little bubble. I would sometimes go weeks without really talking to anyone beyond my team if I was focused on a problem. When I was writing code, I could get into a zone where only the code really mattered and I mostly thought about the ways that I could improve the algorithm or program. But keeping connections outside of the problem is helpful, as others can see the issue in a different light, identify new issues or may, in turn, need a pair of fresh eyes on their own issue as well. Continually reaching outside of your bubble fosters diverse perspectives, which is important.

5. Harmful use. Instead of creating a bot to help defuse harmful conversation, someone can also create a bot to instigate, promote or encourage harmful conversation. It can even be the same algorithm at the core, but simply with a different goal.

As you know, there is an ongoing debate between prominent scientists (personified as a debate between Elon Musk and Mark Zuckerberg) about whether advanced AI has the future potential to pose a danger to humanity. What is your position about this?

I think there is a larger danger from people using AI in a way that is dangerous to humanity than from the AI itself. Just like a hammer or any tool can be used both for good, like making a bird house, and harm, like smashing a vase, AI can also be used for both. While there may be a potential for an AI entity to learn to a point that it becomes self-aware and malicious, this will likely be extremely difficult to do. Individuals have already created malicious AI by creating programs to carry out denial of service attacks, harass individuals on social media and more. While we shouldn’t simply sweep the possibility of a self-aware malevolent AI under the rug, we should be having a conversation about maliciously created AI more often.

What can be done to prevent such concerns from materializing? And what can be done to assure the public that there is nothing to be concerned about?

Discussions around this topic are key, both to hypothesize about any harmful results from AI systems, to determine ways to mitigate or stop these negative outcomes, and to inform the public about how and when the systems will be used. The more transparency we have for technology as it is developed and implemented, the more everyone can be informed and have a voice to raise any concerns. It’s important to keep human judgement in the loop at some level, from the design, testing and implementation of a system, to correcting any adverse behavior either immediately or in batch upgrades as necessary.

How have you used your success to bring goodness to the world? Can you share a story?

One interesting phenomenon is that people are talking a lot more about AI for social good. I’ve heard so many really interesting ideas from colleagues and partners. I think the more we talk about the ways we can really help each other, the more we want to do this. For example, with the HOOBOX Robotics facial gesture-recognition-enabled wheelchair, a number of individuals came up to us with stories of brothers, daughters or loved ones who could benefit from a system that allows them to command their wheelchair in a more user-friendly manner. Their joy and hope at seeing that we were helping people have more choices has been so wonderful to see.

As you know, there are not that many women in your industry. Can you share 3 things that you would advise other women in the AI space to do in order to thrive?

I feel like one of the most important steps is to find people that you are comfortable around, be it at work, like-minded friends, or through meetup groups. I really appreciate groups like WiMLDS, Women Who Code, GirlGeek and other similar groups that help balance the diversity of people I may see. These groups are also lovely ways to support each other and gain some practice in presenting (either formally or informally) about what you are passionate about. I’ve gone through phases of wanting to be very “girly” and phases of wanting to just be “one of the boys,” and it is something that I go back and forth on. And while it may be cliché, I’ve been starting to learn to just be myself: knowing that I don’t have to wear a skirt or dress onstage to be seen as a woman leader, but also that I don’t have to wear a suit to a formal meeting to “blend in” better. Oh yeah, and this can totally change as well. It may seem like a little thing to some, or a big thing to others, but for me it has been an interesting process.

Can you advise what is needed to engage more women into the AI industry?

For women currently in the AI industry, this includes treating and paying all individuals equally: making sure that everyone is heard in a meeting and has the appropriate tasks, job titles and pay bands. It also means ensuring that everyone feels included when group activities are planned, as not everyone is interested in a happy hour, or can attend one. Some of us like decorating pumpkins during tea time (a super fun activity I’ve done with a previous team). For women coming into the field, it is ensuring that they receive the same support, education and encouragement. We still hear too many stories of middle school girls being asked if they are lost when they come into a computer class, and of graduate students who are told they will never be able to earn their PhD. For those in transition, we should assess our processes and make sure we don’t have any hiring biases, which are often unconscious. Women are more likely to respond to certain types of job descriptions based on the skill set and terminology used, and we may hold women and men candidates to different standards. If we are using AI as part of the resume screen, these algorithms may carry over unintended biases. Basically, it’s important to look for these potential biases in ourselves, in our algorithms, and in the full end-to-end hiring and retention process.

What is your favorite “Life Lesson Quote”? Can you share a story of how that had relevance to your own life?

I tried to think of a better one … but whenever I think about the movie “Newsies,” there is a song that goes: “Open the gates and seize the day, don’t be afraid and don’t delay. Nothing can break us, no one can make us give our rights away. Arise and seize the day.” I think this quote/song has always been empowering and inspiring to me (the short version “Open the gates and seize the day” was my senior year quote in high school). It has encouraged me to seek those things that I am passionate about, like using AI for impactful purposes. It makes me feel like I can do anything that I set my heart and mind to, and overall that has been the case. I have certainly had algorithms that failed or projects that didn’t work out as I intended, but these “failures” don’t mean that I have to stop trying. It isn’t so much about fighting an oppressive force as the newsboys are doing in the movie, but to seek what I think is good and right, and continue to pursue those efforts.

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. :-)

I would love to see everyone giving back, volunteering time, skills or finances as they may be available or necessary. Even if it is for an hour a week or month, it can mean a very real difference for someone. Groups like Delta Analytics, Data for Democracy, DataKind and so many others are great if you are looking to donate your data science or software engineering skills. I’ve also loved to see the Viz for Social Good community grow and thrive. One of the activities I am very grateful that I did as a high school student was spend a weekend each year in downtown Denver, helping homeless shelters distribute food and resources. Finding the time to volunteer isn’t always easy, but it helps us gain new perspectives on other people. I also think we could all use a bit more kindness and compassion in the world. These last two thoughts probably play into each other, but the more we are able to humanize other groups and viewpoints, the easier it is to see that most of us are just trying to make sense of the world and do our best.

How can our readers follow you on social media?

@data_beth on Twitter

Thank you so much for joining us!
