Watch MIT Technology Review’s Karen Hao on Infinia ML’s Machine Meets World.

MIT Technology Review’s Karen Hao on The Urgency of AI Ethics

Join Machine Meets World, Infinia ML’s ongoing conversation about AI

James Kotecki
Aug 4 · 20 min read

Episode Highlights

This week’s guest is Karen Hao, Senior AI Reporter for MIT Technology Review.

“It definitely feels urgent to have these conversations to constrain the way that we use AI, because the slower you are the more people that are harmed in the process. But it doesn’t necessarily mean that [AI decisions] are entirely irreversible if we wait too long.”

“…ethics is so nebulous that it feels like everyone has a different set of ethics and how do we actually achieve any kind of consensus and some people would argue that because of that, we should not be specifically just talking about ethics. We should use a lens that’s a bit more universal like human rights. We actually have international human rights law that people have already formed consensus around.

“But other people would argue that because different regions, different cultures, different groups have different ethics they should actually just design AI systems or their technology or whatever differently to suit their needs …one of the tensions within the AI ethics space is do we actually need a universal framework or can we have localized frameworks?”

We asked Karen some quick questions about artificial general intelligence — the idea that someday, AI could be generally smart in the way that people are.

Watch the show above. You can also hear Machine Meets World as a podcast, join the email list, and contact the show.

And be sure to check out Karen’s AI newsletter, The Algorithm.

Audio + Transcript

James Kotecki:
Hey, and we are live. I am James Kotecki from Infinia ML and this is Machine Meets World. We’re talking artificial intelligence today with my guest, MIT Technology Review Senior AI Reporter, Karen Hao. Welcome, Karen.

Karen Hao:
Thank you so much for having me, James.

James Kotecki:
It’s great to talk to you again and one thing I didn’t ask you when we chatted before and just to kind of set the stage, why are you on this beat? Why do you like talking and covering artificial intelligence?

Karen Hao:
So I started engineering undergrad and I originally studied mechanical engineering actually, and the reason I was drawn to engineering and technology was because I thought it was the best way to actually make change in the world. And then at some point I realized that technology is not always a force for good. It does change the world significantly but there’s a lot more that goes into it than just magical pixie dust that makes everything better. And so I started going into journalism to cover that and the thing that I love about AI, specifically, is that it’s just so broad and it cuts across so many different industries. It really epitomizes a lot of the challenges that most technologies face so it kind of just gives me a wide lens or like a playground, so to speak, for really exploring all the different issues that I like exploring.

James Kotecki:
And why journalism as a way to approach it? Why not be in a lab tinkering with AI yourself? I actually know you do have a technical background, right? You used to be at a company that was somehow associated with Google X Labs?

Karen Hao:
Yeah, yeah. It was the first startup to spin out of Google X. To be honest, it comes down to my personality. I really love talking with lots of different people. I love the really quick feedback loop that happens in journalism. You learn, you write an article, you get published, you get the feedback immediately. You don’t get that in a startup and you certainly don’t get that in a research lab. And I think also because my interests are so broad it’s hard for me to focus on one particular thing. I know for people who aren’t in the AI world AI may seem like a really narrow focus but for me, it’s so expansive that it really allows me to go in a billion different directions and never get tired of learning.

James Kotecki:
Awesome. Well, I really appreciate talking to you and I want to focus this conversation, at least part of it, on AI ethics, which seems like it’s an increasingly hot issue. I have sensed that in the last couple of years the conversation has really shifted and a lot of that has kind of bubbled up. But to start with, how do we, or how do you, define AI ethics? It seems like the term is just increasingly thrown around but there are maybe a lot of different ways of thinking about it.

Karen Hao:
There are a lot of different ways. I was actually moderating an event last night. We have a flagship conference series called EmTech and we had our EmTech Asia in Singapore yesterday, obviously, all virtual. But I moderated four different AI ethics sessions and they all defined it differently, and one of the presenters actually put up 10 different questions that are all under the umbrella of AI ethics, like what is AI’s impact on jobs? How do we make AI fair? How do we, I don’t know, make sure AI is not abusing democracy, or whatever. When I looked at those questions I was like, “Ah, this is why I feel like my beat is so scattered all the time because it does touch so many different things.”

Karen Hao:
But the way that I think about it is just, AI intersects … Ultimately, the goal of any technology, I hope, and particularly of AI, is to make people’s lives better, and so ethics is kind of the way to think about how to actually do that. How do you define what is good and what is bad, and then how do you facilitate AI actually doing the good things and not the bad things? And that’s why it is broad, because there are so many different ways to define good and bad. But that doesn’t necessarily mean that it’s unproductive to do so. I think just in every specific context and every different industry there’s a flexibility to how you define that.

James Kotecki:
I think it was Max Tegmark’s book, Life 3.0, where he pointed out that we as a human society don’t even agree on ethics, let alone AI ethics, right?

Karen Hao:
Yeah.

James Kotecki:
Cultural norms and ethical norms are different between houses here in my neighborhood, let alone if you go to a different country and a different system of government. When we talk about AI ethics are we really just talking about ethics?

Karen Hao:
Yeah, pretty much. Yeah, one of the presenters last night was making this really good point where he was like, “How do we actually go about developing an AI ethical framework?” Well, if we look at the medical ethics framework it’s pretty much good enough so … and that fundamentally is just an ethical framework.

Karen Hao:
And you’re right, ethics is so nebulous that it feels like everyone has a different set of ethics and how do we actually achieve any kind of consensus and some people would argue I think that, because of that, we should not be specifically just talking about ethics. We should use a lens that’s a bit more universal like human rights. We actually have international human rights law that people have already formed consensus around.

Karen Hao:
But other people would argue that because different regions, different cultures, different groups have different ethics they should actually just design AI systems or their technology or whatever differently to suit their needs and that is kind of like the … I guess one of the tensions within the AI ethics space is do we actually need a universal framework or can we have localized frameworks?

James Kotecki:
Wow. I mean, I can see, obviously, huge problems with that. When you look at this three-legged stool of the United States, Europe and China having very different approaches to technology, very different approaches to security, privacy, the way that the internet should be structured, you put AI in that mix and you’ve got this … Kind of reminds me, I think 1984 had three different major superpowers in it, right, that were all competing. So it doesn’t necessarily bode well for a harmonious future when we’re all going down these different paths.

Karen Hao:
I don’t know, I mean, yeah, there’s definitely a significant cause for concern and I definitely am constantly worried when I’m on this beat but to steal a phrase from one of my speakers last night, “it’s like short term pessimistic, long term optimistic.” I think-

James Kotecki:
Ooh, that’s a good one.

Karen Hao:
… yeah. I was recently working on this piece actually about some of the technology frameworks that are … or the way that China is starting to evolve their data privacy regime and one thing that makes me long term optimistic is the fact that data privacy, which is, I guess one of the many aspects of AI ethics, is pretty … Countries don’t develop these frameworks in isolation.

Karen Hao:
They’re constantly looking at other places and what they’ve already done. So actually in developing their own data privacy framework China pretty much looked very closely at the EU’s GDPR and then figured out ways to adapt it to themselves. And so, I think what makes me long term optimistic is that there’s still dialogue that happens across borders and there’s still a desire to … People still have a lot of shared desires and so even though the geo-political situation right now is extremely worrying there’s also a lot to share or a lot of commonality between countries.

James Kotecki:
Do you think there’s an hourglass and the sand is slipping out for how long we actually have to engage in these kinds of conversations before we get some kind of technological lock-in? And by that I mean, let’s just say bias, for example. You train all these systems on biased data, let’s say it’s gender biased, racially biased, and then systems are making decisions about criminality or loans or what have you, based on that data, and those systems propagate and perpetuate and continually raise people up and bring people down in society, and at a certain point it’s kind of locked in and it’s very difficult to change. Are we in this liminal moment now when we can change it, but is there some kind of sense of urgency that time is running out and we better get this right?

Karen Hao:
There’s definitely, yeah, that’s an interesting question. I recently read that at Amazon, one of the things Bezos likes to say is, “Decisions are either a one-way door or a two-way door.” You can either reverse your decision or it’s irreversible, and so you need to be more careful with one-way doors. And I think with AI it’s kind of the same thing, in that not all the decisions we’re making are irreversible, but there are definitely some that would be particularly hard to reverse because of the scale and speed at which AI systems are applied. I can’t really think of a good example of an irreversible decision, but one thing that is concerning but is reversible, for example, is the use of face recognition.

Karen Hao:
People are pretty concerned about the way that face recognition has evolved so quickly and spread across so many countries to, in many cases, infringe on civil liberties. You could argue that the technology is already out in the world and therefore it’s irreversible and we can no longer do anything about it, but that’s not true. There are a lot of places in the U.S. and the EU that have started considering banning face recognition, specifically government use of face recognition, and that’s possible. We did that for nuclear power or nuclear weapons, where we developed the technology but thus far, fingers crossed, we’ve been able to not use it, in a way that maintains world peace. So it definitely feels urgent to have these conversations and to constrain the way that we use AI, because the slower you are the more people that are harmed in the process, but it doesn’t necessarily mean that these decisions are entirely irreversible if we wait too long.

James Kotecki:
The difference between regulating AI and regulating, let’s say, nuclear power or pharmaceuticals or something like that is that the chemicals that are involved in making drugs, or the processes to do them, are very difficult, and it’s very hard to make a nuclear reactor, right?

Karen Hao:
Mm-hmm (affirmative).

James Kotecki:
But with AI it’s continually becoming more and more accessible to people with lower and lower levels of technological sophistication. That’s what a lot of these tools are doing. It’s kind of democratizing it, for better or worse. Are you as optimistic then about the regulation, or the ability of governments to curtail those kinds of technologies? Certainly you can ban your local police department from doing it, right? But to really put a lock on facial recognition, for example, in all the different ways it could be used, by corporations and even skeezy individuals who can hook up a webcam in their doorbell or whatever. There are all different kinds of things that you can imagine could happen without the government really being able to effectively oversee that.

Karen Hao:
Yeah. I actually, I’m not sure what my thoughts are any more on whether or not AI is a democratic technology or actually in fact much more similar to a centralized technology like nuclear weapons than I previously thought because it actually does take quite a lot of resources and quite a lot of expertise to actually get AI systems to work.

Karen Hao:
And I still have to think through where I stand on that. But I guess to answer your question: there are also systems where you don’t really need as many resources or as much expertise to deploy, and that is definitely becoming increasingly true in some instances. Yeah, it’s a good question. I don’t really think people have an answer to it. It’s like, can we actually effectively, not necessarily design the regulation, but enforce it so that bad actors don’t actually do bad things? And I don’t know, I feel like there are always going to be bad actors that just have a complete disregard for enforcement, and when you have a democratic technology, or an assumed democratic technology like AI, it just gets a lot harder. But I don’t think that necessarily means that we shouldn’t try. That’s a pretty bad answer.

James Kotecki:
Well, I think it’s an honest answer, and actually you’re causing me to rethink. I mean, I probably asked the question around “this is a democratized technology, the cat’s out of the bag” almost automatically, based on, I think, kind of the zeitgeist around this. But as I said, in thinking about this conversation, I think, well, I work for an AI and machine learning company and it’s hard to implement this stuff.

Karen Hao:
Yeah. Yeah.

James Kotecki:
We have really highly trained experts who work hard on it, and not just to create the algorithms but to actually implement them into production in a way that actually does what you want it to do, and then in a way that actually sustains that performance over time as new data comes in. That’s really challenging, and it’s not just the technological challenges, by the way, it’s the bureaucratic and organizational challenges, the infrastructural challenges, all that kind of stuff. The cultural changes that have to be made.

James Kotecki:
So I think you have a really good point that maybe, at least in the short term, it’s harder to implement some of this stuff than the average person might assume. Just because they see that technology can do something in theory doesn’t mean that it’s ready to do it, which kind of gets back to my point: maybe we’re at a moment now when we still have the ability to pump the brakes and think through clearly what we want our principles to be while this is still kind of difficult to implement.

Karen Hao:
Yeah. I mean, that goes back again to my uncertainty around whether or not we’re actually trending towards a more democratized technology or a more centralized technology because honestly, in the research world AI has become more and more centralized. So if you look at the way that talent and state of the art cutting edge results are coming out, how it’s distributed across academic versus corporate labs, it is increasingly becoming more and more concentrated in corporate labs like Google, Facebook, Amazon, Microsoft, whatever.

Karen Hao:
The main tech companies’ AI labs. And it’s because not just that the talent is draining away from academia, but even the super talented people that have stayed in academia don’t have enough computational resources at their disposal to do some of the really sophisticated stuff. So, yeah, if we were to assume that the trend is actually towards a more and more centralized technology, then perhaps it would be easier to regulate over time, because the tech companies become the main entities to regulate at that point, since they have most of the AI technology and most of the AI capability.

James Kotecki:
Yeah, it makes me worry though about if we think about the SEC, the Securities and Exchange Commission, as the entity that regulates the financial markets, for example. I mean it’s pretty well known that the people that work at the SEC may be very smart, very diligent but they’re up against much higher paid people that have an incentive to go work at the firms that they’re regulating, right? And there’s also kind of a revolving door element to it as well.

James Kotecki:
It seems like that dynamic would play out 10X in the AI space. Let’s say there was some kind of regulatory body to regulate algorithms or something like that; the people that would be drawn to work there would be drawn by patriotism and doing the right thing, and I’m sure a lot of people would do it, but you’re up against these behemoths that you mentioned, Facebook, Google, Microsoft, Amazon, that can throw almost unlimited resources at these challenges and are behaving according to, of course, market pressures, not necessarily according to ethical principles. At least that’s their motivation unless they’re checked.

Karen Hao:
Yeah, totally. I saw this article the other day that, I think … I forget, was it Larry Page? It was a former Google exec or something who wants to create a government-sponsored AI university that specifically rivals the best private universities and generates AI talent that has to go work in government, to try and reverse the trends that you’re talking about. And I think you’re right. Right now tech companies just have such ridiculous salaries and have so many resources that they can get all of the talent and pull it away from government.

Karen Hao:
Tech companies also have a huge incentive to create this narrative that regulators don’t know what they’re talking about, and regulators are actually pretty scared sometimes to regulate AI because they do feel like they don’t know what they’re talking about, and they’re really worried that they’re going to hamper American innovation and then it’s going to affect us geo-politically and all that stuff.

Karen Hao:
But I do think that there are a lot of people that are aware of this problem, and there are also lots of models that are emerging where the experts don’t necessarily have to be working in government in order to advise government. When I write articles on AI policy I’m often interviewing independent researchers that are called upon by Congresspeople to explain to them how different things work. There’s a Senate testimony format where the Senate AI committee, I forget what its full name is, will often call on AI experts and other experts to come and teach them workshops about things. So I think there are models of knowledge transfer that don’t necessarily require the government to start paying six-figure salaries to people.

James Kotecki:
Yeah. Obviously that’s such a huge challenge for the future. I do want to look back into the past for a minute, because you recently wrote an article and the headline, or the subtitle, was something like, “AI’s problems actually go back centuries,” and I wanted to see if you can unpack for us what that means.

Karen Hao:
Yeah, so the article that I wrote was based off of a paper that came out of DeepMind and Oxford. It was three researchers who have been studying this idea of why we have these problems. Some of the biggest challenges that we’ve seen with AI in the past couple of years, as people have started realizing that it’s not a perfect technology, are things like algorithmic discrimination; ghost work, which is the phenomenon of all of these hidden laborers that have to label the data and clean the data to train algorithms; or the geo-political imbalances in AI governance. Most of the countries that are participating in AI governance right now are developed countries, and the developing countries are left out of these conversations.

Karen Hao:
So they were looking at all of these different problems and asking why … Is there actually some thesis that ties these together? Because otherwise they just seem like a long laundry list of issues that we need to tackle one by one, and that seems a lot more overwhelming, and it also makes it harder for us to then predict future problems because it seems like they’re completely unrelated to each other. And what they arrived at is the argument that all of these different problems are actually rooted in Colonial history. So if you look at algorithmic discrimination, that’s rooted in Colonial racism: there were a lot of racist ideologies that were perpetuated in order to justify the African slave trade that have grown into some of the structural racism that we see in the U.S. or in other parts of the world today. And so it makes sense then that you would have something like algorithmic discrimination, because if you are designing these systems and training them on data that come from a society that has these racial inequities, they will reproduce those racial inequities.

Karen Hao:
Same thing with ghost work. One of the really interesting things is that the countries where there are really big concentrations of ghost workers, the places that have naturally become the ghost working hubs, are actually former colonies. So the Philippines, India, Kenya, those were all former colonies of the U.S. and the British Empire, and the reason why they’re great ghost working hubs now is because they know how to speak English but they have cheap labor. So that’s kind of the thesis, and personally, reading this paper was extremely enlightening to me because I think that is a very compelling argument. And the really amazing thing about having this unifying thesis that ties all these problems together, as I mentioned before, is that it becomes a tool for actually predicting future problems. So if we agree that a lot of these problems are actually manifestations of this history of Colonialism, then we can start using that history to look at, okay, what other things have not yet manifested but could? And then, how do we actually design AI systems in a way that prevents these problems from ever cropping up in the first place?

Karen Hao:
So they had three different solutions. I’m forgetting the first one, but the second one is this idea of making sure that we have people from marginalized communities actually participate in AI research. One of the reasons why we end up causing harm to marginalized communities is because their voice isn’t at the table when these systems are designed, and they don’t have the opportunity to bring up issues that might be blind spots to non-marginalized communities. So that’s one, and then another one is not just inviting them to participate in the technologies that non-marginalized communities are initiating, but also strengthening the marginalized community of scientists, of AI researchers, so that they can initiate their own technologies and create what the authors called Decolonial AI: AI that is able to exist completely separate from these Colonial legacies.

James Kotecki:
Wow. What were the other shoes that were yet to drop that they were able to predict might happen based on the history of Colonialism?

Karen Hao:
So AI governance was an interesting one, because there really isn’t a lot written right now about the fact that this is a problem. So they were actually proposing it as a problem before many other people had, and they said, “The developed countries, which were mostly Colonial powers, are now the ones that are participating in developing these AI ethics frameworks and developing international agreements for data transfer and things like that, and they end up leaving these developing countries behind and further disadvantaging them.” There was a recent instance where, during a G20 meeting, all the countries were discussing … I forget. I think it was called the Osaka Agreement.

Karen Hao:
That was an international agreement for countries to have easier transfer of data across borders and basically all of the developed countries were like, “This is great. We want more data.” And all of the developing countries were like, “This doesn’t work for us because we still need to … We don’t want to just like give up all of our data to the AI powerhouses that are just going to start using it to create value for themselves. We want to keep our data until we have enough of our own AI systems to help us.” And so India ended up not signing the agreement because they were like, “This is B.S. This is designed for you, not for me.”

Karen Hao:
The other thing, I’m forgetting the last one that they predicted. Oh, the last one was also kind of related to these geo-political power imbalances and the way that they end up dictating conversations around AI for good, or AI for social development, which is also not really something that many people have talked about. But they were like, “We predict that the way developed countries or international organizations like the U.N. approach these conversations is pretty paternalistic. It’s still this residual Colonial mindset of ‘Oh, let me help you, I know what’s good for you. I’m going to bring these AI systems that I’ve already developed for myself into your context-

James Kotecki:
AI for good but who defines what good actually is?

Karen Hao:
Exactly, exactly, and they were like, “This is probably also going to end up disadvantaging these developing countries, because they’re not getting to design these AI systems for themselves. They’re not getting to define what good means, how they want to be developed.” And so in the end all of the AI systems that are being ported over from, I don’t know, from Europe into African countries are not actually going to work the way that they’re supposed to, and they’re going to end up just exacerbating some of the issues there.

James Kotecki:
Hm. These are all really important topics. We have just a few minutes left. I want to keep it light with the AGI Lightning Round. Are you ready for the AGI Lightning Round, Karen?

Karen Hao:
I’m ready.

James Kotecki:
AGI, of course, referring to artificial general intelligence, the idea that one day computers will be generally intelligent, kind of like people are, and not just good at one specific thing. All right, these are questions, answer as quickly as you can, don’t think too much about it. We’ll leave the thinking to the theoretical AGI. Okay, on a scale of one to never, how many years before we achieve AGI?

Karen Hao:
Never, minus one.

James Kotecki:
Okay. So it’s just possible you’re saying?

Karen Hao:
Yeah.

James Kotecki:
Okay. But just this side of possible. I will believe AGI is real when a computer can blank.

Karen Hao:
Oh God. I’m looking at a river right now so I was like, swim, but that doesn’t make any sense. I don’t know, learn from one exa[mple]… no, I don’t know. That’s a really good question.

James Kotecki:
It’s an interesting question right, because I think at some point someone’s going to claim to be doing AGI long before maybe it actually exists but it’ll be a trick…

Karen Hao:
Yeah

James Kotecki:
… we the humans will be pretty easy to fool, I think, but … All right. I’m getting away from the lightning round format. Okay, true or false: people worry way too much about AGI today.

Karen Hao:
True.

James Kotecki:
All right. Which of the four companies that recently testified before Congress would you prefer to develop AGI, that’s Amazon, Google, Facebook and Microsoft?

Karen Hao:
Microsoft.

James Kotecki:
And finally, when the robots take over I am looking forward to blank?

Karen Hao:
Relaxing on a beach.

James Kotecki:
Relaxing on a beach, the universal robot-enabled AGI dream.

Karen Hao:
That’s right.

James Kotecki:
Karen, thank you for playing that ridiculously goofy segment at the end and for an overall really enlightening conversation. Really appreciate you being here.

Karen Hao:
Yeah, thank you so much for having me, James.

James Kotecki:
That is Karen Hao, Senior AI Reporter at MIT Technology Review. I am James Kotecki, your host. This has been Machine Meets World. You can email the show; it’s mmw, for Machine Meets World, mmw@infiniaml.com. You can like us, rate us, listen to us as a podcast, watch us as a video, subscribe, tell your friends, tell your mom, whatever you want to do. I really appreciate you listening and/or watching. I have been James Kotecki, and that is what happens when machine meets world.


Originally published at https://infiniaml.com on August 4, 2020.

Machine Meets World from Infinia ML

Weekly Interviews with AI Leaders
