What even is AI ethics?

Solveig Neseth
Published in daios
15 min read · Jun 1, 2022


A conversation with Thomas Krendl Gilbert, Ph.D.

Photo by Ally Griffin on Unsplash

Welcome to the first interview of my series exploring AI ethics! In case you missed my introductory post explaining this series, I’ll catch you up. I am a performing artist with absolutely no background in artificial intelligence, so, in order to inform myself, I will be having conversations with people who are far more knowledgeable than I on the subject. The goal here is to gain some perspective on how automated systems work, how they impact society, and how we can improve them. I, for one, am entirely uncomfortable with the notion that I have no understanding of the many technical facets that permeate my everyday life. If you feel the same, read on.

The following is a conversation I had with Thomas Krendl Gilbert, a postdoctoral fellow at the Digital Life Initiative at Cornell Tech in New York City. The conversation took place over Google Meet. I’ve known Tom for some months now, but this interview served as our long-awaited first one-on-one. I, the interviewer, began with a simple prompt.

“How did you get into AI ethics, Tom? What was your gateway into the field?” I ask.

Tom replied, “My gateway into AI ethics was a product of both my personal background and my academic training. When I was a kid I loved reading science fiction. I remember when I was twelve I read a book by Ray Kurzweil called The Age of Spiritual Machines, which was one of the first books to popularize this idea of a singularity — this kind of moment where you get machines that are smart enough that they can improve themselves, which then makes them even smarter, and then you get this runaway feedback loop where there is this intelligence explosion. This was the first time I had ever heard of this idea. It really blew my mind.” He paused for a moment, then continued. “And I retained that fascination. That would have been around 2001, when AI was still in what we would refer to as an ‘AI winter’, where there weren’t many people working on it. It was highly theoretical, highly specialized, highly futuristic. My interests were really more philosophical, more human. It wasn’t really until much later when I went to grad school — after I had lived a couple years in Europe, I’d studied intellectual history, political theory, sociology — that I increasingly could no longer make sense of my life except with reference to automated decision-making algorithms.”

This surprised me, though it probably shouldn’t have — Tom’s title at daios is AI Ethics Lead. Shows how good at paying attention I am. “So your background is more in philosophy than, say, computer programming?”

“Originally, yeah,” he replied. “As part of my graduate studies I picked up a lot of technical skills, but for me the questions were always humanistic in nature.”

My next prompt was much less of a softball. “So, Tom, let’s pretend you’re talking to a plebeian. Because, in fact, you are. Why should I, or anyone who doesn’t know anything about AI ethics, care? What should we know? How does it affect us? What makes you passionate about it?”

Tom responded first by answering this question in reference to how it is usually addressed in the AI community today. “I think that most people who work in AI ethics and are into the AI superintelligence stuff usually answer that question with, ‘we should be absolutely terrified by the arrival of machines that are qualitatively beyond our ability to comprehend how smart they are. They’ll outmaneuver any attempts to control them and they will infiltrate the stock market, decide who gets to run for president, and gradually subvert all the mechanisms by which we live in ways we won’t know about until it’s too late. And so the most rational thing you can do is basically surrender your autonomy to technocrats, engineers, and data scientists who are building the tools we need to control these systems before they become arbitrarily powerful.’ It’s usually framed in relation to fear and to loss of control.”

Yup, that checks out. Fear of machines. Enter any sci-fi plot ever.

But Tom continued. “My approach is actually pretty close to the opposite of that.” Phew. “The reason this matters is that if we don’t take a much more proactive stance against AI and how it’s built then we won’t be able to redefine our agency and reorganize it in terms of our own articulation. It’s not about what we’ll lose — I think it’s about what we won’t gain.”

I liked this approach. Much less fatalistic. I wondered — hoped, rather — whether anyone else shared this outlook. I asked Tom about the field of AI ethics and machine learning. “How is this issue currently being addressed? Are there lots of ideas? A lack of ideas? Are there enough minds working on this problem? Are people disagreeing about how to address it?”

“I think people are terrified,” he answered, frankly. “I think people are either short-circuited because they’re terrified or they’ve priced themselves out of the conversation because they’ve been told that they’re not smart enough to think about it.”

“So there’s an accessibility issue?”

“Yes. I think it’s an accessibility issue. I think it’s a literacy issue. There are different ways of articulating it, but it’s essentially a political problem.”

Tom received his Ph.D. from the University of California, Berkeley, and, because I actually do pay attention, I know that Tom designed his Ph.D. in Machine Ethics and Epistemology. I brought this up, noting that since he had to design his own Ph.D., it’s likely that there are very few resources out there for people to educate themselves about AI ethics. He chuckled and replied, “Yes, and that was also one of the themes of my Ph.D.”

At this point, I was feeling particularly motivated to jump aboard the soapbox. I asked Tom about what people can do in order to be more informed. “Are there people talking about these topics that we should know who just aren’t famous enough? Who is the Neil deGrasse Tyson of AI ethics?” Just for fun, I added, “Is it going to be you, Tom?!”

“That’s an interesting question and is actually something I think about a lot,” Tom answered. “I would actually point to Carl Sagan. You can go back and look at old footage on YouTube, not even the lectures, but just his time on the Johnny Carson Show. You’d have, like, Steve Martin as a first guest and then Carl Sagan would show up and would just sort of speak continuously for twenty minutes about Alpha Centauri or some other astronomical phenomenon. And the audience would just sit in rapture and silence, not interrupting, not laughing. It wasn’t because they were bored, but because they were engaged. Back then there was, frankly, a lot more public trust in experts than there is today. Today, the dominant form is people being told to be afraid and to surrender control and deliberation to those who ‘know better.’ To me this is tragic and very frustrating, as it cynically feeds into this lack of trust. We don’t really believe in our ability to have our interests represented by experts. We’d rather just surrender to people who are not accountable to our interests. Sagan was someone who always emphasized how accountable he was both to the public and to the scientific community he was a part of.” He circled back to my question. “I think the first thing we need is a much more methodologically rigorous scientific community working on these AI ethics problems and the second thing we need, no less importantly, is an actively engaged public that is prepared to articulate what it wants from these systems. Both those things are way more important than having someone ‘famous enough’ to get people’s attention.”

Now get ready everybody, because the conversation is about to get wild.

“So here’s a naysayer’s argument, Tom.” And before you ask, yes, this naysayer was, in fact, a very annoying Tinder date. “People just don’t care. You actually spoke about this in your last blog post, where people just click consent, click agree, click whatever. How do we combat the apathy that occurs due to the culture of immediacy? Sure, maybe people care about their data, but do they really even know what their data is?”

“Right. So this is the question. And it’s really hard to answer because there are no easy answers. But it’s an important question. It’s going to require, I think, several different things at the same time working in concert. My own research, my own projects do take stabs at some of these things you mention. ‘People care about their data.’ People do care about their data, but there are other things going on with AI beyond data that people have not been told that they should care about and are probably more important in some ways.”

This is the part where Tom breaks down the system for us laypeople.

“So every AI system, no matter how intelligent the system, needs three things. The first is data from which it can learn. The second is the model, or the representation of the task it is supposed to be doing. The model needs data to learn to do this task. The third thing, which is a little bit more technical but is actually quite a simple idea, is the optimization problem — you can also just think about this as the purpose of the system, its goal. So, for example, Facebook makes use of your data in order to learn what content to show you, right? That’s still not answering the optimization question. The goal of the system is not just to predict what kind of content you’d like to see, the goal is to keep you onsite — to prevent you from opening another tab, closing Facebook, unfriending people, or doing anything that would reduce the likelihood of you staying onsite. Keeping you on the site longer, that’s the goal. Now, is that good? Maybe it’s not? And if that’s not good, does it actually matter how it’s using your data? If the goal is inappropriate does the data matter?”
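
To make Tom’s three-part breakdown concrete, here is a minimal, purely illustrative Python sketch of an engagement-style recommender. The names (interactions, train_model, recommend), the toy data, and the minutes-on-site scoring are hypothetical inventions for this example, not anything Facebook or Tom actually describes; the only point is that the data, the model learned from it, and the optimization goal are three separate ingredients.

```python
# Toy illustration of the three ingredients Tom describes:
# (1) data, (2) a model learned from that data, (3) an optimization goal.
# Everything here is hypothetical and deliberately oversimplified.

from collections import defaultdict

# (1) Data: past interactions — (user, item, minutes the user stayed on-site after seeing it).
interactions = [
    ("alice", "cat_video", 12.0),
    ("alice", "news_story", 2.0),
    ("bob", "cat_video", 1.0),
    ("bob", "news_story", 9.0),
]

# (2) Model: a trivial predictor of expected time-on-site per (user, item),
# "learned" by averaging the observed data.
def train_model(data):
    totals, counts = defaultdict(float), defaultdict(int)
    for user, item, minutes in data:
        totals[(user, item)] += minutes
        counts[(user, item)] += 1
    return {key: totals[key] / counts[key] for key in totals}

# (3) Optimization goal: recommend whichever item the model predicts will keep
# the user on-site longest. Nothing in the data forces this choice of goal —
# it is a separate design decision.
def recommend(model, user, candidate_items):
    return max(candidate_items, key=lambda item: model.get((user, item), 0.0))

if __name__ == "__main__":
    model = train_model(interactions)
    print(recommend(model, "alice", ["cat_video", "news_story"]))  # -> "cat_video"
```

Notice that nothing in the data or the model dictates the goal: swap the scoring inside recommend for, say, a predicted measure of user wellbeing, and the very same data and model would serve a completely different purpose. That is the distinction Tom is drawing.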

Aaaaaand cue everyone, once again, being annoyed with Facebook. Excuse me, ~MetA~. “So it’s a corporate responsibility issue to decide whether or not it’s okay to farm human minds?”

“I think we don’t yet have a precise or adequate language for describing this,” Tom responded diplomatically. “Corporate responsibility is in there, but we have to get into a conversation about what that means and how we define it. I think democracy enters in as well. There’s a whole host of values at play, and really the longer I’ve worked on these problems the more I’ve appreciated how very few of them are actually addressed.”

This is where Tom starts dropping mics all over the damn place.

“AI systems are not the perpetrators here. This is the same problem, the very first problem of western political thought: ‘what do we do about slaves?’”

Hold on, what?

“So some philosophers in ancient Greece, like Plato, were concerned about this. What do we do about slaves, and also poor people, and also stupid people? They don’t know how to think, they can’t do philosophy, so we need to figure out a way to control them, manipulate them so that social order is maintained. That’s Plato’s Republic. It’s probably the single most famous piece of ancient political thought. And the center of the argument is a fundamentally aristocratic concept of the role of thinking in social order.”

A flurry of images flashes through my mind of all the grade school homework assignment prompts I’ve endured. Think critically, use your critical thinking skills. According to my primary education teachers, Plato is very wrong. “But, Tom! That implies that critical thinking isn’t a skill that can be learned!”

“Right,” he replied. “The unquestioned assumption is that there are very few people who are able to do it. You know, it’s funny how sanitized that idea is in philosophy class. You’re taught this stuff and it’s like yeah, great, this is what it means to be a philosopher, just think carefully. And actually you’re just being socialized into being an extremely elitist and anti-democratic person the more you buy into this stuff.”

I now have beef with Plato. As my friend, Christine, says, he’s on my list.

“Anyway, there’s a different approach which argues that these are skills and that they can be cultivated, and that they have to be cultivated in order for democracy to work. That’s basically a tradition that I identify with Aristotle. If you review his Politics, his Ethics, some of his works, this is the perspective he’s building toward. The goal of philosophy, the goal of inquiry some would say, is not to maintain social control by manipulating people, it’s to live a good life.”

Watch your toes for that mic.

“That’s a completely different frame on the problem and, frankly, it’s one that I think is sorely needed in the AI discourse right now. And I increasingly see all of my work either directly or indirectly pointing toward this neo-Aristotelian frame for AI ethics and politics.”

“So your position, Tom, is that given the right circumstances, the right people, the right minds and motivations, we are actually completely technologically able to solve these problems, but people aren’t working together to do so on a social level?”

“I think the issue is a democratic deficit, yeah. And I think that if that democratic deficit were fixed, many of these problems would not just be solved, they would disappear. I actually think the tricky thing is that the way to go about fixing that democratic deficit is not by teaching AI what our values are and trying to put ourselves into our tools, but by using AI differently and in such a way that we can become more democratic.”

“But that’s so complicated, Tom! Because democracy is complicated and tied to government and the economy and foreign affairs and all these complicated things!”

“Yup. And I think we’re going to have to rethink our relationship with the economy and with other countries in order to achieve this. I mean, that sounds radical, but I think that’s much less radical than claiming that we’re going to build super-intelligent AI beings that are going to completely change the trajectory and structure of human history.”

Touché, Tom. Thank you for keeping it real. “Fascinating. So what should I do, as a user, a person who has basically no autonomous options other than just saying, ‘I don’t accept your cookies’ — what do I do?”

“This is tricky. So there’s a few things here. I think one thing people need, and one of the things I’m working on, is a better, more accurate understanding of what is at stake with these systems. I gave the example of data versus goal. Data matters, but it only matters in relationship with the goal of the system and the model that it’s learning. People need to think of these things not as discrete objects, but as elements of a feedback loop. There is no model without data, there is no actionable goal without a model, and there is no data without people being willing to buy into the system being advertised to them. These things don’t actually exist in isolation from each other. I think that most laypeople probably do need to understand that so they can see which parts of these systems are hurting them or could be of greatest use to them in their lives. Right now, most people don’t have that understanding. Another thing people need to understand is there’s a difference between being able to make a choice and coming to a decision.”

“Woah, please elaborate on that.”

“Okay. So you’ve given this example several times: accept, accept, accept cookies, but you’re not deciding anything really. You didn’t build the system, you’re not deciding what it’s trying to do, you’re just being asked to set an arbitrary limit on how much about you the system knows. So you’re put in this existential double bind because if you accept the cookies, on some level you are afraid that your privacy is being compromised, but if you reject the cookies you can’t participate in the website, the interface, the platform. Or even worse than that, you often don’t know exactly how excluded you are and people aren’t given the tools to make sense of that choice, and so they’re not able to make a decision about it. And that again brings us back to Aristotle’s work about what it means to come to a decision. The point is the system should be empowering people to deliberate about their lives more actively and completely, not automating deliberation by trying to put it in a machine that can’t actually decide anything and can’t actually value anything. A self-driving car has no idea where it’s going and there’s no self-driving car that Tesla or any other company is building that has any idea what it’s doing or what it’s looking at. The fact that it’s really good at imitating how people drive does not mean that it is anything like a human driver because it’s not deciding anything.”

Somehow, this example brings my mind back to the ever-looming, nonexistent threat of superintelligences, this time with even more concern that things could get out of hand. “Let’s imagine that these problems are addressed, that we’re in a position where this shift in the way AI is utilized in our society occurs. Is there a pitfall to that, too? Could Skynet still happen either way!?”

Tom, with all his patience, reiterates his perspective. “I think our priority needs to be not trying to live a perfect life, but trying to take ownership of our life. That’s what we don’t have right now. Frankly, the way we talk about AI and how we build it confuses what it means to improve life with what it means to own it. My life is full of apps that claim to make my life more convenient. People aren’t really happier today than they were 30 years ago. If anything, people are much more neurotic, anxious, and depressed. We have actual evidence of this from revelations like the Frances Haugen testimony about Facebook that young people, teens, become incrementally more depressed the longer they use Facebook, and in a demonstrable way. They’re really quite empirical and rigorous, these results. That seems ironic if what these systems claim to do is to connect us to people, make us feel less alone, make us feel happier. They seem to be massively failing at doing this and the reason is we’ve approached the problem in the wrong way. Rather than treating these systems as tools for our own growth, self-realization, and deliberation about ends that are ours to define and determine, we try and automate the ends. We try to put them into a system that we lazily claim is smarter than us and then once in a while hand a ‘choice’ to the user about how much of their data it should use. So your life ends up drifting like a game of Plinko down this trajectory that, had you known what the implications were years before this started, there’s no way you’d choose. We all assume that the future is determined by these hypothetical super-intelligent machines that don’t actually exist. We try to choose against a horizon that we didn’t really articulate to ourselves.”

“So maybe AI should be considered and approached in a more individualistic way instead of just a corporate tool that is only utilized for certain ends?”

“I would play with that language a bit. I wouldn’t say we need to approach AI in a more individualistic way, we need to approach AI in a more individualizing way. These should be tools for growth, tools for individuation. That individuation might not be strictly ‘individualistic’ — you can imagine social movements, you can imagine new ethnicities, new demographics, identities, profiles emerging through these systems in very empowering ways, in ways that weren’t possible before.”

I ended the conversation by asking Tom what kind of resources exist for people who want to know more. The unfortunate reality is that, since this is a new and emerging field of study, accessible resources are still scarce. He pointed to books that influenced him in his own journey. “I mentioned The Age of Spiritual Machines, which is now quite an old book. I now disagree with almost everything in that book,” he chuckled, “but it was very influential for me. It was the first book that made me start thinking about the stakes, and that’s still true. It’s a very clear (though probably wrong) statement about how we should make sense of the stakes. I also would point to works by Aristotle. Aristotle is not very hard to read, not very technical. Yeah, it’s philosophy so it’s a bit dry, but if you read Aristotle you’ll understand what democracy is in a very deep way and you’ll understand what it means to be an agent.” I asked him about his own work. He mentioned an academic paper published in an AI journal titled Hard Choices in Artificial Intelligence. “This paper in particular speaks most to what I’ve discussed here and with a little more precision. Some of the language is a bit technical, but I think it’s still pretty clear.”

So there you have it, folks. I hope this article was enlightening and inspired you to delve deeper into the AI conversation. Stay tuned for further interviews, and be sure to follow Thomas Krendl Gilbert’s work in AI ethics.
