DIGETHIX: DIGITAL ETHICS TODAY — EPISODE 13

Exploring Attention and Virtue through Science Fiction

A talk with Emanuelle Burton

Center for Mind and Culture
DigEthix

--

“A lot of the kinds of problems that are created by social media have to do with that kind of immediacy. And I think that’s pretty interesting, but one thing I like about virtue ethics is that it calls our attention to the value of things that cannot happen quickly.” (40:30)

— Emanuelle Burton

DigEthix is a podcast that looks at issues in the ethics of technology from a philosophical perspective. In this episode we chat with Dr. Emanuelle Burton about how to teach ethics in the classroom. At the center of this question of teaching ethics is the question of formation.

This conversation explores these central questions:

  • How do we become the right sort of people to create worthwhile technologies?
  • How do we prevent these technologies from distorting us in our relationships?
  • What ethical skills will be useful for the next generation of technologists?


Click the link below to listen to the episode:

About the guest:

Emanuelle obtained her PhD in Religion and Literature from the University of Chicago Divinity School. She teaches ethics in the Department of Computer Science at the University of Illinois at Chicago. In collaboration with Judy Goldsmith, Nicholas Mattei, Cory Siler, and Sara Jo-Swiatek, she is developing a science fiction-based ethics curriculum for computer science students. Her solo research focuses on ethical formation in science fiction and fantasy for young readers.

Burton, Emanuelle | Computer Science | University of Illinois Chicago (uic.edu)

Computing and Technology Ethics: Engaging through Science Fiction by Emanuelle Burton, Judy Goldsmith, Nicholas Mattei, Cory Siler, Hardcover | Barnes & Noble® (barnesandnoble.com)

“I think there’s nothing more civic-minded and anti-capitalist, and generally productive for making the world we want to make, than finding a community of people, or being in a community of people, that shares your vision of what is good, and how to get there, and keeping yourself grounded in it.” (43:18)

— Emanuelle Burton

Transcript:

Seth Villegas:

Welcome to the DigEthix podcast. My name is Seth Villegas. I’m a PhD candidate at Boston University working on the philosophical ethics of emerging and experimental technologies.

Here on the podcast, we talk to scholars and industry experts with an eye towards the future. Today I will be speaking with one of our advisory board members, Dr. Emanuelle Burton. Emanuelle obtained her PhD in Religion and Literature from the University of Chicago Divinity School. She teaches ethics in the Department of Computer Science at the University of Illinois at Chicago. In collaboration with Judy Goldsmith, Nicholas Mattei, Cory Siler, and Sara Jo-Swiatek, she is developing a science fiction-based ethics curriculum for computer science students. Her solo research focuses on ethical formation in science fiction and fantasy for young readers.

As I mentioned last week, these conversations will be edited down to be more focused on the topic at hand. Thank you again for the feedback. We will continue to work hard to improve the podcast each week.

A large part of this conversation focuses on how to teach ethics in the classroom. At the center of this question of teaching ethics is the question of formation.

As such, the key questions for this episode are:

How do we become the right sort of people to create worthwhile technologies?
How do we prevent these technologies from distorting us in our relationships?
What ethical skills will be useful for the next generation of technologists?

While there are no easy answers to any of these questions, I thought it might be good to start with this story that has been in the news this week, or rather with two stories.

First, Facebook’s temporary shutdown on October 4th, a problem attributed to configuration issues. And second, former Facebook employee Frances Haugen’s coming forward as a whistleblower, testifying before the Senate on October 5th. Part of why I’m bringing this up, to go to the first point, is to talk about centralization and decentralization as a matter of network architecture. This is something that technologist Jaron Lanier is really clear about in many of his books: the way that the internet is designed, and the way that many platforms are designed, is not arbitrary; they could be designed in some other way. And what we see with something like Facebook, which has been integrated into so many platforms (many of the stories that came out actually had to do with someone integrating their business into Facebook, and then Facebook being down for several hours), is the way in which seamless integration into a social media platform like Facebook can also be a kind of dependence. Because of this, we shouldn’t necessarily take for granted that things have to be centralized like this, say through really big tech companies, or assume that there aren’t other ways to build networks that are more decentralized.

While I think there are advantages and disadvantages to say, a centralized architecture versus a decentralized architecture, we always have to be thinking of the kinds of trade-offs that we might have in any given technological system, and what might work better for us in the long run. In the case of Frances Haugen’s testimony before Congress, one of the things that she brought up is how she’s deeply concerned about the next generation of children and teenagers who are growing up with social media. Particularly, she is worried about the ways in which they could be influenced and all the kinds of power that Facebook might have over them. In a way this question is tied to the point we just talked about, which is, if you have a completely centralized entity that’s in charge of something that’s really important, and which is your main way of getting information, then it’s natural to be suspicious of the power that that kind of company might have. And while I do know people who work at Facebook, this is something that’s kind of undeniable.

So for instance, Tristan Harris, in his documentary, The Social Dilemma, talks about the issue of having to make a kind of deal with the devil: he really wants to remain connected to other people, people who are within your network with whom you have, say, relationships that might span the entire length of the country. I know that’s the case for me. But you don’t necessarily want to be inside of that network, because you’re not sure who’s in control, or if you can trust the person that’s in control. And actually, Mark Zuckerberg came forward and said, “Well, we have many of these same concerns about young people, which is why we have a center for research on them,” but again, how much we can trust him and the company behind him remains an open question.

One thinker that Bernd Dürrwächter, who appeared on last week’s episode, and I often like to talk about is Shoshana Zuboff. And Zuboff makes this point that what’s particularly insidious about a lot of forms of technology today is not necessarily that they manipulate us, but rather that they manipulate us below our level of awareness. They try to nudge us in all kinds of different directions. And so I think that negotiating that sort of balance is incredibly important.

While I do think it is important that we address problems with technologies that we’re using every single day, I also think that it is important to cultivate a sense of imagination within the next generation of technologists.

What is it that you can observe about technologies today that you don’t like?
What is it that can be changed?
How could these things be redesigned with completely different architectures?

So that, say, you have a more democratic kind of technology, something that’s completely decentralized. And it also means thinking about the contexts in which that’s appropriate. Maybe there are situations in which a centralized entity would be better. But we first have to come to a broader understanding of how those things work and what alternatives might be available, and I think that it may be up to the next generation of technologists to actually present what those alternatives might be.

This podcast would not have been possible without the help of the DigEthix team, Nicole Smith and Louise Salinas. The intro and outro track Dreams was composed by Benjamin Tissot through bensound.com. Our website is digethix.org. You can also find us on Facebook and Twitter, @DigEthix, and on Instagram, @DigEthixFuture. You can also email us at digethix@mindandculture.org. Now I’m pleased to present you with my discussion with Emanuelle Burton.

Seth Villegas

(6:26) So first off to just kind of get started, it’d be great to hear a little bit just about like, who are you? what are your interests? I knew you kind of have a unique story too, kind of going from religion and literature, to now teaching in a CS department and really would just love to hear the story of how that happened.

Emanuelle Burton

Sure, um, one thing before I launch into this. Um. So the question is, how did I wind up where I am?

Seth Villegas

Mhm.

Emanuelle Burton

I think “accident” is an understatement. I had no thought at all of doing this with my life. I think the way that you phrased this in the question you sent me is, how would I have reacted 10 or 15 years ago if someone had told me where I was going to wind up? And I think what I would have said is “you’ve pulled the wrong file,” but I know how it happens because I grew up in Silicon Valley. A number of my high school classmates have gone on to careers in IT in some dimension of computing. So that’s where I come from. And in fact, the thing that I worked on was sort of the intersection of literature and ethics in terms of world making, which is why religion was the right context for me in which to do this, right. The language that gets used in constructive studies in religion is theological anthropology. That’s a way of naming a nexus of questions that’s really interesting to me. Not just what does it mean to be a person, but what does it mean to be a person in relation to the world, situated in the world, in relation to other humans in the world? What are the potentialities and capacities and possibilities and responsibilities of being a person in the world?

I’ve always been interested in how literature, or you know, particular works of literature, or maybe literature in general, invites us to engage in imagining that. It’s something that every work of literature does to some degree, and I specifically wrote a dissertation on sort of the dialectic of person making and world making in the Chronicles of Narnia. If you’d asked me then, I would have said, and in fact I would still say, that in a lot of ways the distinction between science fiction and fantasy is not necessarily a particularly important one. I wasn’t really engaged with science fiction. I wasn’t a science fiction reader. Those were not the texts I was looking to work on.

I moved to Kentucky in 2014 when I graduated, with my wife. We basically needed to break the orbit of our graduate institution. And her alma mater, her undergraduate alma mater, was looking for people to teach introductory humanities, and that was what I had been doing for years. It was what I loved, it was what I assumed I’d be doing for the rest of my life, in some capacity. So we went, and we did some sort of adjunct humanities teaching at her college. And while we were down there, we went to Rosh Hashanah services, and we decided we were going to stay to the end, and at the end of the service, someone came up and said, “Hello, welcome. We haven’t seen you before. Do you want to come over for lunch?” And we thought, “Well, it seems nice. Sure, we’ll come have lunch.”

And while we were at this lunch, my wife wound up chatting with a computer scientist who happened to be about two weeks away from submitting a workshop paper on using a particular work of science fiction, The Machine Stops by E.M. Forster, to teach ethics. She was sort of at the “I’ve almost started” phase of writing. She had taught the class but had not written the paper; she and a former graduate student were going to write it. Would I be interested in helping? And I was like, “you know what, seems fine, fine, fine. Why not? Why not add another thing?” I was sort of in that chaotic blitz one is often in right out of graduate school, where you’re trying to say yes to everything. But I looked at this and I thought, “well, this seems fun. It’s finite, it’s two weeks, and then I can go back to my real life of being a humanities scholar.” So we did this. And then it went over well, and we did another workshop paper, and we did another article. And then Judy, the woman who had recruited me, started talking about maybe we could put together an anthology of teachable stories, and we could write some introductory material to go with them. And so she met with an editor from MIT Press at AAAI, which is the big AI conference, and came out of that meeting saying, “why don’t we write a textbook?” Within the year, we had secured an EAGER grant, which is a seed grant from the NSF, to get started on the project. And so then…

Seth Villegas

(11:10) A really great part of your story, too, is that you also encountered someone who had an open mind as well: “Oh, look, there’s someone here who potentially has a kind of expertise that I don’t have, and I’m really interested in using, say, literature in this way.” And I don’t think I talked to you about this before, but you know, I have a background in literature as well.

Emanuelle Burton

I did not know this.

Seth Villegas

And that’s what I did for undergrad, and a lot of people I know from undergrad in Silicon Valley also ended up at big tech companies. And so that’s kind of where, you know, I was like, “oh, people are kind of dealing with the sorts of questions that you sort of brought up,” or even the real, like, philosophy first-order questions, which made literature, and also theology and religion, the sort of place to ask those things of. Especially the sorts of, “well, what should we be doing? What should we be designing? Why should we be designing those things?”

Partially, that’s one of the reasons why I think literature is interesting and I think science fiction is interesting. You know, you read a science fiction story, especially like a dystopian story, and there’s kind of the formula of, you know, X technology gets introduced, widespread application ruins society. And then, you know, I’ll be talking to my engineer friend and he’ll be like, “That’s an interesting thing. We could do that. Like, we could make that.” And I was like-

Emanuelle Burton

But we could change it slightly. And that would be more interesting and or morally fine.

Seth Villegas

Yeah, it was just sort of funny, but at least in those sorts of conversations I was having, at least kind of as an undergraduate, they didn’t seem to get past that. There’s just this kind of distraction of, like, did you read the story? Like, the story seemed to be about, you know, maybe it’s not good if you use X technology to have, like, total social control or something like that, and-

Emanuelle Burton

There is a certain kind of “can we build it?” engine in the brain.

Seth Villegas

Yeah, exactly. And so I really actually enjoyed that sort of environment, because I kind of got to see both sides of that. And also the kind of, you know, Marvin Minsky has his like, sort of creative chaos sort of thing that sort of infuses lots of CS departments. And, you know, I had people running into real issues that didn’t know how to think about and so I thought, “Well, okay, I have this kind of broader training that should be applicable to this so…”

Emanuelle Burton

Sure.

Seth Villegas

“… why not try to piece apart those sorts of things, start to think about what real ethics education could look like that can be helpful.” For you, what have you actually found as you’ve tried to teach these things? Like, what does ethics education look like?

Emanuelle Burton

(13:52) Yeah, I’m really glad you asked that. Because I, I think the language of ethics can be very narrow and constricting. And in point of fact, it took me probably five years of, no, four years of graduate school to figure out that I was interested in ethics, because the language of it, and frankly a fair amount of the institution of ethics in both philosophy and religion, is sort of hidebound and very grounded in specific traditions in ways that make it not friendly to fundamental human concerns.

I feel like ethics as a discipline in a variety of ways skirts, or, you know, hopscotches past, or even sort of denies the value or legitimacy of that kind of inquiry. And I understand why some of that is, right: you can get into very sort of weird essentialism when you start trying to talk about, like, fundamental human nature and, like, the impulse toward the ethical or whatever, but…

Seth Villegas

Let’s, let’s just kind of continue with the line that you have here, because the reason why I personally think it’s important to consider, say, my own personal theory of what people are really like is that there are all kinds of assumptions that are going to be built into what I’m doing.

Emanuelle Burton

Yeah.

Seth Villegas

That are kind of an outgrowth of that.

Emanuelle Burton

Yeah

Seth Villegas

So for instance, even if we just go with a really simple “people are good or people are bad” sort of thing, right? My kind of default stance there is going to be a real reflection of that worldview. And I think one of the big things in tech ethics has just been trying to point out, like, oh, you have one, right? Like, I don’t know why you’d want to deny that you have that.

Emanuelle Burton

That we’re carrying around stories that..

Seth Villegas

Yeah

Emanuelle Burton

…shift what we see.

Seth Villegas

Yeah, exactly, or even really framed by the rejection of a particular story.

Emanuelle Burton

Although, I mean, when you reject a story, you don’t just sort of cancel it, it doesn’t go away.

Seth Villegas

So kind of going back then to the initial question of, “what is it that literature is allowing? What’s it allowing you to do, right?” So, let’s say I’m a fresh, you know, CS student, maybe I’m like a sophomore or something like that. I know I want to code, I’m probably forced to take this class in some way.

Emanuelle Burton

That is why almost all of my students take it.

Seth Villegas

Like, what are you hoping that that person gets out of being in that kind of environment?

Emanuelle Burton

(16:31) So I will start by answering it in a way that I talk about on the first day, when I’m talking to people who are not persuaded that this class has anything to offer; they’re mildly surprised when they even get so far as absorbing the knowledge that I have a PhD in these things. The turning point for a lot of students - and it definitely does not come at the same place for everybody - but the sort of catalyzing moment that I try to create, or sort of, I don’t even know how to say that, that I try to lay the conditions for, because I can’t really create it for them, is a kind of an “Oh shit” of like, “I thought this was going to be solvable or fixable with some sort of, like, targeted intervention, or, you know, the application of my amazing engineer brain” or whatever it is. And it turns out that it’s not. It turns out that there are insidious layers of complication or entanglements that mean, even if you fix a particular issue, it will be displaced elsewhere. It will ramify elsewhere. And sort of that moment of catalyzing realization that, that shit’s complicated, basically, that’s a major takeaway. That’s sort of like the founding takeaway.

In terms of how literature is helpful for that: one, the first thing I would say is that, I mean, there are lots of ways to pursue the study of literature, but one thing that I think is common across many of the humanities, and certainly this can be how literature works, is that being good at reading is basically being good at paying attention. And that is, to some extent, about one’s individual capacities of perception and analysis. But another thing that it’s about is having a rich understanding of the kinds of things you might be looking at, and that requires knowledge of the world. That requires an understanding of what life is like for other people who are not you; it requires an understanding of how systems are purported to work and how they actually work, and the kinds of tensions and gaps that are created there. And so this learning to pay attention, it’s definitely not something you can do by yourself in a room at home, the way we have unfortunately all been forced to do, in the sense that it requires sort of listening to others and learning from others.

Pretty much all of my students show up having already started that. They haven’t necessarily noticed that they’re doing it, or thought of this as a significant part of themselves, or as a discipline of self-making, or as part of their obligations to the world, but you know, they’ve noticed things, they’ve come to conclusions. So part of what I try to do in the class is, first of all, create the epiphany of like, oh, wow, shit’s complicated. Again, I don’t want to say create, I want to create the conditions for that to happen, because I could harangue them from the front of the room all semester and it wouldn’t necessarily make a difference, right? The thing — and this is the other thing literature can do — is it creates a common ground in which experiences can emerge, right. I do not have to be the authority; the authority can be their experience, right. And that doesn’t mean their untrammeled experience, when I am there, and their peers are there, to sort of be like, “Well, wait a minute, did you notice? Your clean, clear theory of how this all works excludes half of what’s going on. What happens if we pay attention to that and try to account for that?” Working through the world of the story itself, I think, can create that initial sort of realization that shit’s complicated.

And I think too, that working with short stories, especially in the context of group conversation where you are thinking with other readers and, you know, trying to respond to other readers and harmonize, or challenge, or whatever it is that you’re doing, is really good practice in paying attention.

It’s like all of the small observational and imaginative and sympathetic muscles that go into that work: you just get practice doing it. And literature offers a setting in which, I think, there are pretty much always at least some students who really feel deeply some aspect of the story. And actually, that doesn’t always happen right away. Sometimes they show up for class like, “Yeah, I don’t understand why we read this,” and then 20 minutes into discussion they’re like, “oh, oh,” and they have a version of that catalyzing moment, and then they start to understand what makes it hard, or they feel compassion, or something like that. And I think, just at a granular level, doing that work, doing that task of wrestling with description, and wrestling with understanding.

Now there is a way of teaching literature, and I don’t think it’s the worst thing, but I don’t think it takes advantage of literature, where you basically shake it down for content. You say, “okay, so this is a story about a care robot that is equipped with these various capabilities that mean it can emulate the care of a person’s family members; what do you think about the ethics of this situation?” And like, okay, fine, the author has created a useful situation for us to talk about, we can talk about that. But to me, that’s not nearly as valuable as saying, “Well, okay, let’s not make up our own stories on the basis of this premise. Let’s look at the very rich story we have. Let’s look at the behaviors of these characters. Can we imagine why it is that this is how her daughter-in-law reacts to the way that this woman treats her? Can you imagine why it is that when her granddaughter calls and the patient is herself asleep, she talks to the robot instead? Like, can we feel our way into what it would be like to live in this particular sociotechnical moment?”

That, I think, could be a really helpful counterweight to the sort of clean lines of the engineering brain that says, oh, if we build this, we’ll just do X and Y, and then it will work perfectly. And people will get better in the ways that are necessary for them to get better in order to use the technology properly, and then everything will be better. So, actually, let me say one more thing. There’s been a lot of scholarship recently on how literature is a better way to teach discipline X than scholarship in discipline X, and ethics is one of those things; it’s not the only thing. The sort of center of gravity in that is about cultivating the sympathetic imagination: thinking your way, feeling your way, into the experiences of people who are not you. You’re close enough that you get emotionally invested, but it’s not your actual experience that’s under the microscope, so it’s a little safer to talk about with others. I think that that can be really powerful, but I don’t think it’s enough. And this is something where I had really bought this line that cultivating one’s capacity for sympathy and sensitivity and imagination was sufficient to sort of turn people toward greater compassion, both in feeling and thought, and then also in action toward others. And at a much more granular level, right, not talking about the broader political landscape, but just sort of in conversations with individuals, that seemed not to be true in ways that were very hard for me to accept.

Some of the people I knew who were most sensitive and attuned were really, I thought, falling down, or even just actively turning away from action, from compassion, from making concrete choices that actually would improve the lives of others or mitigate their suffering. And the conclusion that I’ve come to, and try to operationalize in the course, is that the sensitivity is a really important ingredient, but the other thing you need to be able to do is stomach the discomfort of recognizing the suffering of others and the potential harms that you could cause and the actual harms that you have caused. Because if you feel it, but don’t have the capacities to sort of confront it, and sit with it, and think and feel from within it, and you instead try to fight it off, then you’re just gonna run away from what you have learned. And that is not ultimately helpful, because though literature can bring us to a place of sort of acute feeling and sort of felt human experience, these are not our lives. I’m not asking those people in the class who have suffered, who have been the most vulnerable, to be like, “this happened to me, don’t you care about me,” right? Because they’re gonna be disappointed a lot of the time. People are gonna turn away, or they’ll say the right things, but they’re not able to contend with it in a practical way. And the other thing that literature is good for is it creates the right balance of intimacy and distance to work on this very, very important capacity, which is the capacity to manage discomfort.

Seth Villegas

(25:25) Speaking first about this kind of messiness of lots of different situations, I think that really speaks to a lot of where I started to get into some of these things. In part because so, so for instance, like I have a friend, who’s an engineer at Facebook, and..

Emanuelle Burton

Oh dear.

Seth Villegas

Well, and you know, it can be hard for him sometimes, right? Because especially if the company’s in the news a lot, you know, there’s kind of reputations and things like that. But you know, for him and his team, that’s not how it feels working on their system. I think in part because I know him so well, I want to be really, you know, sympathetic to his circumstances, try and have a broader perspective on what’s going on, while also speaking to, like, okay, well, there can still be problematic kind of macro behavior from the company, even while there are people that work there. And actually, you know, as I was kind of designing my own case studies, one of the things I tried to put in those situations is like, “Well, what do you do if you’re being pressured by management to do something that you feel is unethical?” For instance, if you find a kind of data exploit or something like that, that you could fix, or you could use, right? And these are things that happen in the real world all the time.

Emanuelle Burton

Sure.

Seth Villegas

And, you know, that kind of speaks to these sorts of situations where you can have real incentives to do things that, well, maybe you wouldn’t otherwise do. And uh, my kind of hope — and I’m glad that you mentioned this — is just that people would come out of these things sort of, you know, paying attention, right? Like that they’d be able to recognize when they’re in those sorts of circumstances, to kind of stop and slow down and be able to respond to those things. Unless we’re able to sort of do that over and over again, it’s going to be hard to see sort of broader changes. And you know, we can kind of sometimes be really caught up in given projects and whatnot, but we need to keep going through that process of, you know, asking what are the details that are going on here. And you know, as you were saying, literature is really great at, like, well, show me in the text where that happened, and then, you know, if someone else points out something that kind of goes against your interpretation, it can be like, oh, I actually completely ignored stuff that seems kind of important.

Emanuelle Burton

Or I looked at it, but it didn’t occur to me what I was looking at, and it took this other person’s insight to tell me what I was looking at, when this character made this face and didn’t say anything.

Seth Villegas

Exactly, and it’s out of the conflict of those characters that you get sort of, I don’t know, like a myriad of that kind of conflict between different personalities in the classroom of people who kind of resonate with different people in that situation. And it’s a way of kind of playing out a set of social circumstances in a, in a way, that’s not just a case study, right? It’s not like super dry, but instead, it kind of shows like, oh, like, this person felt this way, in these circumstances, right. It’s not, it’s not just a matter of use cases, and all these other-

Emanuelle Burton

Right.

Seth Villegas

sorts of things, but the actual internalized experience of what it is to be interacting with different kinds of-

Emanuelle Burton

Sure.

Seth Villegas

-systems at different times. And unless we kind of go through that imaginative process, it, it’ll be hard to really get into the kind of nitty gritty circumstances of what makes a given system really worthwhile. And something that Shannon Vallor talks about is, you know, like using technology to make a life really worth living, and that’s kind of where I’m at, right? Like, hey, like, I’m really excited about this, not just because the technology is cool, but because it would actually be beneficial.

Emanuelle Burton

Yeah. Shannon Vallor is, of course, a virtue ethicist, which I am too. I mean, the whole literature and ethics movement is sort of a virtue ethics movement, because it attends to sort of small, the granular formation of character and small choices. I think one of the real risks of case studies, to which literature is absolutely not immune, is that you can read a case study, “oh, well, this character did this thing that was dumb and bad, and so clearly the solution is don’t do the dumb and bad thing.” There, solved, right? Like, and that’s, that’s not useful.

The thing that I think is useful, and is what you’re talking about, is being able to feel your way into why it might be difficult to do that thing. I mean, I do think there’s something to be said for simply getting the kind of practice of being like, “oh, well, if a dumb or bad or transparently self-serving and disruptive opportunity comes up, I won’t do it. I’ve done this case study, or case studies like it, 50 times: I don’t do it, I don’t do it, I don’t do it.” But I think the thing that makes that practice valuable is if it goes deep into the roots of the self, right? It’s really getting into the difficulty of it, realizing how hard it is to do something that from the outside might look easy. That makes it a lot easier to understand, first of all, why you might be creating problems if you create a technology that sets up those choices, but it also gives you the chance to struggle with building your own capacities.

Seth Villegas

(30:37) So, you mentioned virtue ethics. Um, I know that at least part of your scholarship looks like you’re trying to go through the major ethical theories as well, right, to give a baseline for those things, and also to complicate, say, just the way the numbers look on a given issue. So, for instance, something like utilitarianism, right, which really focuses on outcomes, on measured good, is really, really attractive, I think, for lots of big data sorts of approaches: like, oh, look, we have this number, number went up, therefore good, sort of thing. You know those sorts of arguments. And so, I guess it’s interesting that, I think for both of us, you know, because I’m also in this virtue vein, of having a real concern…

Emanuelle Burton

I know, I know who I’m looking at.

Seth Villegas

Yeah, yeah. A real concern…

Emanuelle Burton

I see you.

Seth Villegas

…for the, the engineers themselves, right, and the kinds of decisions they’re making, and more broadly, the kind of people they are, in addition to whatever the outcomes are. How did you get to a point of really worrying about that sort of stuff, as opposed to the thing that’s touted in the big graphs for the most part, when we’re talking about the relative good that X technology has had?

Emanuelle Burton

(32:03) Well, so this is where my sort of personal biography is helpful, because, I mean, I was never going to approach this any other way. The stories I read as a child built me into a virtue ethicist long before that was the language that I had. You know, when I teach virtue ethics, there are usually between five and ten students who experience this as revelatory, because this is how they have always seen the world and understood the world, and they did not have language for it. And so it’s, here’s a toolbox for describing and understanding and pushing, like, and building on what you’ve already been doing all on your own. And so it just feels like this incredible gift. And I was also such a person. Every once in a while I get students at the end of the semester, in their final reflection paper, who will say, “Oh, the ethical theories cannot contain me, they are so simple and the world is complicated.” And I’m like, I failed, I failed, because in fact, each of them is tremendously complex and elastic, and can do a lot of different things, and that is what makes them worth teaching.

The thing that I try to teach about utilitarianism is not like, “Oh, guess what, it doesn’t work,” but rather that it’s very, very hard. I mean, if you read John Stuart Mill, if you read the early utilitarians, it’s fairly clear that they’re working out of a really, really different moment, culturally. And my take on John Stuart Mill (I’m not a Mill expert, by any means) is that he reads to me like someone who presumes some measure of virtue ethics, not because he was distinctive, but because he was writing when he was. And so the notion of what is good is profoundly informed, as, you know, most Western thought was for literal millennia, by a sort of a rich notion of human good, and of what it means to flourish. And we just, we’re not there now. This thinned-out notion of good, as in, “yes, we have four televisions in our house instead of two televisions, there’s what the GDP is, people are living 1.5 years longer, we’re exporting more corn”: all of these things just don’t get at where Mill and some of the other utilitarians, I think, took for granted that we all understood we were going to go. And now the ‘we all,’ of course, is really limited.

Obviously, there are problems with where Mill was coming from, but I do think the main takeaway is really that utilitarianism, if you take it seriously, is tremendously hard, because it makes you accountable for all sorts of layers of experience, had by all sorts of people who are not you, and you are responsible for being rigorous and careful and imaginative and compassionate toward all of them. And that is really, really hard. So I have no quarrel with the very serious and committed utilitarians.

I think the structure of utilitarianism, and not just its similarity to, like, Markov decision processes, makes it inviting. I mean, it’s easy to be a bad virtue ethicist, easy to be a bad deontologist. I think bad utilitarianism is maybe extra inviting, and in our current cultural moment, has the gloss of looking like rigor, maybe more convincingly than bad deontology or bad virtue ethics. But I don’t think the problem is utilitarianism.

In fact, there’s a woman, and I may mispronounce her name, because I know her from Twitter and I’ve never heard the name said out loud, but her name is Abeba Birhane. She’s a cognitive scientist, she’s doing a PhD, I think at Dublin. I think she’s really worth listening to on a lot of things. One thing in particular, and this is going to come out as a defense of utilitarianism, in a way, is that she has talked a couple of times about how frustrated she gets by people saying, “oh, but the real issue with tech ethics is that, like, we’re getting addicted to our phones, and the quality of life is being, you know, diminished or thinned in these minute ways we aren’t paying attention to.” And can we not talk about that? Can we talk about the very, like, material, concrete harms that are happening to minoritized and disadvantaged people, instead of focusing on what the rich white people experience with tech?

And I think that that’s a really important thing. And I think that utilitarianism, rigorously applied, holds us accountable to the kinds of things that she is saying. And I think about that a lot too, because, you know, as a virtue ethicist, I worry a lot about these granular changes to our day-to-day experience. The way that I reconcile that is, I think, you know, those create the conditions for larger scale harms in the not so distant future. So it doesn’t necessarily have to be an either-or, but I do think that it requires us to pay attention to the lives of everyone, and not just the people in our immediate sphere. So.

Seth Villegas

Part of the reason why I’m bringing it up is, I think that the biggest fights around assumption and measurement occur within a kind of utilitarian framework of what’s happening. And to give a more mundane example, there’s the kind of one-star/five-star phenomenon, right, where everything’s either the very best thing ever or the very worst thing ever. Which just speaks to how the measurement system itself no longer really functions, because people feel so strongly about something that they’ll downgrade it to bring the entire average down. And, you know, so there are issues going on like that, which I think people in philosophy would be really interested in, just because the thing kind of represents something else now because of how people treat it.

But then also, it’s like, we have to hold onto the new ways in which we can measure harm. So for instance, one of the other people we’ve had on the podcast, Muhammad Ahmad, talks about discrepancies between patient pain reports and doctor pain reports, and how people’s pain gets discounted in certain circumstances.

Emanuelle Burton

Or depending on who they are.

Seth Villegas

Yeah, exactly. You know, I’ve had some experiences with these things as well. And it really makes it hard to build trust with your doctor. But you can also use data to really show a kind of widespread harm that’s very common across these dimensions. And I think that that’s a really powerful tool. But we also wouldn’t want to get so caught up in that that we dismiss the ways in which people’s experiences are shaped by things that are much harder to quantify and measure based on their circumstances, right. And so it’s weird: depending on what you’re able to pay attention to, you’ll kind of see those different things.

Emanuelle Burton

Sure.

Seth Villegas

And I especially liked what you said earlier, that the sorts of people who are more sensitive to those things may be worth paying attention to, because they may notice something before you do, and I find that to be really invaluable in life.

Emanuelle Burton

(39:28) So, two things that I would want to say in response to what you said. One of the many reasons I like virtue ethics is that it is of necessity slow. And I think that a lot of the difficulties that you’re talking about are a function of immediacy, right? Like if you have to sit down and write an angry letter and put it in the mail, because you’re mad at someone, that’s just a whole lot slower than texting them, or sending an email, or filling out an online form, or giving a one-star review. That upsurge of immediate feeling rarely leads us to be our best selves. I think if you’ve worked pretty hard on yourself, you can have a lot of apparatus of reflection in place such that, almost immediately after the immediate reaction, you slow your own roll. I mean, I don’t want to say that that sort of immediate surge of reaction is never useful. I think there are all kinds of bold things that we do which we can only do by riding that wave. But a lot of the kinds of problems that are created by social media have to do with that kind of immediacy. And I think that’s pretty interesting, because one thing I like about virtue ethics is that it calls our attention to the value of things that cannot happen quickly. It does not obviously provide a practical solution, but it can at least sort of say, maybe you want to become the kind of person who slows down before you send the angry review.
Seth Villegas

As we’re kind of, you know, ending our time here, I’d really love to hear from you what you think we should be paying attention to.

Emanuelle Burton

Yeah.

Seth Villegas

So, what I mean by that is, you know, you’ve kind of mentioned all sorts of things, right? Like, there’s, there’s kind of things that come up in our daily lives, there’s new technological systems that are being developed. And…

Emanuelle Burton

Who’s the we here?

Seth Villegas

So, um, let’s just talk locally, right? Let’s just talk about, you know, let’s again, I think I’m imagining people like in the classroom, right. So, so people who are kind of going out into this new world, either to make these systems, right…

Emanuelle Burton

Yeah.

Seth Villegas

…or to at least take part in them, right, to a significant degree.

Emanuelle Burton

Yeah. So, to students in the classroom who are, let’s say, coming to the end of their educational experiences, about to be dropped into the cold, hard world without an apparatus of support: actually, I think one thing that you can do that’s tremendously valuable for yourself is to build an apparatus of support. Find a community of friends, or coworkers, or family, or some mixture of those, that grounds you where you want to be. We have a lot of stories, in this country in particular, about brave solo fliers who defy institutions: one man stood alone in the face of the blah, blah, blah, and he was Clint Eastwood, and blah, blah, blah. And I think that that is really challenging. Pretty much all of the stories I know of people who are able to do brave things, even when it looks like they’re doing them by themselves, those people do not experience themselves to be by themselves, right? Sometimes it can be, “I was standing with my ancestors,” or “I was living out what I and my three childhood friends all knew and shared as a vision of what was really important in the world.” People who break out of toxic communities or patterns rarely do so because they independently conceptualize another way to be. They may want there to be another way. They may feel that where they are is not good, but I think there needs to exist at least the idea of another community, even if you’re not a part of it yet, that you could belong to, that you can move toward.

I think there’s nothing more civic-minded and anti-capitalist, and generally productive for making the world we want to make, than finding a community of people, or being in a community of people, that shares your vision of what is good and how to get there, and keeping yourself grounded in it.

And talking and thinking with those people and being able to be vulnerable with them, which doesn’t just mean saying “I have these feelings,” but saying, “I am considering doing these things that I might be ashamed of. I’ve done some things that I’m ashamed of, I feel shame about all of these options I might pursue, but I have to do something.” And the idea isn’t necessarily that you, you know, find a way that involves no shame at all, because the world is hard, but that you, you find people who can help you figure out what kinds of harm and shame you can live with. Because that’s how you don’t wind up some sort of tangled, broken person who’s holding yourself together with scotch tape and who can’t afford to pay attention to anything.

I definitely do not want to say that the real victim here is Mark Zuckerberg; that’s not my goal here. But I do think that a lot of people who are doing terrible harm in the world have wedged themselves into situations where they feel like they cannot afford to pay attention. And so one of the most important things you can do for the world, as someone who’s going to act in it with power, is to preserve your capacity to pay attention. Not by keeping your hands clean, because again, you don’t do important work without creating some harm, intentionally or otherwise, but by holding yourself accountable for it in a way that allows you to move forward, as someone who tries to mitigate the harm you’ve done and tries to get better and help others do that, too.

Seth Villegas

(45:39) I think you’re really highlighting the importance of community then, right-

Emanuelle Burton

Yes!

Seth Villegas

-of building up those relationships for mutual reflection with one another on the kinds of things that we’re experiencing, right. So not just the noticing of things, but the attempt to articulate those things, and all of our failings to get that across to someone, you know, like, “Am I crazy when I’m seeing this, right? Is this a real thing?”

Emanuelle Burton

I didn’t talk about this in terms of accountability, but I think being accountable for describing is important, because describing is actually really hard. This is something that a lot of people discover in the course of my classes. One of the versions of the little epiphany that I feel really good about is when I’m asking someone some questions, not, you know, putting them on the spot, but just saying, “can you say more about what you see?” and they say, “it’s hard to explain,” and I say, “I know, that’s why we’re here, keep trying.” Describing is hard, and I still find that I learn things about what I think or what I understand or what I see in the work of trying to describe to other people. So that’s a really important kind of accountability. And I think that that’s something that happens in conversation with others.

Seth Villegas

Well, thank you so much. So, if people want to kind of follow you and your work, how can they do that?

Emanuelle Burton

Well, I’m on Twitter. I mean, I’m not what I would call good at Twitter, but I’m on Twitter, and one of the good things, right, because I have, like, 400 followers instead of 40,000 followers, is that my DMs are open. That’s something I can afford.

What else can you do? You can email me at UIC, where I teach. I really dig the AIES conference. It’s not a perfect conference, but it’s pretty great. And so if you go there, you will probably find me. We’ve got this textbook coming out next year so that’s…

Seth Villegas

Yeah. What can you tell us about that?

Emanuelle Burton

Oh, my God, I should talk about this. My co-authors are going to be so mad at me, like, how did you not do this? Okay. So, this was the textbook that Judy ended up talking about, which came out of this meeting with my co-authors. This textbook is, well, I want to tell you what it’s called. I cannot tell you what it’s called, because we’re still debating it, but it will probably be called either Understanding Technology Ethics through Science Fiction or Understanding Computer Science Ethics through Science Fiction. And we would like to be honest about the purview of what it is that we were working on.

Computing and Technology Ethics: Engaging through Science Fiction by Emanuelle Burton, Judy Goldsmith, Nicholas Mattei, Cory Siler, Hardcover | Barnes & Noble® (barnesandnoble.com)

So my co-authors on this project are Judy Goldsmith, who is a computer science professor at the University of Kentucky. She’s the one who initially recruited me into this project that she was working on with Nicholas Mattei. Nick did his grad and undergrad at Kentucky, worked at Watson for a while, did a postdoc in Australia, and is now at Tulane. The other two co-authors, Cory Siler and Sara-Jo Swiatek, are both graduate students. It’s really important to me to underscore that they were not there as, like, second-order contributors who were filling out others’ ideas. They were absolutely equal contributors to this project. There are five co-authors on this project; there’s not a main author and then other people. So Cory is a graduate student in computer science at Kentucky. Sara-Jo is finishing her PhD in ethics at the University of Chicago Divinity School, which means that she did all of this while working on her dissertation. So that’s the five of us. We’re a fairly varied and colorful team.

And the project, the textbook, is a combination of something that is mostly like a traditional textbook, and then an anthology of teachable stories, and I’m sort of bowled over by the incredible quality of the stories that we have. They’re just knockout stories. So, you know, because I’m a literature person, my friends like to ask, “What have you been reading?” and I have for the last several years said that I’ve been reading the same fun stories over and over, that’s all I’ve read. Which in some ways sounds demoralizing, but it has actually been okay, because they are such rich and extraordinary and generative little worlds that it’s been incredibly enriching to get to know them as well as I have done. We have a little bit of framing material about them that’s going in the textbook itself, and then we also have these fairly extensive pedagogy guides that are going to be available to instructors. Because of what I said before, right, about how I don’t think the most productive way to teach literature is to shake it down for content. What’s really valuable is to go back into the text, but that’s really hard to do as an instructor. When I said that the rest of the textbook is more traditional, it is in the sense that it’s like a textbook. It doesn’t look a huge amount like other computer science ethics textbooks that exist.

Our chapter on ethical frameworks, I mean, first of all, we’re calling them frameworks and not theories, because it’s really important to us to take them out of the register of abstract truth. We’ve done a couple of structural things to do that. We’ve also diversified it significantly. So we’ve got four major frameworks: utilitarianism, deontology, virtue ethics, and communitarianism, the last of which is grounded in sub-Saharan traditions of community-based understanding of how to live in the world. We also have a fifth unit on contemporary responses, so that’s ethical movements and theories that have popped up in the last 75 years: we’ve got responsibility ethics, the capabilities approach, and feminist ethics in there. So that’s one thing that’s already pretty different. And also, the organization of the textbook in general is quite different. We have an entire chapter that’s essentially epistemology.

This is the other thing I was going to say about utilitarianism, right: one of the really useful ways to get under the skin of utilitarianism, and sort of crack open the sense that it’s easy and straightforward, is to say, “Well, how do you know what the good is?” And, you know, there’s an easy way to answer that question. But you put just a little pressure on it, and whatever easy answers people have supplied start to fall apart. And then you get to questions of “how do you know these things?” And the good news is, there’s an entire philosophical discipline with wide tendrils that helps us explore these questions of how it is you know the things that you know, either on an individual or on a social level. So there’s a whole chapter, it’s called “Managing Knowledge,” and it’s not just intro to epistemology. It’s geared pretty closely toward thinking about information technology, both for people who live with it and people who build it.

We also have a chapter on personhood and privacy. This is getting at some of those concerns that I was talking about earlier, right, the minute ways in which the conditions of being a person are not abstract and absolutely given, but are framed by the socio-technical moment in which we live. And so the chapter in particular thinks about what that means for privacy: not just what’s possible, but what seems valuable, and what makes it valuable, what is the thing that can be known about a person, or can be transmitted or shared, so it’s that whole nexus. I don’t know of any other curriculum that spends very much time on personhood at all, or that puts it together with privacy. I also don’t know of any other curriculum that spends that much time on epistemology. So it’s a very hermeneutical curriculum, hermeneutical and very humanistic. It was really important to us to create something that would feel vital and urgent and immediate to students in computer science, but that did not shortchange them on the deep conceptual resources for asking the fundamental questions that lie under the immediate practical ones that they’re likely to encounter.

Seth Villegas

That sounds like a really different approach from at least most of the books that I’ve seen, right, which are usually really split, especially the older kind of computer science ethics, which is very case-studies based, so…

Emanuelle Burton

(53:17) There’s your philosophy unit, right? So, I mean, there’s often a real gap that is not well navigated between sort of what you might call field ethics, like “we’re professionals, and we’ve dealt with this, and here are the problems that arise, and here are the worries we have, and here are some of the strategies we’ve adopted for coping with it,” and then, “oh, here’s a philosopher, or here’s somebody who works in the academy.” You see it really clearly in medicine, right? Medical ethics is run almost exclusively by doctors. And then you have bioethics, which is basically a bunch of people in universities who talk to one another. And they don’t talk to the doctors, and the doctors don’t pay attention to what they say. And why is it like this?

Well, I know why it’s like this: it’s because it’s really, really, really hard to bridge that gap. But we’ve tried. So we sent the manuscript off to the press in early June. At some point in the next probably couple of weeks, maybe months, we’re going to get reviewer reports. We will have some number of months to address those and make revisions. I think it’s going to be out next summer, is the long and short of it. Some of it’s going to depend on, you know, when the reports come back, how much they want, how quickly we’re able to address them. But it should be ready for fall course adoptions, and if not that, certainly for spring of ’23. And it’s going to be pretty affordable. This was a commitment that we brought to MIT, but that they were actually all on board with from the beginning. I mean, the way that I think about it is that I hope that students who get this textbook, and find that there are a couple of stories that really stick with them, will say, “You know what? I’ll keep it. I’ll keep it so I can reread Lacuna Heights or Today I Am Paul whenever I want, because that story did a lot of work for me and I want to be able to go back to it.” And I think the idea, too, is that maybe people who aren’t in the course, but are interested in science fiction, will want this anthology and absorb the material that comes with it.

Seth Villegas

Okay great, that sounds like a really great vision. I’m looking forward to kind of following its progress.

Emanuelle Burton

Well, thank you. I’m really glad to hear it. We’ve been working on it a long time and I’m excited for it to be in the world.

Seth Villegas

(55:19) Thank you for listening to this conversation with Emanuelle Burton. You can find more information about DigEthix on our website digethix.org, and more information about our sponsoring organization, the Center for Mind and Culture at mindandculture.org. If you’d like to respond to this episode, email us digethix@mindandculture.org. Or you can find us on Facebook and Twitter @DigEthix, or on Instagram @DigEthixFuture.

As we’ve talked about in prior episodes of the podcast, it may not be all that easy to teach ethics in the first place, especially inside of other disciplines like computer science and engineering. For Emanuelle, the way that we start to gain an entrance into ethical understanding is through stories. Stories are already a part of our childhood formation, influencing us well beyond what we can imagine. But the stories that we encounter in the present can also be used to build our senses of sympathy and attention, which can help us begin to think differently about how we might view a given situation.

The act of description and the asking of questions are real skills that take time to develop. However, as Emanuelle points out, we want to be careful about becoming armchair ethicists, too involved in theory while avoiding practice. In other words, we don’t want to be the sorts of people who don’t actually make a difference. And for that reason, it will always be important to have a wide conversation between academics, engineers, and concerned members of the general public. I think from within that, as I talked about in the introduction for this episode, we can begin to imagine different kinds of alternatives and weigh the relative value of those alternatives. But unless we have options, and can think about what those options are, perhaps we won’t be able to arrive at any kind of solution anytime soon.

One of the other benefits of talking about something like virtue ethics is that it puts into focus two different groups of people. First are the engineers themselves, the people who are in charge of making technologies. And second, there are the people who use these technologies. When we talk about the potential benefits for the engineers, I think those have a lot to do with making those people the right sorts of people to build systems that we actually want to use. And when I talk to young people inside of computer science labs, people who are building products and applications for other people to use, they really are socially concerned in a way that I’m not sure I necessarily saw when I was in college. In many ways, that gives me a lot of hope for the future. However, it is also the case that we, as the people using these technologies, have to think about how they’re involved in our lives. While they might make some things more convenient, we have to think about the trade-offs of that convenience. How will that convenience begin to form us in the long run? And I think it’s only then, when we’re actively paying attention to the ways in which we use these technologies, that we can help these technologies to better fit into our lives, and into the lives that we actually want to lead, rather than having them lead us by the nose.

So with that, I thank you very much for listening to this episode. And I hope to hear from you before our next conversation. This is Seth, signing off.

--

--


Research center innovating creative solutions for social problems in the mind-culture nexus. Powered by a global network of researchers & cutting-edge tools.