Designing for Ethics

Interview with Samuel Woolley, Director of the Digital Intelligence Lab, The Institute for the Future (IFTF)

Roya Pakzad
Humane AI
7 min read · Apr 18, 2018


“My team will get back to you on that, Senator,” and “AI tools.” These struck me as two of the most-used phrases during Mark Zuckerberg’s congressional testimony.

What struck me the most was Zuckerberg’s tendency to portray “AI tools” as if they are magic bullets for solving the issue of misinformation. But is there a downside to making disembodied “algorithms” the center of our debates around online propaganda? What is lost when we talk only about the things, and not the people? Who designs algorithms, develops them, and optimizes them? What assumptions guide their work? What standards do the makers of algorithms have in mind?

Back in February, in the hope of finding answers to these questions, I attended a workshop at the Institute for the Future (IFTF) in Palo Alto called “Computational Propaganda: A Conversation With Tech Industry Employees.” As part of his work at IFTF, Sam Woolley, director of the Digital Intelligence (DigIntel) Lab, gathered technology workers to discuss the role they themselves play in the global fight against political propaganda.

Long before these congressional hearings and hasty regulations, Sam had spent five years (from the University of Washington to the Oxford Internet Institute) researching the role of bots in spreading political disinformation around the world. A recent result of that work, in collaboration with Philip Howard and Ryan Calo, was published as a paper titled “Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration.”

Below is an edited transcript of my conversation with Sam about his experiences working with tech workers to find solutions for “computational propaganda.”

Samuel Woolley (Credit: Sam’s Twitter Account)

Roya Pakzad: Sam, I really liked your workshop. It was very unique in terms of involving tech workers themselves in the issue of computational propaganda. So, I’m curious to know, how did you start this initiative and what’s your next step?

Samuel Woolley: So, IFTF has a Future for Good fellowship program. Michelle Miller — a founder of coworker.org [link] — and I are fellows together. Coworker.org helps workers organize in the digital world, since many unions have been disbanded.

The goal is to help workers be thoughtful, challenge the status quo, and fight for their own rights and their own voices. Michelle and I thought that the problem of computational propaganda, or digital disinformation, is something that seriously affects tech workers. They also have a real hand in helping to combat it; most of them are not CEOs.

But they know the platforms and they have an interest in making them better, so let’s involve them. The workshop we held here was just a trial run for bringing tech company workers together to lend their voices to this problem, and I think it’s going to be part of a larger process of hosting these workshops around the country (and potentially around the globe) to help lift tech workers up.

Roya: While many people have been looking for some policy and legal solutions to fight against “computational propaganda,” you took a different path and decided to work directly with technologists. Why?

Sam: I have been an ethnographer, so I was most interested in hearing the human perspective on why the technology gets built, how it gets built, by whom and for whom.

I really wanted to understand, beyond the policy parameters, what the technology companies can do to combat this stuff. And it was actually [informed] by understanding the profile of the people who were building these bots and technologies. What I learned is that a lot of them weren’t building bots for political manipulation; they were building them to make money. It was kind of illuminating to realize that many of these people were struggling to make a living and were building bots to rent out to clients in other countries.

In other contexts, I did find technology workers who were very invested politically. So I began to understand that there is a dichotomy. On the one hand, you had people who were struggling to make money, and on the other hand there were people who were doing it for political ends. There is a real disparity between these two groups.

I wanted to point out a couple of things: first, to underscore the fact that people in developing countries were doing what they had to do to make a living and earn money — to some extent it was similar in wealthier Western countries. The other thing I wanted to underscore is the ethics of design. Especially when I started to talk to people working at the bigger technology companies (like Twitter, Facebook, and Google), what I began to understand was that there is not a lot of reflexivity, not a lot of thought on the part of the engineers, when they are building tools, about how those tools might be used for manipulation.

The idea [is] that the people who build technology have a role in its production, in how it gets used, and in how it might be misused. The tools — whether social media platforms or the algorithms that dictate which trends get shown — are not agnostic and not free from bias.

Roya: It’s great that you mentioned the ethics of design. Technology companies have been thinking about offering ethics training for their technical employees. What do you think about these types of training?

Sam: Well, this is my opinion and my guess: I think a lot of the training at those companies had more to do with interpersonal communication in a routine, old-school way. What wasn’t really being conceived of was the way the tools themselves might be normalized for misuse, like the ways they could be co-opted by powerful people to try to manipulate the masses.

I just think there was a lack of foresight in the development of these tools. The companies were thinking about the ethics of gender norms, or the diversity of their hiring, but they weren’t thinking about the ethics they were baking into the tools themselves.

When I was a researcher at Google Jigsaw, we would bring in people who all looked different from each other to try out the tools and talk about how a tool might be used or misused. I would say research first, then design and launch, rather than design and build in the “move fast and break things” mode.

For instance, I’m now working with Jane McGonigal [link], a female game designer, on an ethics challenge toolkit. It basically has three challenges that the people who build technology should go through. There is a series of [ethical] questions they should ask themselves when they build a tool. The idea Jane and I have is to challenge people to think ten years in advance about possible scenarios. The goal is to help people achieve clarity rather than a concise and exact conclusion [as a result of their new design].

Roya: Over the past few months, we have witnessed movements by technologists who criticize their companies from within, people who call themselves “tech reformists” or whom I might call whistleblowers. Sometimes their ethical concerns are ignored, and some even decide to leave their companies. What has your experience been working with them? How can your work empower them?

Sam: Currently, I’m collaborating with people like Renee DiResta and Tristan Harris, who are thinking about these questions from the perspective of people who work at Google or Facebook. Those people aren’t really whistleblowers; they are more like tech reformists. They do believe the technology has to be changed so that it doesn’t manipulate the psyche as much, like Tristan’s idea of “Time Well Spent” [link]. In the past I’ve absolutely spoken with people internal to the companies, but I don’t know if I’d call them whistleblowers so much as concerned employees who love their work. They were not trying to leak information; they wanted to talk about the process.

I think groups like coworker.org are on the front line in discussing how to prevent these kinds of miscarriages of justice [tech workers’ ethical concerns being ignored, workers getting fired, etc.]. Their hope is to make sure workers are heard and have more of a collective voice. We do need more groups like coworker.org, but what we do at the DigIntel Lab is give voice to the same sorts of people through research: I discuss these things with them and write research that says, “here is what actually comes from tech workers themselves.” My way of giving voice might look different from that of NGOs like Coworker, but we are doing similar work, just through different avenues.

Roya: And here is my last question: If you wanted to recommend two books for people who are interested in your work, what would they be?

Sam: One of them is The New Leadership Literacies by Bob Johansen [link], the former Executive Director of IFTF. It’s written mostly for a business audience, but it also presents people in government and civil society with notions of how organizations and leadership are becoming liquid, how they are becoming distributed. It’s really related to the problems we are seeing in today’s world. We are seeing a reluctance to face the fact that the digital environment means the world looks more like a flat structure than a pyramid in terms of leadership.

For the second book, I think people should read Twitter and Tear Gas by Zeynep Tufekci [link]. She is a great thinker in this space and her voice is really important. As a primer for people who want to understand what’s going on and what’s happening [in online political movements], I think they should really read her book.

We wrapped up here. This interview has been lightly edited and condensed.

This conversation was part of the interview series for my newsletter, Humane AI. I will continue talking with both policy and technical experts in the field of AI ethics in future installments. Tune in to hear their opinions on many issues, including AI for social good, the history of technology and its connection to AI, digital humanities, and much more. To subscribe to the newsletter, click here.


Roya Pakzad
Humane AI

Researching technology & human rights, Founder of Taraaz (royapakzad.co)