When computers make decisions, it’s political

This is a transcribed and edited interview with David Robinson, founder of Upturn, a tech-policy consultancy that focuses on civil rights and social justice issues. This interview was recorded for the NetPosi podcast at EyeO in Minneapolis, Minnesota. EyeO is a conference about technology, art, data visualization, and activism.

Full audio of the interview. Edited transcript below.

Drew: Tell me a little bit about yourself, David.

David: Upturn is a small consulting firm based in DC and we have a totally public interest docket. We work with civil rights groups and other sorts of social-sector organizations, and we do a little bit of work with government as well.

We are dedicated to helping folks claim a seat at the table in situations where technology is changing a policy issue that matters to everyone, not just to engineers. For example, we work on predatory lending, civil rights, and criminal justice reform where police are using computers. When a computer is deciding who gets a loan or who gets a job, those are traditional civil rights issues, but now you need to have a nerd in your corner to really pull apart the technology that's driving some of this.

Drew: Where do you come at this from? What brought you to this work?

David: I have a mild case of cerebral palsy and my handwriting is really wobbly. When I was a kid in school, I was the "bad writer", which meant I had bad penmanship. But when I got a word processor in about fourth grade, it turned out that I really liked writing. It was this hugely empowering thing for me. So I've always had this vivid personal sense of technology as a tool for empowering people and making good things happen. One of the things that was really striking to me about my own experience was that it wasn't a new invention that changed my life, it was a change in the rules. Computers already existed. What changed, and what made the world a lot better for me as a student, was that the school began allowing kids to have computers. The rules changed.
 
Upturn is about making sure that when opportunities are out there for technology to do good, or for us to prevent technology from having harmful effects, we line up the rules in a way that makes those things take their best form for people.
 
Drew: In your consultancy, you work primarily with public interest groups. But just to be clear, it’s a private consulting firm, is that right?
 
David: That’s right.
 
Drew: So what led you to take exclusively public interest clients? How did that come about?

David: Trial and error, actually. We began with the idea that we would help law firms represent big companies that have privacy issues, and then we would run a pro bono practice on the side to do the work we really think is important, the public interest work. But over time, with the help of the Ford Foundation and some others, we were able to develop a docket of completely public interest work. I think part of the reason we're a consulting practice is that we really see our role as a supportive and collaborative one. Everything that we have been able to achieve has been in partnership with other organizations. One of the things that we're able to do is to be dedicated to helping other groups fulfill their goals. We're really the agent of somebody else who's doing the work — that's been our approach so far.

We're constantly in conversations about what the right approach is, and we're constantly fine-tuning it. In any version of what we do, it's really important that we're doing it in partnership with the subject area experts who've always done this, whether it's criminal justice or lending or whatever the area may be.

Drew: How does that work? How do you interface with these other groups?

David: We have support from some of the same funders as the major civil rights organizations, places like the Open Society Foundations and the Ford Foundation. Over the last couple of years, we've been working intensively with a coalition of civil rights groups that's based at the Leadership Conference on Civil and Human Rights, which is the oldest, largest, and most diverse coalition of civil rights groups in the United States. Part of the way this works for them is that when they get a grant from Ford, not only do they get financial and other kinds of support from Ford, but they also get the opportunity to call on Upturn when technology issues come up that are relevant to their work.

We've worked on a string of issues, including online ads for payday loans, which Google recently announced they're going to prohibit. We're very excited to have been part of the conversations that helped to spark that result.

Drew: What's the story about Google's online ads for payday loans?

David: We put out a report a year and a half ago about how lead generation works online. That's when, if you're interested in a payday loan, the system keeps track of you. Let's say you enter a search like "I need money to pay my rent" on a search engine. People in the advertising trade will figure out how to create the online ad that most appeals to you as a financially desperate consumer. It turns out that making the ad is one job and making the loan is another. The advertiser is what's called a "lead generator". They will auction you off to whoever thinks they can make the most money off of you — to whichever lender thinks they can get the most interest and fees out of you. These are expensive ads; advertisers pay something like $10 or $12 per click. It's a high-stakes business.

An example of online ads for predatory “payday loans”.
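The auction David describes boils down to selling the searcher's information to the highest bidder. Here is a minimal hypothetical sketch of that step; the lender names and bid amounts are invented for illustration, though the $10–12 range matches the click prices he mentions.

```python
# Hypothetical sketch of a lead-generation auction: the "lead" (a
# searcher's info) goes to whichever lender bids the most for it.
# Lender names and bids are invented for illustration.
lead = {"query": "I need money to pay my rent", "state": "MN"}
bids = {"LenderOne": 10.50, "LenderTwo": 12.00, "LenderThree": 9.75}

winner = max(bids, key=bids.get)  # highest bidder buys the lead
print(f"{winner} buys the lead for ${bids[winner]:.2f}")
```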

People have this idea of the Internet as a place where there's no geography and where the same rules have to apply everywhere. So you have state rules, say, that protect consumers differently in different US states, and people have this idea that you couldn't replicate that on the Internet. But the truth is that things are much more advanced. There's really a way to geo-target ads to each person. That means that if you're running a big platform, like the Google or Microsoft ad platform, it's possible to have different rules in Pennsylvania and different rules in Minnesota. These companies can actually match the careful rules that states have for protecting consumers. So we were able to point that out in a report, which then opened the door for public conversation and, actually, some quieter conversations with several of the operators of large search engines and other online platforms.
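To make the geo-targeting point concrete, here is a minimal hypothetical sketch of how a platform that already targets ads by state could also enforce ad policy by state. The APR caps and the `ad_allowed` check below are invented for illustration; real state usury laws and platform policies are more involved.

```python
# Hypothetical per-state ad policy check on a geo-targeting platform.
# The APR caps below are made up for illustration only.
STATE_APR_CAP = {"PA": 0.24, "MN": 0.36}

def ad_allowed(ad_apr: float, viewer_state: str) -> bool:
    """A platform that can geo-target an ad can also geo-enforce it:
    the same loan ad can run in one state and be blocked in another."""
    cap = STATE_APR_CAP.get(viewer_state)
    return cap is None or ad_apr <= cap

print(ad_allowed(0.30, "PA"))  # False: blocked under the stricter cap
print(ad_allowed(0.30, "MN"))  # True: allowed under the looser cap
```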

That opened up a discussion about what the options are and what the right ways are to protect consumers. The upshot of that, as was reported in the press, was that Google decided to ban ads for payday loans across the United States, with similar rules now coming in more widely. We should also say that Facebook now has a policy that prohibits online ads for payday loans.

What does Upturn do and what's our role? The important thing here is knowing about consumers and their needs and about financial justice. But that's not enough on its own in the Internet context. You also need to be comfortable talking about how online platforms work and how we want to change them. That's the catalytic ingredient that we bring to the party.

Drew: Who are the people that do this with you?

David: We have a great team, we're so fortunate. Harlan Yu, my co-founder, holds a Ph.D. in Computer Science. My training is in law. Our original name for the firm was Robinson and Yu because, well, we couldn't come up with anything more clever and had to fill out the form and get started.

Aaron Rieke is the third principal of the firm. He came from the Federal Trade Commission, where he did a blend of legal and technical work, and has a background in privacy protection. Logan Koepke works with us as an analyst. We have Miranda Bogen joining us in just a few weeks, who comes from a diplomacy and foreign policy background. So we'll be five people probably by the time folks are hearing this podcast.
 
Drew: You said earlier that when you were originally thinking about forming this consultancy, you were considering doing primarily work for private clients and then you would do some work pro bono in the public interest. I'm curious about that. I have this suspicion that it's difficult for folks doing technology to break out of that "pro bono mentality" of doing things on the side — doing a little bit of coding where you can and dropping in and out of organizations. How did you come to the decision of doing this public interest work full-time?
 
David: I think this is a great question, because it's easy to do a little bit here and there. What ends up happening is that people tend to be very absorbed in whatever they are primarily doing, especially people who are very good at what they do. So it becomes very difficult to give your first and best effort to a public interest project if you're also in a corporate context. Of course there are happy exceptions to that kind of rule, but for the most part it really helps if you can focus totally on a public interest goal.

For us, it naturally evolved over time. We wanted to be the kind of place where you could focus totally on a public interest goal.

Before this, I was in an academic setting, doing research at Princeton University's Center for Information Technology Policy. That's where Harlan and I first met, and we decided to start the firm together. Another thing I'll say is that a lot of public interest organizations operate on an unfortunate financial model where they underpay people, so people are only able to work there for a short part of their lives, like people who are just starting out and whose financial requirements may be lower because, for example, they may not have a family that they are helping to support.

There's this temptation to fall into the idea that talent should be cheap, and I think the cost of that model is actually considerable, because you don't have people able to accumulate more experience. You have more turnover if you're not able to offer something that sustains someone for a long run of their career. We're new enough that it's hard to say whether we'll be successful with this. But having started in 2011, I'm coming up on five years myself. My hope is that this can be a longer-term thing for people. If you look at it in terms of dollars-per-hour spent to give someone a sustainable salary, it can be more than you might want to spend. But if you look at it in terms of the amount-of-impact-per-dollar you're able to achieve, having something that is sustained over a longer period really has better returns.

Drew: I’m thinking about that. There are five of you working on this full time. You’re up against really big corporate interests, which may have hundreds or thousands of people who work on technology or who practice law, and some who work specifically at the intersection of law & technology. That seems like a real David and Goliath scenario to me.

So I want to ask you, do you think that this model will allow the public interest to triumph over corporate interest?

David: I’m not a person who sees it as a dichotomy between public and private interests. There are some situations where that’s exactly the right way to see it, but I would hesitate to generalize about the public interest and private interest always being at loggerheads with each other.

A lot of the art of what we're able to do, in the Washington context in particular, is really looking for win-win situations. And I want to be very clear: I'm not saying that's the only kind of social change we need or that it gets the whole job done. I don't believe either of those things. We need more systemic change as well. History teaches us that in striving for social change, it's valuable to have complementary approaches that differ in how aggressive folks are, how much and what kind of change they're seeking, and how compromising or how collaborative they are with other interests in the scheme of things. One thing you can really do is create situations where people see the problems clearly. That is something that I think we at Upturn are able to do.

I used to think that policy all happened at the line of scrimmage, that it was like trying to get votes for your amendment or trying to get a bill passed. Actually, the more I work on this, the less I think of that as the right way to look at it. By the time something is about to happen and we are debating the details of how it'll be implemented, there has already been all this structuring that's gone on. By that point we already have a pretty narrow range of reasonable-seeming outcomes that might happen. Our conversations at Upturn are around data, discrimination, and civil rights, a lot of that stuff. And it's all about drawing lines on the field that will one day be the line of scrimmage, but that are not now.

This is much more about where the goalposts are, and what we're even going to consider reasonable for fights that might happen 5, 10, or 15 years in the future. Speaking persuasively, being clear, and having reality on your side are really helpful. A lot of bigger organizations find it very hard to speak clearly. There are a lot more incentives and a more complicated sort of trading-off they have to do. We certainly have seen this while collaborating, even very successfully and very happily collaborating, with colleagues in larger organizations — it takes them a long time to say or do much of anything because it's like trying to shift a giant cruise liner. At our best, we're like a little tugboat. We're not going to drive everything that happens, but I think if we pick our points of intervention carefully it's possible to have a big impact.
 
Drew: So tell me about those points of intervention. Thinking ahead 5, 10, 50, 100 years from now, what are those lines of scrimmage? In other words, if you could wave a magic wand and have those lines drawn out for the acceptable frames for dialogue about these issues, what would those look like in the future?

David: One of the big ones is that when computers make a decision, it is political. We have a very strong cultural presumption that when a computer makes a decision it is objective, or more fair than if a person had made this decision. I think that is false.

Computers seem more fair because there isn't that same sort of psychological element. But it's really like holding up a mirror to the world around us, and the biases that are in the world are going to be reflected in that mirror.

So if you've had police who have been differentially arresting communities of color, and then we use that data to predict where to send the police tomorrow, well sure, the computer isn't racist; the computer is just doing arithmetic. But those outcomes are going to be racist. The outputs are racist because the inputs are racist. It can easily happen.
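As an illustration of that feedback loop, here is a toy simulation, not anything from Upturn's work. Both neighborhoods are given the same underlying crime rate; the only difference is a skewed historical arrest record, all numbers invented, and the arithmetic does the rest.

```python
# Toy predictive-policing feedback loop. Two neighborhoods with the
# SAME underlying crime rate; neighborhood A starts with more recorded
# arrests. All numbers are invented for illustration.
TRUE_CRIME_RATE = 0.1            # identical in A and B
PATROLS_PER_YEAR = 100
arrests = {"A": 120, "B": 60}    # the skewed historical record

for year in range(1, 6):
    # "Prediction": send the patrols where recorded arrests are highest.
    target = max(arrests, key=arrests.get)
    # Arrests only get recorded where police are looking, so the skewed
    # record feeds on itself, even though the true rates are equal.
    arrests[target] += PATROLS_PER_YEAR * TRUE_CRIME_RATE
    print(f"year {year}: patrols sent to {target}, record = {arrests}")
```

Every year the model picks neighborhood A, A's record grows, and B never gets the chance to look comparable. No step in the loop is anything other than arithmetic.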

Trying to get people to interrogate what a computer does, and really ask what it might have absorbed and whether that matches our values: that is the core of what Upturn is. We do that in concert with all of our allies and friends in the field.

So that when someone says, "Okay, here's a new automated decision that's important to shaping someone's life," there isn't this presumption anymore: this unearned, in my view, patina of objectivity or fairness surrounding a computerized decision.

Relatedly, there's been a lot of concern recently about transparency: "tell us the algorithm", "give us transparency around the procedure a computer uses to make an important decision about who gets pulled out of line at the airport, or who gets into college", or other important things.

Yes, the algorithm is important, but it's only one piece of the story. With machine learning, the decision rule may be changing every few minutes, so it's hard to pin down, and it's also inscrutable. It's so complex that Google can't give you a clear view of exactly why a given search result ranked first; what all those factors were doesn't fit inside one person's mind. When we hear "algorithm", it's just a pattern finder: it's looking for patterns, and it's looking somewhere in particular for patterns. The really important questions are, "Where is it looking, and where are those patterns coming from?" And "What kind of data is it based on?" And "How is that data generated?" Those are the sorts of questions that we've got to get people asking in the future.
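Here is a minimal sketch of why the data matters more than the published algorithm. The lender names and numbers below are hypothetical; the point is that one and the same pattern-finding code gives opposite answers depending on the history it is fed.

```python
def nearest_decision(history, applicant_income):
    """1-nearest-neighbor 'algorithm': copy the outcome of the most
    similar past case. The code never changes; only the data does."""
    return min(history, key=lambda case: abs(case[0] - applicant_income))[1]

# Two hypothetical lending histories: (income in $1000s, approved?)
history_lender_a = [(30, False), (45, False), (60, True), (80, True)]
history_lender_b = [(30, True), (45, True), (60, True), (80, True)]

print(nearest_decision(history_lender_a, 40))  # False: denied
print(nearest_decision(history_lender_b, 40))  # True: approved
```

Disclosing the three-line algorithm tells you almost nothing; the histories are where the pattern, and any bias in it, live.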

Drew: With that in mind, if that's the goal you want to reach, do you have some ideas of what could bridge the gap from the world we're in right now to the world you described, where people critically question algorithms and data? What gets us there?

David: That's what keeps us busy, pondering what the right ways to do this are. The things happening here at the EyeO festival are good examples: interactive visualizations and even games that allow people to play their way through various scenarios and see how small biases can magnify and amplify.

Nicky Case gave a talk earlier in the conference about "Parable of the Polygons," where basically it's squares and triangles living together and they're all a little bit "shapist" — they don't want to be in a tiny minority of triangles living in a neighborhood of squares. So if they're way in the minority, they'll move around.

As you watch that play out across the city, neighborhoods of triangles and neighborhoods of squares form. What you find is a great deal of segregation coming from even minor preferences. This is a playable game that lets people get a sense for how those kinds of biases play out. I think that's an example of something I'm hoping we'll see a lot more of: interactive things that immerse someone. Not just someone talking to you about how bias comes out in the world, but really seeing how even your own reasonable choices can have a surprising impact on the world around you.
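The dynamic David describes is essentially Thomas Schelling's segregation model, which "Parable of the Polygons" builds on. Here is a minimal one-dimensional sketch of it; the grid size, threshold, and step count are arbitrary choices for illustration, not parameters from Nicky Case's game.

```python
import random

random.seed(1)
N, STEPS, THRESHOLD = 60, 2000, 1 / 3
city = [random.choice("▲■") for _ in range(N)]  # a 1-D "city" of shapes

def same_fraction(c, i):
    """Fraction of i's immediate neighbors that share its shape."""
    nbrs = [c[j] for j in (i - 1, i + 1) if 0 <= j < len(c)]
    return sum(x == c[i] for x in nbrs) / len(nbrs)

def avg_segregation(c):
    return sum(same_fraction(c, i) for i in range(len(c))) / len(c)

print(f"before: neighbors match {avg_segregation(city):.0%} of the time")
for _ in range(STEPS):
    i = random.randrange(len(city))
    # A mildly "shapist" rule: move only if fewer than a third of your
    # neighbors look like you.
    if same_fraction(city, i) < THRESHOLD:
        shape = city.pop(i)
        city.insert(random.randrange(N), shape)  # relocate at random
print(f"after:  neighbors match {avg_segregation(city):.0%} of the time")
print("".join(city))
```

Running it typically shows the match rate climbing well above the random-mixing baseline even though no shape ever asks for a like-it majority, which is the surprise the playable version makes visceral.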

So that's one piece. A second piece is that we need code literacy and also engineering literacy, which in my mind are not exactly the same thing. It's not just about learning to code; it's about learning to think like an engineer. Nicky made one game showing how emergent behavior can have surprising effects in housing. But Nicky also turned that into a system that lets anyone else make a game showing emergent behavior in other contexts, because that is a sort of abstraction of the game's properties. You can apply it to other situations where data leads to bias. You can give people other illustrations without having to reinvent the wheel of how the system works underneath. Helping people to reason like engineers, that's another big piece.

As we all grow up and as our children grow up with technology that “just works” and seems magically to just get to the right answer, it doesn’t require us to look under the hood as much. I’m hopeful we’ll be able to sustain a culture of inquiry and working with the nuts and bolts, even when that doesn’t automatically seem necessary.

This interview is originally from NetPosi, a podcast about activism and technology. The best way to find out about future episodes is to join the email list. You can also subscribe using iTunes or by following the show on Soundcloud or on Twitter.

Connect with David & Upturn: Check out Team Upturn's website & Equal Future on Medium, which is a weekly update on technology and social justice. You can find David Robinson on Medium and follow him on Twitter at @dgrobinson.