A hammer is not a solution
Structural violence in, structural violence out
Like most of the tech world this week, I’ve been committing the low-level self harm of watching Mark Zuckerberg testifying to Congress, looking like he got caught stuffing the ballot box in the 8th-grade class election, getting softball questions because the rich boys in school never actually have to serve detention. Congress is asking a lot more about content than about how Facebook uses data that isn’t for advertising, and a few of them are hitting hardish, but it’s hardly going to lead to demands. So, like most of the tech world, I’m also feeling like this is mostly an infuriating sideshow, where he gets to use insincere apologies for abusing his power to quietly congratulate himself for having it in the first place.
Zuckerberg keeps answering questions with promises to build AI to improve content review. How will you handle dangerous content? What about fake news? My 13-year-old loves Instagram (OK, I don’t think anyone knew what was happening there). How will you determine what is and isn’t hate speech? Plenty of organizations work on these issues already, and yes, a universal global solution probably isn’t even possible, but instead of talking about improving the range of people in the room when addressing these issues, everything is about AI. He’s even apologized for being ‘optimistic’.
It’s obvious to most people by now that AI isn’t an answer. It’s just a tool. Answering a question about a problem with the name of the tool you happen to have is not the same as dealing with the problem. It’s like asking someone, “What are you going to do to fix the garden shed you smashed up?” and having them answer, “I’ll use a hammer!” It might not be the right tool, and you might just be terrible at building sheds. You should probably call a professional.
Who is misusing the platform?
In 2013, I wrote a Facebook note about domestic violence prevention. It was removed by Facebook, at the request of a convicted abuser, who had been named in it. It wasn’t defamatory. He’d been convicted and therefore there was no ‘good name’ to protect. It had even been in the news. But he didn’t want it there because people were sharing it, and someone at Facebook agreed, so I got a message that my content was removed because it was offensive. It was a misuse of Facebook.
A day or two later, after calling some people who called some people, the post reappeared with a message about it being removed by ‘mistake’. I managed to CTRL+C it before it disappeared again a few hours later, and I got the same warning message about my problematic content. By then a few blogs had picked it up (I’m not going to link it here because it’s not the point of what I’m saying) so at least it wasn’t lost forever.
At the time, I think most of the Facebook content reviews were done by hand. This complaint, as far as I was told, was directly to Facebook, from the person named.
Naming the problem shouldn’t be a problem
Machine learning, done well, and applied to the right things, is cool as shit. But creating AI to speed up the process for making human decisions means training by humans, on a history of human decisions such as the one made about my note. It’s not an accident, and it’s not a bias that will work itself out when more data washes through it, like a kink in a garden hose. More data and more decisions won’t make it better. More likely it will make it worse, in part because of what and who gets defined as the problem, and who gets to be a protected category (it’s white men, but not black children).
I’m not trying to toot my own horn and I don’t ever want to think of these things again, but I do know that that post helped a few women. A couple of women I’d never met contacted me to tell me this. A couple I knew a little bit also said so. So Facebook also took down a post that helped a couple of women make their lives less shitty. It’s not an isolated problem, and it’s a much bigger problem for people who experience more structural oppression than I do. This manual review problem is part of a bigger problem.
It’s 2018 and people still argue that women aren’t smart enough to code. We still have to have arguments about whether all-male or all-white speaker lineups at conferences are a problem. We still have to fight pretty hard to demonstrate that yes, people experiencing multiple oppressions are disproportionately affected by all kinds of policies, and this injustice transfers into the digital world. We are tired because basic acknowledgement of inequality in the world is still such a puzzle.
Because, as Sara Ahmed says, “When you expose a problem, you pose a problem.”
Structural violence in, structural violence out
This isn’t a Facebook-as-monolith problem, and it isn’t an individual-bad-actor problem. I suspect a lot of people who work at Facebook feel conflicted about being there, and probably there’s a whole layer of internal, NDAed reasons we can’t even know about. And maybe we should be glad that those people who speak up and take on responsibilities are willing to work there. Even though the infrastructure can’t be fixed by a group of ‘nice’ people in a damaging system, Facebook needs people with a conscience to work on it. If they all left, we’d be stuck with a platform where all the decisions about a huge piece of infrastructure would be made by people who don’t care about other people.
But in this case, in 2013, someone at Facebook made the decision to believe a guy who thought four judges were basically overreacting. This person was empowered to do this by other people in the company who determined their hiring, firing and promotion. The standards they acted on are set by leadership, both technical and business. And those standards include upholding the values of the society we live in, in which it’s absolutely a feature, not a bug, to act on the automatic assumption that women are vindictive complainers, and to take the word of someone when they could have literally just googled him and learned that, oh, there’s nothing actionable here. It might even have felt to them like a fairly arbitrary decision, which is what makes it so dangerous: when you have power, the world can’t afford your arbitrary decisions.
That’s the stuff AI would learn on — those standards, and how we act on them — and then it would continue accelerating and scaling the very system we’re working to dismantle, but this time untouched by human hands, so no one has to be responsible — just like domestic violence. It’s perfectly ok to talk about ‘algorithmic bias’ or ‘content review’ in the abstract, using passive verb constructions and buzzwordy material nouns plus other nouns, but don’t talk about the people who do it.
I’m sure very few people would have been offended if I’d just spoken about violence that was done to someone. It was when I forced the issue that it was done by someone that it became a problem. And I became the problem.
Machine learning is cool, but it isn’t magic and we can’t use it to create our superior selves. If we want machines to help us be better, we need to start making decisions that make us those better selves, whether or not we’re training machines on those decisions.
A tool is not a problem
Technology is just a tool. Like all tools, when used without careful thought, it’s more likely to do unintentional harm than unintentional good. It’s not just about ‘AI ethics’, it’s about people ethics, otherwise known as ‘caring about other people.’ A hammer is very useful, but if you swing it around without looking where you’re going, you won’t accidentally build a new shed. You weren’t optimistic, you just didn’t care. And if you’re sincere about wanting to do better, you don’t promise to get a better hammer. Probably you call a professional.
It’s also why it matters, not just who is part of what we build, but that the people who speak up about problems get heard, without being told “Nobody but you thinks that way,” which is an even ruder way of saying “You don’t matter.” The least we can do is not get offended when someone speaks up about something that could do harm in ways we didn’t intend, especially to people who aren’t in the room. That’s why when Mark Zuckerberg swatted away questions about diversity as irrelevant to the topic, it’s just as big a problem as his promising to use AI to fix a platform that is also hurting the world. It can’t be only a diversity issue; it has to be a design-and-build issue, but you can’t do it without diversity.
Plenty of industries have regulation to minimize harm to people, animals, and the rest of the environment. There are safety standards, rules on disposing of hazardous materials, water standards, and ingredients and substances that are just plain banned because they’re too dangerous to be worth the business benefits. You can’t walk into a construction site, pick up a hammer, and start swinging it around with your eyes closed just because you’re in a place where hammers are used a lot. You can’t paint a bunch of new apartments with lead paint and then blame the damage on your sunny outlook.
Solving hard problems is supposed to be our thing
It can’t be that hard for an industry that prides itself on solving hard problems to solve these hard problems without taking away a social media platform that is central to so many of our lives (including mine — I like using Facebook).
As designers, writers, and people who make things in code or in physical form, we can ask new questions when we design and build things for human people. We already ask what color the buttons should be because we care about how color, shape, and text affect users. We argue about single or multi-tenant architecture. We have opinions about frameworks. We want stable performance and reliable uptime. We talk about security issues, compliance, privacy, and now, GDPR. We already have discussions about risk, and we can start adding line items about harm as a completely standard part of solving problems for all users, especially those who already experience structural violence, without getting our dominant-group feelings hurt. We should check out things like Listenup.tech, which invites voices from outside the industry. And we should hire them (if they want).
And we can’t put the responsibility on users to point it out to us, because minimizing the risk of violence should not require a detailed knowledge of database architecture or APIs, or even knowing the difference between HTTP and HTTPS. We should listen when they tell us, but it can’t be their job to protect themselves from what we build for them.
Did the person back in 2013 get in trouble for removing my note? Probably not. Should they have? I don’t know, but maybe some kind of “we need a policy around this in the future” meeting should have happened, or someone in charge of content review for Ireland should have had a crash course on the 2009 Defamation Act. What I do know is “Do you care about other people?” should matter at least as much as technical skill when building companies, forming teams, and hiring product owners, engineering managers, and even content designers (hi! definitely us!), and right now, not caring enough about other people isn’t enough of a hindrance to getting ahead (just like in the non-tech world).
Watching Zuckerberg’s testimony makes me think that if you care quite a lot about other people, in a way that’s outside of what users might or might not ‘like’ (or, at worst, a paternalistic, ‘will they become addicted to this?’), and you speak up about it, your options at Facebook might be narrower, your upward mobility slower, and your career there a bit shorter. As long as we punish people across the industry who speak up about their concern for other people, Zuckerberg will get more and more chances to talk about a ludicrous ‘community of two billion people’ and remind us that he’s barely out of his dorm room even though he’s 33 and a billionaire and dabbling in settler colonialism.
We don’t have to build technology tools that actively save the lives of people who experience structural and systemic oppression (because tools can’t do that), but we can, when we have the chance, at least try to build things that don’t make it easier to end them, even by accident.