Our social media car needs service

So this is probably going to get me in a lot of trouble with the free speech wing of my corner of academia, but … it’s time we had a serious conversation about what social media regulation looks like. Over the past two years I’ve found myself favoring it more than I used to.

Early in my academic work I was drawn to the power of self-publishing. In many ways I still am. It’s important to remember that a platform and a microphone give ordinary people a kind of power they don’t have in a world of big media. I still believe in that.

But those were early-stage ideals. Scale reveals flaws and problems of design, and some of the mass shootings of the past few years have exposed serious holes in the free-speech argument.

To recap, we have a live video here, spread by social networks, and a manifesto that is eminently linkable. The flow from self-publishing to attention to news brings attention to the source material. The goal for the terrorist here is publicity, and he’s getting it.

News gonna news. Its job is to cover things. But the system of news production we have was built for a pre-internet world where there were no links to the source material. You got everything through the journalist gatekeeper, filtered in a world without hyperlinks.

But in a social-internet world, news is publicity. It’s an invitation to google or search social media for the video or manifesto. Not everyone will do it, but many will. @zephoria has talked about this extensively: news coverage becomes an unwitting recruitment engine for radicals.

Unleashing social live video is akin to experimenting on ourselves. Like many things done by the internet’s early builders, we did it because we could.

Clearly there are a lot of good things being done with live video. I can rattle off many, from Standing Rock to police shooting accountability. But at what cost when we’re talking about inflicting trauma on audiences via the acts of terrorists? I know “at what cost?” is loaded and cliche. But it strikes me we aren’t even having the conversation about costs. It often gets brushed off as the price of free speech. But that’s dismissive and disrespectful, and it discounts the lived experiences of those who have to deal with the consequences of ideological purity.

I don’t even know how to weigh this, so don’t ask me how. I just know my kids are growing up in a world where they’re going to hear about and be able to search (or be unwilling viewers of) live shooting videos because our tech simply can’t keep up with extremists.

Here’s a good example of the flow:

This is too fast for any system, human or otherwise. There’s only one choke point we can reasonably put in the system, and that’s deplatforming extremists: not after they spew their filth, but before. I can hear you “what about”ing from here.

The example I keep hearing most is whether we want Facebook to decide. I think that’s the wrong framing. We are currently letting Facebook’s audience decide, and they’re the ones spreading it.

The enemy isn’t Facebook; it’s us.

If your car has faulty brakes, you recall the car and fix it, then put it out on the road. Live video is a broken system and enables some of the worst abuses on social. Put this thing back in the garage until we can design a system that doesn’t tear at our social fabric.

I am not an expert on the tech/code side of this. My sense, though, is we’re using things like algorithms and machine learning to deal with abuse post hoc. Social science can help here. We know abusers have certain patterns. We can stop the horse before it leaves the barn. Maybe not all the time, but we can do a better job catching this before it happens.
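To make the “stop the horse before it leaves the barn” idea concrete, here is a minimal sketch of what a pre-publish (rather than post hoc) check could look like. Everything here is invented for illustration: the signals, weights, and thresholds are hypothetical, not any platform’s actual system, and a real version would need far richer models and human review.

```python
# Hypothetical pre-publish risk check. All signals, weights, and
# thresholds are invented for illustration only.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    account_age_days: int      # newly created accounts are riskier
    prior_abuse_reports: int   # past reports against the account
    followers: int             # tiny audiences are common for throwaways
    posts_last_hour: int       # burst posting often precedes abuse


def risk_score(s: AccountSignals) -> float:
    """Combine signals into a 0..1 risk score (toy linear model)."""
    score = 0.0
    if s.account_age_days < 30:
        score += 0.35
    score += min(s.prior_abuse_reports, 5) * 0.1
    if s.followers < 10:
        score += 0.15
    if s.posts_last_hour > 20:
        score += 0.2
    return min(score, 1.0)


def gate_live_video(s: AccountSignals, threshold: float = 0.5) -> str:
    """Decide before the broadcast starts, not after the fact."""
    return "hold_for_review" if risk_score(s) >= threshold else "allow"
```

For example, a days-old account with prior reports and a posting burst would be held for review before going live, while an established account with no reports would be allowed through. The point of the sketch is only the ordering: the decision happens before the stream starts, which is exactly where the choke point has to sit.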

I can hear your objections: it sounds Orwellian. But we are not designing solutions to meet a serious threat. We can’t have a social media shooter making their sick version of a live documentary every month and expect it not to damage society in profound ways.