A chapter I contributed to the Hackademic 2018 book, Anti-Social Media?:
The Impact on Journalism and Society.
In all the urgent debate about regulating, investigating, and even breaking up internet companies, we have lost sight of the problem we are trying to confront: not technology but instead human behaviour on it, the bad acts of some (small) number of fraudsters, propagandists, bigots, misogynists, and jerks.
Computers do not threaten and harass people; people do. Hate speech is not created by algorithms but by humans. Technology did not interfere with the American election; another government did.* Yet we demand that technology companies cure what ails us as if technology were the disease.
When before have we required corporations to monitor and mediate human behaviour? Isn’t that the job — the very definition — of government: to define and enforce the limits of acceptable acts? If not government, then won’t parents, schools, clergy, therapists, or society as a whole — in its process of negotiating norms — fill the role? But all that takes time. In the face of the speed and scale of the invention and dissemination not only of technology but of its manipulation, government has no idea what to do. So in their search for someone to blame, governments outsource fault and responsibility, egged on by media (whose schadenfreude constitutes a conflict of interest, as publishers wish to witness their new competitors’ comeuppance).
Why would we ever expect or want corporations to doctor us? Indeed, isn’t manipulation of our speech and psyches by technologists what critics fear most? Some argue this is the platforms’ problem because it’s the platforms that screwed us up. I disagree. It’s not as if before the net the world was a choir of angels. To argue that the internet addicts the connected masses, makes them stupid, turns them into trolls, and transforms them into agents of society’s ruin is elitist and fundamentally insulting, denying people their agency, their intelligence, their goodwill or lack thereof. The internet is not ruining humankind. Humankind is still trying to figure out what the internet can and should be.
It is true that internet technology has provided bad actors with new means of manipulation and exploitation in the pursuit of money and lately political gain or demented psychology. It’s also true that the technologists were too optimistic and naive about how their powerful tools could be misused — or rather, used but for bad ends. I agree that Facebook, Google, Twitter, and company must exercise more responsibility in anticipating and forestalling manipulation, in understanding the impact they have, in being transparent about that impact, and in collaborating with others to do better. There’s no doubt that the culture of Silicon Valley is too isolated and hubristic and must learn to listen, to value and empower diversity, to move fast but think first. Do I absolve them of responsibility? No. Do I want them to do more? Yes.
The terms of the conversation
But what precisely do we expect of them? For a project underwritten by the How Institute for Society, founded by Dov Seidman, I interviewed and convened discussions with people I respect as leaders, visionaries, and responsible voices in journalism, technology, law, and ethics. What struck me is that I heard no consensus on the definition of the problems to be solved, let alone the solutions. There is general head-shaking and tsk-tsking about the state of the internet and the platforms that now operate much of it. But dig deeper in search of an answer and you’ll find yourself in a maze.
At Google’s 2018 European journalism unconference, Newsgeist, I proposed a session asking, “What could Facebook do for news?” Some journalists in the room argued that Facebook must eliminate bad content and some argued that Facebook must make no judgments about content, good or bad. Sometimes, they were the same people, not hearing themselves making opposing arguments.
In my interviews, Professor Jay Rosen of New York University told me that we do not yet have the terms for the discussion about what we expect technology companies to do. Where are the norms, laws, or regulations that precisely spell out their responsibility? Professor Emily Bell of the Columbia School of Journalism said that capitalism and free speech are proving to be a toxic combination. Data scientist Deb Roy of the MIT Media Lab said capitalistic enterprises are finely tuned for simple outcomes and so he doesn’t believe a platform designed for one result can be fixed to produce another, but he hopes innovators will find new opportunities there. Technologist Yonatan Zunger, formerly of Google, argued that computer scientists must follow the example of engineering forebears — e.g., civil engineers — to recognise and account for the risks their work can bring. Entrepreneur John Borthwick, founder of Betaworks, proposed self-regulation to forestall government regulation. Seidman the ethicist insisted that neutrality is no longer an option and that technology companies must provide moral leadership. And philosopher David Weinberger argued that we are past trying to govern according to principles as society is so divided it cannot agree on those principles. I saw Weinberger proven right in the discussion at Newsgeist, in panels I convened at the International Journalism Festival, and in media. As Rosen says, we cannot agree on where to start the conversation.
The limits of openness
In the web’s early days, I was as much a dogmatist for openness as I am for the First Amendment. But I have come to learn — as the platforms have — that complete openness invites manipulation and breeds trolls. Google, Facebook, and Twitter — like news media themselves — argue that they are merely mirrors to society, reflecting the world’s ills. Technology’s and media’s mirrors may indeed be straight and true. But society warps and cracks itself to exploit these platforms. The difference between yesterday’s manipulation via media (PR and propaganda) and today’s via technology (from trolls to terrorists) is scale; the internet allows everyone who is connected to speak — which I take as a good — but that also means that anyone can become a thief, a propagandist, or a tormentor at a much lower cost and with greater access than mass media permitted. The platforms have no choice but to understand, measure, reveal, and compensate for that manipulation. They are beginning to do that.
Good can come of this crisis, trumped up or not. I now see the potential for a flight to quality on the net. After the 2016 elections and the rising furore about the role of the platforms in nations’ nervous breakdowns, Google’s head of search engineering, Ben Gomes, said that thenceforth the platform would account for the authority, reliability, and quality of sources in search ranking. In a search result for a query such as ‘Is climate change real?’ Google now sides with science. Twitter has recognised at last that it must account for its role in the health of the public conversation and so it sought help from researchers to define good discourse.
For its part, Facebook downgraded the prominence of what it broadly considered public (as opposed to social) content, which included news. Now it is trying to bring back and promote quality news. At the Newmark J-School’s Tow-Knight Center at CUNY, I am working on a project to aggregate signals of quality (or lack thereof) from the many disparate efforts, from the Trust Project to the Credibility Coalition and many others. We will provide this data to both platforms and advertisers to inform their decisions about ranking and buying so they may stop supporting disinformation and instead support quality news. [Disclosure: This work and that of the News Integrity Initiative, which I started at CUNY, are funded in part by Facebook but operate with full independence and I receive no compensation from any platform.]
Are these acts of self-regulation by the platforms sufficient? Of course not. But I argue we must view this change in temporal context: We are only 24 years past the introduction of the commercial web. If the net turns out to be as disruptive as movable type, then in Gutenberg terms that puts us in the year 1474, years before Luther’s birth and print-sparked revolution, decades before the book took on the post-scribe structure we know now, centuries before printing and steam technology combined to create the idea of the mass.
Causes for concern
We don’t know what the net is yet. That is why I worry about premature regulation of it. I fear we are operating today on vague impressions of problems rather than on journalistic and academic evidence of the scale of the problems and the harm they are causing. I challenge you to look at your Facebook feed and show me the infestation of Nazis there. Where is the data regarding real harm?
I worry, too, about the unintended consequences of well-intentioned regulation. In Europe, government moves aimed at challenging the power of the platforms have ended up giving them yet more power. The so-called right to be forgotten has put Google in the uneasy position of rewriting and erasing history, a perilous authority to hold. Germany’s Leistungsschutzrecht (ancillary copyright) gave Google the power to set the terms of the market in links to news. Spain’s more aggressive link tax led to the exit of Google News from the country. I shudder to think what a pending EU-wide version of each law will do. Germany’s hate-speech law, the Netzwerkdurchsetzungsgesetz or NetzDG, is all but killing satire there and requires the devotion of resources to killing crap, not rewarding quality. The EU’s General Data Protection Regulation (GDPR) will leave Google and Facebook relatively unscathed — as they have the resources to deal with its complex requirements — but some American publishers have cut off European readers, balkanising the web. Anticipated ePrivacy regulation will go even further, and I fear an extreme privacy regime will obstruct a key strategy for sustaining journalism — providing greater relevance and value to people we know as individuals and members of communities and gaining new revenue through membership and contribution as a result. Thus this regulation could artificially extend the life of outmoded mass media and the paternalistic idea of the mass.
I worry mostly that we may be entering into a full-blown moral panic, with technology — internet platforms — as the enemy. Consider Ashley Crossman’s definition: “A moral panic is a widespread fear, most often an irrational one, that someone or something is a threat to the values, safety, and interests of a community or society at large. Typically, a moral panic is perpetuated by news media, fuelled by politicians, and often results in the passage of new laws or policies that target the source of the panic. In this way, moral panic can foster increased social control.” Sound familiar? To return to the lessons of Gutenberg’s age, let us recall that Erasmus feared what books would do to society. “To what corner of the world do they not fly, these swarms of new books?” he complained. “The very multitude of them is hurtful to scholarship, because it creates a glut and even in good things satiety is most harmful.” But we managed.
When I was invited to contribute this chapter, I was asked to write “in defence of Facebook.” With respect, that sets the conversation at the wrong level, at the institutional level: Journalism vs. Facebook. Thus we miss the trees for the forest, the people for the platforms. No matter what we in journalism think of Facebook, Google, or Twitter as companies, we must acknowledge that the public we serve is there and we need to take our journalism to them where they are. We must take advantage of the opportunity the net provides to see the public not as a mass but as a web of communities. We cannot do any of this alone and need to work with platforms to fulfil what I now see as journalism’s real job: to convene communities into civil, informed, and productive conversation. If society is a polarised world at war with itself — red vs. blue, white vs. black, insider vs. outsider, 99% vs. 1% — we perhaps should begin by asking how we in journalism led society there.
* I expect someone on Twitter to respond to this paragraph with a picture of the bumper sticker declaring that guns don’t kill people; people do. The sentence structures may be parallel but the logic is not. Guns are created for one purpose: to kill. The internet was created for purposes yet unknown. We are negotiating its proper and improper uses and until we do — as we are learning — the improper will out.