Moral Authority as a Platform

Jeff Jarvis
Whither news?

--

[See my disclosures below.*]

Since the election, I have been begging the platforms to be transparent about efforts to manipulate them — and thus the public. I wish they had not waited so long, until they were under pressure from journalists, politicians, and prosecutors. I wish they would realize the imperative to make these decisions based on higher responsibility. I wish they would see the need and opportunity to thus build moral authority.

Too often, technology companies hide behind the law as a minimal standard. At a conference in Vienna called Darwin’s Circle, Palantir CEO Alexander Karp (an American speaking impressive German) told Austrian Chancellor Christian Kern that he supports the primacy of the state and that government must set moral standards. Representatives of European institutions were pleasantly surprised not to be challenged with Silicon Valley libertarian dogma. But as I thought about it, I came to see that Karp was copping out, delegating his and his company’s ethical responsibility to the state.

At other events recently, I’ve watched journalists quiz representatives of platforms about what they reveal about manipulation and also what they do and do not distribute and promote on behalf of the manipulators. Again I heard the platforms duck under the law — “We follow the laws of the nations we are in,” they chant — while the journalists pushed them for a higher moral standard. So what is that standard?

Transparency should be easy. If Facebook, Twitter, and Google had revealed that they were the objects of Russian manipulation as soon as they knew it, then the story would have been Russia. Instead the story is the platforms.

I’m glad that Mark Zuckerberg has said that in the future, if you see a political ad in your feed, you will be able to link to the page or user that bought it. I’d like all the platforms to go further:

  • First, internet platforms should make every political ad available for public inspection, setting a standard that goes far beyond the transparency required of political advertising on broadcast media and certainly beyond what we can find out about dark political advertising in direct mail and robocalls. Why shouldn’t the platforms lead the way?
  • Second, I think it is critical that the platforms reveal the targeting criteria used for these political ads so we can see what messages (and lies and hate) are aimed at whom.
  • Third, I’d like to see all this data made available to researchers and journalists so the public — the real target of manipulation — can learn more about what is aimed at them.

The reason to do this is not just to avoid bad PR or merely to follow the law and meet minimal expectations. The reason to do all this is to establish public responsibility commensurate with the platforms’ roles as the proprietors of so much of the internet and thus the future.

In What Would Google Do?, I praised the Google founders’ admonition to their staff — “Don’t be evil” — as a means to keep the company honest. The cost of doing evil in business has risen as customers have gained the ability to talk about a company and as anyone can move to a competitor with a click. But that, too, was a minimal standard. I now see that Google — and its peers — should have evolved to a higher standard:

“Do good. Be good.”

I don’t buy the arguments of cynics who say it is impossible for a corporation to be anything other than greedy and evil and that we should give up on them. I believe in the possibility and wisdom of enlightened self-interest and I believe we can hold these companies to an expectation of public spirit if not benevolence. I also take Zuck at his word when he asks forgiveness “for the ways my work was used to divide people rather than bring us together,” and vows to do better. So let us help him define better.

The caveats are obvious: I agree with the platforms that we do not want them to become censors and arbiters of right v. wrong; to enforce prohibitions determined by the lowest common denominator of offensiveness; to set precedents that will be exploited by authoritarian governments; to make editorial judgments.

But doing good and being good as a standard led Google to its unsung announcement last April that it would counteract manipulation of search ranking by taking account of the reliability, authority, and quality of sources. Thus Google took the side of science over crackpot conspiracists, because it was the right thing to do. (But then again, I just saw that AlterNet complains that it and other advocacy and radical sites are being hit hard by this change. We need to make clear that fighting racism and hate is not to be treated like spreading racism and hate. We must be able to have an open discussion about how these standards are being executed.)

Doing good and being good would have led Facebook to transparency about Russian manipulation sooner.

Doing good and being good would have led Twitter to devote resources to understanding and revealing how it is being used as a tool of manipulation — instead of merely following Facebook’s lead and disappointing Congressional investigators. More importantly, I believe a standard of doing good and being good would lead Twitter to set a higher bar of civility and take steps to stop the harassment, stalking, impersonation, fraud, racism, misogyny, and hate directed at its own innocent users.

Doing good and being good would also lead journalistic institutions to examine how they are being manipulated, how they are allowing Russians, trolls, and racists to set the agenda of the public conversation. It would lead us to decide what our real job is and what our outcomes should be in informing productive and civil civic conversation. It would lead us to recognize new roles and responsibilities in convening communities in conflict into uncomfortable but necessary conversation, starting with listening to those communities. It should lead us to collaborate with and set an example for the platforms, rather than reveling in schadenfreude when they get in trouble. It should also lead us all — media companies and platforms alike — to recognize the moral hazards embedded in our business models.

I don’t mean to oversimplify even as I know I am. I mean only to suggest that we must raise up not only the quality of public conversation but also our own expectations of ourselves in technology and media, of our roles in supporting democratic deliberation and civil (all senses of the word) society. I mean to say that this is the conversation we should be having among ourselves: What does it mean to do and be good? What are our standards and responsibilities? How do we set them? How do we live by them?

Building and then operating from that position of moral authority becomes the platform more than the technology does. See how long it is taking news organizations to learn that they should be defined not by their technology — “We print content” — but instead by their trust and authority. That must be the case for technology companies as well. They aren’t just code; they must become their missions.

* Disclosure: The News Integrity Initiative, operated independently at CUNY’s Tow-Knight Center, which I direct, received funding from Facebook, the Craig Newmark Philanthropic Fund, and the Ford Foundation and support from the Knight and Tow foundations, Mozilla, Betaworks, AppNexus, and the Democracy Fund.
