The 3 Laws of Journalism Bots
Isaac Asimov’s three laws of robotics are universally famous. At least among science fiction nerds like myself. I’ve seen a few posts referencing these laws for bots in general. I haven’t seen distinct laws around journalism bots though. I think it’s time for these laws.
Isaac Asimov’s Three Laws of Robotics
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Journalism in the U.S. operates within a variety of laws, most importantly the First Amendment, which guarantees freedom of the press. There are also ethics, credibility, and trust involved for every truly journalistic institution.
Increasingly, the public's view of the press is negative. Our executive editor at The Washington Post, Marty Baron, has spoken about this lack of trust as our greatest challenge. I believe news bots can help with these issues through the personal, one-to-one experience of messaging.
What goes into a news bot, and what information that news bot learns, is already grounded in the journalistic integrity of the organization powering it. But when you add algorithms and machine learning, I believe new laws are required.
The Three Laws of Journalism Bots
- A journalism bot may not misinform a reader through out-of-context information, express opinion on non-opinion hard news, or ignore corrected information when it is updated.
- A journalism bot must obey the orders given to it by human journalists except where such orders would conflict with the First Law.
- A journalism bot must know that its reason for existence is to distribute journalism to readers as long as such distributed journalism does not conflict with the First or Second Laws.
A bot should learn from its users, but one only needs to look at Microsoft's bot Tay, which became a Holocaust-denying racist based on its interactions on Twitter. Twitter is a much more open ecosystem of information, and Microsoft wasn't operating the bot with journalism in mind. Even so, it points out the perils of machine learning when anyone can send information to a bot.
Bots need editors. They should inherit already edited journalism. Editors should be able to train news bots on specific types of information, and corrections should be handled the same way they are now, per that organization's policy.
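The editorial workflow above can be sketched in code. This is a hypothetical illustration, not any real Washington Post system: the `Article` model, the `approved_by_editor` flag, and the `NewsBot` class are all invented names. The sketch shows the Second Law (the bot only serves journalism that has passed through a human editor) and part of the First (a correction always supersedes the original text):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Article:
    """A piece of journalism (hypothetical model for illustration)."""
    article_id: str
    body: str
    approved_by_editor: bool = False
    correction: Optional[str] = None  # set when the newsroom issues a correction

class NewsBot:
    """Minimal sketch: serve only edited journalism, prefer corrections."""

    def __init__(self):
        self._articles: dict = {}

    def ingest(self, article: Article) -> bool:
        # Second Law: refuse anything that hasn't been through a human editor.
        if not article.approved_by_editor:
            return False
        self._articles[article.article_id] = article
        return True

    def respond(self, article_id: str) -> Optional[str]:
        article = self._articles.get(article_id)
        if article is None:
            return None
        # First Law: corrected information supersedes the original body.
        return article.correction or article.body
```

The key design choice is that the bot holds no content of its own; it can only relay what editors have approved, and a later correction silently replaces the original answer.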
The future is largely unknown for news bots and chat bots in general. These laws are basic and a work in progress. It’s how we are thinking about bots at The Washington Post. The technology to build and power them is not the challenge. Our challenge is creating an experience that reminds the world of one simple fact: we are journalists.