What is the Role of Journalists in Holding Artificial Intelligence Accountable?

The Wall Street Journal is experimenting with a new approach for reporting how smart algorithms work, beyond simply describing them.

Francesco Marconi
WSJ Digital Experience & Strategy
May 21, 2018

--

Image Credit: Gabriel Gianordoli/WSJ

Journalists, who routinely ask questions of their sources, should also be asking questions about an algorithm’s methodology. The rules created for algorithms need to be explicit and understood. The Wall Street Journal has been testing a new approach to explaining how AI works: letting readers experiment with it themselves.

“Interactive graphics can provide insights into how algorithms work in a way that goes beyond simply describing their output. They can do this by acting as safe spaces in which readers can experiment with different inputs and immediately see how the computer might respond,” said deputy graphics director Elliot Bentley.

“To make this accessible and non-intimidating, it’s important to design a straightforward interface with minimal controls, and also provide informative and immediate feedback,” Bentley added.

Image Credit: Gabriel Gianordoli/WSJ

The most recent example of letting readers experiment with algorithms is our story, “What Your Writing Says About You,” published as part of the Leadership issue of Journal Reports. The news experience offers an interface that allows people to enter text, such as an essay, cover letter, blog post or business email, and receive results from algorithms that rate the content along different parameters. By including detailed methodology and source notes, we allow our audience to understand how machine learning and natural language processing can determine context, language mastery, meaning and even a writer’s mood from the choice of words.
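
To make that idea concrete, here is a minimal sketch of lexicon-based mood scoring, the simplest form of word-choice analysis of the kind described above. The word lists here are illustrative stand-ins, not the lexicons behind the Journal’s tool.

```python
# A toy mood scorer: count hits against small positive/negative word lists.
# The lists are hypothetical examples, not the Journal's actual lexicons.
POSITIVE = {"confident", "delighted", "optimistic", "succeed", "great"}
NEGATIVE = {"worried", "doubt", "fail", "unfortunately", "problem"}

def mood_score(text: str) -> float:
    """Score text in [-1, 1]; values below zero suggest a gloomier tone."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    hits = [1 if w in POSITIVE else -1
            for w in words if w in POSITIVE or w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

print(mood_score("I am confident this project will succeed."))   # 1.0
print(mood_score("Unfortunately, I doubt the plan will work."))  # -1.0
```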

“These explorable explainers allow us to not only go deeper, but also to give the readers a perspective on subjects like AI that we can’t give them by simply writing more great stories. It immerses them in a unique way in a subject we know they care about,” said Journal Reports editor Larry Rout.

In a previous Graphics project, “How Facial Recognition Software Works,” Bentley explained that readers need only enable their webcam and move their head around to play with a facial-recognition algorithm. The piece then provides clear, real-time feedback using a series of visual overlays. Another example is “Build Your Own Trading Bot,” in which we attempted to demystify algorithmic trading by designing a user-friendly interface and a rewarding feedback loop to encourage readers to experiment with the mechanics.
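
As an illustration of the kind of mechanics such an interactive demystifies, here is a toy version of a common textbook trading rule, a moving-average crossover. This is a generic sketch, not the logic of the Journal’s piece.

```python
# Generic moving-average crossover: buy when the short-run average price
# rises above the long-run average, sell when it falls below.
def crossover_signals(prices, short=3, long=5):
    signals = []
    for i in range(long, len(prices) + 1):
        short_avg = sum(prices[i - short:i]) / short
        long_avg = sum(prices[i - long:i]) / long
        signals.append("BUY" if short_avg > long_avg else "SELL")
    return signals

prices = [10, 10.5, 10.2, 11, 11.4, 11.1, 10.6, 10.2]
print(crossover_signals(prices))  # ['BUY', 'BUY', 'BUY', 'SELL']
```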

How Facial Recognition Software Works. Credit: Elliot Bentley/WSJ

Journalism and algorithmic accountability

We might not notice it, but artificial intelligence affects many parts of our lives. Algorithms decide whether an individual qualifies for a loan, whether a resume is seen by a recruiter, which seat a passenger is assigned on an airplane, which advertisements shoppers see online and what information internet users are shown. Transparency about the data that feeds these systems is crucial, both for consumers to better understand what they encounter and for organizations to shape their business strategies.

Given the challenging nature of auditing algorithms, it’s important to consider how the practice of journalism can be leveraged to hold AI systems accountable. In his forthcoming book, Northwestern University professor of computational journalism Nicholas Diakopoulos introduces the notion of algorithmic accountability reporting as an approach to highlight influences that computer programs exercise in society.

“Operating at scale and often affecting large groups of people, algorithms make consequential and sometimes contestable decisions in an increasing range of domains throughout the public and private sectors. In response, a distinct beat in journalism is emerging to investigate the societal power exerted through such algorithms.

“There are various newsworthy angles on algorithms, including discrimination and unfairness, errors and mistakes, social and legal norm violations, and human misuse. Reverse engineering and auditing techniques can be used to elucidate the contours of algorithmic power,” Diakopoulos explained.
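
One basic auditing technique of the kind Diakopoulos describes can be sketched in a few lines: hold an input fixed, vary a single attribute and compare the system’s outputs. The `score_applicant` function below is a hypothetical stand-in for whatever black box is under investigation.

```python
# Audit sketch: probe a scoring system by varying one attribute at a time.
def audit_attribute(score_fn, base_input: dict, attribute: str, values: list):
    """Score the same profile while swapping in each value of one attribute."""
    results = {}
    for value in values:
        probe = dict(base_input, **{attribute: value})  # copy, change one field
        results[value] = score_fn(probe)
    return results

def score_applicant(applicant):  # hypothetical black box being audited
    return 700 - (25 if applicant["zip"] == "10452" else 0)

base = {"income": 55000, "zip": "10001", "years_employed": 4}
print(audit_attribute(score_applicant, base, "zip", ["10001", "10452"]))
# {'10001': 700, '10452': 675} -- a disparity worth reporting on
```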

The “black box” problem in AI

When decisions are derived through an algorithm, it’s often hard to pinpoint why or how a particular output was produced. This is the problem of the “black box” algorithm, whereby correlations are made without rules set by humans. The term is a metaphor for algorithms whose process for reaching a given outcome cannot be seen in full.

“Auditing algorithms is not for the faint of heart. Information deficits, expectation setting, limited legal access, and shifting dynamic targets can all hamper an investigation. Working in teams, methods specialists working with domain experts can, however, overcome these obstacles and publish important stories about algorithms in society,” Diakopoulos added.

It’s important to dissect how computers make decisions and to understand how smart systems are built. For example, the AI powering the analyses in “What Your Writing Says About You” is provided by FactBase, an AI company that makes its algorithms open source, peer reviewed and available for examination.

In “What Your Writing Says About You,” we explain the underlying scientific methodology behind each output, including the Flesch-Kincaid Grade Level, developed in 1975 by the Department of Defense to assess the readability of military materials, as well as the Treebank methodology created by the University of Pennsylvania to evaluate the linguistic structure of text.
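
The Flesch-Kincaid Grade Level itself is a published formula, so its transparency is easy to demonstrate. Below is a minimal implementation; the syllable counter is a rough vowel-group heuristic, a simplifying assumption rather than the official counting method.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level, per the published 1975 formula:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words)) - 15.59)

# Very simple prose can score below zero on this scale.
print(round(flesch_kincaid_grade("The cat sat on the mat. It was warm."), 1))
```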

“It’s important, as much as is possible, to understand the parameters under which the AI or algorithms arrived at its conclusions. What parameters it examines, and how it analyzes it, provides transparency to its thinking, per se, which in turn makes it more clear how it decides what it decides,” said Bill Frischling, founder of FactBase.

This issue is prevalent in artificial intelligence partly because the systems are not necessarily designed to explain how they do certain things, but simply to do them. It is also a byproduct of algorithms learning on their own: they make connections based not on human instruction but on self-identified patterns.
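
A tiny example makes the point. The classifier below is given no human-written rules; it only nudges numeric weights until its guesses match the training labels, and those learned weights, not any legible rule, are what encode its decisions.

```python
# A minimal perceptron: the "rules" are weights the program finds on its own.
def train_perceptron(examples, features, epochs=20, lr=0.1):
    """Learn weights from labeled examples; labels are +1 or -1."""
    weights = {f: 0.0 for f in features}
    bias = 0.0
    for _ in range(epochs):
        for x, label in examples:
            score = bias + sum(weights[f] * x.get(f, 0.0) for f in features)
            if score * label <= 0:          # wrong guess: nudge the weights
                for f in features:
                    weights[f] += lr * label * x.get(f, 0.0)
                bias += lr * label
    return weights, bias

# Toy "loan" data: nobody wrote a rule; the pattern is learned from examples.
data = [({"income": 1.0, "debt": 0.2}, 1), ({"income": 0.3, "debt": 0.9}, -1),
        ({"income": 0.8, "debt": 0.4}, 1), ({"income": 0.2, "debt": 0.7}, -1)]
weights, bias = train_perceptron(data, ["income", "debt"])
print(weights, bias)  # learned numbers, not a rule a person wrote down
```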

Newsroom collaboration

The Wall Street Journal’s news hub in New York City.

There are, of course, technical gaps in developing this type of reporting on algorithms, which can be addressed by working cross-functionally with data scientists, computational journalists and technologists. Increasingly, it’s important to foster a culture of collaboration throughout the newsroom and to bring multiple perspectives into the process of story planning and development.

“A project such as this which taps so many areas of expertise and aligns them is a pleasure to be part of. What started with WSJ Lab’s original outline of possibilities was honed by a team of editors at Journal Reports to focus on specifically what our writing reveals about us. Our interactives team wrangled the code, user interface and graphic visualization,” said news editor Demetria Gallegos.

“Then, privacy experts from our legal and data teams, our social and off-platform colleagues and homepage and mobile editors weighed in to ensure the experience is optimized for every reader,” Gallegos added.

The odds of a successful collaboration increase if the organization fosters an environment where journalists are encouraged to test new ideas, seek feedback and share best practices even when experiments are unsuccessful. Building this “feedback loop” can help news professionals mitigate the uncertainty of experimentation and inform the broader newsroom strategy.

“When we are thinking about how to create an innovative news experience, we have to consider how readers already ingest news — and how much further they are willing to go. In our discussions during the story planning process, we ran through various scenarios of how the tool could work, based on different criteria. We then ruled out things that would require too much time or too many steps. We also had to be sensitive to how much information people are willing to disclose. We designed this interactive story to be fun enough to get readers in, engaging enough to have them read through it, take the quiz, play the game etc. And if they end up sharing their results on social media, we know we did it right,” explained news editor Cristina Lourosa.

Journalistic standards and technological evolution

Just because a result came from a computer doesn’t mean it’s right. Artificial intelligence is programmed by humans, and consequently it can make mistakes. The ethical considerations inherent in using AI are far-reaching.

“Understanding the source of information whether it’s from a person or algorithm is not only crucial for the news industry but as well, for democracy,” said Kourosh Houshmand, a computational journalist at Columbia Journalism School.

The practice of journalism is about questioning the world around us, and that same principle applies even when a piece of software played a role in a particular outcome, such as determining the price of a product, evaluating how a person feels based on their writing or selecting a candidate for a job interview.

“We can help readers understand how technology works by explaining how the algorithms get their results and then pointing to the source documents and formulas that power the calculations,” said graphics reporter Nigel Chiwaya.

An effective way to understand AI is to experiment with it: to probe the nuances of how algorithms make decisions and how those decisions may affect our lives.

Interested in journalism and data science? WSJ is hiring a Data Science Lead.
