The age of the algorithm needs editors

Amelia Pisapia
Apr 5, 2018

In Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, data scientist Cathy O’Neil declares that we are living in the age of the algorithm. Artificial intelligence is quietly revolutionizing every industry, from media and marketing to healthcare, transportation, financial services and beyond. And the pace with which it’s happening is quickening.

Spending on A.I. and machine learning is expected to grow from $12 billion in 2017 to $57.6 billion by 2021, according to analysts at the International Data Corporation. Researchers at Deloitte predict the number of pilot programs using machine learning at medium and large companies will double in 2018 and double again by 2020. According to a recent study by PwC, A.I.'s potential contribution to the global economy could be $15.7 trillion by 2030.

Though A.I. has the potential to do good for society, it is already causing real harm. Platforms are being manipulated to spread disinformation and influence elections around the world. Programmatic advertising was found to be funding terrorism. Search engines are reinforcing racism. Big data is increasing inequality. Facial recognition software discriminates based on race and gender. A.I. is being used to police, profile and punish the poor. And predictive software used in courts around the country is biased against black defendants.

“The best parts of algorithmic influence will make life better for many people, but the worst excesses will truly harm the most marginalized in unpredictable ways,” Anil Dash noted in “Code Dependent: The Pros and Cons of the Algorithm Age,” by Pew Research Center.

How can we use the skills of editors to help hold these systems accountable?

The Pew report identified seven themes of the algorithm era.

In my work as a fellow in CUNY’s entrepreneurial journalism program, I’ve been exploring some of these themes. How can editors help guard against bias? How can they push for algorithmic transparency and oversight? How can they ensure that respect for the individual is accounted for in these systems? How can they help anticipate ethical implications?

You don’t need to look beyond the headlines to see that many engineers lack a solid understanding of the ethical issues inherent in their products. Research has found that current engineering ethics education is not effective. Though individual instructors may include modules on ethics, ethics courses are not a requirement in most software engineering programs. We’re just starting to understand the far-reaching implications of this.

Beyond algorithmic accountability reporting along the lines of ProPublica, what role can journalism play in holding these systems accountable? And should it be a role of journalism to do so? The question was raised in “Artificial Intelligence: Practice and Implications for Journalism,” a report published last year by the Tow Center for Digital Journalism and the Brown Institute for Media Innovation. The report mentioned that algorithms used in many industries are seldom tested independently. I’d argue they need to be and that editors should be trained to do so.

In its most basic sense, editing is a form of data analysis. Editors are skilled at putting information in context, assessing the accuracy of data and weeding out bias. They view issues from multiple angles, connect the dots and uncover human stories in complex systems. Editors, like engineers, are talented problem solvers.

Additionally, editors operate using established ethical frameworks. According to the Accountable Journalism database, there are more than 400 codes of media ethics around the world. Though they vary by practice and scope, all have five values in common: accuracy, independence, impartiality, humanity and accountability.

The principles of journalism serve as a strong foundation for algorithmic inquiry. Over the last few months, I’ve been developing a consulting service that uses these frameworks to investigate algorithms so that companies create more inclusive technology. Beyond this work, I hope to spark a larger conversation around the role of editors in society.

“Perhaps the public editor of the future will also play the role of algorithmic ombudsman,” Nick Diakopoulos noted in his report, “Algorithmic Accountability Reporting: On the Investigation of Black Boxes.”

If the mission of journalism is to hold powerful systems accountable, could this sort of consulting be considered an act of journalism? What does it mean to be an editor today and what could it mean in the future?

If you would like to get in touch or receive project updates via email, click here.

Journalism Innovation