Algorithmic Accountability Reporting

The power algorithms exert over us and society as a whole is expanding into every sector. We talked to Jonathan Albright from the Tow Center and Lorenz Matzat from AlgorithmWatch about their research into these opaque power structures.

Freia Nahser
Global Editors Network
8 min read · May 31, 2018

--

Algorithms play a decisive role in how we live on a daily basis. They can decide whether we’re eligible for a loan, social housing, or certain types of credit. They can decide what news we read and what information we have access to, how people are treated in the criminal justice system, and whether our CV makes the cut for a job interview. Yet the decision-making processes of algorithms are mostly opaque: after an algorithm chooses who to parole or which YouTube video to play next, it can’t lay out its reasoning for us.

But is there a way to blow open these black boxes?

Article 22 of the EU’s new GDPR rules states that an individual has the right not to be subject to a decision based solely on automated processing, including profiling. Article 13 requires data controllers to provide meaningful information about the logic behind such automated decisions when they significantly affect the individual.

[…] ‘There’s a whiff of algorithmic accountability’, seeing as data controllers will now have to take steps to prevent errors, bias, and discrimination.

As the influence of algorithms continues to grow, pressure to make their workings more traceable and transparent is likely to increase far beyond the new rules in Europe. We talked to Jonathan Albright, Research Director at the Tow Center for Digital Journalism, and Lorenz Matzat, co-founder of AlgorithmWatch, about cracking Germany’s credit scoring system and the extent to which algorithms are blurring the lines between fact and fiction.

Who are we investigating?

‘It’s not the software that makes the decisions: its decision-making processes are dictated by humans’, Matzat told us. ‘Software is vulnerable to error, unconscious human bias, and deliberate manipulation, and it can transmit certain worldviews’.

ProPublica found that predictive justice algorithms were producing discriminatory risk scores that inform judges’ sentencing decisions. In one example from the US, two 18-year-old girls who were trying out a little kid’s toys in the street were reported to the police by a bystander. They walked away after being chased by the kid’s mother, but were charged with burglary and petty theft regardless. One of the girls had a record, but only for misdemeanours committed when she was underage. Elsewhere, a man with a criminal record who had already served five years in prison was caught stealing tools from a Home Depot store.

While you might think these crimes are almost incomparable, a computer programme decided that the girl with the juvenile record was more likely to reoffend than the man. She is black and he is white.

Two years later, the girl hadn’t reoffended, but the man was back in prison for breaking into a warehouse and stealing thousands of dollars’ worth of electronics.

While the algorithm was clearly wrong, machines only learn from what we show them. It has therefore been suggested that a more useful term for machine learning would be machine teaching, as this would put the responsibility where it lies: on the teacher.

‘I feel strongly that understanding algorithms as technological processes represents only half of the problem. We should know more about the inherent linguistic, biological, and cognitive processes that lead to certain signal and ranking decisions — and lines of code — existing within these algorithms. As many have stated, algorithms tend to amplify existing biases, power structures, and representations’, said Albright.

How are we investigating?

‘Proprietary software and machine learning make it particularly difficult to understand decision-making processes. Even if machine learning systems are open source, they remain all but impenetrable without the training data sets that were used. This is why it’s no longer enough to ask for transparency — we also need traceability’, said Matzat.

Crowdsourcing for reverse engineering

Seeing as algorithms are often proprietary, reverse engineering is a good way to uncover what’s going on behind the scenes.

In his Tow research report on algorithmic accountability, Nick Diakopoulos defines reverse engineering as ‘the process of articulating the specifications of a system through a rigorous examination drawing on domain knowledge, observation, and deduction to unearth a model of how that system works’. More simply, in this case it means working out how an algorithm was set up and reconstructing an approximation of it.

This process requires researchers to obtain as much data as possible, part of which can be crowdsourced via plugins and other data collection processes.

An example: Cracking Germany’s credit scoring system

SCHUFA (Schutzgemeinschaft für allgemeine Kreditsicherung) is a private company that keeps credit records on people living in Germany. It knows all the bills you’ve ever paid (or haven’t), and when you apply for a loan, rent a flat, or open a landline contract, your bank, landlord, or Deutsche Telekom will check your SCHUFA score to see if you’re trustworthy. Ten million people in Germany are disadvantaged as a result of a low SCHUFA score.

But what if the score is calculated by a biased model? AlgorithmWatch has launched a crowdfunding project in collaboration with the data team at Der Spiegel and the German public broadcaster Bayerischer Rundfunk to shed some light on the inner workings of the credit scoring system. According to the AlgorithmWatch website, no one, not even the German government, knows how accurate SCHUFA’s data is or how it computes its scores.

AlgorithmWatch started collecting SCHUFA information from volunteers in May 2018. People can upload a photo of their report (SCHUFA information in Germany is sent out only as a printed letter!), which is then read using optical character recognition (OCR). Participants are also asked to disclose demographic data, such as age, sex, and where they live, as well as information about their financial situation. One of the challenges, according to Matzat, is finding a balance between having as much information as possible to work with and ensuring the anonymity of the volunteers.

Schufa record clipping
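
To give a flavour of what that ingestion step could look like, here is a minimal sketch that pulls a score out of a photographed letter using the open-source Tesseract OCR engine via the pytesseract wrapper. The file name, the percentage pattern, and the assumption that a German language model is installed are all illustrative; this is not the project’s actual pipeline.

```python
# Minimal sketch: turn a photographed credit report into a structured value via OCR.
# Assumes Tesseract (with German language data) plus the pytesseract and Pillow
# packages are installed. The file name and regex are hypothetical illustrations.
import re

from PIL import Image
import pytesseract


def extract_score(image_path):
    """OCR the uploaded photo and pull out a percentage-style score, if one is printed."""
    text = pytesseract.image_to_string(Image.open(image_path), lang="deu")
    match = re.search(r"(\d{1,3}[.,]\d{1,2})\s*%", text)
    if match:
        return float(match.group(1).replace(",", "."))
    return None  # unreadable photo: set aside for manual review


if __name__ == "__main__":
    print(extract_score("schufa_letter.jpg"))
```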

Once all the data has been collected, a team of around ten data scientists and journalists will use statistical methods to analyse it and identify patterns and relationships. Through this process of reverse engineering, they hope to come up with an approximate model of how the SCHUFA algorithm works. We’re excited to see the results.
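
As a rough illustration of that reverse-engineering step, the sketch below fits a simple, interpretable surrogate model to a hypothetical table of donated records and inspects its coefficients. The file name and column names are invented for the example; the team’s real analysis will be considerably more careful than a single linear regression.

```python
# Toy sketch of the analysis step: fit an interpretable surrogate model to the
# crowdsourced records and see which inputs appear to move the reported score.
# The CSV file and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

records = pd.read_csv("donated_schufa_records.csv")  # one row per volunteer
feature_columns = ["age", "num_accounts", "num_credit_inquiries", "years_at_address"]
X = records[feature_columns]
y = records["reported_score"]

# A deliberately simple stand-in for the hidden scoring model.
surrogate = LinearRegression().fit(X, y)

# Large absolute coefficients hint at inputs the hidden model may weight heavily.
for name, coef in zip(feature_columns, surrogate.coef_):
    print(f"{name:>22}: {coef:+.3f}")
print(f"R^2 on the donated data: {surrogate.score(X, y):.2f}")
```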

Where else should we be looking?

Not always the usual suspects: Is YouTube a misinformation engine?

‘One of the most concerning themes at the moment is how algorithms play a large role in distinguishing between fact and fiction’, Albright told us. ‘In a way, the truths we understand and process as reality are surfaced through algorithmic systems like Google search, YouTube’s trending videos, and, of course, Facebook’s News Feed’.

While a lot has been written about the impact of Facebook and Twitter on election results, for example, YouTube has come off rather more lightly.

Last year, Zeynep Tufekci tweeted that YouTube is the most overlooked story of 2016 and that its search and recommendation algorithms are misinformation engines.

YouTube does not give us insight into its layers of algorithms. In a conversation with the Guardian, Guillaume Chaslot, a former YouTube engineer who worked on its algorithms, said: ‘YouTube is something that looks like reality, but it is distorted to make you spend more time online. The recommendation algorithm is not optimising for what is truthful, or balanced, or healthy for democracy.’

After the Parkland shooting in February 2018, Albright searched YouTube’s API for ‘crisis actor’ terms and obtained the ‘next up’ recommendations for all of the results. This led him to discover a network of around 9,000 conspiracy-themed videos.
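
A hedged sketch of the kind of crawl this describes: seed the YouTube Data API v3 with a search term, collect the videos the platform associates with each result, and store the links as a directed graph. It relies on the relatedToVideoId parameter the API offered at the time (since removed); the API key and search term are placeholders, and this approximates the general approach rather than reproducing Albright’s actual code.

```python
# Sketch: seed a search, follow YouTube's own "related" associations for each
# result, and build a graph of seed -> recommended links with networkx.
import networkx as nx
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"  # placeholder
youtube = build("youtube", "v3", developerKey=API_KEY)

graph = nx.DiGraph()

seeds = youtube.search().list(
    q="crisis actor", part="snippet", type="video", maxResults=50
).execute()["items"]

for seed in seeds:
    seed_id = seed["id"]["videoId"]
    graph.add_node(seed_id, title=seed["snippet"]["title"])
    related = youtube.search().list(
        relatedToVideoId=seed_id, part="snippet", type="video", maxResults=25
    ).execute()["items"]
    for video in related:
        vid = video["id"]["videoId"]
        graph.add_node(vid, title=video["snippet"]["title"])
        graph.add_edge(seed_id, vid)  # seed video -> recommended video

# Videos that many seeds point to are the ones the recommendation layer keeps surfacing.
print(sorted(graph.in_degree, key=lambda kv: kv[1], reverse=True)[:10])
```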

‘Exactly what the content of each of these videos entails, I’d rather not know. 90% of the titles, however, are a mixture of shocking, vile and promotional. Themes include rape game jokes, shock reality social experiments, celebrity pedophilia, ‘false flag’ rants, and terror-related conspiracy theories dating back to the Oklahoma City attack in 1995’, wrote Albright in a Medium post detailing his research.

Some returned video topics from Albright’s research

He wrote that every time there is a mass shooting or terror attack, the YouTube conspiracy genre grows in size and economic value. The search and recommendation algorithms naturally link these conspiracy videos together, giving them more reach.

In an article on BuzzFeed News, Albright concluded that his results suggest that the conspiracy genre is embedded so deeply into YouTube’s video culture that it could be nearly impossible to eradicate.

Audit yourself

Mathematician Cathy O’Neil has launched a service that offers to test businesses’ algorithms for fairness. But ‘companies aren’t knocking down her door yet (she has only six clients)’, wrote Erin Winick in MIT Technology Review’s The Algorithm. ‘But they should be: not only is it in society’s best interest, it’s also good marketing. Getting your algorithm certified for fairness can prove to customers that your service is equitable and effective’.
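
As a flavour of what such an audit might check, the sketch below computes a disparate-impact ratio: the rate of favourable outcomes for each group relative to a reference group, with a common rule of thumb flagging ratios below 0.8. It is purely illustrative of this kind of test and is not the methodology O’Neil’s company actually uses.

```python
# Minimal sketch of one fairness check an audit might run: the disparate-impact
# ratio. Groups, decisions, and the 0.8 rule of thumb are illustrative only.
from collections import defaultdict


def disparate_impact(decisions, reference_group):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    ref_rate = approved[reference_group] / total[reference_group]
    return {g: (approved[g] / total[g]) / ref_rate for g in total}


sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(disparate_impact(sample, reference_group="A"))  # {'A': 1.0, 'B': 0.5} -> below 0.8, worth a closer look
```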

Matzat told us that editors and publishers should also audit themselves: ‘Where do we use software? Are decisions already being prepared without us being aware?’ On top of this, he said that newsrooms should clearly communicate that users are monitored at every turn when they visit a news website, and suggested that this data could be shown through infographics and data visualisations. Did we write these stories because people click on them? Are we doing A/B testing on headlines?

Impact

Change will be challenging for multinationals

‘I think that Facebook has been successfully held accountable by journalists, technologists, and researchers. Now, whether they will be held duly accountable by policymakers, legislative committees, and the public(s) in the locations they operate is a different story. ‘Change’, at least in quotations, will be uniquely challenging for a company like Facebook: their business model is built upon intercepting, filtering and recommending information through complex and proprietary (non-transparent) algorithms to algorithmically segmented audiences’, said Albright.

Inspiration

Here are some people to watch, according to Jonathan Albright:

Julia Angwin, formerly at ProPublica, has helped lead the way in this type of reporting, meant to shed light on ‘black box’ algorithmic systems — especially Facebook’s ad placement algorithms and its news feed.

Guillaume Chaslot, a former engineer at YouTube, is another person who’s done incredible work on mass recommendation bias and content prioritization.

Zeynep Tufekci, who is more academic than reporter, helps promote public interest in the role of algorithmic systems through her regular contributions to The New York Times and Wired.

Researchers who have contributed to public understanding of algorithmic accountability, at least as a field of computational media study, include Nick Diakopoulos and Tufekci, as well as Frank Pasquale, Cathy O’Neil, and, more recently, Safiya Noble at USC.
