Blaming the Algorithms

Who is responsible when technology fails?

Bris Mueller
Accountability Cubed
4 min read · Mar 13, 2017


Google and Facebook currently capture 90% of new online ad spending, 73% of which is executed as programmatic ads. Last week I wrote about Google’s failure to design accountability features into its online advertising platform. Perhaps some of those features are now in the works, as the number of lawsuits¹ alleging that Google provided material support to terrorist groups keeps increasing. Just last month, The Times reported that top brands inadvertently funded extremists through their advertising purchases:

On YouTube, an advert for the new Mercedes E-Class saloon runs next to a pro-Isis video…A commercial for the F-Pace SUV from Jaguar, the British carmaker, runs next to the video…Sandals Resorts, the luxury holiday operator, is advertised next to a video promoting al-Shabaab, the East African jihadist group affiliated to al-Qaeda…Adverts for Honda, Thomson Reuters, Halifax, the Victoria & Albert museum, Liverpool university, Argos, Churchill Retirement and Waitrose also appear on extremist videos posted on YouTube by supporters of groups that include Combat 18.

A month after The Times article was published, Martin Sorrell, the CEO of WPP (the largest company in the advertising industry by revenue), told Google’s UK managing director Ronan Harris to take accountability for the company’s algorithms. According to Business Insider, during a panel discussion at The Festival of British Advertising, Sorrell had this to say:

You have to take responsibility for this as a media company because you are not a passive, sitting there, digital engineer, tightening the digital pipes, with your digital spanner, and not responsible for the flow-through content … you are responsible and you have to step up and take responsibility. You have the resources, your margins are enormous, you have control of the algorithms, and you don’t explain to people how those algorithms work.

Companies are spending ever more of their marketing budgets on programmatic advertising, which is surprising considering the risk it poses to their brands’ reputations. Programmatic advertising lets marketers automatically microtarget individuals and chase them around the internet, but the algorithms making the ad placement and purchasing decisions do not seem to account for much else, including the content an ad will run beside. Had a marketer personally decided to pay Isis for an ad placement on one of its recruiting videos, they would have been fired. Realistically, a marketer would never have gone ahead with such an objectionable purchase. But the high level of automation in programmatic advertising creates a “moral buffering effect,” lessening the marketer’s feelings of moral responsibility and shifting accountability for the ad placement to Google and its algorithms². Still, we have to wonder out loud: is the return on investment from programmatic ads so enticing that it is worth giving up control of your brand? And why does it seem that there is no one to blame for the failures in programmatic ad placement?
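
To see how content can fall out of the picture entirely, consider what a bare-bones bidding decision looks like. The sketch below is hypothetical (AdSlot, Campaign, and decide_bid are names I made up, not any real ad platform’s API): it scores an impression purely on audience overlap and price, and never once inspects the page or video the ad would appear beside.

```python
# Hypothetical sketch of a programmatic bidding decision. All names and
# numbers here are illustrative assumptions, not a real platform's API.
from dataclasses import dataclass

@dataclass
class AdSlot:
    user_segments: set[str]   # inferred interests of the person viewing
    floor_price: float        # minimum price for this impression, in dollars
    page_content: str         # the video or page the ad would run beside

@dataclass
class Campaign:
    target_segments: set[str] # audience the marketer wants to reach
    max_bid: float            # most the campaign will pay per impression

def decide_bid(slot: AdSlot, campaign: Campaign) -> float | None:
    """Return a bid for this impression, or None to pass."""
    # Score the impression purely on audience overlap...
    overlap = len(slot.user_segments & campaign.target_segments)
    if overlap == 0:
        return None
    # ...and bid proportionally to that overlap, capped by the budget.
    bid = min(campaign.max_bid, slot.floor_price * (1 + overlap))
    # Note what is absent: slot.page_content is never inspected, so the
    # ad follows the targeted user onto any page or video whatsoever.
    return bid if bid >= slot.floor_price else None

# Example: the campaign targets luxury-car shoppers; the slot happens to
# sit on an extremist video, and the bid goes through anyway.
slot = AdSlot({"luxury_cars", "travel"}, 0.002, "extremist recruitment video")
campaign = Campaign({"luxury_cars"}, 0.01)
print(decide_bid(slot, campaign))  # 0.004: bid placed with no content check
```

In a sketch like this, brand safety would have to be a deliberate, additional check on page_content, and the placements reported by The Times suggest that whatever check existed was either missing or easily defeated.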

Even though marketers have to sign off on programmatic campaigns and are supposed to be accountable for their execution, there is no word that any have been fired over reprehensible ad purchases. Similarly, Google is not experiencing any market setbacks despite owning the technology in question. The engineers who designed the algorithms behind programmatic advertising are not being reprimanded for poorly developed solutions. The general public seems to be okay with all of this and is nowhere close to being incensed. But why? Not only can technology increase the moral distance between our actions and their consequences, it can also change our collective view of accountability. People generally understand technological failures the same way they understand a weather event: as an act of God, if you will. At least one study³ suggests as much, and the authors’ conclusion is eerily similar to what we have been observing anecdotally. Google’s strategy for avoiding responsibility for these failures is to keep blaming the algorithms it created.

Notes

  1. The families of Orlando shooting victims and the father of a woman killed in the Paris terror attacks sued Google, Twitter, and Facebook for allegedly providing material support to extremists.
  2. “Because of the inherent complexity of socio-technical systems, decision support systems that integrate higher levels of automation can possibly allow users to perceive the computer as a legitimate authority, diminish moral agency, and shift accountability to the computer, thus creating a moral buffering effect.” — Cummings, M. (2006). Automation and accountability in decision support system interface design
  3. “Our results suggest that people treat technology as a more disembodied and random force, not worthy of the same degree of reaction as is human action. It suggests also that technology may be potentially used strategically as a scapegoat to avoid perceptions of liability” — Naquin, C. E., & Kurtzberg, T. R. (2004). Human reactions to technological failure: How accidents rooted in technology vs. human error influence judgments of organizational accountability

