Why Algorithms Erode Democracy

Lauren Toulson · Published in CARRE4 · 7 min read · Feb 23, 2021
Photo by Luke Michael on Unsplash

We are in an age where our news is shared with us digitally, based on our preferences. An age where Netflix shows us the most relevant films for us, and Amazon knows exactly what we want to buy next. The big data age is the age of convenience and comfort, where we don’t have to think about what we want to watch, buy or get informed about. If hearing that doesn’t already make you uncomfortable, it’s time to get uncomfortable.

Every day that passes is another day in which big technology companies get to decide what you like. With constant digital interventions into our tastes, desires and political preferences, we are losing our autonomy for diverse thought, becoming increasingly inclined towards extreme viewpoints and less open to being persuaded by contrasting ideas.

This blog explains the basics of surveillance capitalism, how big technology companies like Facebook have the power to undermine democracy, and what we could do about it.

Surveillance Capitalism

“A new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction and sales; …global architecture of behavioural modification; …The origin of new instrumentarian power that asserts dominance over society and presents startling challenges to market democracy” — Zuboff, 2019

Zuboff outlined the new threats to liberty and democracy in her compelling book, The Age of Surveillance Capitalism. Silicon Valley giants build their economic success on their impressive ability to know what we want now and what we will want in the future, the ‘behavioural futures market’. By collecting data on what we like and buy, where we go and what our friends are up to, they can market on the basis of personalisation and fine-tune the content we see, steering us into predictable behaviours that favour those out to make money from us.

Traditional authoritarian powers undermine democracy through brute force, using terror and military power to control behaviour. Achieving a similar level of control through more subtle means, the instrumentarian power of surveillance capitalists quietly erodes democracy by replacing freedom with certainty.

Virtually A Political Scandal

Photo by Franz Wender on Unsplash

62% of US adults use Facebook as a news source, and very few question the validity of the source itself or the assemblage of viewpoints carefully pushed to them through their news feed. As the well-publicised Cambridge Analytica scandal showed, Facebook can know precisely what its users think, do and believe, and that knowledge can be used to serve political agendas. Cambridge Analytica played a role in the 2016 election of Trump and in the UK’s EU referendum vote for Brexit, and the effectiveness of behavioural tampering is evidenced by a social experiment conducted on Facebook. The “I’ve Voted” experiment found that users shown messages indicating that their friends had voted were more likely to vote themselves than those who were not shown the message. This form of social contagion resulted in an estimated 340,000 extra votes, more than enough to swing a close election. By combining clever psychology with data about our political inclinations, Facebook is eroding our freedom to think and act politically of our own choosing.

The Filter Bubble

Photo by NeONBRAND on Unsplash

Today’s internet is an attention economy: a plethora of sites, all competing for our attention, use algorithms to make their content more appealing to us and thereby monetise our attention. These sites hope that by showing us more content we like, we are less likely to go to another site and instead choose to stay, scrolling or watching, forever. These filters are abundant on the most popular platforms: Facebook, Twitter, Google, Amazon, Spotify, Instagram, YouTube. Based on content we have consumed before, and on content that our friends or fellow users have liked, we are shown more of the same. This content fills our home feeds, ordered according to predicted preference to keep us hooked.
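To make that mechanism concrete, here is a minimal, purely illustrative sketch of preference-based feed ranking in Python. The item fields, affinity scores and weights are my own assumptions for illustration, not any platform’s actual recommender:

```python
from dataclasses import dataclass

@dataclass
class Item:
    topic: str
    liked_by_friends: int  # how many of the user's friends engaged with it

def rank_feed(items, user_topic_affinity, friend_weight=0.5):
    """Order items by a crude engagement score: past topic affinity plus friend signals."""
    def score(item):
        # Topics the user consumed before score higher, so similar content surfaces first
        return user_topic_affinity.get(item.topic, 0.0) + friend_weight * item.liked_by_friends
    return sorted(items, key=score, reverse=True)

# A user who mostly read about one political topic sees more of it at the top of the feed
feed = rank_feed(
    [Item("party_a_news", 3), Item("party_b_news", 0), Item("sports", 1)],
    user_topic_affinity={"party_a_news": 0.9, "sports": 0.4, "party_b_news": 0.05},
)
print([item.topic for item in feed])  # ['party_a_news', 'sports', 'party_b_news']
```

Even in this toy version, the content a user has already engaged with crowds out everything else, which is exactly the dynamic the filter bubble argument builds on.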

Pariser popularised the term ‘filter bubble’ to describe the way these algorithms lock us into a ‘bubble’ of our own tastes and preferences, rarely showing us things we may disagree with or not like. The result, some argue, is that our tastes become more polarised. The aim of the algorithm is “to connect people with information they are likely to want to consume, by making some items easier to access than other items, resulting in a personalized stream of content [that fails to offer] users a set of alternatives to choose from.” Importantly, the values reflected back at us are our own: some arrive as news articles or posts by friends, others as advertising targeted at us according to what people of similar demographics and tastes have liked.

Individuals on the internet become clustered into groups surrounded by ideas and people of a similar disposition. This reinforces a positive view of their own tastes and further distances them from other tastes, which come to seem more and more negative, especially where political views are concerned. The bubble that reinforces our own tastes and views thereby produces political polarisation and ever more extreme positions. This is where, for example, extreme Trump supporters emerge: they are so immersed in news content that supports their views and paints the opposing party in a negative light that it is hardly their own fault that they don’t know any better.
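As a rough intuition for how this reinforcement snowballs, here is a toy “rich-get-richer” simulation, an illustrative sketch only and not a model of any real recommender: every item the user engages with is fed back into the ranking, so exposure steadily narrows even if the user starts with no strong preference.

```python
import random
from collections import Counter

def simulate_bubble(topics, steps=1000, reinforcement=0.1, seed=1):
    """Toy feedback loop: each time a topic is shown and engaged with,
    it becomes more likely to be shown again, so exposure narrows over time."""
    rng = random.Random(seed)
    weights = {t: 1.0 for t in topics}  # start with equal exposure to every topic
    shown = Counter()
    for _ in range(steps):
        topic = rng.choices(list(weights), weights=list(weights.values()), k=1)[0]
        shown[topic] += 1
        weights[topic] += reinforcement  # engagement is fed straight back into ranking
    return shown

print(simulate_bubble(["party_a", "party_b", "culture", "science"]))
# The counts typically end up heavily skewed towards one or two topics
```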

Kwame Dublin, Co-founder of Digital Bucket Company, says: “We founded Digital Bucket Company because we recognised areas of concern that needed to be addressed. We build on the conversation every day with our teams to educate about these issues, and with this knowledge and a diverse team, we build solutions for a better world. We hope that more tech companies can do the same.”

Photo by Mika Baumeister on Unsplash

Sites that participate in surveillance capitalism, exploiting our attention and shaping our behaviour and views, use algorithms that keep us in bubbles of content we like because it keeps us online, making them more money. But ultimately it is this behavioural control, and the limiting of our exposure to diverse knowledge, that is eroding our democratic freedom.

Towards a better future

“As a society we already control content of a sexual nature on social media platforms because people find it offensive. There’s no reason why we shouldn’t similarly control violent, extremist or racist content. However, we don’t have to ban it. YouTube and Facebook should put contributors of extremist materials into ‘Pay to view’ categories.”

Tom Foale, who will be joining us at our Summit in March to discuss privacy, regulation and surveillance capitalism, suggests that putting polarised, extreme content behind paywalls would deter people from radical content and thus promote a better society. Ivana Bartoletti, another of our influential speakers, explains that considerations about privacy go beyond just information: privacy should “safeguard autonomy and freedom of thought in the age of AI.”

This is especially relevant when thinking about how to design AI systems that promote democracy and freedom of thought, rather than splinter us into extreme bubbles of easily controlled bots.

For this, it is the role of the platform to give us a balanced ‘information diet’, as filter bubble theorist Pariser puts it. Rather than feeding us a constant stream of the informational dessert we want, the platform should also throw in some information vegetables that challenge and expand our views. Because while we can try to follow alternative sources ourselves, gravitating towards ideas we already like is in our nature.
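As a hedged sketch of what such a balanced diet could look like in practice, the snippet below interleaves a fixed share of contrasting items into an otherwise preference-ranked feed. The slot ratio and item names are illustrative assumptions, not a proposal from Pariser or any platform:

```python
def balanced_feed(preferred, challenging, every_n=4):
    """Reserve every n-th slot in a preference-ranked feed for content from
    outside the user's usual bubble; the remaining slots stay personalised."""
    feed = []
    p, c = iter(preferred), iter(challenging)
    slot = 1
    while True:
        source = c if slot % every_n == 0 else p
        item = next(source, None)
        if item is None:
            # one list ran out; fall back to the other so nothing is dropped
            item = next(c if source is p else p, None)
            if item is None:
                break
        feed.append(item)
        slot += 1
    return feed

print(balanced_feed(
    preferred=["agreeable_1", "agreeable_2", "agreeable_3", "agreeable_4", "agreeable_5"],
    challenging=["contrasting_1", "contrasting_2"],
))
# ['agreeable_1', 'agreeable_2', 'agreeable_3', 'contrasting_1',
#  'agreeable_4', 'agreeable_5', 'contrasting_2']
```

The design choice here is simply to guarantee a minimum exposure to contrasting content rather than leaving it entirely to the engagement score, which is one way a platform could operationalise the “information vegetables” idea.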

The following blogs will continue the discussion about the critical concerns in AI today, including how we could govern AI and whether ethical standards can be applied globally or whether different regulations are needed for diverse cultures. Check out the first blog in the series, which looks at how difficult it is to use algorithms to reduce human bias.

This blog was written by Lauren for Digital Bucket Company, who are hosting their full-day AI Summit on 30th March 2021, with leaders in Tech and Government from around the world joining to discuss the key issues in AI: bias and discrimination, privacy and governance, data ethics, women in tech and the future of AI. Stay up to date with the ticket launch on their page.


Studying Digital Culture, Lauren is an MSc student at LSE and writes about Big Data and AI for Digital Bucket Company. Tweet her @itslaurensdata