Echoes and bubbles

Manipulating opinions on the internet is good when it works in your favour… right?

Lee Machin
LGTM
5 min read · Nov 22, 2017

Where is the content that can challenge my point of view, and help me learn more about the subject?

Commentators in the US and UK alike are gripped by a growing controversy around just how much fake Twitter and Facebook profiles influenced their democratic processes. Russian propaganda, they would have you believe, is responsible for both Trump’s presidency and Britain’s exit from Europe. Of course, this too is propaganda, but it’s propaganda that supports the narrative of two supposedly powerful countries being powerless victims of the inexorable advance of technology, which can easily be abused and manipulated by bad actors.

Anyone living outside of the US this morning may have opened up a site like Reddit and wondered if they had been teleported into another dimension. Page after page after page of pro-net-neutrality posts, mere days after scrolling through page after page after page of Star Wars Battlefront 2 complaints. The former is, understandably, an important issue for Americans who want fair access to the internet, and it is incredibly difficult to justify stripping the population of that freedom and handing that power to monolithic, faceless corporations. On the surface it’s impressive that such a concerted effort has been made to make the topic impossible for the average reader to avoid, or at least impossible for me to avoid, since I have no idea how this looks to anybody else.

But this is just the other side of the coin, and a far more public effort to influence opinion en masse. It’s easy to say this is acceptable because it supports your own beliefs and you want other people to share them, but you cannot have this kind of influence without accepting that other people can use the same technology to spread their own. If you can co-ordinate to control the entire basic experience of a site like Reddit, Twitter, or Facebook for a casual visitor, so can everybody else.

History is rife with examples of people arrogant enough to claim their ideology is so good that they should keep that influential power for themselves, while being utterly uncompromising towards people who don’t fall on their side. One such instance is the continuing debate about encryption, surveillance, and backdoors, where the government believes it deserves to violate every expectation of privacy because it’s the good guys, and the bad guys can’t do any of that because, well, they’re bad. Another is inventing the nuclear bomb and then being completely surprised when other countries used it as inspiration to build their own. The end result is that alarming efforts are made to maintain that position of well-intentioned gatekeeper, without ever really understanding why something is good when you do it but bad when someone else does it.

Coming back to the echo chamber, or the filter bubble as some might call it: these are all microcosms of the internet with enormous reach across the world. The problem is not that people with different perspectives can manipulate those environments to promote their own point of view or pump out propaganda, and it makes no sense to tackle the issue by blocking people or restricting access. This isn’t about freedom of speech or censorship (as a European I’m quite comfortable with the basic boundaries around free speech established after the Second World War), but about how much power those platforms have to mould minds while treating every possible opinion as equally valid and worthy of support.

Facebook and Twitter are real-life examples of the consequences of building platforms with utopian ideals

The real problem is the content recommendation algorithms and the constant analysis of a person’s behaviour on the internet, coupled with an increasing tendency to reduce everything to a pro-or-anti position where you are either for or against, with absolutely no nuance. It’s in the interest of a platform like Twitter or Facebook to keep its users engaged, and the easiest way to do that is to pander to them and show them things they will unquestionably like. This means that someone understood to believe that the world is flat will be delivered posts that reinforce that viewpoint, either by supporting it or by generating outrage when someone challenges it. People get to group up with everyone who is with them and shield themselves from everyone against, who apparently want to pop that bubble, and the whole time they are reassured that their opinion is as valid as everyone else’s. Your entitlement to an opinion isn’t an entitlement to always have a correct opinion, and that is the line most brazenly crossed (and often celebrated) on social media.
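
To make that concrete, here is a deliberately naive, hypothetical sketch of the kind of recommender described above. Every name and data point in it is invented for illustration; it mirrors no platform’s real code. It only shows how ranking purely by past agreement reinforces whatever a user already believes.

```python
from collections import Counter

# Hypothetical candidate posts, each tagged with a topic and a stance.
posts = [
    {"id": 1, "topic": "flat-earth", "stance": "pro"},
    {"id": 2, "topic": "flat-earth", "stance": "anti"},
    {"id": 3, "topic": "net-neutrality", "stance": "pro"},
    {"id": 4, "topic": "net-neutrality", "stance": "anti"},
]

# Signals the platform already holds about the user (likes, shares,
# dwell time), collapsed here to (topic, stance) pairs.
history = [
    ("flat-earth", "pro"),
    ("flat-earth", "pro"),
    ("net-neutrality", "pro"),
]

def recommend(posts, history):
    """Rank posts by how often the user already engaged with that
    exact (topic, stance) combination. Truth, balance, and nuance
    never enter the score: agreement alone is rewarded."""
    seen = Counter(history)
    return sorted(posts,
                  key=lambda p: seen[(p["topic"], p["stance"])],
                  reverse=True)

for post in recommend(posts, history):
    print(post)
```

Run against that history, the pro flat-earth post ranks first, not because it is true but because it was agreed with before; challenging content sinks purely for being challenging.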

AI technology, or deep learning, is not so advanced and futuristic that it can recommend content designed to generate a healthy scepticism about your point of view. It is not intelligent enough to encourage reflection, nor can it challenge your biases or foster a more compassionate world-view by presenting many perspectives on one situation. If you choose to seek that out, it will not understand your intention for doing so; it will just decide how much of that perspective it wants to shove in your face later on.

Considering that the profit motive in these businesses outweighs the educational motive by orders of magnitude, these simple algorithms are optimised to entertain and to keep eyeballs and cursors within the site, in order to increase advertising revenue or boost vanity metrics for investors. They are incredibly easy to manipulate, as evidenced not only by troll accounts and bots on Twitter but by the way journalistic content is written in general. Clickbait, for example, exploits ambiguity and human curiosity, delivering outrageous headlines that are far more memorable than the more balanced content inside the article. Other articles are polarising, promoting in one way or another a strong us-versus-them mentality, a common tactic in political circles for widening the division between rich and poor, left and right, black people and white people, women and men, and so on.
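
As an illustration of that optimisation target, here is another hypothetical sketch in which the only ranking signal is click-through rate. The headlines and figures are invented; the point is that nothing in the score rewards balance or accuracy, so the outrageous headline wins by construction.

```python
# Hypothetical engagement-optimised ranking: click-through rate is the
# only signal the optimiser sees, so accuracy, nuance, and reader
# wellbeing never enter the score. All data invented for illustration.

articles = [
    {"headline": "You won't BELIEVE what the FCC did next",
     "clicks": 9_000, "impressions": 50_000},
    {"headline": "A measured analysis of telecoms regulation",
     "clicks": 400, "impressions": 50_000},
]

def rank_by_ctr(articles):
    # Sort by clicks per impression, highest first.
    return sorted(articles,
                  key=lambda a: a["clicks"] / a["impressions"],
                  reverse=True)

for article in rank_by_ctr(articles):
    ctr = article["clicks"] / article["impressions"]
    print(f"{ctr:.1%}  {article['headline']}")
```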

Inevitably these social media systems create a taxonomy of identities, labels, and beliefs that serves only to compartmentalise groups, isolating them from one another because the middle ground where connection could happen is not allowed to exist. It is a kind of surreptitious segregation that makes it easy to pit one group against another and deliver each the exact content needed to rile it up and manipulate it into action, because there is absolutely nothing stopping you from creating bots and adverts that only one group will ever see.

At the end of it all, the controversy about Russians influencing elections is merely a symptom of a technological culture that optimises for profit without understanding the damaging effects it has on societies all over the world. I can’t claim to have a solution, but I don’t think it takes a genius to see that any fix starts with the relationship between gigantic media platforms, popularity systems, and the reliance on ad revenue to survive.
