The Great (Human) Anti-Hack: How we the people can fight social media manipulation in the 2020 elections

Robert Elliott Smith · Published in The Startup · Oct 4, 2019

“We just put the information into the bloodstream of the internet and then watched it grow.” So said Mark Turnbull, Managing Director of Cambridge Analytica, in The Great Hack, Netflix’s exposé of how data analytics and dark digital propaganda influenced the 2016 Brexit referendum and the US presidential election, tactics that many believe will play a significant role in the 2020 US elections.

While the documentary leaves viewers far better informed, it also induces an overpowering feeling of hopelessness. Analogies to some sort of unstoppable digital epidemic abound: CA conjures up “viral” campaigns, identifies “vulnerable” voters, and employs techniques once used in military “psyops” to ensure contagion. All of which leaves one wondering: what can be done against mega-funded manipulators who, with the aid of artificial intelligence (AI), can adeptly tailor our world views, shift our moral norms and conjure up our deepest frustrations and fears?

Furthermore, scientific research conducted by my colleagues and me at University College London seems to confirm the truth of Turnbull’s metaphor. In our lab, we simulated social networks by modelling people as computational “agents.” Each of these agents listens to binary signals (ones and zeros) from the agents surrounding them and then decides what to rebroadcast, in a continual loop of social influence.

We considered these signals to represent opposing positions on divisive issues, such as Leave or Remain in the Brexit debate, Trump v. Clinton, vax v. anti-vax, etc. While most of the agents in the network were “rational,” rebroadcasting the majority signal they received, a few were “motivated reasoners,” broadcasting a single unwavering signal into the network.

Our results showed that these motivated reasoners successfully recruit nearby agents, causing them to adopt their single signal broadcast. This recruitment then proliferates until, eventually, the whole network becomes polarised, resulting in the “echo chambers” that we see on social media today. Although most people may assume “motivated reasoners” are boorish friends on Facebook or political campaigners, the reality is that they are also algorithms designed to influence opinions single-mindedly through curated news feeds.
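For the technically curious, the gist of the model can be sketched in a few lines of Python. The ring-shaped network, synchronous updates, and parameters below are simplifications made for this sketch, not the exact setup of our published study:

```python
import random

def simulate(n=200, k=6, zealots=10, steps=300, seed=0):
    """Agents sit on a ring and listen to their k nearest neighbours
    (k even). Rational agents rebroadcast the majority signal they hear;
    motivated reasoners broadcast one fixed signal forever."""
    rng = random.Random(seed)
    neighbours = [[(i + d) % n for d in range(-k // 2, k // 2 + 1) if d]
                  for i in range(n)]
    fixed = {i: i % 2 for i in rng.sample(range(n), zealots)}  # zealot i pushes i % 2
    signal = [fixed.get(i, rng.randint(0, 1)) for i in range(n)]
    for _ in range(steps):
        nxt = list(signal)
        for i in range(n):
            if i in fixed:
                continue                      # motivated reasoners never update
            ones = sum(signal[j] for j in neighbours[i])
            if 2 * ones != k:                 # keep the current signal on a tie
                nxt[i] = int(2 * ones > k)
        signal = nxt
    return signal, neighbours

def echo_chamber_stats(signal, neighbours):
    """Local agreement (how uniform each neighbourhood is) and global split
    (1.0 = two equal camps, 0.0 = consensus). Echo chambers show up as
    high local agreement alongside a persistent global split."""
    agree = sum(signal[i] == signal[j]
                for i in range(len(signal)) for j in neighbours[i])
    links = sum(len(nb) for nb in neighbours)
    share = sum(signal) / len(signal)
    return agree / links, 2.0 * min(share, 1.0 - share)

local, split = echo_chamber_stats(*simulate())
print(f"local agreement {local:.2f}, global split {split:.2f}")
```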

However, while these results confirm the real-world power of online political operatives like CA, they also suggest relatively simple actions to help recover more healthy and honest political discourse in today’s hyper-connected world. Because, despite what people fear, data analytics firms like CA are not the purveyors of some modern-day psyops alchemy, nor are algorithms so devilishly super-intelligent that people are putty in their virtual hands.

The reality is that algorithms have changed little over my 30-year career in AI. What has changed is how we receive our news; the deregulation of the environment within which tech companies operate; and the vast amounts of personal data we make freely available to them. It is this that makes the public vulnerable to disinformation and enables the dynamics we see online and in our simulations. This means there are things we can do to fix the problem at the regulatory, organisational, and individual levels.

“The medium is the message” is truer today than it has ever been. In the 1960s, when Marshall McLuhan coined the phrase, the FCC’s Fairness Doctrine granted broadcasters licenses only if they demonstrated that their coverage was honest, equitable and balanced. The legal basis for the doctrine was that the airwaves were a limited resource, and thus government should protect them from monopolising opinion makers. However, as the limitations of bandwidth disappeared, so did the doctrine in the late 1980s.

Today, one can seriously argue that the public’s attentional bandwidth is the limited resource in question, and that the profit-optimised algorithms curating our newsfeeds unfairly dominate this resource to the benefit of the corporations they serve. Furthermore, given the inherent dynamics of social media, it is now vital to regulate the fair use of that limited resource, via regulation of the algorithms themselves.

However, there’s no doubt that changing the media regulatory environment, which must be done to halt the contagion, will be a slow process, and one that isn’t divorced from politics itself. Thus we, the voters, must be proactive in seeking electoral outcomes that will facilitate the enactment of new controls that not only protect us from bombardment by unscrupulous companies but also protect society from the deeply polarising civil war being waged on social media.

Furthermore, if disinformation is the ill of the modern era, we would do well to examine how the real immune system fights disease in the human body. In 1991, I worked with computer scientist Stephanie Forrest on research modelling immune networks with genetic algorithms. This work revealed that the immune system naturally preserves a steady state of diversity rather than driving to extremes.
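To give a flavour of that mechanism, here is a toy genetic algorithm in the same spirit; it is a deliberate simplification rather than the original model. Bit-string “antibodies” compete to match a fixed set of “antigens,” and because each antigen rewards only its best matchers, the population sustains a diverse mix of specialists instead of converging on a single dominant type:

```python
import random

rng = random.Random(1)
BITS, POP, GENS = 16, 60, 300
antigens = [rng.getrandbits(BITS) for _ in range(4)]   # the fixed "diseases"

def match(a, b):
    """Matching score between two bit strings: number of agreeing bits."""
    return BITS - bin(a ^ b).count("1")

def scores(pop):
    """Each antigen is a shared resource: repeatedly present a random
    antigen to a small random sample of antibodies and reward only the
    best matcher, so reward-sharing emerges from the competition."""
    s = [0.0] * len(pop)
    for _ in range(10 * len(pop)):
        antigen = rng.choice(antigens)
        sample = rng.sample(range(len(pop)), 5)
        s[max(sample, key=lambda i: match(pop[i], antigen))] += 1.0
    return s

pop = [rng.getrandbits(BITS) for _ in range(POP)]
for _ in range(GENS):
    weights = [x + 1e-9 for x in scores(pop)]          # avoid all-zero weights
    children = rng.choices(pop, weights=weights, k=POP)
    # Point mutation keeps the search exploring nearby antibody types.
    pop = [c ^ (1 << rng.randrange(BITS)) if rng.random() < 0.2 else c
           for c in children]

for ag in antigens:
    best = max(pop, key=lambda a: match(a, ag))
    print(f"antigen {ag:0{BITS}b}: best antibody matches {match(best, ag)}/{BITS} bits")
print("distinct antibody types remaining:", len(set(pop)))
```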

Today, this work provides inspiration for how we might technically protect people from informational viruses such as CA’s disinformation campaigns, just as it inspired Stephanie’s work on combating computer viruses. Media and news organisations could achieve this by promoting diversity in the information they deliver as a matter of principle, in contrast to the current modus operandi: attention-grabbing, polarising headlines driven by profit-seeking algorithms designed to feed us clickbait.
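To sketch what “diversity as a principle” could look like inside a curation algorithm (an illustration of the idea, not any platform’s actual system), consider a greedy re-ranker that trades engagement scores against similarity to stories already selected:

```python
def diversified_feed(stories, k=5, diversity_weight=0.5):
    """stories: list of (title, engagement_score, topic_vector) tuples.
    Greedily picks k stories, penalising similarity to prior picks."""
    def similarity(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = (sum(a * a for a in u) * sum(b * b for b in v)) ** 0.5
        return dot / norm if norm else 0.0

    chosen, remaining = [], list(stories)
    while remaining and len(chosen) < k:
        def value(story):
            _, score, vec = story
            redundancy = max((similarity(vec, c[2]) for c in chosen), default=0.0)
            return (1 - diversity_weight) * score - diversity_weight * redundancy
        best = max(remaining, key=value)
        chosen.append(best)
        remaining.remove(best)
    return [title for title, _, _ in chosen]

# Hypothetical stories: (title, engagement score, crude topic vector).
print(diversified_feed([
    ("Outrage about X!", 0.9, [1, 0, 0]),
    ("More outrage about X!", 0.8, [1, 0, 0]),
    ("Yet more X outrage", 0.7, [1, 0, 0]),
    ("Local policy explainer", 0.4, [0, 1, 0]),
    ("Science feature", 0.3, [0, 0, 1]),
], k=3))
# A pure engagement ranking would show three near-identical outrage items;
# the diversified ranking surfaces the policy and science stories instead.
```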

Other current academic research also shows that such strategies are realisable. “The Wisdom of Polarized Crowds,” published in Nature Human Behaviour, shows how healthy debate on divisive issues actually improves consensus and the quality of information online, in this case in Wikipedia articles.

Integral to this are the rules governing the editing of Wikipedia articles. Those rules, which notably reduce the speed at which people can insert edits, appear to convert ineffective flame wars into slower, more productive discussion. Studying and enacting controls on the speed at which content propagates via algorithms could thus have a substantial effect on social media dynamics.
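One simple form such a control could take (again, an illustrative sketch rather than any existing platform feature) is a per-user token bucket that caps how quickly reshares can propagate, much as Wikipedia’s rules slow the edit cycle:

```python
import time

class ReshareThrottle:
    """Allow at most `rate` reshares per `per` seconds for each user."""
    def __init__(self, rate=3, per=3600.0):
        self.rate, self.per = rate, per
        self.buckets = {}                      # user -> (tokens, last_time)

    def allow(self, user, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(user, (float(self.rate), now))
        # Tokens refill steadily; a full bucket holds `rate` of them.
        tokens = min(self.rate, tokens + (now - last) * self.rate / self.per)
        if tokens < 1.0:
            return False                       # too fast: hold this reshare back
        self.buckets[user] = (tokens - 1.0, now)
        return True

throttle = ReshareThrottle(rate=2, per=60.0)
print([throttle.allow("alice", now=t) for t in (0.0, 1.0, 2.0, 40.0)])
# -> [True, True, False, True]: the third rapid-fire reshare is held back.
```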

While these are all essential long-term treatments to restore the health of our public discourse, the 2020 election season is upon us, and there is no time to wait for governments and tech companies to evolve their policies and algorithm-design principles. So what can we, who are all a vital part of these online dynamics, do to impede the contagion?

Firstly, our UCL studies showed that increased connectivity is always better connectivity: more links mean less polarisation. That means we should all consider re-connecting with people who hold different opinions. Even if you scroll past their posts, the added connectivity will impede the polarisation effects caused by motivated reasoners, be they human or algorithmic.
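Reusing the simulate and echo_chamber_stats functions from the earlier sketch (with the same illustrative assumptions), one can increase the number of neighbours each agent listens to and watch the network’s tendency to split into camps weaken:

```python
# More neighbours per agent (higher k) tends to dissolve the frozen
# local domains that motivated reasoners carve out at low connectivity.
for k in (2, 6, 12):
    local, split = echo_chamber_stats(*simulate(k=k))
    print(f"k={k:2d}: local agreement {local:.2f}, global split {split:.2f}")
```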

Secondly, resist reacting in outrage to online posts, because agencies like CA and Russia’s Internet Research Agency specifically create inflammatory content (some of it blatantly false) in the hope of grabbing attention and creating division. Don’t aid them in their sowing of constant discord. Instead, promote credible, independent content that reflects views and policies you believe in, rather than counterpunching against the shares of others.

Finally, we can all counteract the adverse effects of social media dynamics in the upcoming election by acting less like algorithms ourselves. By reintroducing the human element into our shares and likes, we can impede negative algorithmic effects. So always read articles before posting, get to know the authors, and insert your own comments when sharing, making them as thoughtful and individual as you can. In this way, we can hack the flaws in the current system, the simple-mindedness of algorithms, to reclaim some of our agency and stem the tide of this growing epidemic of online segregation and informational gerrymandering.

(For more on this subject, read Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All)
