Addicted to tech?

The attention economy, distraction inequality, and policy

DataKind UK
5 min read · Oct 30, 2020


Written by Francesca Garcia, member of the DataKind UK Ethics committee

Image by Pexels from Pixabay

Individuals, companies, and communities are becoming increasingly critical of social media platforms like Facebook, Instagram, and TikTok. There might be a number of reasons to distrust these companies; the latest reason is that these platforms are deemed to be highly addictive.

Addiction? Isn’t that a bit extreme?

Across the debate, some dispute the term ‘addiction’. Author Nir Eyal, for example, argues that it is simply wrong to categorise the majority of people as ‘addicted’ to technology; rather, it should be viewed as ‘distraction’, and mislabelling the phenomenon only serves to make people feel helpless, as though they cannot change their habits. Eyal’s commentary is not without strong opposition, however.

Addiction is medically defined as:

“a psychological and physical inability to stop consuming a chemical, drug, activity or substance, even though it is causing psychological and physical harm.”

Perhaps unlike substance addiction, where the ‘harm’ of the drugs and subsequent withdrawal is typically immediate and highly apparent to users themselves, the harm from social media might manifest differently. While some users feel sadder after using social media, and there seems to be a plausible correlation between its use and disorders like body dysmorphia, the connection is perhaps less direct than in drug use. Equally, some users join Facebook or Instagram as children, and might not be fully cognizant of the influence and harm these platforms may have over their choices, ideas and emotions.

Their monetisation strategy relies on engagement

Semantics and other hypotheses aside, what cannot be ignored is how these companies make their money. Their monetisation strategies rely almost exclusively on user engagement: the more seconds, minutes, and hours spent on a platform, the more successful the platform. It’s called the attention economy, after all. And, with the plethora of data points and inferences these companies hold about users, they know exactly which ad to serve, and when, to maximise the chance of a sale.

Using the same techniques and methods as casinos and gambling sites, software engineers at Facebook and its peers actively design their platforms to get users hooked. The goal is not added value, such as utility, happiness, education, or access to accurate information (the list goes on). Indeed, it has been documented that fake news spreads faster than true stories, and by a substantial margin.

And this needn’t be the case. A company’s metric of success doesn’t have to be engagement, even for digital platforms. This is not a utopian view of technology: AI products already exist whose success metric is tied to the opposite. Optimisations might, for example, streamline email so that users spend less time on Google Mail, Twitter, or Facebook (machine learning startup Loomi provides personalised virtual assistants in this way), or surface balanced viewpoints on news items from reputable, fact-checked sources. It is true, however, that these services are typically paid for by the user, unlike social media platforms. And it goes without saying that adtech is extremely lucrative.

Having said this, it is important to give these platforms a reality check in a Covid-19 context. The debate can be quite polarised, but these platforms do have real value. Months of lockdown, during which it was literally illegal to physically visit other people, are isolating, and being able to engage with friends and family virtually is a huge benefit. Even in the context of a pandemic, we saw the biggest collective action in recent history in response to the murder of George Floyd, and social media facilitated the sharing of information for the Black Lives Matter movement, such as where to protest and how to protest in a Covid-safe way. (Of course, the flip side is that a huge amount of misinformation was also spread.)

Image by Andi Graf from Pixabay

Distraction, addiction, and inequality

In addition to these thorny issues, we are beginning to suffer another form of inequality. A decade ago, technology was perceived to divide society because rich kids had access to iPhones and computers while poor kids might not even have WiFi. Whilst for the lowest-income groups this remains true, there is also an increasing number of low-income families whose children spend much more time on screens than their wealthier counterparts, because their parents may not have the time to monitor their devices, or because there are no accessible extracurricular activities where they live. Furthermore, some people are clearly better placed to be resilient to tech addiction (for example, by having access to a garden or a musical instrument). This puts children from poorer households at a greater disadvantage.

OK, so what should we do about it then?

Shoshana Zuboff argues that these platforms need to be outlawed: not only because they are addictive, but also because she believes they are an “assault on human autonomy” and a threat to democracy. This raises some questions. What kind of illegal, entirely unregulated platforms might pop up in their place? Can we really put this back in Pandora’s box?

Another alternative is good legislation. However, technology policy is often reactive rather than proactive, and technology advances quickly while policy does not: policies can become outdated rapidly, or fail to be practical at all. GDPR is a good starting point for questions of data management, but it only protects ‘personal’ information, yet copious amounts of non-personal data can, with machine learning, quite accurately infer, and consequently reveal, a lot about an individual. Furthermore, GDPR places much of the onus on the individual rather than on companies, as individuals must request their own data.

It is clear that policymakers need to be (literally and metaphorically) in touch with the developers of machine learning systems, to ensure that policies are comprehensive, stress-tested, and fit for purpose. There is a lot more to be said on potential policy approaches, but a good start is to put safeguards in place and protect the most vulnerable in society. Activists and celebrities, most famously Jameela Jamil, have successfully campaigned for a ban on advertising diet products to minors on social media platforms. The success of this campaign highlights the progress thoughtful observers can make. As with any policy, there are limitations to this approach: for example, underage users commonly lie about their age to access certain types of content. In any case, it is a step in the right direction. We need many more steps (a marathon, if you will) to tackle the multi-faceted problems of addiction to technology.

Francesca Garcia is Policy & Research Manager for machine learning at Digital Catapult, and a volunteer for DataKind UK, sitting on their Ethics Committee.

If you’re interested in joining our Data Ethics Book Club for more fascinating data ethics debate, please follow us on EventBrite!
