Regulating the Artificial Intelligence of Addiction

Dionysis Zindros
6 min read · Jan 6, 2020


While waiting in the supermarket checkout line, I flick my phone out of my pocket. I scroll through my Instagram feed, all of which I’ve seen before. Tap through a couple of stories on Instagram, Snapchat, and TikTok. It’s my default action while I’m waiting. At the supermarket and the bus stop. At the metro station and the airport lounge. Flick, tap. On average, I pick up my phone and mindlessly scroll through feeds more than 100 times daily.

My usage statistics from last week. Some of us pick up our phone 100 times daily to look at social media.

I’m not alone and this is not a mistake. My Instagram feed is created by an artificial intelligence recommender system based on my past activity, personalized with immaculate precision, with a single goal in mind: Increasing my on-screen time. The algorithm is intelligently designed to optimize my long-term addiction, to interact with me so that I waste the maximum possible amount of my life on the apps. My individual willpower is meagre compared to the zillion computers that train the neural networks powering Facebook. My one-man psychological resistance is futile against their tens of thousands of highly educated expert engineers specializing in machine learning. I succumb.

Like every industry of addiction, creating a dependency lies at the heart of social media. It’s no mistake that we’re called users. Have you tried abstaining from computers completely for a week or a month? It feels slow and lonely. The fear of missing out on all the wonderful things your friends are doing is insurmountable. And when you return, you realize you hadn’t missed much, but you like the fuzzy feeling of being there. Scroll, flick.

Unlike tobacco and alcohol, social media does not cause physical addiction. As with gambling, the addiction is psychological. Because the industry is so new, regulation is scarce. Facebook remains under regulatory scrutiny for election meddling after the Cambridge Analytica scandal. A lot of controversy has sprung up around political advertising on social media, and Twitter even banned such ads. Nevertheless, the politics of social media addiction are rarely, if ever, spoken of. If governments are expected to regulate other industries of addiction, shouldn’t they be regulating the artificial intelligence that intentionally gives rise to social media addiction?

A Greek tobacco product includes a health warning label.

The question then becomes: how should we regulate? Other industries have age restrictions, advertising limits, and helpline requirements in place. Some gambling websites even employ artificial intelligence to detect addictive behavior patterns and close down the accounts of users who are spending their paychecks month after month. Social media is in a unique position, because we can offer users meaningful transparency into its practices, helping them understand exactly what these artificial intelligence algorithms are doing and why. Only then can the user start taking informed decisions.

In traditional artificial intelligence, a neural network is trained based on past data. In the case of a newsfeed, this dataset includes the past behavior of you and other people, such as whether you spent a lot of time on a post writing a comment, reading it, liking it, reacting to it, or clicking it. The training is performed by focusing on a particular optimization goal, a mathematical objective mandating that we wish to maximize user engagement, on-screen time, advertisement clicks, or some other measurable quantity. Once the network has been trained, it can then be used to take decisions. It can be asked to predict whether a new post will be interesting to you and to give a confidence level of how interesting it thinks it will be. Your newsfeed is then constructed by asking the neural network to predict which posts will optimize the stated goal — typically of keeping you on-screen the most.
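For the technically inclined, here is a minimal sketch of that pipeline. Everything in it is invented for illustration: the feature names, the data, and the tiny logistic-regression model stand in for the far larger neural networks and signal sets real platforms use. The point is only to show the shape of the loop: train on past engagement, then rank candidate posts by the model’s confidence that they will keep you on-screen.

```python
import numpy as np

# Hypothetical engagement signals from past posts (one row per post):
# [seconds_viewed_norm, liked, commented, clicked] -- invented features.
rng = np.random.default_rng(0)
X_past = rng.random((500, 4))
# Label: 1 if the user kept scrolling afterwards. This label *is* the
# optimization goal: predicted future engagement.
y_past = (X_past @ np.array([2.0, 1.5, 1.0, 0.5])
          + rng.normal(0, 0.5, 500) > 2.5).astype(float)

# Train a tiny logistic-regression "recommender" by gradient descent.
w = np.zeros(4)
b = 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X_past @ w + b)))      # predicted engagement probability
    w -= 0.5 * (X_past.T @ (p - y_past)) / len(y_past)
    b -= 0.5 * np.mean(p - y_past)

# Build the "feed": rank candidate posts by how confident the model is
# that each one will keep the user engaged.
candidates = rng.random((10, 4))
confidence = 1 / (1 + np.exp(-(candidates @ w + b)))
feed_order = np.argsort(-confidence)
for rank, idx in enumerate(feed_order[:3], 1):
    print(f"#{rank}: post {idx}, predicted engagement {confidence[idx]:.2f}")
```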

A puppy to keep you reading this post.

When the neural network answers these questions, it tells us what decisions to take (such as whether to include a particular post in your Facebook feed or not), but not why it took that decision. In fact, it functions as a standalone brain, and even human experts on artificial intelligence, its own creators with access to its source code, cannot explain why a particular decision was taken other than saying something of limited interest such as “oh, the math worked out that way when these past signals were combined.” A recent area of research in the field of artificial intelligence is Explainable Artificial Intelligence. When explainability is introduced into an artificial intelligence system, the system is asked both to optimize a particular goal, such as keeping you on the screen, and to produce a human-readable report of why it took a particular decision. Mind you, these explainability reports are only readable by experts in the field and not by the general user. However, such a report can be used as the raw material to ultimately produce an explanation readable by anyone.
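To make the idea concrete, here is a hedged sketch of the simplest kind of explainability report: feature attributions for a linear model, where each contribution is just weight × feature value. The feature names, weights, and values are hypothetical; real explainability research generalizes this idea to deep networks with far more sophisticated attribution methods.

```python
# Hypothetical per-decision explanation for one candidate post.
feature_names = ["time spent on similar posts", "likes on this topic",
                 "comments left recently", "clicks on this advertiser"]
weights = [2.0, 1.5, 1.0, 0.5]      # learned by the recommender (invented here)
features = [0.9, 0.1, 0.7, 0.0]     # this user's signals for the post (invented)

# For a linear model, each signal's contribution is weight * value.
contributions = [w * x for w, x in zip(weights, features)]
ranked = sorted(zip(feature_names, contributions), key=lambda t: -t[1])

print("This post was selected because:")
for name, c in ranked:
    if c > 0:
        print(f"  - {name} contributed {c:.2f} towards the engagement prediction")
```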

Regulators should mandate explainability from artificial intelligence systems that take decisions about our lives. Initially, we can ask that companies publicize explanations of what their optimization goals are. It is one thing to theorize, almost conspiratorially, that social media software is trying to take over your life. It’s quite a different thing to see these optimization goals stated clearly and cynically, even if hidden inside Terms of Service legalese. Such transparency will allow journalists to compare and contrast different services and study whether the cliché about social media is true: “If you’re not paying, you are the product.” Eventually, it will steer social media companies towards adopting optimization goals that are more benign and aligned with the users’ interests, because they can no longer hide potentially nefarious purposes.

The next step in regulation is to mandate that explainability is embedded in every piece of content that the user sees, disclosing why a particular post is shown to the user. If you’ve ever wondered “Why the hell is it showing me ads for career options in telemarketing?” then now you will have an answer. This includes ads as well as organic content.

A word of caution about explainability is needed here. Today, explainable AI systems cannot take decisions as good as those of traditional AI systems. It seems there will always be a trade-off between good explainability and good decisions, although research is improving the decision quality of explainable AI systems. Furthermore, explainability produces reports that need to be interpreted before they can be shown to a general audience, and there are multiple subjective ways of doing that, including using more artificial intelligence. This may make it harder for regulators to demand explainability. Encouragingly, research indicates that it’s possible to create explainable AI reports without harming trade secrets.

As a thought experiment, I’ve included examples of what explainable reasoning for social media content could look like. While my examples are science fiction today, I hope they point in the right direction of what we could do as a society to regulate addiction powered by artificial intelligence.

An explanation of why a particular Instagram post is being chosen for your feed by artificial intelligence. The confidence towards the optimization goal (20 more minutes of scrolling), the personalized data (browsing history) significantly contributing towards the optimization decision as well as features extracted from the content (image recognition of puppies) signalling in favour of the decision are highlighted.
An explanation of why a particular post was chosen by the feed’s artificial intelligence algorithm. The confidence towards the optimization goal (10 minutes of screen time), the personalized data significantly contributing towards the optimization decision (sexual orientation, political beliefs) as well as features of training data (angry reaction, commenting) signalling in favour of the decision are highlighted.
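The same thought experiment can also be written down as structured data that a platform could be required to attach to every post. The sketch below is purely hypothetical: the field names and values are invented to mirror the mock explanation above and do not correspond to any platform’s actual disclosure format.

```python
import json

# A purely hypothetical per-post disclosure, mirroring the mock explanation above.
explanation = {
    "post_id": "abc123",
    "optimization_goal": "maximize expected on-screen time",
    "predicted_effect": "approximately 20 more minutes of scrolling",
    "confidence": 0.87,
    "personal_signals_used": [
        "browsing history on similar accounts",
        "time spent on previous puppy photos",
    ],
    "content_signals_used": [
        "image recognition detected: puppy",
    ],
}

print(json.dumps(explanation, indent=2))
```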

