Beyond Tech Addiction

Rebooting digital well-being in the wake of the “techlash”

Erin McAweeney
Data & Society: Points
Sep 30, 2019 · 7 min read


By Data & Society Research Analyst Erin McAweeney and Research Lead Mary Madden

“Have Smartphones Destroyed a Generation?”

In its September 2017 issue, The Atlantic posed this question about people born between 1995 and 2012 — or, as the article referred to them, “iGen.” After nearly two decades of largely unfettered optimism about the influence of digital technologies on society, Americans’ attitudes have started to shift. Since 2015, the share of Americans who believe technology companies have had a positive impact on society has fallen from 71% to 50%, according to a 2019 Pew Research Center survey.

A growing chorus is voicing concerns about tech’s potential to diminish users’ health and well-being. Smartphone “addiction” has become a topic of interest, with news outlets describing its effects with terms like “technology withdrawal” and “digital heroin.” In Silicon Valley, former designers and engineers have started calling out design mechanisms that target users’ “reward centers” to maximize screen time and manipulate their “most vulnerable human instincts.”

The current wariness about the health impacts of our devices has inspired a new marketplace of products — many of them digital wellness apps — that perform the circular work of using technology to better manage our technology use. However, quick technological fixes for complex social problems may not produce the results we want and need. Helping people interact with technology in a “healthy” way requires a shared understanding of what constitutes unhealthy use. To get there, we need better research into which design features promote digital well-being and which detract from it.

Helping people interact with technology in a “healthy” way requires a shared understanding of what constitutes unhealthy use.

Concerns about the lure of screens and their potential to harm behavioral health existed for decades before the invention of smartphones. In particular, there is a long history of moral panics surrounding children’s vulnerability to new media. As UCLA child psychologist and researcher Yalda T. Uhls has noted, in the 1950s, television was said to make kids “aggressive and irritable as a result of over-stimulating experiences.” In the 1980s, there was widespread fear of the arcade game Space Invaders, which was supposedly turning kids “crazed, with eyes glazed, oblivious to everything around them.” And in the early 2000s, the culprit wasn’t “alcohol or drugs… it was video games,” which were described as “as powerfully addictive as heroin.” The concept of “Internet Addiction Disorder” was first introduced in 1995 by psychiatrist Ivan K. Goldberg as a parody intended to critique the incessant pathologizing of everyday behaviors. Yet the term, which fit conveniently into a longstanding cycle of hyperbolic concerns and questions about media effects, ultimately resonated with researchers and the general public alike.

Research on problematic internet, cell phone, and smartphone use has recently gained momentum, centering on the question of whether the new wave of persuasive technology poses unique risks. Parents have been sounding the alarm, expressing a wide range of concerns about their screen-obsessed kids. Prominent pediatricians have cited evidence suggesting that heavy technology use may cause the brain to overproduce dopamine and cortisol, creating a physiological dependency. Others argue that the development of self-regulation and executive function skills — the ability to concentrate, prioritize tasks, and control impulses — might be impaired when a parent offers an iPad game to pacify a child rather than a hug or attention. However, just as with previous waves of new media, it may be difficult for researchers studying how technology affects public health to approach these complex issues with nuance when the problem is already framed as pernicious and pervasive. Scholars have written about the limitations of studying media effects and causality, particularly in light of the reactionary cycle that accompanies each successive wave of new media technology. One thing nearly everyone agrees on is that more research into these questions is urgently needed.

Despite the widespread coverage of tech addiction in popular news media and the emergence of organizations that advocate for healthier engagement with tech, tech addiction is not widely accepted as a disorder in the research community. Instead, researchers generally agree that our relationship to smartphones is neither fundamentally good nor bad. A review of phone addiction studies found only limited empirical support for three key features of addictive behavior: loss of control, tolerance, and withdrawal. And to date, excessive social media or smartphone use has not been recognized as a disorder by the World Health Organization or in the Diagnostic and Statistical Manual of Mental Disorders. Amy Orben, a psychologist who studies teens’ mental health and technology, told NPR that “a teenager’s technology use can only explain less than 1% of variation in well-being. It’s so small that it’s surpassed by whether a teenager wears glasses to school.”

Limiting screen time is not a panacea.

In accordance with the prominent addiction narrative, many of the proposed design-based solutions for problematic technology use center on tools that help users manage their time on a platform or device. For example, the SMART Act, proposed by Senator Josh Hawley (R-MO) to fight “tech addiction,” would require tech companies to limit screen time through scrolling limits and “natural stopping points” for users, by tracking the time users spend on platforms, and by limiting each user to 30 minutes a day on each platform. Several major tech companies, including Apple, Google, Twitter, and Facebook, have already debuted suites of digital health tools and features, many of them based on self-managed time limits, time tracking, and dashboards that report time spent on specific apps. Implicit in the design of these tools is the notion that changing the user’s behavior, through various constraints on screen time, is the solution to a range of complex and interconnected social, psychological, and market-driven dynamics.

However, limiting screen time is not a panacea. One of the challenges in trying to understand problematic tech use is the tendency of many studies to “flatten” a wide range of digital environments and behaviors into a single measure of screen time. The call to disengage from technology also raises questions about equity. After all, assumptions about the benefits and feasibility of disengaging from technology are largely rooted in an upper-middle-class problem of abundance; these concerns may look dramatically different across the socioeconomic spectrum. To what degree could healthier technology designs account for these and other differences across various communities of users?

Outside of the public debate about tech addiction, the field of Human-Computer Interaction (HCI) has generated a robust set of frameworks for operationalizing a more holistic concept of digital “well-being” in design practice. Within this body of research, screen time alone is rarely used as a measure of meaningful interaction. In general, the field seeks to systematically incorporate well-being measures into the design and evaluation of technology. And while the concept of “well-being” itself has been the subject of study and scrutiny across disciplines, HCI researchers work to understand users’ needs and the specific social contexts that inform their technology use.

Central to HCI researchers’ work on digital well-being is uses and gratifications theory, a model with a nearly 50-year history of exploring users’ interactions with popular media, including VCRs, television, and social media. This literature largely posits that technology users are not simply the passive recipients of the media they consume, but are instead active agents seeking out media experiences that meet specific needs. For instance, uses and gratifications theory has been used to examine “meaningful” versus “mindless” interactions with technology. This more nuanced concept of technology use was explicitly acknowledged by Mark Zuckerberg during Facebook’s first quarter 2018 earnings call, when he said Facebook’s well-being research “suggests that when people use the internet for interacting with people and building relationships, that is correlated with all the positive measures of well-being that you’d expect… whereas just passively consuming content is not necessarily positive on those dimensions.”

Ultimately, the addiction model may create a self-fulfilling prophecy.

Ultimately, the addiction model may create a self-fulfilling prophecy. As one HCI study argues, research on technology use that begins with the assumption that tech addiction is a design problem to be solved tends to produce bad results and bad analogies: it “uses measures that are borrowed wholesale from scales confirmed to measure other lifestyle addictions, such as gambling.” Despite having features that mimic slot machines, a smartphone is not the same as a casino. Assuming that screen time is the problem can crowd out consideration of other indicators of whether time spent was valuable or meaningful. With the momentum of the techlash behind the call for well-being-focused technology, there is a window of opportunity to ask for more: for deeper changes in design and new business models that support meaningful interactions with the technologies that have become so deeply embedded in our lives.

Erin McAweeney and Mary Madden are researchers with the Health and Data initiative at Data & Society.
