From Epidemics to Algorithms — How Technology Design Shapes Our Lives and Our Health
1854
In 1854 a cholera outbreak struck London. Debilitating cholera outbreaks had been a global scourge for decades. People would come down with diarrhea and vomiting, become dehydrated, and be dead within days or even hours. Pandemic waves would wipe out thousands, and no one really knew why it was happening.
At the time, the tides of the Industrial Revolution were flowing through rural farms and villages, carrying people away to massive, roiling cities at an unprecedented scale. All of these new arrivals swelling the populations of urban centers needed to eat. In mid-19th century London, cowsheds, slaughterhouses, and grease-boiling dens lined the streets. No doubt the stench was unimaginable, but the real danger was underfoot. Animal droppings, rotting matter, and other putrid contaminants filled the streets. Filth from the outside would mix with cesspools of human waste that collected underneath building cellars and flow into sewers. Eventually the London government decided to step in. They started dumping the waste into the Thames. One hundred and sixty-five years ago, as a teeming technological transformation was reshaping the very nature of the modern human experience, not only had sanitation not been invented yet, but the entire concept of germs was still unknown. Louis Pasteur wouldn’t propose his germ theory until 1861, so no one knew that cholera was spread by ingesting the cholera bacterium, commonly found in water contaminated by feces.
In 1849, more than 14,000 people died of cholera in London alone. That was the year a London physician named John Snow published an essay titled On the Mode of Communication of Cholera. The prevailing scientific theory of the day was that disease was caused by “miasma,” or particles of “bad air,” but Snow proposed a radically different idea: there was a direct connection between the disease and the city’s water supply. By the time the 1854 outbreak rolled around, Snow was on a mission. The disease would claim 127 lives within 3 days, but Snow went out to the areas where people were getting sick and dying, neighborhoods people were fleeing in panicked droves, to talk with the affected residents. From his conversations, he plotted the cases of cholera on a map, and what he found led him to trace the outbreak back to its source: a contaminated water pump on Broad Street.
Snow brought his findings to the town officials, and although he still didn’t fully understand the mechanism by which cholera was spread, and they still didn’t buy his crazy theory, he nevertheless managed to convince them to remove the handle from the pump, making it impossible for anyone to draw water from the contaminated well.
Reverend Henry Whitehead, who served in a church in the Broad Street area at the time wrote about the outbreak thirteen years later, “The removal of the pump-handle… had probably everything to do with preventing a new outbreak, for the father of the infant, who slept in the same kitchen, was attacked with cholera on the very day (September 8th) on which the pump-handle was removed. There can be no doubt that his discharges found their way into the cesspool and thence to the well. But, thanks to Dr. Snow, the handle was then gone.”
Snow’s work would lay the foundation for the entire field of epidemiology, but what is it that John Snow actually did? He conducted user research and uncovered that a popular technology was contributing to a widespread degradation in human health. His solution? A change to the user interface.
Removing the pump handle didn’t eradicate global cholera pandemics, of course. It took sanitation and the development of a model for how bacteria spread, not to mention wide social education on the subject. As it happens, Vibrio cholerae, the cholera-causing bacterium, was isolated the same year as the outbreak, but it would still take decades for this finding to become well-known and accepted. And it took a reconceptualization of health as something that occurs at the scale of a population. In 1848, England passed the Public Health Act, a groundbreaking move that set in motion an entire legislative discipline based on the impact of environmental factors on human health. The Public Health Act reframed the stewardship of the health of a public as the responsibility of a government. Less than two centuries downstream from this single piece of legislation, its ripple effects have transformed the conditions in which people around the world live, and even how long they can expect to be alive. In the UK, life expectancy in 2011 was almost double what it was in 1841.
But the currents of history flow from the removal of the Broad Street water pump handle to our present day in another way as well. This origin tale is a seminal story about how technological revolutions always come with public health consequences. It is a story about how the pervasive technologies that define our lives affect our health. And it is a story about how decisions about the user experience can affect the health experience of an entire society. For good, and for ill.
2011
By the second half of the 2010s it had become clear something was going wrong with America’s teenagers. As the first waves of Gen Z, the generation born after 1996, started entering college, the kids-these-days narrative that had been pinned on their Millennial predecessors (entitled, profligate dilettantes) began to erode into something else.
“Students Flood College Mental-Health Centers,” the October 2016 Wall Street Journal headline announced to its readership, old enough to have ostensibly just deposited their children on campus the month before. Ohio State, The Journal reported, had seen a 43% jump in the number of students being treated at the university’s counseling center since 2011. “At the University of Central Florida in Orlando, the increase has been about 12% each year over the past decade. At the University of Michigan in Ann Arbor, demand for counseling-center services has increased by 36% in the last seven years.”
The article came on the heels of the American College Health Association’s Spring 2016 survey of 95,761 students, which found that 17% of the nation’s college population had been diagnosed with or treated for anxiety problems, and 13.9% had been diagnosed with or treated for depression during the preceding twelve months. By the 2018 survey, more than 60% of college students responded that they had experienced “overwhelming anxiety” in the past year. Over 40% were saying they felt so depressed they’d had difficulty functioning.
“Rates of teen depression and suicide have skyrocketed since 2011,” psychologist and author Jean Twenge wrote in The Atlantic. “It’s not an exaggeration to describe iGen as being on the brink of the worst mental-health crisis in decades. Much of this deterioration can be traced to their phones.” According to Twenge’s research, “Significant effects on both mental health and sleep time” appeared after two or more hours per day on electronic devices. After three hours, there was a 35% greater likelihood of “a risk factor for suicide, such as making a suicide plan.”
As the prevalence of psychological distress among teenagers grew, so did the degree to which young people’s lives were enmeshed in digital technology, with each successive cohort more immersed than the last. In 2011 only 23% of teens had smartphones; by the end of the decade it was 95%.
2011 also happened to be the year Facebook switched to a new machine learning algorithm for its newsfeed. Arriving a year before the company’s IPO, this algorithm, which would take more than 100,000 factors into account to determine the best content to serve to a user at any given time, was essential for extracting the kind of engagement Facebook needed in order to achieve the next level of its meteoric ascent. As CEO Mark Zuckerberg would succinctly put it in a 2018 congressional hearing, when asked how it is that Facebook makes money if it gives its service away to users for free: “Senator, we run ads.”
Indeed, for its business to be successful, Facebook depends on an algorithm that’s very effective at showing people the content that’s going to get them to keep scrolling and watching and engaging and refreshing for longer and longer, being served those ads Facebook makes its money on all the while. Today, of course, all of the most popular social technologies are, by definition, powered by algorithms explicitly designed to extract attention. Much has already been written about the way these algorithms seem to inevitably become radicalization engines. YouTube’s recommended videos sequence, for instance, autoplays in a literally endless daisy chain reverse-engineered for the dopamine loop. To keep us engaged and sticking around longer, the content the feed algorithms serve us must become increasingly emotionally inciting and increasingly extreme, continually upping the ante lest we develop a tolerance and click away. You might go in to find out how to fix a timing belt, and come out a Nazi. The algorithms are a black box.
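To make that incentive structure concrete, here is a deliberately simplified sketch of what an engagement-optimized ranker does in principle. This is not Facebook’s or YouTube’s actual code; the features, weights, and post names are invented for illustration. The point is only that the objective being maximized is predicted attention, and nothing in that objective measures the viewer’s wellbeing.

```python
# Illustrative toy only: the features, weights, and posts below are invented.
# Real feed-ranking systems use machine-learned models over far more signals.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_clicks: float        # model's guess at click probability (0-1)
    predicted_watch_seconds: float # expected seconds of viewing
    emotional_intensity: float     # 0.0 (neutral) to 1.0 (highly inciting)


def engagement_score(post: Post) -> float:
    """Score a post purely by how much attention it is expected to capture."""
    return (
        2.0 * post.predicted_clicks
        + 0.01 * post.predicted_watch_seconds
        + 3.0 * post.emotional_intensity  # inciting content ranks higher
    )


def build_feed(candidates: list[Post], limit: int = 10) -> list[Post]:
    """Return the posts most likely to keep the user scrolling."""
    return sorted(candidates, key=engagement_score, reverse=True)[:limit]


if __name__ == "__main__":
    posts = [
        Post("timing-belt-tutorial", 0.30, 240.0, 0.10),
        Post("outrage-clip", 0.55, 180.0, 0.95),
        Post("family-photo", 0.40, 20.0, 0.20),
    ]
    for p in build_feed(posts):
        print(p.post_id, round(engagement_score(p), 2))
```

Run the toy and the outrage clip floats to the top of the feed, not because anyone decreed that it should, but because the scoring function rewards whatever is predicted to hold attention.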
We have barely begun to reckon with the ways these persuasive technologies built to extract population attention are influencing population health.
In 2018, an experimental, controlled study at the University of Pennsylvania showed a causal link between increased time spent on Facebook, Snapchat, and Instagram and detrimental effects on mental health. The study, published in The Journal of Social and Clinical Psychology, found that participants who limited their use of the social media platforms to 30 minutes per day (brute-force negating the work hundreds of thousands of designers, product managers, and technology creators do every day to achieve the exact opposite result) experienced statistically significant decreases in depression and loneliness compared to a control group that continued using the apps as they normally would. That study, published barely over a year ago, was the first of its kind.
Months before its IPO, Facebook decided to conduct its own research. Though it wouldn’t come to light until the findings were published two years later, in January of 2012 Facebook randomly selected 689,003 users to become unwitting test subjects in a human experiment. (Experimenting on people without their informed consent is generally frowned upon in research; Facebook would later argue participants had consented to this kind of manipulation when they accepted the terms and conditions contract, and, arguably, hadn’t we all?) For the one-week run of the experiment, some Facebook users would get exposed to more negative posts showing up in their feeds, and others to posts with a more positive tone. You’ll never believe what happened next.
JK. You will.
As The New York Times reported:
The researchers found that moods were contagious. The people who saw more positive posts responded by writing more positive posts. Similarly, seeing more negative content prompted the viewers to be more negative in their own posts.
The goal of all of this, Facebook says, is to give you more of what you want so that you spend more time using the service — thus seeing more of the ads that provide most of the company’s revenue.
If John Snow, the father of epidemiology, were alive today, what would he make of these new, contagious phenomena? Technologies of viral hyper-growth with the capacity to affect the health of billions of people who have contracted them? Like cholera in the time before germ theory, we too have encountered an epidemic whose mechanisms we do not yet fully understand.
2020
Today we are living through a new era of technological transformation. But not an entirely unprecedented one. Much as during the Industrial Revolution, the era that gave rise to the discipline of public health, technology is once again fundamentally reshaping the very nature of how we live our lives.
The early days of digital, networked technology, when we thought it was all fun and games and, you know, for kids, before we understood the ramifications (we arguably still don’t fully, but at least we are beginning to), are like the time before sanitation, before germs were identified, before we began to conceive of health as a public concern and understood that environment can influence health at scale. From sanitation to building codes to emissions standards, decisions are made every day about the built environments in which we live, and those decisions affect our health in profound ways.
So what about this environment?
This space in which we increasingly spend so much of our days?
Just as our physical environment shapes our health and wellbeing, so does our digital environment. In every age, people’s lives are designed by the design of the technology of the day. And as the curious cases of the Broad Street water pump handle and the Facebook algorithm show: our health experience is tied to our user experience.
In the 21st century, digital technology is a determinant of health.
For designers and technologists, it’s fashionable to talk about “design thinking” and how we can bring it into our organizations to make the products and services we create more intuitive, easy to use, and sticky. But today it’s easier than ever to design highly usable experiences that degrade the health of the people who use them. So how do we move beyond just design thinking? How do we embed health-thinking into our products and design decisions?
What I’ve realized in the course of designing healthcare technology as a UX Lead at athenahealth is that many of the design values inherited from mainstream technology don’t actually help me create better products. After all, if I’ve gotten a doctor to spend “more time on site,” that’s less time with a patient. I’ve failed. This dynamic is often also true for enterprise technology, cybersecurity, govtech, fintech, edtech, and any other technology space where success is measured by how well we can help users get sh*t done.
Over the past year I’ve been giving talks on Health-Thinking for Product Design, in which I present a new UX maturity framework based on values drawn from a range of health disciplines — from public health to emergency care to pain science — as a source language for both technology creation and criticism. Whether we work on consumer apps or enterprise software, the choices we make directly affect the physical, mental, social, and professional health of our users. As designers, and more broadly, as technology creators, we are agents of public health. So what do we do with that? How do we create technology aligned with health-centered design values?
These questions are for you if you are designing technology, making business decisions about technology, or crafting public policy in a world mediated by technology. They sit at the intersection of practical methods and tools for design practitioners, strategic business decisions for long-term user and product health, and public health legislation informed by the realities of a digital century. They are also the opening of a conversation for all of us living in the modern world, surrounded by digital technology as we are, about how our experience of health, both personal and societal, is shaped by the design of the technologies that shape our experience of the world.
As Sandro Galea, the Dean of the Boston University School of Public Health, says, “Let us think about the forces that generate health so we can create a world that encourages those forces.”
Let us think about the forces that generate health so we can create technology that encourages those forces, as well.