When Things Go Beep in the Night

The AI system that is keeping me alive is ruining my life.

Laura Forlano
Data & Society: Points
Mar 15, 2023


Photo by Megan te Boekhorst on Unsplash

I am a Type 1 diabetic who uses an AI-driven pump system to deliver the insulin I need to stay alive. But the AI system that is keeping me alive is ruining my life. My annual doctor’s visits always begin with the same statement: “I can’t continue to live like this,” I say. My calls to the customer service department of the company that makes my pump — some nearly two hours long — are typically met with empathy but without explanations. My conversations with Silicon Valley software engineers about the problem have ended with them expressing disbelief, and the face-to-face equivalent of SMH (shaking my head). Countless internet searches have yielded a wide variety of tips and tricks — ranging from the scientific to the almost mystical. But none of that got me closer to a good night’s rest.

For four years, I used a “smart” insulin pump that required frequent calibration in order to ensure the continued accuracy of the sensor system. The constant demands of the system meant that it was impossible to sleep through the night more than a few times a week. My husband also complained of sleep interruption; I read in an online support group about one couple who were forced to sleep in separate bedrooms due to the system’s failure to fit into the temporalities of everyday life.

The promise of the AI system is that it can more dynamically adjust blood sugar when compared to the previous linear system, which used pre-determined amounts of insulin at specific times of day and night. By using sensor data and adjusting these amounts in real-time, the device aims to help diabetics achieve better control of their blood sugar — avoiding severe lows and easing high blood sugar down to a statistically “healthier” curve.

But while the system has achieved its ultimate aim for some, the requirement of near-constant human interaction — in my case, sometimes totaling over 24 alerts and alarms per day, based on data collected in July 2019 — is a testament to a harmful and unwelcome design. As experts on a variety of computational technologies will attest, automation is often not really automated. Human labor is a necessary component that is often pushed out of view or, in my case, into the middle of the night. The labor of calibrating the sensor system means 1) pricking your finger with a small needle called a lancet; 2) putting the tiny drop of blood into a meter (a separate device that interoperates with the system); and 3) wirelessly sending the blood sugar data to the sensor system. In order for the system to continue working, I was prompted to perform this sequence of operations approximately every six hours, and sometimes more frequently.

When I say that this AI system I rely on is ruining my life, I mean it (somewhat) humorously and provocatively. But such systems have the potential to cause significant harm — and they do. Extreme sleep interruption and deprivation result in a range of social and economic consequences that serve to dehumanize Type 1 diabetics. This is because the “technology solution” intended to keep us alive actually destroys our ability to work, maintain relationships with family and friends, live happily, and even feel human. Clearly, the loss of daily connection and intimacy with one’s partner as a result of the use of a medical device may have detrimental social and emotional costs for people with disabilities. This deserves serious consideration when we talk about what it means to design technologies with a sense of ethics, accountability, and responsibility for the human lives that are being made and unmade with these systems.

In recent years, AI has been heralded for driving a wide variety of innovations in healthcare, including medical diagnostics, robotic surgery, and organ donation. But while these so-called advances may improve care, they can cause significant harm when their social and economic consequences are ignored. Last year, for example, a company that manufactures bionic eyes went bankrupt, leaving recipients of these retinal implants suddenly unable to see. In my case, the tech fix was in some ways worse than the condition of diabetes itself. And while individual experiences differ, based on the number of people who have shared their own stories on social media and in Facebook groups, it is easy to conclude that tens of thousands of people have suffered in the same way as me.

Rather than dismiss this particular system as bad engineering, unlucky consumer choice or unethical technology, it’s more useful to think of it as a bellwether for a world in which autonomous systems are likely to be increasingly embedded in everyday life. How might we characterize different kinds of algorithmic harm? Who is considered to have been harmed and why? What are the philosophical and legal precedents for these cases?

Instead of treating these failures as aberrations, we should recognize such problems for what they really are: the default, in systems created by fallible humans. Reflecting on my own disabled identity, I’ve come to think that technology itself is disabled — full of flaws and failures, gaps and glitches, seams and symptoms, errors and omissions, bugs and biases. Rather than seeing these qualities as representing a lack of something, we can learn from disabled identities that celebrate our lives as expansions of what it means to be human and adopt a positive relationship to these anticipated technological breakdowns. And rather than viewing humans and technologies as separate from or even opposed to one another, we can see them in more relational terms. Disabled people are no strangers to this: We develop mutual and interdependent relationships with technologies because they are literally who we are.

I write from the perspective of “the disabled cyborg” to acknowledge the ways in which my disability, my technologies and my politics are shaped together through my experiences of living with machines that keep me alive. I continue to persist despite the gaps, failures and breakdowns I write about. My existence is literally and precariously tied to the fate of my machines. Thus, software updates, supply chain delays, international airport security procedures, power outages and other relatively mundane everyday events can be scary and life threatening. Humans and technology can fail together, and we can also succeed together if we reject the myths of objectivity, perfection and solutionism that tend to pervade our current debates around AI systems.

While this approach might sound illogical or counterintuitive, it will greatly improve the social consequences of autonomous systems. Thinking through not only the immediate, first-order consequences but also the secondary and tertiary impacts allows for an essential broadening of the problem space and opens up a more diverse set of stakeholders, people, contexts, and situations. Asking “What could go wrong?” allows us to embrace a generative mode of questioning and a more critical and speculative form of design futuring. It is in these gaps that alternative futures for human-machine relations can be experienced. So, to ensure that our technological innovations do not cause more problems than they solve, let’s start by asking: “What happens when this fails?”

This year, I began using a new “smart” insulin pump and sensor system. I no longer need to calibrate the sensor in order for it to work, and my sleep is greatly improved. This doesn’t absolve these systems and their designers of their ethical responsibilities. And it certainly doesn’t give me back the four years of my life that I spent living with a system that was incompatible with, and even hostile to, human life. With every update and upgrade, there are inevitably a host of new issues to consider — as new functions are made available, other possibilities are taken away. But I’ll save that for another day, once I’ve gotten some more sleep.


Professor, College of Arts, Media, and Design, Northeastern University. Editor of Bauhaus Futures (MIT Press, 2019) and digitalSTS (Princeton, 2019).