Bits and Behavior

This is the blog for Amy J. Ko, Ph.D. at the University of Washington and her advisees. Here we reflect on our individual and collective struggle to understand computing and harness it for justice. See our work at https://faculty.washington.edu/ajko

A muted, fuzzy photo above the clouds, with the bright sun in the corner, possibly from an airplane window.
No matter how high we fly, we still live on the ground.

We are not bits: AI and dehumanization

Amy J. Ko
5 min read · Mar 10, 2025


Amidst federal chaos, genocidal attacks on trans civil rights, existential attacks on universities and research, and looming economic decline, big tech shows no signs of backing off generative AI hype. In fact, the aggressive obsession with artificial general intelligence (AGI) and its promise of economic transformation seems to exist either entirely independent of the President's authoritarian impulses, or because of them: both seem to share a common idea that people are mere abstractions, inconsequential aside from what capital they generate. That's led me to ponder where this shared viewpoint stems from. Below is a hypothesis, and maybe, a foundation for resistance.

Nearly eighty years ago, Claude Shannon made an assumption: that any message, of any kind, could be reduced to a sequence of bits. That profound conjecture, which began narrowly as a way to improve the reliability and clarity of phone calls, has been proven broadly right. Digital computers, the internet, and every piece of data on it demonstrate the incredible range of this simple idea: books, movies, music, messages, documents, images, structured data, unstructured data, and nearly every imaginable form of human artifact seems to have a digital analog. This assumption, even if not true at the limit, seems to be pragmatically unbounded, to the point where a massive collection of bits can represent probabilistic generative AI machines capable of generating nearly all of the digital information ever captured, and rearrangements of it, on command.
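Shannon's reduction is easy to see for digital artifacts. As a small illustration (a sketch of my own, not anything Shannon wrote), a few lines of Python can turn a sentence into a sequence of bits and back again, losing nothing along the way:

```python
# Shannon's conjecture, illustrated: a digital message reduces to bits,
# and the bits reconstruct the message exactly.
message = "We are not bits."

# Encode the text as UTF-8 bytes, then as a string of 0s and 1s.
bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))

# Decode the bits back into the original message, byte by byte.
decoded = bytes(
    int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)
).decode("utf-8")

assert decoded == message  # the round trip is lossless -- for digital things
```

The round trip is perfect precisely because the message was digital to begin with; the essay's point is that the messages of life are not.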

And yet, Shannon’s assumption was not fully true. We know this in our bodies and in our minds. The particular ache of our aging knees and the way it changes on cold days. The stigma we might feel as a party of one at a bustling restaurant. The particular sound of a muted record after a tragic scratch. The anticipatory squeak of our newborn child before their bowel movement. None of these can be captured as a sequence of bits, because once captured, they are something else. And yet they are the kind of potent messages that drive our emotions and our behaviors. They are the messages of life.

Generative AI, for all its promise to replicate life, comes with this same limitation. In the most horrifyingly surveilled future we can imagine, all observable messages of life might be replicated by prompt. And yet, the replicas will only ever be that: divorced from context, they have no meaning, other than the meaning we give them. Imagine, for example, an image we generate for our social media profile. It only has meaning because we made it, at a particular time in our life, for a particular audience, and because it made our digital projection feel more like us, or made us laugh. And that meaning is part of the message only when the recipients know us, the things that make us laugh, the particular tragedies that befell us at that moment, and inspired the image. The algorithms that might translate that generated image into a machine-readable message about who we are and what our emotional state is know none of this, unless we disclose it. And why would we, to a machine that does not know us or love us?

Our digital worlds, for all the disembodied, decontextualized, and often extraordinary communication and experience they enable, can only ever be that. They need the backdrop of our lives to have meaning. And the generative AI that guzzles up our messages, repeating them back to us in aggregated tones, will always need that meaning, always seeking more context, more organic exhaust, more insight into our inner lives. But its search will be futile, because inside us is not a low entropy sequence of bits, but flesh, evolution, consumption, decay, a billion organisms in a mystery dance of replication, competition, equilibrium, and death. A billion parameters are nothing compared to 100 trillion cells, each a world of their own, independent but also aligned, in an infinitely layered tapestry of yearning, survival, and chemistry. That we make messages is a miracle; that we make computational messages that can make messages of their own is a miracle too. But that we exist, and that we know and feel we exist, is still the biggest miracle of all, and one we are far from replicating.

The rising lawless fascism of the United States mirrors Shannon’s binary conjecture. It starts from Trump’s tendency to see other people not as people, but as objects. Objects to be judged, exploited, deported, manipulated. People, and the context of their lives, are stripped away, leaving only ideas to be erased or elevated. There is no love, or empathy, or understanding, or relationship in Trump’s transactions. There is no truth or identity; only extraction and profit. In the same way, generative AI is constructed, by people, to do much the same, recording, stripping, and recombining all of the incidental traces of our humanity in the pinnacle of context collapse. Our words, images, sounds, conversations, even our private expressions of fear, doubt, love, and respect reduced to collective probabilities. Both Trump and large language models do not see humanity. They see the idea of humanity through a kaleidoscope of digital categories, hierarchies, and predictions.

I don’t think Shannon expected his theory of messages to be taken as universal. He certainly was interested in AI, with his early experiments with mechanical mice. He was a key catalyst in our 80-year journey to create intelligent machines. But he was also very human: a juggler, a unicyclist, a chess player, a frisbee thrower, an inventor. He was an embodied man, who lived in the material world, who created a family, who loved, who literally tried to walk on water. Near the end of his life at the turn of the century, he developed and eventually died of Alzheimer’s. He did not glitch, he did not reboot. He slowly and tragically lost his memory, and then his speech. The messages he sent in those final years were not ones of silence, but ones of context, subtext, and implication: his loved ones knew who he had been, what he would have said, and what he was trying to say, but couldn’t. He was not bits, nor an object, and neither are we.

Written by Amy J. Ko

Professor, University of Washington iSchool (she/her). Code, learning, design, justice. Trans, queer, parent, and lover of learning.