Way More Things Are Conscious Than We Realize. Here’s Why.

Jeremie Harris
Published in The Startup
18 min read · Oct 21, 2020

I’m as sober-minded a scientist as they come. I used to be a quantum physicist, I work on concrete problems in artificial intelligence, and I haven’t had so much as a sip of alcohol for literally minutes.

But I’ve been trying to wrap my mind around consciousness lately. And no matter how I look at it, I can’t help but draw a surprising — and even disturbing — conclusion:

I think far, far more things are conscious than we realize.

Let me explain.

Note: this post was inspired by a proposal explored by Greg Egan in Permutation City, a novel that he authored in 1994. If you find it interesting, then I highly recommend giving the novel a read.

Thanks to Edouard Harris for reviewing and discussing the post at length with me, and for helping me to think more crisply about dust theory. Thanks to Nick Cammarata for reviewing the draft and providing helpful comments.

Prequel: scaling up our empathy

When you put a bunch of humans together, they eventually find ways to assemble themselves into interesting structures. They form governments, start companies, and join organizations like the UK Roundabout Appreciation Society, which is a real thing that actually exists.

We can think of these human structures as the organs and appendages of a vast “human super-organism”. Just as our hands and our kidneys allow us to function and achieve our goals, each government, company and club plays a specific role in advancing the interests of the human super-organism.

“Wait…” you ask. “The interests of the human super-organism? Jeremie, are you actually saying this thing is conscious? Come on, now.”

I used to think this way myself: how could the human super-organism be conscious, if every component organism that it’s made from — every human being on the planet — is making their own decisions? The super-organism just reflects the choices we make as individuals, so how could it possibly have a mind of its own, and a consciousness to inhabit it?

It’s easy to imagine the super-organism as a kind of mindless drone, unable to make its own decisions or think its own thoughts. But does this really make sense? Consider this: the behavior of a human being — yours and mine — is determined entirely by the behavior of each cell that makes up our bodies.

From this perspective, we’re slaves to whatever processes are unfolding in our cells at any moment in time, unable to act in any way that doesn’t correspond to the precise “wishes” and “desires” of all of our cellular building blocks. So our actions are every bit as constrained by factors outside our conscious control as those of the human super-organism.

Organisms that exist at different scales and levels of organization use different communication strategies. Our cells communicate via complex biochemical mechanisms that involve passing molecules and ions back and forth to one another, sending along messages that we lack the means to interpret or the context to understand.

The human super-organism is presumably similar: it relates to the world in a way that is as incomprehensible to us as the electrochemical signalling strategies used by our cells. And because we can’t communicate with cells or super-organisms, we don’t perceive any hints that they have a genuine conscious experience of the world.

Dogs salivate, so we assume they’re hungry. Flies avoid our swats, so we assume they don’t want to be killed. But cells… what do cells do exactly? They don’t communicate things like hunger or fear of being swatted in ways that we can understand, so we assume that they’re simply not conscious.

From that perspective, the “consciousness continuum” that many people imagine, running from atoms to cells to insects to humans, isn’t a continuum of consciousness at all, but rather a continuum of our ability to perceive consciousness.

So it’s our incompatible communication styles that make it hard to notice the consciousness of organisms that exist at vastly different spatial scales from our own. But the same is true for organisms that communicate at different temporal scales: plants don’t seem particularly conscious or relatable until you watch them interact with their environment on longer timescales than those we consider relevant for most human-to-human interactions.

None of this is definitive evidence that plants or super-organisms are actually conscious, of course. But it’s a hint — a teaser for what’s to come.

In what follows, I’m going to try to convince you that we can’t see or even conceive of most of the forms of consciousness that exist in the universe, and that there may be rich forms of conscious experience playing out right under our noses. I’m not going to tell you it’s because of quantum mechanics, or because “consciousness is the foundation of the universe”, or whatever new-agey woo-woo you might be worried about. All I’m going to assume is that consciousness arises from the physical state of the brain.

By the end, you may or may not agree with me on any of this (I’m not sure I do). But I hope we’ll both have learned something fairly surprising about what consciousness really is.

You are your program, not your hardware

Your body takes in new atoms when you inhale or consume food, and releases old ones when you shed dead cells, exhale, or go to the bathroom. On average, the atoms in your body are exchanged for atoms from the outside world about once a year, so you’re quite literally made from completely different stuff today than you were a few months ago.

But you still have just about all the same quirks, hobbies, hopes and dreams. So whatever those atoms are, they aren’t “you”.

So then, what are you?

If you ask most machine learning researchers — and increasingly, even most people — they’ll tell you that the human brain is essentially a computer, and that everything we do and experience is the output of some complicated program that we don’t fully understand, that’s being run on our brain’s neural hardware.

If that’s true, then “you” are a computer program, and so am I. The things that are true about all computer programs must also be true about us.

Here’s one thing that’s true about all computer programs: it doesn’t matter what hardware you run them on — they’ll always produce the same outputs. A tic-tac-toe algorithm works just the same whether it’s being instantiated on a PC, a Mac, in your brain, or on some experimental optical computer.
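To make the tic-tac-toe example concrete, here’s a minimal sketch of such a program — a standard minimax player, my own illustration rather than anything specific from this post. Run it on a PC, a Mac, or anything else that executes Python, and it returns exactly the same move:

```python
def winner(board):
    """Return "X" or "O" if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_move(board, player):
    """Exhaustive minimax: return (score, move) for `player`."""
    w = winner(board)
    if w is not None:
        return (1, None) if w == player else (-1, None)
    moves = [i for i, s in enumerate(board) if s == " "]
    if not moves:
        return (0, None)  # draw
    opponent = "O" if player == "X" else "X"
    best = (-2, None)
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score = -best_move(child, opponent)[0]
        if score > best[0]:
            best = (score, m)
    return best

# X can win immediately by playing square 2 -- and every machine that
# runs this program will agree.
print(best_move("XX O O   ", "X"))  # -> (1, 2)
```

The algorithm is defined entirely by the text of the program; nothing about the answer depends on the machine that happens to execute it.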

Likewise, the “you” program that’s running in your brain right now would work just the same way — and would presumably give rise to the same consciousness — if it were running on silicon chips rather than cells. If you believe that you’re fundamentally nothing more than a computer program running on biological hardware, then we should be able to swap out your hardware without taking away what makes you “you”.

This might not seem like a ground-breaking observation. But as we’re about to see, it’s actually the trapdoor that gets us to wonderland.

Oh look, a rabbit hole!


If we could understand exactly what role each and every cell in your brain or body plays in storing the “you” program, we could create a copy of you on a computer — any computer — as long as it had enough memory.

All we’d need to do is encode the structure and connections between every cell in your body — or if needed, every atom in every cell in your body — into bits and bytes, and voilà: a new you, every bit as valid as the original.

But of course storing something as complex as the “you” program will take a lot of memory. We might need to distribute our storage across many different computers (using perhaps a service like AWS or Google Cloud). Remember, this is a move we’re allowed to make: “you” aren’t your hardware!

This new, distributed “you” may be stored in bits and pieces on many different computers in many different parts of the planet, but as long as those pieces are split up in ways that are compatible with, and replicate, the “you” program that was originally your physical brain, “distributed you” should still be every bit as valid — and conscious — as “human you”.

At this point, distributed you exists as nothing more than bits on a computer. And there’s a bit for every degree of freedom that’s required to fully capture the nuances of what it means to be “you” — every memory and association. I have no idea what level of detail would be required to do this, whether cell-level data would be sufficient, or if atomic or even subatomic effects would need to be factored in. But whatever that level is, we’ll assume our electronic copy of you captures it.

Now, suppose AWS, or whatever service we’ve been using to store our copy of you, reduces their servers’ memory capacity dramatically. As a result, “you” are now being stored not on 4 servers, but on 40, or 400 servers.

Actually, scratch that — let’s take it even further. Suppose that you’re being stored on as many different servers as there are bits required to represent the “you” copy, so that each mini-server is literally storing just one bit of information about you.

You’re now spread all over the world in a nearly continuous way — you exist as a kind of dust, disembodied and distributed atom by atom, bit by bit, all over the planet.

What’s more, from one moment to the next, AWS might decide to move one of these bits over from one mini-server to another. Depending on these mini-servers’ locations, the little piece of “dust you” that they encode might have moved from one continent to another in the process, all without compromising the integrity of the whole! As long as AWS is keeping track of which server stores which relevant bit — as long as they remember the “right” way to interpret the contents of all their mini-servers — the “you” copy is intact.

But notice that caveat: the “you” copy persists only as long as AWS keeps track of how to interpret the bits in their mini-servers. In other words, at this stage “dust you” is quite literally composed of two things: first, the bits stored in those AWS mini-servers, and second, the interpretation of what those bits actually mean. Apply the wrong interpretation (say by confusing mini-server 170 with mini-server 443) and you’ll see nothing but incoherent garbage — it’s only when you look at the contents of those mini-servers in the right way that a mind pops into view.
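To see how much work the interpretation is doing, here’s a toy sketch in Python — the message, the shuffled server labels, and the readout rule are all invented for illustration. The same stored bits yield a coherent readout under the right server-to-bit mapping, and noise under the wrong one:

```python
import random

# The "raw material": one bit per imaginary mini-server.
message = "you"
bits = [int(b) for ch in message for b in format(ord(ch), "08b")]

# Scatter the bits across randomly labeled mini-servers.
server_ids = list(range(100, 100 + len(bits)))
random.shuffle(server_ids)
storage = {srv: bit for srv, bit in zip(server_ids, bits)}

# The "interpretation": which mini-server holds bit 0, bit 1, ...
interpretation = dict(enumerate(server_ids))

def read_out(storage, interpretation):
    """Reassemble the message by reading bits in interpreted order."""
    ordered = [storage[interpretation[i]] for i in range(len(interpretation))]
    chars = ["".join(map(str, ordered[i:i + 8])) for i in range(0, len(ordered), 8)]
    return "".join(chr(int(c, 2)) for c in chars)

print(read_out(storage, interpretation))  # -> "you"

# Confuse the server labels and the very same bits become noise.
wrong = dict(enumerate(sorted(server_ids)))
print(read_out(storage, wrong))  # almost certainly unreadable garbage
```

Nothing in `storage` changed between the two readouts; only the story we told about which server holds which bit.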

Now, suppose that a developer’s finger slips — they accidentally delete one of these trillions of mini-servers! And wouldn’t you know it, that server happened to store a bit that represented one of the most important neural connections in your brain. Your entire personality, and perhaps some of your most cherished memories, hinged on that one neural connection!

What happens to you?

Thinking outside the server

Dust “you” only ever existed in our imagination to begin with.

We imagined that we were storing a genuine copy of “you” because we interpreted certain atoms as bits, and because we interpreted those bits as being part of a “you” copy.

And now, thanks to the negligence of an AWS employee, we’ve lost one of these bits — a bit that we had previously interpreted as representing a very important part of “you” in our stored copy.

But can’t we just interpret another atom, somewhere else in the universe, as representing the missing bit?

Let’s make this concrete. Imagine that we’ve been representing our bits using the charges of atoms in our mini-servers. We’ll say that a charged atom represents a “1” bit, and an uncharged atom represents a “0” bit.

Suppose that the bit that was lost when its mini-server was deleted had a value of 1.

Let’s also imagine that there’s an atom floating in the air next to that mini-server, that happens to be charged.

We *could* just interpret that atom as representing the missing bit, couldn’t we? Would that make you whole again?

Recall that the only reason we believed that we were storing a copy of “you” in the first place was that we chose to interpret each of our mini-servers as representing a specific part of you. Now that one of those parts has been deleted, what’s wrong with choosing another atom, somewhere else, and simply interpreting it as a new mini-server?
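Here’s that move as a toy sketch — the “atoms”, their charges, and the server labels are all invented for illustration. When a mini-server dies, we simply re-point the interpretation at any other charged object, and the readout is restored:

```python
# Toy model of the "charged atom = 1, uncharged atom = 0" encoding.
mini_servers = {0: 1, 1: 0, 2: 1, 3: 1}    # server id -> stored charge
interpretation = {0: 0, 1: 1, 2: 2, 3: 3}  # bit index -> server id

def read_bits(substrate, interpretation):
    """Read the bits out in interpreted order."""
    return [substrate[interpretation[i]] for i in sorted(interpretation)]

original = read_bits(mini_servers, interpretation)  # [1, 0, 1, 1]

# A developer's finger slips: server 2, which held a 1, is deleted.
del mini_servers[2]

# But a stray "atom" near the rack happens to be charged...
stray_atoms = {"atom_A": 1, "atom_B": 0}

# ...so we re-point the interpretation at it and read again.
substrate = {**mini_servers, **stray_atoms}
interpretation[2] = "atom_A"

restored = read_bits(substrate, interpretation)
print(restored == original)  # -> True: the copy is whole again, by fiat
```

The substrate lost a bit, but the readout is identical — all it took was editing the interpretation, not the world.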

At the very least, it’s unclear to me what the problem with this move would be. Sure, it means using a new interpretation of what parts of the universe count as “bits that represent you”, but why would one interpretation of the arrangement of matter in the universe be any more or less valid than another? How could the story we tell ourselves about the roles that different particles play in defining “you” affect your conscious experience?

Dust theory

There’s no reason to limit ourselves to swapping out one mini-server bit for one nearby charged atom. In fact, we could interpret literally any atom or particle in the universe as representing any bit required to complete the “you” copy.

So why don’t we do that? Why don’t we just interpret a copy of you into existence right now, by assuming that the charge of some random atom in the Andromeda galaxy represents the bit value that we would have stored in mini-server 1, that the charge of another random atom somewhere else in the universe represents the bit value of mini-server 2, and so on?

Why don’t we interpret the particle dust the universe is made from to create a copy of you in our imaginations? Wouldn’t that copy be every bit as “real” as any other copy we might have stored on a cluster of Amazon servers, or in a single computer, or in your biological brain itself?

This is “dust theory”: things exist simply because they can be interpreted to exist. Our experience of the world is itself nothing more or less than one interpretation of the arrangement of particles and fields all around us.

Dust theory suggests that any interpretation that can be applied to the components that make up the universe — every story that we could imagine telling — is legitimately real. And just as we can interpret “you” into existence from the positions and movements of particles all across the universe, we can equally interpret other consciousnesses, and other conscious beings into existence by the same mechanism.

When the ancient Greeks gazed up at the stars and saw bulls and hunters, they were doing the very same thing that AWS does every time it interprets bits in its servers into intelligible information for human consumption. When Netflix chooses to represent a datastream as pixels on your screen and sound vibrations in the air, they’re doing the same thing as I am when my brain converts text on a novel’s page into a simulated world with characters and scenery, action and emotion. According to dust theory, reality itself is in the eye of the beholder, and a pattern is real even if it emerges from pure noise.

A sufficiently large group of particles can be interpreted as representing just about anything — including the minds of creatures that don’t even exist in the version of the universe we see every day. But what we see around us is just that: nothing more than one version, one interpretation, of the universe. Other versions surely exist, but we’re unable to notice them because we can’t relate to the way they encode information, any more than we can relate to cellular communication or the communication patterns of the human super-organism we discussed earlier.

But as our technology improves, more of these interpretations may come into view. We may eventually recognize life encoded in the atomic states of dust particles spread across planets, galaxies, or the vast expanse of the universe itself.

Dust dynamics

So far we’ve been thinking about a static copy of “you”, that stores the states of every cognitively relevant neuron and cell in your body.

But consciousness isn’t a static experience — it’s intrinsically dynamic. So while we’ve seen that we can interpret a static brain into existence, what about the dynamic experiences of thought and perception?

Consider this: I may be able to interpret the charge of some atom in one galaxy, and the spin direction of another atom in a different part of the universe, as bits that encode part of a brain. But if I wait just a fraction of a second, those “bits” will change as their atoms bump into things and acquire a new charge, spin direction, or other property. The more bits I need to represent your brain in my interpretation, the faster I can expect my representation to degrade. Meaningful, self-reflective consciousness probably requires a fair bit of complexity to emerge, so an interpretation that produces a long-lived copy of any conscious brain is going to be very rare.

So most ways of interpreting your brain into existence from the states of atoms in distant galaxies will only be a faithful representation of your brain for a very short period of time. Very quickly, the correlations between the atoms we depend on to encode your brain in our interpretation will fall apart. Presumably then, the experience of that brain would be of existing for a brief moment — like a flash in the pan of conscious experience — before dissolving into garbled incoherence and noise.

At best, we might find an interpretation of the behavior of particles in the universe that happens to reflect the time evolution of your brain as it really would unfold for perhaps several milliseconds. Out of pure chance, the atoms might jiggle in just the right way to cause our interpretation to correctly simulate a few milliseconds of conscious experience — but even then, rapid decay would follow.

But recall that AWS was able to relocate bits of the “you” copy to different mini-servers at will, without losing the integrity of the copy. As long as they kept track of which mini-server they should interpret as each part of “you”, their ability to recognize the “you” copy hidden in the dust persisted.

So why can’t we do the same? We can interpret your brain into existence by looking at the spins, charges and other properties of appropriately selected particles throughout the universe in just the right way. If, a second later, that interpretation no longer matches a reasonable brain state, why can’t we just change our interpretation to one that does? Isn’t that exactly what AWS does, when it assigns a “you” bit to a new mini-server?

Here’s where we land the plane. Our perception of the world arises from two things: first, from the states of the particles and fields all around us; but second and equally crucially, from our interpretation of what those particles and fields represent. And because the universe is made of an immense number of particles, spatial locations, times and fields, there’s enough raw material — enough dust — to interpret just about anything you’d care to imagine into existence.

So, why aren’t cells conscious? Presumably, because we simply aren’t interpreting them in a way that reveals them to be conscious.

This isn’t surprising: we’re comfortable saying that humans are conscious as long as we see “indications of conscious activity” that we know how to interpret, like language, non-verbal cues, and other forms of communication. Similar cues exist in many animals, which is why we’re fairly comfortable modeling them as conscious entities as well. Quite often, this myopic focus on easy-to-interpret indicators of consciousness causes us to mistakenly assume that people aren’t conscious, when in fact they are (for example, in the case of certain comatose patients).

Why aren’t human super-organisms or rocks self-aware? Why is space dust itself not teeming with conscious life? Again, the answer may be the same: they are conscious, but in a way that we can’t appreciate because we simply haven’t found the right way to look at these things — the right interpretation of the behavior of their components — to reveal the consciousness hidden within them.

The trouble with dust theory

If dust theory is correct, then there are presumably an enormous number of conscious experiences waiting to be discovered in what appears to be pure cosmic dust.

But how many of these conscious experiences really exist, and what form would they take? One would imagine that there are far more ways to interpret simple worlds — say, worlds containing nothing more than a single conscious brain in otherwise empty space — than complex ones. If that’s true, then the vast majority of imagined worlds would presumably be far simpler than our 14-billion-year-old, formerly dinosaur-occupied, iPad-populated techno-jungle. In the space of all possible consciousness-containing universe interpretations, the vast majority should look totally random and incoherent, except for the minimum level of order required to create and maintain a conscious mind.

Why, then, do we find ourselves here? I honestly don’t know. But we may be able to spot a hint of the answer to this question by looking back at the “you” copy we considered earlier.

I argued earlier that “you” are really two things: raw material (which I called “dust”), and an interpretation of that raw material. If that’s true, then any version of you, whether it’s made of dust or cells, must contain enough information in total — spread out between the raw material, and its interpretation — to reconstruct a complete “you” copy. But which component contains the most information will vary from one kind of “you” copy to another.

For example, “dust you” offloads most of its information content to its interpretation. In order for you to explain to someone how they could reveal a “you” copy in the cosmic dust, you’d have to explain to them how trillions of particles should be imagined to relate to one another, for every millisecond of the existence of the copy. The raw material — the dust — can be anything, and it doesn’t carry much relevant information at all.

“Biological you” is different: here, the interpretation is actually pretty simple — and it’s entirely specified by a handful of equations that encode the laws of physics. Half a dozen (or perhaps far fewer!) lines of math are in principle all that’s needed to map one biological brain state onto the next. The substrate does just about all the work!

But “distributed you” — the kind that exists as a copy stored in just 3 or 4 servers in different parts of the world — is somewhere in between. AWS has to keep track of which servers to interpret as which parts of your brain, but within each server, simple physical laws once again determine the interpretation required to bring coherent brain chunks into view.
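A back-of-the-envelope comparison makes the difference vivid — every number below is invented purely for illustration. Naming which particle encodes each bit costs roughly log2(P) bits per particle, so the description length of “dust you”’s interpretation dwarfs that of a fixed physical law:

```python
import math

# Rough bookkeeping for where the information lives in each kind of
# "you" copy. N is a made-up number of bits specifying a brain state.
N = 10**15  # purely illustrative

# "Dust you": the interpretation must say, for each of the N bits,
# which particle in the universe encodes it. Naming one particle out
# of P candidates takes about log2(P) bits, so the interpretation
# alone costs:
P = 10**80  # rough particle count of the observable universe
dust_interpretation_bits = N * math.log2(P)

# "Biological you": the interpretation is just the laws of physics --
# a fixed rule whose description length doesn't grow with N at all.
physics_interpretation_bits = 10**4  # generous: a few pages of equations

print(f"dust interpretation: ~{dust_interpretation_bits:.1e} bits")
print(f"physics interpretation: ~{physics_interpretation_bits:.1e} bits")
```

On these made-up numbers, the dust interpretation is around thirteen orders of magnitude larger than the physical-law interpretation — and unlike the law, it keeps growing with every bit of brain you want to encode.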

Through this lens, it certainly seems like an odd coincidence that we just happen to perceive a world where complexity lives in the raw material that serves as the substrate for intelligence, rather than in the interpretation. And perhaps that makes sense: before there can be an interpretation, there presumably needs to be an interpreter — a system with enough cognitive capacity to keep track of how every part of a substrate needs to be interpreted to give rise to consciousness or intelligence.

Perhaps that system must be made of stuff itself: “computer you” only exists as long as the computer that stores the “you” bits keeps track of the right way to interpret those bits, by storing its interpretation on some other physical material. Likewise, “globally distributed you” is only “you” as long as AWS has stored the correct interpretation of the contents of their servers somewhere in another physical server.

So arguably, “cosmic dust you” would only exist if we were to actually find and physically store the right interpretation of all the dust particles that are needed to keep a coherent copy of you going over an extended period of time. This requires much more than just “imagining a dust you into existence”. Remember: almost all of the information required to create “cosmic dust you” is in the interpretation. In order to build you from the dust, we’d have to: 1) build a computer with enough memory to store the relationships between trillions of dust particles; and 2) actually go out and find a set of dust particles that can be interpreted, by our stored relationships, as a “you” copy. This would be much, much harder than just building a simulation of you on well-structured computer chips in the first place.

From that perspective, it’s perhaps a bit less surprising that “cosmic dust you” can in principle be conjured into existence — because in practice, it’s about as hard to achieve as my intuition says it would be.

But it’s also possible that there may be a deeper principle at play, whereby the universe — for whatever reason — genuinely “prefers” simple interpretations to complex ones. Speculating about that would fall way, way above my pay grade though.

The bottom line is that the universe is damn hard to make sense of. No matter how rational, scientifically minded and sober you want to be about it, consciousness just seems to come out weird.

I’m writing a book about the physics of consciousness! If you’d like me to let you know when it’s out, just leave me your name and email via this form 🙂

If you have a comment or question, I’m always game to chat on Twitter at @jeremiecharris.


Co-founder of Gladstone AI 🤖 an AI safety company. Author of Quantum Mechanics Made Me Do It (preorder: shorturl.at/jtMN0).