On the Ridiculousness of Uploading Consciousness.

Ides Parmentier
Jul 28, 2021


Something has been bothering me for a while. It bothers me every time I hear someone, whether futurist, techno-geek, transhumanist, or philosopher, fantasize about living forever through the uploading of consciousness.

Image: “Kurzschluss 12V20A” (a short circuit), via Wikimedia Commons

The idea of consciousness as something that could potentially be uploaded has to take the notion of the brain as a computer as reality instead of metaphor. In every age, people tend to interpret what they don’t understand in terms of what they do. Descartes imagined organisms functioning as intricate mechanical clocks. Freud thought of human beings as hydraulic or pneumatic systems, with pumps and pressures and valves. In our age, it is the computer. Imagining that consciousness is software is imagination guided by what humans happen to have invented; it reduces life to that. It forgets that no one actually understands consciousness. It forgets that the computer metaphor is just a stand-in for that lack of understanding, and that the theorizing is built on the stand-in. How long do you think it will take before ‘brain as hardware/software’ looks as ridiculous as ‘emotions as hydraulics’?

Many are so convinced that organisms are in fact machines, that the brain truly does work like a computer, that they believe the only thing standing between them and their dream of eternal life is the rate of technological development. (And before you accuse me of attacking a straw man, an accusation that would be valid up to a point: a while ago I started reading a book on the modular theory of the mind and evolutionary psychology. I think the writer, Robert Kurzban, stated “organisms are machines” flat out about ten times in the introduction alone, referring back to Pinker and Dawkins. It rather put me off.)

People who believe in the uploading of consciousness have to believe they are their minds and that their minds are data, and nothing more. If only what can be measured is real, and if what can be measured exactly can be replicated exactly, then it is assumed that the original and the replica will be one and the same. They smell immortality.

To me, the whole endeavor seems misguided and absurd. What could potentially become possible someday is that brain mapping technology advances far enough to develop a digital mind based on the specific complexity of a specific brain at a specific point in time. A problem, however, is that in reality our brains are always changing. New neural pathways are strengthening while others are atrophying. Neuroplasticity continues until death. New memories form physical substrates, so the separation between brain matter and thought is not clear cut at all. Our memories are not stored in our brains like data on a hard drive. Each time we remember something it gets written again, and our emotional state affects how it is remembered, which affects the neural substrate, and so on. Even intention can affect neuroplasticity. That is a major difference from the hardware of a computer, which stays pretty much the same until it starts failing through wear or malfunction, and on which you can simply erase the software and install something else. Erasing and reprogramming was actually tried with brains at some point: electroconvulsive therapy was used to try to “wipe the slate clean”, after which the brain would supposedly be “reprogrammed”. It was a complete disaster that permanently damaged the victims. (You can find the fuller story in the second episode of Adam Curtis’ The Century of the Self.)

A digital brain would be limited by the extent of our understanding of organic brains, and I believe there is a lot more to them than we can measure or ascertain at this stage. To think that the brain is just wiring is, again, to make an assumption framed in terms of past human technological inventions. A rather crude and arrogant assumption, considering how little we understand.

But here is what I honestly don’t get: why do they all assume that they, the people hoping for an upload before they die, would get to feel what it is like to be the copy, as if they are not stuck being the entity that they are? However intricate the digital reproduction, it will always be a new entity based on the organic entity and mind, starting out as a simulacrum and quickly becoming something other. Why would they assume the transmigration of their consciousness into the new digital entity? Why would they assume it is they themselves who would endure in this new form? Isn’t there forever a vast metaphysical gap separating the two? Their consciousness will be as embodied as it ever was. And as for the potential consciousness of the digital entity, could we ever go beyond guessing and assuming? Even if the new digital entity seemed functional, and even if it thought itself to be the original, its path would diverge instantly, as the organic “embodiedness” is removed, everything that is not measurable at the time of development is left behind, and a new way of sensing reality becomes the only option. Would there not be a high likelihood of it going insane, or being insane from the start, unequipped to process what is going on?

Perhaps they believe that our consciousness can potentially be separated from the rest of our being, while remaining intact, under a hardcore dualist view of things? Perhaps they believe that a synthetic substrate could potentially be developed based on advanced brain mapping? And perhaps they believe that a way could be found to make their consciousness migrate to the synthetic substrate, similar to how you can move data from one piece of hardware to another? This would be a belief more akin to religious faith, rather far removed from any scientific foundation. They would have to ignore that when data is moved, it doesn’t actually move, of course: a copy is made in the new location, and the data remains in the old location until it is deleted. And of course, an unlimited number of copies can be made. Also, the assumption of potential substrate compatibility appears to be based on nothing. I think the problem, assuming it is possible in theory and ignoring the metaphysical gap, is a bit bigger than trying to run Windows software on a Mac operating system.
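To make the copy-versus-move point concrete, here is a minimal Python sketch. The little dictionary standing in for a “mind” is purely my own illustrative stand-in, not a claim about how such data would actually be represented:

```python
import copy

# Purely illustrative: a toy "mind as data" structure (my own stand-in,
# not a claim about how a mind would actually be encoded).
original = {
    "memories": ["first day of school", "a summer storm"],
    "substrate": "organic",
}

# "Moving" data is really copying it: after the copy is made,
# the original still exists until someone deletes it.
replica = copy.deepcopy(original)
replica["substrate"] = "digital"

# The two objects are independent from the moment the copy is made.
replica["memories"].append("waking up on a server")

print(original is replica)        # False: two distinct objects
print(len(original["memories"]))  # 2: unchanged by the replica's new memory
print(len(replica["memories"]))   # 3: the replica has already diverged

# And nothing stops us from making any number of further copies.
clones = [copy.deepcopy(original) for _ in range(5)]
print(len(clones))                # 5
```

The only point of the sketch is that what we call “moving” data is a copy followed by a deletion, and that the copy and the original diverge the moment either one changes.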

Is the craving for immortality driving efforts to develop the technology for creating digital minds, while the transmigration of consciousness onto a synthetic substrate is in fact a metaphysical impossibility, because the dualistic, mechanistic view of life is wrong, making the whole endeavor a fool’s errand? I think it is. I think it’s absurd. But I suppose lots of useful stuff can be invented along the way, and it might someday end in general AI. That AI might live forever; just don’t expect it to be Ray Kurzweil or Sergey Brin or Elon Musk, or whoever else is hoping for this.

