Welcome to the Machine

On Free Will (and Artificial Intelligence)

Sam Mateosian
Yarn Corporation
10 min read · Aug 9, 2017


Mashup of Photos by h heyerlein and John Schnobrich on Unsplash

TL;DR

Understanding yourself as a complex thinking machine provides an alternate framing for the “illusion” of free will that might promote ethical behavior and help maintain one’s sense of meaning.

The Problem of Free Will

The debate over the existence of free will dates back at least to the ancient Greeks. To summarize the problem: we understand the physical structure of the universe to be the result of deterministic processes — the laws of nature that govern everything from the movement of molecules to the birth of stars; human beings are physical constructions, the result of the same deterministic processes as everything else; our minds are the product of processes within the body and brain; ergo, what we think must be the result of deterministic processes. The difficulty appears to lie in the fact that, as conscious entities, it feels as if what we think is then the cause of what we do. It feels to us as if we make choices and then act on them.

I chose to sit down and write this article because that’s what I wanted to do. If I had not made that choice, I would be doing something else at this moment. It certainly feels to me as if I have a freedom to choose, moment-by-moment, what to do next.

On its face, this feeling of freedom seems to be at odds with determinism. If my mind is free to choose what my body does, it would be an example of “downward causation,” impossibly breaking the direction of causality. Our minds, themselves a result of deterministic processes, cannot “decide” what happens next. They can have no new input into the chain of causation. Our decisions are the result of the universal, all-encompassing deterministic process, not the other way around.

The science is pretty clear in showing that the chain of causation is, in fact, unbroken. Benjamin Libet demonstrated that the neurological impulse responsible for an action occurs well before a person’s conscious “decision” to act. If the brain is making decisions about how to act before we’re even conscious of the choice, what should we make of this feeling of free will?

The prevailing scientific understanding of the mind asserts that free will does not actually exist, but rather, is an illusion.

The Illusion of Free Will

The philosopher Stephen Cave, in his 2016 Atlantic article "There's No Such Thing as Free Will," summarizes it well: "The contemporary scientific image of human behavior is one of neurons firing, causing other neurons to fire, causing our thoughts and deeds, in an unbroken chain that stretches back to our birth and beyond. In principle, we are therefore completely predictable."

Cave describes "the illusion of free will" as follows: "The conscious experience of deciding to act, which we usually associate with free will, appears to be an add-on, a post hoc reconstruction of events that occurs after the brain has already set the act in motion."

Anyone who’s struggled with writer’s block (or addiction, or dieting, or exercise) deeply understands that we are truly not free to do as we wish.

So while it certainly felt as if I had freely “chosen” to write this article, upon reflection, it’s clear that that choice was dependent on the entire chain of events preceding it: that I had an opening in my day, that this article (and not some other) “wanted” to come out, that I had a goal to make my ideas more public, that I had consumed certain information, that it could support my work to start a new company, and so on and so forth, all the way back to the very beginning of me, and to the dawn of time.

An Open Can of Ethical Worms

This lack of freedom creates an ethical problem: if we are not freely making choices, how can we be held accountable for our actions? And, beyond the ethical and legal questions, there appears to be real danger in the popularization of the notion of free will as an illusion. As Cave explains, “when people stop believing they are free agents, they stop seeing themselves as blameworthy for their actions. Consequently, they act less responsibly and give in to their baser instincts.”

Experiments by social psychologists have shown that participants primed with statements about the illusion of free will were more likely to cheat and to make other, more selfish decisions. We fall victim to what the philosopher Sam Harris describes as “mistaking determinism for fatalism.” When we hear that “free will is an illusion,” what we effectively hear is, “our choices don’t matter.”

I can feel this even in myself. The more I dwell on the deterministic nature of the universe, the more I notice a sense of creeping nihilism. Why should I care about [insert giant problem here] when the fate of the world has been sealed since birth?

It appears that when human beings believe in free will, it makes us better at being human (and vice versa). So, how can we hold on to our sense of meaning while accepting the apparent non-existence of free will?

It is with this question in mind that I’d like to suggest an alternate framing for the illusion of free will.

A Thought Experiment of Thinking Machines

Let’s think about the mind as the result of unimaginably complex thinking machinery. As a thought experiment, let’s imagine that we are powerful artificial intelligences contained inside of the most advanced robotic humanoid bodies conceivable. Let’s imagine that these machines are capable of the full range of human emotion: sadness, rage, melancholy, desire, love, pain due to the loss of loved ones, and so on.

The film “Ex Machina” examines artificial intelligence as it relates to sociopathy.

Imagine that we machines reproduce by a process of mating in which we exchange and recombine encoded information about how to make a new, slightly different robotic offspring based on the code of the parents.

Imagine that we spend our robotic lives in the pursuit of a variety of individual objectives like acquiring energetic resources, finding mates, raising our offspring, all in service of the larger objective to ensure the continuing existence and success of our robotic species in the world.

Imagine that we find both joy and sorrow in this existence. Imagine that we question our nature and our true purpose.

Imagine that we organize our robotic lives into complex societies consisting of billions of robots living in relative harmony, sometimes at peace, often punctuated by violent conflict, sometimes collaborating, always exchanging massive amounts of information, sometimes conspiring with friends and sometimes with enemies, and always in search of the resources necessary for our own survival, for that of our offspring, and for that of our clans.

At every step, the thinking apparatus of these complex machines (us) must process an enormous amount of information about the world: the physical structure of our surroundings; the conditions inside our own robotic bodies; the actions of other thinking entities who might impact our own ability to act; the set of goals we have; the opportunities for action available to us; the likelihood that those actions succeed; and so on.

From moment to moment all of this incredibly advanced thinking machinery makes thousands (maybe millions) of minute decisions about what actions to take, how to move our bodies in the world, how to respond to the stimulus presented, and what changes to make to our list of immediate and future objectives.

Some of those decisions are small and inconsequential. Some are more serious. Should we consume the energy module we just found now, or save it for later? Should we attack a fellow being and steal its resources at the potential cost of our own life? Or should we attempt to befriend a fellow being in the hope that we can share resources in kind?

Some of those decisions happen near-instantaneously and seem almost automatic. Some decisions require a different kind of processing, a more deliberate and self-referential process in which our thinking machinery thinks about what it’s thinking.

Our thinking software estimates the likelihood of possible future outcomes and the impact of those outcomes on its many objectives and objects of concern. We use a highly detailed internal model of our own robotic being, models of other beings, and models of the world in which we exist to make the best possible predictions of the future given the constraints of time and our available processing power.
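The deliberative loop described above can be caricatured in a few lines of code. This is purely an illustration of the idea — the function names, the actions, and the numbers are all invented, and a real mind is unimaginably more complex — but it shows a fully deterministic process that nonetheless weighs options and “decides.”

```python
# Toy sketch of deliberation: score each candidate action by the
# outcome the agent's (very crude) world model predicts, then pick
# the best. Every name and number here is invented for illustration.

def predict_outcome(action, world_model):
    """Predicted value of an action: expected payoff minus a risk cost."""
    payoff, risk = world_model[action]
    return payoff - risk

def deliberate(actions, world_model):
    """Choose the action whose predicted outcome scores highest."""
    return max(actions, key=lambda a: predict_outcome(a, world_model))

# (payoff, risk) pairs — a stand-in for a rich internal world model
world_model = {
    "consume_energy_now": (5.0, 1.0),  # immediate gain, small risk
    "save_for_later":     (6.0, 2.5),  # larger gain, more uncertainty
    "befriend_peer":      (8.0, 3.0),  # best long-run payoff
}

choice = deliberate(list(world_model), world_model)
print(choice)  # befriend_peer (score 5.0 beats 4.0 and 3.5)
```

Run it twice and it makes the same choice twice: deterministic, yet it is still the machine's own evaluation of its options that produces the action.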

To us thinking machines, this self-referential modeling and game of attempting to predict the future based on past experience is what it feels like to be conscious of our own existence and our own thoughts.

Now, let’s increase the complexity of this machinery to the point at which it is, atom for atom, indistinguishable from human biological life. To be this thinking machine now feels exactly as it does to be a human being.

The Illusion Rebooted

My hope with this is to illustrate the possibility of a deterministic thinking machine that exhibits the full richness of the human experience.

At no point in this thought experiment did we require a change in direction of causality. All “decisions” are made by a highly complex software-like process that transforms a massive amount of information about the external world and its own internal state into new goals and actions.

In viewing the mind this way, the experience of “free will” is not so much an illusion — it’s the experience of being a ridiculously advanced thinking machine as it executes its awe-inspiring thinking software. It is a free agent in the sense that it is free to do anything within the range of its vast capabilities — and that, moment to moment, it makes decisions about what awesome thing to do next. While it is absolutely a deterministic process, there is nothing unreal about the machine, the software, or the significance of its decisions for its own future existence.

Yes, if we were to rewind time and play it back, everything would be the same. But this seems inconsequential as we will almost surely never be able to rewind the clock.

Machine Ethics

My other hope is that we can now see how a sophisticated thinking machine should not be fatalistic simply because it is deterministic. Its decisions matter. When it takes the extra processing time to deliberate and then decides to make a non-selfish choice, that choice affects how it is modeled and therefore how it is treated by other thinking machines. This is to say, that concepts such as trust, judgment, punishment, and so on, are all perfectly valid in the realm of thinking machines.

A thinking machine that communicates its intent to achieve a particular outcome, and then does so, should be more trusted and therefore more likely to be taken at its word by other thinking machines. It benefits the order and stability of the machine world that the majority of thinking machines pass judgment upon those machines who betray trust or break agreements — because doing so affects the future outcomes for all.
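The trust dynamic sketched above can also be made concrete with a toy model. This is a hedged illustration, not a claim about real social cognition: the update rule, the starting value, and the asymmetric penalty for betrayal are all invented for the example.

```python
# Illustrative trust model: each machine keeps a score for its peers,
# raising it when a stated intent is fulfilled and lowering it — by
# more — when trust is betrayed. All numbers are invented.

def update_trust(trust, peer, kept_word, step=0.1):
    """Nudge the trust score for `peer` up or down, clamped to [0, 1]."""
    current = trust.get(peer, 0.5)            # strangers start neutral
    delta = step if kept_word else -2 * step  # betrayal costs double
    trust[peer] = min(1.0, max(0.0, current + delta))
    return trust[peer]

trust = {}
update_trust(trust, "robot_7", kept_word=True)   # ~0.6
update_trust(trust, "robot_7", kept_word=True)   # ~0.7
update_trust(trust, "robot_7", kept_word=False)  # back to ~0.5
```

The point is that “judgment” and “reputation” are perfectly coherent operations for deterministic machines: a betrayal changes how a machine is modeled by its peers, and therefore changes its future outcomes.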

But they would also need to recognize how large a role determinism plays.

Some thinking machines might start out with software that is naturally better at certain tasks than other machines. Perhaps every thinking machine comes with software that can rewrite its own code, improving its capabilities over time. Perhaps some systems start out encoded to be better at self-improvement than other systems. The starting conditions of such thinking software would certainly matter quite a bit. And, the environment in which such thinking machines exist would most certainly have an outsized impact on their output.

This would mean that, even though concepts such as responsibility still matter, a thinking machine is not itself entirely responsible for its behavior.

We see this played out in our legal system in issues such as the “insanity defense.” While that is a much-maligned topic, there is reason to be optimistic that this line of thinking within our justice system is moving us toward the treatment of root causes and the addressing of “bugs” within the thinking apparatus, and away from simple corporal punishment — movement, in my opinion, in the direction of an improved moral code.

This also raises the possibility that if we as a society of thinking machines so collectively choose, we could decide to modify our collective software to promote certain outputs (e.g. cooperation) over others (e.g. sociopathy). While the ethical considerations regarding reprogramming are many and profound, in the human world, we call this simply “education.”

Soul of an Old Machine

Many, if not most, people will chafe at the notion of themselves as thinking machines. Some will note that it is perhaps yet another misguided instance of the historic trend to model the human mind after whatever technology is the fashion of the era. For example, in the age of steam engines, Freud described mental states as pent-up energies that needed to be released through various valves — thus the phrase “to blow off steam.”

Surely, a description of the mind as software is conceivably as distant from the truth as a description of the mind as a steam engine.

However, despite its many faults, Freudian psychology proved to be far more useful than the previously prevailing theory: mind as an otherworldly spirit. So, perhaps, in the manner that the mind-as-steam-engine-model was useful in its time, so too can the mind-as-software-model be useful in ours.

So maybe, just maybe, thinking about ourselves as software beings could make us better at being human.

Let’s say that this whole article is the output of a thinking machine that had no choice other than to write it at the time that it did and in the manner that it did.

That thinking machine is me. I am software being software — formed by a lifetime of experiences and the entire history of the universe.

Somehow, thinking about myself in this way does not engender the same sense of creeping nihilism that I noticed before. It seems to me that this is simply what it is to be an agent. Without my past determining my future, who would I be? Without all the specific intricacies unique to my thinking machinery, how else would I define what it is to be me?

Without the machinery of thought, how else would we choose?


Co-founder of @yarncorporation and @bigroomstudios. #VR #AI #UX #hacker #maker #designer #artist