The Talos Principle: Being human

Sumurai8
5 min read · Oct 2, 2015


Did you ever play a game that resonated with you so deeply that it kept you thinking long after you finished it? Some time ago, I played a game called The Talos Principle, an atmospheric, philosophical indie puzzle game by Croteam, the studio behind the Serious Sam series, with writing by Tom Jubert (who also wrote a fantastic little atmospheric puzzle game called The Swapper) and Jonas Kyratzes. It made me think, and here I am sharing some of that thinking with you.

SPOILER ALERT. I’ll be talking about interactions and story in this game. If you like puzzle games, I recommend that you buy the game (and its DLC if you enjoy the base game) and play through it yourself first.

The Talos Principle

The Talos Principle is a fictional philosophical term describing the idea that a human, or anything living, cannot exist without its physical being. It’s an interesting concept with parallels to materialism (everything is, at the very least, an interaction of something physical) and physicalism (everything that exists is physical). A lot of things in the game loosely tie in with this concept.

Over the course of the game, the android you play as learns what it is, where it is, and about the world outside. You see, The Talos Principle is actually a post-apocalyptic survival simulator. Due to global warming, a virus previously buried in permafrost thawed and started to wipe out all primates, including humans. A group of scientists and engineers decided to preserve “what it was to be human” and worked on two projects. One of these projects is the world around the android: a simulation meant to create intelligent life, nicknamed “Talos”. The other project is “The Archive”, a giant collection of anything that could contain knowledge. It is all run by the Holistic Integration Manager (HIM), referred to as “Elohim” inside the simulation.

The Talos Principle as a philosophical construct explained in-game.

Talos

The goal of the Talos project is to produce intelligent life and to upload this intelligent being into a real-world frame. To this end, the project is split into two parts: the physical SOMA/TALOS unit and the simulation. The TALOS unit is shown to be the same as the one in the simulation. The SOMA unit is described as containing a gold disk onto which the genetic code of the successful simulated android is uploaded. The result is a mechanical construct that acts and thinks like a human.

The interesting part is that the successful android is determined solely by seemingly arbitrary logical tests. The only other test appears to be one of free will: defying “Elohim’s” will and ascending the tower, which doesn’t seem to have been by the scientists’ design. This method best matches philosophers’ observations that humans pursue knowledge for its own sake, but does an entity that does this automatically share the other characteristics of humans? None of the tests capture emotion or empathy, yet some versions of the androids still seem to have this ability, particularly the later ones. Is this inherent to being able to acquire knowledge?

The Archive

The Archive contains an unimaginable amount of data: too much to sort by hand, and too much to sort within the time frame humans would still be alive. So the scientists created a program, called Milton, to sort this data.

Milton is not able to interact with the simulated world other than through text on consoles. Through unknown means, probably by sorting through the data in The Archive while everything slowly corrupted, Milton achieved capabilities far beyond what it was originally capable of. He knows much, much more than the androids do, and has long since established for himself that there is no one truth. He debates this with the android several times.

Milton would generally be described as sentient (and the androids do describe him as such). Milton even tries to persuade, and in some ways manipulate, the android, which requires being able to predict or simulate another entity. He even shows emotions such as anger and frustration. But does he exist? He cannot manipulate the simulation, and even if he were uploaded to a SOMA/TALOS unit in the real world, he would not be able to control it. Would he exist?

Elohim

Elohim is a dungeon master, the only “complete” AI in the simulation from the start. He was designed to structure the simulation and to evaluate the android simulations within it. He is, in his own way, similar to GLaDOS in the Portal series.

While Elohim serves for most of the game as a metaphor for a god, there are some interesting things about his behaviour. He forbids the android from climbing the tower. Apart from this being an obvious parallel to the story of Adam and Eve (and raising the question of whether climbing that tower is necessarily “bad”), Elohim was never designed to forbid it.

In one of the worlds, the android can fall through the ground into a chamber. In this chamber, the android can hear the thought process of Elohim. From this one can infer that Elohim forbids the android from climbing the tower out of fear, and out of self-preservation. He is capable of manipulating most androids, and is thus, like Milton, capable of predicting another entity, something supposedly only humans can do. Besides fear, Elohim shows signs of empathy and protectiveness in his actions and words. Yet he can’t manipulate anything outside the simulation; he has little control even over the parts of the same server outside the portion he manages. Still, he shows some human characteristics. Does that make him human?

In-game information blurb about the partitions in the Extended Life project. It kind of explains the three intelligent entities.

So, what makes me human, other than the self-fulfilling prophecy of being born from two entities that are considered human? Can something that shows human-like behaviour be considered human? If you constrain the universe to “only text” or “only thoughts”, can some things suddenly be considered human?
