“Mister Tricorder” — http://en.memory-alpha.org/wiki/File:Mister_Tricorder.jpg (© Paramount Pictures)

The Measure of a Man — What is Sentience?

Alper Sarikaya
3 min read · Sep 29, 2014

For me, it is impossible to answer this question without one of my favorite scenes from Star Trek. The following video is from the episode ‘The Measure of a Man’ (1989), and I promise, this video is worth 6 minutes!

Three criteria for being sentient are brought up in the episode, and Captain Picard attacks them one by one in defending the android, Lt. Cmdr. Data: (1) intelligence, (2) self-awareness, and (3) consciousness. The episode also carries some interesting subtext about ‘property’ being a ‘comfortable, easy euphemism’ for slavery.

In his paper “Computing Machinery and Intelligence,” Alan Turing notes that it is convenient for us (as humans) to treat computers as finite-state machines, but he quickly adds that this convention may not reflect the physical nature of the machine in question. Turing specifically attacks the argument that the computer is just a finite-state machine, saying that insisting on a definite, unambiguous answer is worse than accepting a probably right, approximately correct one. This philosophy is widely used today when forming predictions from complex machine models.
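
That tradeoff shows up directly in how today’s models answer: rather than committing to a single definite label, they report a probability for each candidate. Here is a minimal sketch of the idea (the candidate answers and raw scores are invented for illustration):

```python
import math

def softmax(scores):
    """Turn raw model scores into a probability for each candidate answer."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores a model might assign to three candidate answers.
candidates = ["yes", "no", "maybe"]
scores = [2.1, 0.3, 1.2]

probabilities = softmax(scores)
for answer, p in zip(candidates, probabilities):
    print(f"{answer}: {p:.2f}")

# Instead of a single definite answer, report the most likely one
# together with how confident the model actually is in it.
best = max(zip(candidates, probabilities), key=lambda pair: pair[1])
print(f"Probably '{best[0]}' (confidence {best[1]:.2f})")
```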

To me, this is the crux of the argument: how does one know when a machine is sentient? I would look for a machine that can handle a wide variety of input (speech, text, commands, prompts) and propose sufficient responses to those inputs (answers, rationalizations, actions, art; intelligence), and that works to improve itself so it can better handle input in the future (learning; self-awareness). When an unexpected input or command is given to the machine, I would anticipate that it exhausts clarifying questions or multiple avenues of introspection instead of immediately giving up and providing a null response. If all of the above behavior occurs in a context in which the machine is aware of its current situation or place in the world, I would say that the machine embodies consciousness, as each decision would have self-preservation, self-esteem, or community-building reasons backing the given response.
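
As a toy sketch of that behavior (the intents, canned replies, and matching rule are all invented for illustration), a machine facing an unrecognized or ambiguous prompt could ask a clarifying question rather than returning nothing:

```python
# Known intents and canned responses; purely illustrative.
KNOWN_INTENTS = {
    "weather": "It looks clear outside.",
    "time": "It is just past noon.",
    "music": "Playing something you liked last week.",
}

def respond(prompt: str) -> str:
    words = set(prompt.lower().split())
    matches = [intent for intent in KNOWN_INTENTS if intent in words]
    if len(matches) == 1:
        return KNOWN_INTENTS[matches[0]]
    if len(matches) > 1:
        # Ambiguous input: introspect and ask which meaning was intended.
        return f"Did you mean {' or '.join(matches)}?"
    # Unexpected input: ask a clarifying question instead of a null response.
    return "I'm not sure what you're asking. Can you rephrase or give an example?"

print(respond("what's the weather like"))
print(respond("weather or music while I work?"))
print(respond("paint me a picture of home"))
```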

In order to test for intelligence or sentience, I would pose questions based on inputs the machine is used to and on inputs only tangentially related to its expected tasks, and I would suggest potential extensions to the machine’s thinking repertoire. I would also expect a sentient being to be able to store previous experiences and use the sentiment stored in each experience (this worked, this didn’t) as an internal input to help respond to unexpected stimuli.
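
Here is a rough sketch of what such an experience repository might look like (the stored experiences and the word-overlap similarity measure are stand-ins, not a real design): past stimuli are remembered along with the response tried and whether it worked, and the closest positive experience guides the reply to a new stimulus.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    stimulus: str
    response: str
    worked: bool  # the stored sentiment: did this response succeed?

repository = [
    Experience("door will not open", "ask for the access code", True),
    Experience("door will not open", "force the door", False),
    Experience("console shows an error", "restart the console", True),
]

def similarity(a: str, b: str) -> int:
    """Count words shared between two stimuli (a deliberately crude measure)."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def handle(stimulus: str) -> str:
    # Only lean on experiences whose stored sentiment was positive.
    positives = [e for e in repository if e.worked]
    best = max(positives, key=lambda e: similarity(stimulus, e.stimulus), default=None)
    if best is None or similarity(stimulus, best.stimulus) == 0:
        return "No relevant experience; asking for clarification."
    return f"Trying what worked before: {best.response}"

print(handle("the cargo bay door will not open"))
print(handle("unfamiliar signal detected"))
```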

Example of a constrained environment where a computer system thrives on probabilistic answering (© Sony Pictures Television)

The computer science field of machine learning comes closest to making models of the natural world, but each model is generally very specific to a particular domain or format of data. Making an all-encompassing model of all information that uses an experience repository to help answer novel input would be very difficult with our current probabilistic machine architecture, and it could even be something quantum computing can help with! Quantum computing has the advantage of putting all the experiences and input together, convolving all relevant data, and only collapsing the state (i.e., providing an answer) when explicitly asked. Sadly, like the field of artificial intelligence, a coherent, physical example of quantum computing is consistently 20–30 years away!

