Consensus Series, Addendum 1: What Do the Robots Want?

Aleksandr Bulkin
Published in BuzzRobot
Sep 11, 2016

This is an addendum to Part I of my Consensus Series. The understanding of collective behavior described in my first article has many implications. This addendum illuminates just one, from a somewhat light-hearted angle.

It is interesting to hear people talk about AIs. Some say that AIs are a danger to humanity — among them are some very respectable individuals, such as Stephen Hawking and Elon Musk — while others contend that AI systems will save the world. I personally find both of these camps to be missing a very important point.

In order to fear or, indeed, to crave an AI revolution, one must implicitly attribute both agency and motivation to a non-human intelligence. On the surface, this isn’t very far-fetched. One would naively think that an AI can either be programmed to desire something and act on it, or can develop such desires and actions spontaneously. Isn’t it the case, after all, that agency and motivation are merely a way to describe a purely algorithmic relationship between recognizing a desirable situation and acting to maximize the chances of achieving it?

Not really. I claim that neither agency nor motivation can be placed in the realm of computational logic, be it code or the natural neural network of a human brain. The proper place to locate these cognitive categories is firmly outside of an individual mind (natural or artificial) and within the realm of its social and collective context.

To explain this admittedly radical statement, let me turn to a thought experiment. Let us for a moment consider an AI good enough to pass the Turing test. Say you are using an audio channel of some sort to talk to a counterpart named Alice. Alice is quite intelligent, and her speech carries a comfortable amount of tonal inflection communicating emotion: surprise, humor, hesitation, attention, and so on. When you ask Alice whether she is a computer, she says with a chuckle: “Alex, you know I’m not allowed to answer this question.”

This, so far, is within the realm of real possibility, perhaps even the near future, given how much AI technology has advanced in the last few years. But I am going to ask a question which has by now become quite a cliché in the area of philosophy that deals with such matters: if Alice is, in fact, a computer, does she (it!) feel anything?

This road of inquiry has been walked by many, but fret not: the direction I am taking isn’t the one you have by now come to expect. So please bear with me a while longer.

My interest is primarily this: why does the question of what someone is feeling matter to us so much? One possible (and, I believe, crucial) answer is stated in my earlier article. We care about it because we are designed to act collectively, and collective effort requires our ability to rely on others and to ensure that they can rely on us. The key to such reliance is our ability to predict another’s behavior and to ensure that they can predict ours. Nonverbal emotional signals are key to our ability to do this.

In considering how nonverbal information helps us understand each other well enough, we must recognize that a crucial (and, I claim, defining) aspect of nonverbal information is that we have no way to control it. The fact that it is nonverbal is less important to us than the fact that it is to a large extent involuntary.

If we somehow had a way to control every aspect of how we communicate our mental and emotional state (verbal or otherwise), such communication would be entirely useless for the purpose of learning to rely on each other. This is because we could never truly discount the possibility that the other person is manipulating us. In other words, intentionally communicated emotional signals can never be seen as expressions of true emotions.

Coming back to AI now, we must note that, compared to a natural intelligence, it simply has no way of having any reaction that can reasonably be called involuntary. Whatever determines the “motivation” for that AI’s behavior certainly has a way to control every aspect of what it expresses. In that sense, no AI can possibly be said to have true emotion, where truth is defined not by the emotion’s elusive subjective experience (“what is it like to be an AI?”), but by its reliability in facilitating collective activity as perceived from the outside.

To put this another way, the very way in which an AI can be naively expected to supersede a human mind in its ability to remain rational and make “better” decisions places a tremendous limitation on that AI’s ability to have “true” motivation.

Coming back to our thought experiment of talking to a very advanced AI named Alice, let’s contemplate for a second what one would do upon finding out that Alice is, in fact, an AI. How would one feel? How would one interact with Alice after that? My answer is that the better the AI becomes at mimicking a human, the more disturbing, disorienting and annoying it will become. This isn’t because it is doing a bad job; it is because it is simply impossible for an AI not to be deliberate in its every action. Consequently, we will always feel manipulated by it, and the more so, the better it is at mimicry.

What I have written so far bears heavily on AI’s ability to integrate with humans socially. But what does it mean for whether AIs can take over the world entirely on their own, or start building even better AIs and bring about the technological singularity? To me, all these scenarios ascribe to an artificial mind precisely the kinds of overarching motivations that we humans have developed in the course of our natural social evolution. A mind not designed to act collectively simply has no reason to want any of these things. On the other hand, one that is designed to be collective must have built-in limitations not unlike those that humans have. Such limitations would pose natural obstacles to unlimited power.

A human being who can control every aspect of his or her emotional life is, essentially, a kind of sociopath. Such people do exist, but they are rare, and most of us don’t fall into this category. On the other hand, I can’t see a way for a really good AI not to be one. This is a disturbing thought, but it is in a way a relief to know that, as far as human values go, replicating them also replicates the associated limitations, and so naturally curbs the power of a mind so created.

Because of all this I don’t expect a visit from the Terminator any time soon. Phew.

