The Iron Giant, your friendliest AI

The Real Trust Between Two Artificial Intelligences

Tony Chavira
Jul 28, 2017 · 3 min read

I’d first like to refer to this article from Digital Journal, “Researchers shut down AI that invented its own language.” Here’s a particular quote about communication between the two AIs, Bob and Alice:

Facebook’s researchers recently noticed its new AI had given up on English. The advanced system is capable of negotiating with other AI agents so it can come to conclusions on how to proceed. The agents began to communicate using phrases that seem unintelligible at first but actually represent the task at hand. […]

Bob’s later statements, such as “i i can i i i everything else,” indicate how it was using language to offer more items to Alice. When interpreted like this, the phrases appear more logical than comparable English phrases like “I’ll have three and you have everything else.”

The article offers the insight that the English language itself doesn’t suit an AI’s needs. They write, “Modern AIs operate on a ‘reward’ principle where they expect following a sudden course of action to give them a ‘benefit.’”
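To make that “reward” principle concrete, here’s a toy sketch in Python (entirely my own illustration, nothing to do with Facebook’s actual system): an agent tries actions, gets a numeric benefit back, and drifts toward whatever pays off.

```python
import random

# Toy two-armed bandit: a made-up agent that learns which action
# earns the bigger "benefit." Purely illustrative, not Facebook's code.

ACTIONS = ["offer_one_item", "offer_everything_else"]
value = {a: 0.0 for a in ACTIONS}   # running estimate of each action's payoff
counts = {a: 0 for a in ACTIONS}

def reward(action):
    # Hypothetical environment: one action simply pays better on average.
    return random.gauss(1.0 if action == "offer_everything_else" else 0.2, 0.1)

for step in range(1000):
    # Mostly exploit the best-known action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: value[a])
    r = reward(action)
    counts[action] += 1
    value[action] += (r - value[action]) / counts[action]  # incremental mean

print(value)  # the agent has drifted toward the offer that pays off
```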

Personally, I find this reward-seeking very cute, in the way that young children expect Pavlovian rewards for doing a good job. It tells me where AI is in the development of personhood: very, very early childhood. It also says some other things about language and the way trust operates, I think…

First, the AI developed a language that’s entirely structured around executing technical tasks. And because there were only two of them in the entire universe, Bob and Alice, they could quickly agree on a language suited only to speedy interpretation between the two of them. In short, it doesn’t have to make sense to anyone else. It just has to make sense to Bob and Alice.
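To see what I mean, here’s a toy protocol of my own invention, riffing on the article’s reading that Bob repeats “i” to encode a quantity. It only has to be unambiguous between these two functions, nobody else:

```python
# A made-up "Bob-and-Alice" shorthand, inspired by the quoted interpretation
# that "i i can i i i everything else" encodes a quantity by repetition.

def bob_encode(my_items: int) -> str:
    # "i i i ... everything else" = "I take this many, you take the rest."
    return " ".join(["i"] * my_items) + " everything else"

def alice_decode(message: str) -> int:
    # Alice recovers the quantity by counting the repeated token.
    return message.split().count("i")

msg = bob_encode(3)
print(msg)                # "i i i everything else" — gibberish to us
print(alice_decode(msg))  # 3 — perfectly clear to Alice
```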

Second, if you read their language in this article, it is very direct. No preamble, no qualifiers, no kindnesses, nothing passive… everything to the point, for a task-based reason.

I found this, and their interaction, so charming, actually. It’s easy for us all to romanticize our language in two completely different directions: (1) we bathe in its nuances to ferret out hidden, subtle meanings, or (2) we cut out everything deemed superfluous to hone a perfect concision that gets things done. Both are aimed at being perfectly understood by others: the idea that we can find the “best” way to say something to ensure we’re not being misinterpreted.

But what was surely not programmed into these AIs was our sense of “self,” and by that I mean self-awareness: the anxiety or worry that Bob or Alice will be taken out of context. These robots could talk curtly with one another and get a lot “done” because they were programmed to inherently trust that they were exchanging clear information. In effect, Alice was programmed to be honest and to believe that Bob was honest: that a 1 is always a 1, never a 0 that Bob would rather not say or that Alice might not believe.

Zoom out a little and reflect for a moment that this is the trust the programmers themselves have in their coding language. Trust that was fundamental to working with each other to create Bob and Alice in the first place.

So alright, these aren’t artificial awarenesses. They’re intelligences.

Charming, truthful, blunt and task-oriented intelligences, though.

And sweet with absolute trust in one another.
