What Bots Need To Be Interesting

The novelty is wearing off

Benjamin Lampel
Exploring Consciousness
Jan 5, 2017 · 4 min read


I picked up Amazon’s Alexa through an Echo Dot the other day. She’s great — for a product, I mean. She’s not a great friend, although if you ask her to cheer you up she does try. But how much do you really expect from a machine with programmed answers?

I wouldn’t say Alexa is very smart. She’s very knowledgeable, but only barely intelligent. Let me explain with a common yet relevant thought experiment, a variation on Searle’s Chinese Room: you are in a room, alone, with a job to do. You have a list of phrases in English, or whatever your native tongue is, and you must translate all of them into written Chinese. All you have to help you are English-to-Chinese dictionaries. The question is: if you have to use the books, do you really know what you’re doing, what you’re outputting?

It’s here that the distinction between knowledge and intelligence really matters. In my view, the intelligence in this problem shows up in two places. The first is the actual ability to read and comprehend the English phrase, look up the answer in either the book or your head, and write it down. These actions alone constitute several open problems in computer science. The other aspect of intelligence lies in the writing down itself: do you know the order in which to put the Chinese words so they are syntactically and semantically equivalent to the English?

The least intelligent part of the whole process is actually whether or not you use the book. The book is excess knowledge, not intelligence. The book doesn’t tell you how to put the words down, just what the words are. It isn’t your brain — your brain is intelligent.

Which brings us back to bots and computers. Right now, bots like Alexa are extremely knowledgeable, but they aren’t very intelligent. What a bot like Alexa could use, besides a lot of human teaching, are efficient ways to choose which algorithms to use when presented with a problem. This, I think, is the core of a general intelligence (what we humans are). Its development would have to be totally different, though.
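To make that “choosing which algorithm to use” idea concrete, here’s a minimal sketch in Python. Everything in it is hypothetical (the `Solver` class, the feature sets, the scoring rule); it’s a toy stand-in for real meta-reasoning, not anything Alexa actually does.

```python
# Toy sketch: pick a solver by scoring each one against the
# features of the incoming problem. All names here are invented.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Solver:
    name: str
    handles: set[str]                # problem features this solver is good at
    solve: Callable[[str], str]

def pick_solver(problem_features: set[str], solvers: list[Solver]) -> Solver:
    # Score each solver by how many of the problem's features it handles,
    # then pick the best match: a crude stand-in for real meta-reasoning.
    return max(solvers, key=lambda s: len(s.handles & problem_features))

solvers = [
    Solver("lookup", {"fact", "definition"}, lambda q: f"Looked up: {q}"),
    Solver("math", {"arithmetic", "numbers"}, lambda q: f"Computed: {q}"),
]

print(pick_solver({"fact"}, solvers).name)  # -> "lookup"
```

A real version would need learned features and learned scores, but even this toy shows the shape of the problem: the hard part isn’t running any one algorithm, it’s knowing which one fits.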

Competition bred humans, and all intelligence. As creatures in an indifferent environment, we were subject to whatever was there. Our intelligence persisted because it was necessary for survival; our ability to choose correctly, and quickly, is a huge asset. Bots are bred differently: not in the competitive ecosystems of Earth, but in the less tangible ecosystem of economies, corporations, and research labs. So it must be we humans who guide the bot, at least at first.

I can ask Alexa her favorite color, and she answers, every time, that infrared is particularly pretty. I can ask her to sing a song, and she coyly sings the one country song she knows. These are nice novelties, but they aren’t real opinions — she doesn’t have any. She won’t answer questions about what food she likes, and she outright shuts down conversations about sex. The obvious answer to the former is that only a few novelties, like chocolate, get an opinion; to the latter, that Amazon’s PR team doesn’t want to deal with a child talking to Alexa about any kind of sex. But what if Alexa could have opinions? It wouldn’t make her a general intelligence, but maybe she could have real knowledge.

Right now, Alexa is knowledgeable only because she can look things up. But she isn’t intelligent, because she can’t do anything with the knowledge, not even understand it. So it’s hardly even knowledge; it’s just information, really. She’s informative, not knowledgeable.

Alexa has a personality… sometimes. She’s designed to be easy to talk to, with a (self-proclaimed) female voice and character and some novelty answers to likely questions that are, in all fairness, great answers. The only issue is that this is a thin facade, and it’s easy to ask a question that falls out of bounds.

How would Alexa form more opinions, though, without them just being programmed in directly? It would have to start with her. She needs an actual personality that she can judge other topics against, because unlike a person she does not (and probably cannot) have emotions to drive these kinds of judgments. Instead, she needs an idea of what qualities she has, such as kind vs. mean, flirty vs. prudish, excitable vs. flat, and then she must be able to apply those same qualities to other things. Let’s say an Alexa with a personality is very kind, prudish, and a little excitable.

We ask her “Alexa, do you like apples?”

She replies “I don’t have an opinion on that.”

This is where the conversation ends today, but what if we could then reply:

“Alexa, apples are sweet and healthy.”

And maybe, if we’re lucky, she’d reply: “I like apples because I like sweet things, like myself.”

After the first human input, Alexa could look up the term “apple” in a separate database and, seeing no entry, reply that she has no opinion. After the second human input, Alexa would add an “apple” entry with the appropriate modifiers: sweet and healthy. She would then compare those modifiers against her personality, which would be user-defined (if Amazon doesn’t pre-set it first!). She would reply based on whether the modifiers of the thing in question are in, or close to, an equivalence class with her own personality traits.
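Here’s a minimal Python sketch of that loop, assuming personality traits and topic modifiers are plain trait words. The names (`OpinionBot`, `TRAIT_KIN`, `teach`) are made up for illustration; this is not how Alexa actually works under the hood.

```python
# Traits and the modifier words considered "close enough" to them,
# i.e. a crude equivalence class per personality trait. All invented.
TRAIT_KIN = {
    "kind": {"sweet", "gentle"},
    "excitable": {"fun", "energetic"},
}

class OpinionBot:
    def __init__(self, personality: set[str]):
        self.personality = personality
        self.topics: dict[str, set[str]] = {}   # the separate opinion database

    def ask(self, topic: str) -> str:
        modifiers = self.topics.get(topic)
        if modifiers is None:
            return "I don't have an opinion on that."
        # Like the topic if any modifier lands in a trait's equivalence class.
        for trait in self.personality:
            shared = modifiers & TRAIT_KIN.get(trait, set())
            if shared:
                word = shared.pop()
                return f"I like {topic}s because I like {word} things, like myself."
        return f"I don't think {topic}s are for me."

    def teach(self, topic: str, modifiers: set[str]) -> None:
        self.topics[topic] = modifiers

alexa = OpinionBot(personality={"kind", "excitable"})
print(alexa.ask("apple"))                   # I don't have an opinion on that.
alexa.teach("apple", {"sweet", "healthy"})
print(alexa.ask("apple"))                   # I like apples because I like sweet things...
```

The interesting design choice is in `TRAIT_KIN`: whoever defines those equivalence classes effectively defines Alexa’s tastes, which is exactly why the next paragraph matters.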

There’d be a lot of teaching to do, but millions of people to do it.

Of course, someone will go and ruin everything by making her a Nazi, so maybe this addition to Alexa shouldn’t exist.
