Can we think of AI as an animal?

Simon Hudson
Published in Cloud&Co.
Feb 21, 2017 · 5 min read

Futurologists have lots to say about what will happen if we continue to develop artificial intelligence. At one end of the spectrum of AI outcomes lies unimpressive, inefficient progress: a failure to live up to expectations, with the hype (and funding) dying out. At the other end is a new, supreme species that eliminates and replaces the human race. The cases more comfortably within the bell curve fall somewhere between significant and insignificant job disruption.

Each industry will face its own repercussions (heads up, manufacturing). In Pt. 1 of this interview, Benjamin Thelonious Fels, founder of macro-eyes, shed light on how AI is developing in health care.

Pt. 2 is another can of worms that needed its own space. In it, Benjamin has a moment of clarity that provokes vivid ideas of what our AI future could look like, including the negative consequences of an “AI running amok.”

The following portion of the interview is relatively unedited and without questions — just Benjamin’s train of thought:

— — — — — — — —

There’s the potential for something greater than machine plus expert. The two acting together, interacting, will create something much more powerful. Think about even a simple tool like a paintbrush. A paintbrush in the hands of an expert can accomplish a great deal, much more than a paintbrush sitting on a table, a paintbrush thrown at a canvas, or a person merely thinking about making a brushstroke.

I think a better example is that of a highly trained and selectively bred animal.

Think about an animal that, for thousands of years, has been bred to do a very specific task in an extraordinarily precise manner. Whether that’s a horse or a sheepdog, that animal can do things by itself and acts as an autonomous agent. There’s greater utility and power when that animal and a human work together.

Animals can perceive things that humans cannot; and, of course, humans can understand and perceive things that animals cannot. That interaction between the animal and the human becomes particularly powerful when the two agents know and trust each other.

For me, this metaphor is a really clear vision of what we should be working toward. There are philosophers who would disagree with me here; but to some degree, intelligent machines are a lot like highly bred and trained animals that have had specific traits pushed for a long time.

I would argue that animals and humans work together best when they have a collaborative relationship. You’re a sheepherder and you spend all day out with your dog herding your sheep; and so, the two of you really understand each other and can pass signals, rich information, back and forth in a very efficient, precise manner.

[Author’s note: Benjamin’s family had a sheepdog and our families knew each other growing up. One day we were all on a hike together, his family, my family and the sheepdog. I was an impatient hiker and kept running ahead. Though the dog wasn’t trained to herd sheep, she couldn’t help her breeding and came up and nipped me, biting right through my pinkie finger to get me back with the herd.]

That story of our sheepdog biting you is a great example of information being garbled. It’s an example of a system encountering behaviour that’s on the margins of what it knows. Our dog was the product of selective breeding, but she herself was not guided and trained properly. Just like AI, sheepdogs require an enormous amount of training data (in the form of exposure to scenarios that you repeat and repeat).

Put another way, our dog was the product of generations of machine learning that was progressively hard-coded. But the human-machine-environment loop was warped: environmental input was hitting a mechanism that wasn’t yet tuned with sufficiently varied, live data — so, that data was getting translated into a familiar, but wrong, output. It’s like the AI running amok.
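To make “a familiar, but wrong, output” concrete, here is a minimal toy sketch in Python (every name, feature, and number below is invented for illustration, not anything from macro-eyes): a nearest-centroid model trained on only two sheep-herding scenarios has no way to say “I don’t know,” so an out-of-distribution observation gets forced onto the nearest familiar label.

```python
# A toy sketch (not anyone's real system): a nearest-centroid "classifier"
# trained only on familiar scenarios. An out-of-distribution input is still
# forced onto the nearest familiar label -- a familiar, but wrong, output.

# Hypothetical features: (speed, distance_from_group)
TRAINING = {
    "sheep_grazing":  [(0.1, 1.0), (0.2, 1.5)],
    "sheep_straying": [(2.0, 8.0), (2.5, 9.0)],
}

def centroid(points):
    # Average each feature across the example points for one label.
    return tuple(sum(coord) / len(points) for coord in zip(*points))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(observation):
    # Always returns the *nearest* known label, however unfamiliar the input.
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CENTROIDS, key=lambda lbl: dist(observation, CENTROIDS[lbl]))

# A hiker running ahead looks nothing like either training scenario,
# but the model has no "I don't know" option:
print(classify((3.0, 12.0)))  # -> "sheep_straying" ... so the dog nips
```

The distance math works exactly as designed; the failure is that nothing in the training data resembles the new input, which is the warped loop described above.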

“The bite” is a great metaphor for the dangers of letting AI loose. The training is present in deep layers; but the AI doesn’t know how to adjust to reality in all its ambiguity. Along with this animal-human vision of AI, a related point I want to make is that data has to reproduce reality. When you start to think about even very small instances of data not really reproducing reality, it’s altering reality instead; and if decisions are made based upon data that has altered reality, we stumble into some dangerous scenarios — extremely dangerous if you’re in health care.
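As one deliberately tiny, invented illustration of data altering reality rather than reproducing it: suppose a hypothetical sensor silently caps heart-rate readings at 180 bpm. The stored value, not the true one, drives the downstream decision.

```python
# An invented sketch of data altering reality instead of reproducing it.
# A hypothetical sensor caps readings at 180 bpm, so the stored datum
# under-reports a dangerous heart rate and the downstream rule stays silent.

SENSOR_CAP = 180  # assumed hardware limit, for illustration only

def record(true_heart_rate: int) -> int:
    # Above the cap, the stored value no longer reproduces reality.
    return min(true_heart_rate, SENSOR_CAP)

def needs_urgent_review(recorded_heart_rate: int) -> bool:
    # The decision sees only the recorded value, never the true one.
    return recorded_heart_rate > 180

true_value = 205                      # reality
stored = record(true_value)           # data: 180; reality has been altered
print(needs_urgent_review(stored))    # False: a dangerous case goes unflagged
```

Nothing here is buggy in the usual sense; each piece behaves as specified. The danger is the quiet gap between the data and the reality it claims to represent.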

In complex environments, human beings are always going to observe things and know things that are not captured in data. That’s just because we do not yet live in a world where every particle of reality is monitored. It’s not like I have a little moisture-sensing drone inside my apartment collecting all my vital signs and environmental data and emotional responses right now. Thank God that’s not the case.

So how do we bring to our intelligent machine all the richness that we as humans know or see or understand that the machine does not? And in this utopian scenario I’m sketching, we somehow manage to bring in that richness without all our deeply human blinders and bias. For me, that’s the Holy Grail. That’s where I see intelligent systems, intelligent machines, becoming truly valuable.

////

Cover photograph by Ottomar Anschütz

Drawing by Giovanni Domenico Tiepolo

Interview photograph by Sarah Ouellet

Originally published at www.cloudraker.com.
