Camilla Bennett
Published in Tradecraft
5 min read · Apr 21, 2016

“You can know everything about jumping out of a plane, but until you’ve actually experienced it, you don’t know what the f*ck you’re talking about.” — Anonymous

Image source: http://www.noeticscience.co.uk/tag/artificial-intelligence/

For many years, cognitive and computer scientists have been interested in building computers that can mimic human actions. As technology has advanced, this once futuristic goal finally seems within reach. IBM’s Watson can beat a human at Jeopardy!, cars can drive themselves, and more and more tech companies are focused on providing virtual services that mimic human capacities. To this end, author and futurist Ray Kurzweil claims that artificial intelligence will equal human intelligence by 2029.

Sure, Ray Kurzweil is smarter and more successful than most people, but his forecast is questionable at best. While I am obviously dazzled by the capabilities of machine learning and ‘big data,’ I have been wondering how close a computer can really get to human intelligence. Kurzweil gauges a computer’s success with the Turing Test, which assesses whether a computer can convincingly behave like a human by analyzing the way it answers questions.

By measuring human intelligence through behavior alone, the Turing Test vastly oversimplifies it. Our intellect cannot be pinpointed to a single area or formula; it is rather a subsidiary effect of our collective knowledge, our experiences, and, consequently, our abilities. Human beings are complicated, emotional, and often driven by inexplicable intuitions, and I believe a theory of mind should take these things into account.

There’s a famous thought experiment that helps explain my skepticism. The story goes that a person — let’s call her Mary — has lived her whole life in a black and white room. She was born there and lived happily for decades without ever seeing color. Despite never having experienced color, she was fascinated by optics and knew every physical, observable property of color: everything from how colors relate to one another to how they are processed in the brain.

The big question is this:

If Mary were to leave the room, and see the rich variety of colors that make up our world for the first time, would she learn something new?

Image source: http://www.uh.edu/engines/epi2778.htm

Philosopher Frank Jackson (1982) claims she would. He calls this unique experiential information “qualia,” and states that people can obtain qualia only by having subjective experiences. Objective information can be observed and communicated to anyone. Subjective information (i.e., qualia), on the other hand, cannot.

Years later, David Lewis (1988) connected this theory with Laurence Nemirow’s Ability Hypothesis, which states that rather than providing you with information, experience provides you with an ability. In this sense, qualia are more than just subjective, experiential data; they are the ability “to remember, imagine, and recognize.”

Learning about something means gaining objective information. An experience is only possible if one has the aptitude or opportunity to have it in the first place. Once you’ve had the experience, you can remember, imagine, and recognize similar situations. You can access these memories whenever you want, something you could not do before, and something you will never be able to fully explain to another person. You have acquired an ability.

Here’s how I like to think about it: even if you know everything about jumping out of a plane… how fast you would fall, how long it would take you to reach the ground, the speed of air hitting your face, the amount of adrenaline pumping through your veins…

Until you’ve done it, you still don’t know what it’s like.

Image source: http://adventure.howstuffworks.com/question729.htm

I recently came across a story in the book “Superforecasting: The Art and Science of Prediction” that articulates this suspicion well. A fire chief led his team in tackling what they thought was a routine kitchen fire. While the team was in the kitchen, however, a “vague feeling of unease” came over the chief, and he ordered everyone to evacuate the house. Just as they fled out the door, the entire kitchen floor collapsed behind them; the fire was in the basement, not the kitchen as they had thought. The chief’s moment of raw intuition saved them. When asked about this snap judgment, he said he “just knew.”

It’s a phrase that we often hear — accounting for moments that we can’t explain. “I had a feeling,” “I just knew.” Some might attribute these moments to hindsight bias — where we treat our random successes as triumphs of rational thought rather than pure luck — but I’d like to think there’s something more to it.

Just like the firefighter, we at times cannot articulate the processes behind our decisions. While recent developments in artificial intelligence reflect our mastery of objective information, I question whether AI will ever be able to account for those experiences that cannot be reduced to data points.

If computers merely process the details of the explainable, physical world, how can they ever match our experiential abilities when they will never truly understand the sweetness of a first kiss, the thrill of taking a leap of faith, or the pure intuition behind some of our most important decisions? I could try to explain this further, but, ironically, the point is that I can’t; you can only gain the ability to understand these experiences by, well, experiencing them.


We can all agree that Mary learns something new when she experiences color for the first time. We can all agree that, at times, our raw intuitions drive us — inexplicably — in the right direction. I believe these moments, rather than fancy computations applied to big data, are the core of human intelligence.

As we head into a world where AI becomes increasingly prevalent, I want to call into question the claims made about its capabilities. Given this enormous gap, how close could computers actually get to matching our intelligence?

There is something to be said for the way our intuitions drive us, and for the way an experience can irrevocably change us. Human intelligence is far more complex than algorithms and data. I want to challenge the AI community not to limit its understanding of intelligence to what is convenient. Driving a car, holding a conversation, and winning Jeopardy! are all impressive skills, but they are not our greatest abilities. I don’t want anyone to forget that.
