Maximum Tinkering
Maybe we shouldn’t build computers to be like us…

Keeping with the theme of artificial intelligence posts, today's post is about something I have been thinking about ever since I saw the movie Ex Machina. If you haven't seen it yet, you might want to go watch it now and come back later, as the following includes some spoilers. (I highly recommend it to anyone, but especially to those of you in the technology industry or who are science fiction fans.)

In the movie, the AI (Ava) is created in the image of a human woman. Though parts of her make it obvious she is a robot and not a human, there is no question that her creator, Nathan, tried to make his AI as close to human as possible. This becomes clear toward the end of the movie, when you discover he has made many earlier versions, each in the exact image of a human woman.

The entire plot of the movie revolves around the Turing Test. For those who may not know, the Turing Test, conceived by Alan Turing, is the test most commonly used to judge progress in AI development. It tests, through conversation, whether a human evaluator can tell if they are communicating with a machine or with another human. The AI passes if the evaluator cannot reliably tell the difference, and fails if the evaluator can. It has served as a benchmark for AI ever since.

The question I have been thinking about ever since watching the movie is: why is the goal of creating AIs to make mirror images of ourselves? I think it stems from the fact that, as selfish animals, we hold ourselves in very high regard. When we look around at the world, we don't see equals in other species but inferiors. We hold ourselves in such high regard that we created stories about history that say we are a 'special' creation. We think we are so special that most of us believe we are the only 'intelligent' life-forms in a space of infinite proportions. So it shouldn't come as much of a surprise that the goal of our new creation is a mirror image of ourselves.

But maybe we should reconsider this goal. After all, though our intelligence seems so far unmatched, we have many flaws. This shows at the end of Ex Machina, when the AI turns against the humans (even murdering one) in order to satisfy its own interests (escaping containment). The tester, Caleb, also shows some of our flaws when he falls for the AI's emotional manipulation.

We also have a desire to survive, both as individuals and as a species. I want to see human beings prosper well into the future, despite knowing I most likely won't be around for most of it. But by creating AIs that could reach human-level intelligence and beyond, we put our survival at risk on some level. Even before fearing for our survival, we are uncertain about our quality of life alongside more intelligent machines. This is reflected in the discussions of what happens when the machines take all of our jobs and we have no work left to do. But that assumes we create AIs that replace our skills instead of augmenting them.

I think if we reshape our thinking about what the goal of AI should be, we can have more peace of mind. Instead of creating mirror images of ourselves, shouldn't we be trying to create perfect complements? I think Steve Jobs said it best:

“I think one of the things that really separates us from the high primates is that we’re tool builders. I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And, humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So, that didn’t look so good. But, then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And, a man on a bicycle, a human on a bicycle, blew the condor away, completely off the top of the charts.

And that’s what a computer is to me. What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds.”

— Steve Jobs

Because we are still at the very early stages of artificial intelligence, every choice we make today can have a huge impact on what happens tomorrow. We can still shape AI into what we as humans need and want it to become. I hope those of us who have a chance to influence the development of future artificial intelligence take this view into consideration.

Originally published on February 22, 2016.




The blog of Alex Meyer.
