Is Artificial Intelligence really that impressive?

Taalink
3 min read · Apr 7, 2017


Artificial Intelligence has been around for decades, but it has gained a lot of traction and public attention in the past few years. Machine learning, deep learning, data mining: all these buzzwords keep popping up in the media. Is this technology really that impressive? Well, it depends on how familiar you are with computers. In this series, we will explore a range of subjects related to AI and its applications. This introduction talks about computers in general, and why some problems are much harder to solve than others.

Fiction, be it books or Hollywood, has spoiled the public’s expectations for Artificial Intelligence. From HAL 9000 in 2001: A Space Odyssey to the Matrix and Skynet, there are countless examples of amazing AI, sometimes good, but usually gone bad. We have seen computers that can talk, learn about anything, and control spaceships or entire civilizations. Our imagination has set the bar so high that we are hardly impressed by new technology.

Why, then, is the Siri on your phone, or the Alexa in your home, so limited? Where are all the talking robots? Why is it so easy for Google to search millions of pages all over the world in a second, but so hard for a computer to tell what is in a picture?

The answer is that computers are much less intelligent than we give them credit for. They can really only do two things: record simple values with precision, and perform repetitive operations very fast. That is all there is to it. They only seem smart because our brains are not very good at doing these things. How many values can you remember from the last spreadsheet you worked on? To how many decimal places? Can you sum all the values of a column in your head? That sounds like a hard task, and if we knew someone who could do it, we would probably call them a genius. Yet for your computer it is a piece of cake: everything is neatly stored in that .xls file, and with the click of a button, you can calculate whatever you want.
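To make this concrete, here is a minimal sketch in Python of what that button click boils down to. The numbers are invented for illustration; the point is that the machine recalls every value exactly and adds them up in an instant.

```python
# A "spreadsheet column" of values with many decimal places.
# These numbers are made up for illustration.
column = [1041.375, 98.0625, 7733.21875, 560.5, 12.015625]

# The computer remembers every value exactly and sums them instantly,
# one repetitive addition at a time.
total = 0.0
for value in column:
    total += value

print(total)  # 9445.171875
```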

Programming a computer to solve a problem is a bit like how schools used to teach arithmetic to young children. You give them pencil and paper, show them symbols that represent numbers, and show them how to manipulate those symbols on the paper. They have no idea what they are doing, but they will try to follow the instructions. You can teach them how a sum works, and then explain that multiplication is a series of sums. Children will eventually learn, understand, and be able to do more complex operations on their own. A computer is like a kid who cannot understand, talk, or see, but who can read and write super fast, following all your instructions with precision. After learning that a power is a series of multiplications, and a multiplication is a series of sums, it will still solve 2³ with 2+2+2+2.
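Here is a minimal sketch of that idea in Python (the function names are mine, not anything standard): a power built only from multiplications, and a multiplication built only from sums.

```python
def multiply(a, b):
    # Multiplication as a series of sums: a * b = a + a + ... (b times).
    result = 0
    for _ in range(b):
        result += a
    return result

def power(base, exponent):
    # A power as a series of multiplications by the base.
    result = 1
    for _ in range(exponent):
        result = multiply(base, result)
    return result

print(power(2, 3))  # 8, reached in the final step by computing 2 + 2 + 2 + 2
```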

Things that can be described as a series of simple steps (reading a number, summing it with another, writing it somewhere, checking if it is bigger than another number) can be solved by computers. A sequence of steps to solve a problem is called an algorithm, and it is the bread and butter of computer science. A good thing about algorithms is that they can be reused, improved, and combined, the same way that once we learn multiplication it becomes easier to learn about powers; the sketch below shows the idea. When someone develops an algorithm to solve a problem, it becomes a tool for everyone else to use. This lets programmers keep focusing on bigger, more complex problems. To display an image from Facebook on your phone, there are decades of algorithms running on its processor. From its point of view, however, it is still doing 2+2+2+2 over and over.
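Here is one possible sketch, again in Python with made-up names, of how algorithms built from those primitive steps get reused and combined:

```python
def total(numbers):
    # An algorithm from the primitive steps: read each number
    # and sum it with a running total.
    t = 0
    for n in numbers:
        t += n
    return t

def largest(numbers):
    # Another one: read each number and check if it is bigger
    # than the one written down so far.
    biggest = numbers[0]
    for n in numbers[1:]:
        if n > biggest:
            biggest = n
    return biggest

def average(numbers):
    # Algorithms combine: reuse total() as a building block.
    return total(numbers) / len(numbers)

grades = [7, 9, 10, 6, 8]  # made-up data for illustration
print(largest(grades))     # 10
print(average(grades))     # 8.0
```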

Some types of problems require solutions that cannot easily be written as an algorithm. How do you recognize someone’s face? How do you tell if a joke is funny? Our brains are much better than computers at these types of problems, but we are still not exactly sure how they do it. This is where Artificial Intelligence comes into play. As the name suggests, it tries to model human intelligence: how we can think and improvise to solve complex tasks without someone telling us all the steps.

In the next post, we will talk about computational complexity and why deep learning is such a big deal.

Victor Schetinger, PhD in Computer Science, Co-founder & CTO of Taalink
