So what is artificial intelligence?

There are generally three different classifications of artificial intelligence:

  1. Artificial Narrow Intelligence (ANI): AI that specializes in one narrow task (e.g. beating the world champion at chess).
  2. Artificial General Intelligence (AGI): AI that can perform any intellectual task a human can. Once an AI performs at a human level across the board, we consider it AGI.
  3. Artificial Superintelligence (ASI): AI that operates far above human level across all tasks (i.e. beyond the capabilities of any single human).

Actually, ANI has been around for some time. Ever wonder how those spam filters work in your email? Yep, that’s ANI. Here are some of my favorite ANI programs: Google Translate, IBM’s Watson, that cool feature on Amazon that suggests products “recommended for you,” self-driving cars and, yes, our beloved Google’s RankBrain.
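To make the spam-filter example concrete, here is a toy naive Bayes classifier: it learns word frequencies from a handful of labeled messages and scores new ones. This is only an illustrative sketch with made-up training data, not the algorithm any real mail provider uses.

```python
from collections import Counter
import math

# Made-up training data for illustration only.
spam = ["win money now", "free money offer", "win a free prize"]
ham = ["meeting at noon", "lunch at the office", "project notes attached"]

def word_counts(messages):
    return Counter(w for m in messages for w in m.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(message, counts, total):
    # Laplace smoothing so unseen words don't zero out the score.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in message.split())

def classify(message):
    spam_score = log_likelihood(message, spam_counts, spam_total)
    ham_score = log_likelihood(message, ham_counts, ham_total)
    return "spam" if spam_score > ham_score else "ham"

print(classify("free money"))          # spam
print(classify("notes from meeting"))  # ham
```

The point is how narrow this is: the same program that flags spam competently has no ability to translate a sentence or drive a car.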

Within ANI, there are many different approaches. As Pedro Domingos clearly lays out in his book The Master Algorithm, data scientists trying to achieve the perfect AI can be grouped into five “tribes” today:

  • Symbolists
  • Connectionists
  • Evolutionaries
  • Bayesians
  • Analogizers

Connectionists believe that all our knowledge is encoded in the connections between neurons in our brain. They claim this strategy is capable of learning anything from raw data, and is therefore also capable of ultimately automating all knowledge discovery.
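The connectionist idea of "learning from raw data" can be shown at its smallest scale: a single artificial neuron (a perceptron) that learns the logical AND function purely from labeled examples, with no rules programmed in. The data and learning rate here are illustrative assumptions.

```python
# One-neuron "connectionist" sketch: knowledge ends up encoded in
# the connection weights, adjusted example by example.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # connection weights
b = 0.0         # bias
lr = 0.1        # learning rate (an arbitrary choice)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                  # a few passes over the data
    for x, target in data:
        error = target - predict(x)  # nudge weights toward the target
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

Modern deep learning stacks millions of such units in layers, but the principle is the same: the "knowledge" lives in the learned connection strengths, not in hand-written rules.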

What this chart shows is that when humans try to predict the future, they consistently underestimate the pace of change. That's because they extrapolate from the recent past (the left side of the graph) instead of accounting for the acceleration ahead (the right side).

However, the reality is that human progress happens at a faster and faster rate as time goes on. Ray Kurzweil calls this the Law of Accelerating Returns. The reasoning behind his original theory is that more advanced societies can progress faster than less advanced ones precisely because they are more advanced: each generation builds its next tools with the previous, better generation of tools. The same logic applies to artificial intelligence and the growth rate we are seeing now with advanced technology.
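A tiny numeric sketch makes the underestimation argument concrete. Compare a straight-line forecast (the intuitive extrapolation) with compounding growth (here, capability doubling each period). The growth rates are illustrative assumptions, not real measurements.

```python
# Linear extrapolation vs. compounding ("accelerating returns").
periods = 10
linear = [1 + t for t in range(periods + 1)]        # steady, additive gains
compounding = [2 ** t for t in range(periods + 1)]  # doubling each period

for t in (0, 5, 10):
    print(t, linear[t], compounding[t])

# After 10 periods the linear forecast predicts 11x while compounding
# yields 1024x, so the intuitive forecast is off by roughly 93x.
print(compounding[10] / linear[10])
```

Early on the two curves look almost identical, which is exactly why forecasts anchored on the recent past feel reasonable right up until they fail.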

Here is another shocking revelation: At some point, the processing power of an affordable computer will surpass not only that of a single human, but that of all humans combined.

In fact, it now appears that we will be able to achieve Artificial General Intelligence (AGI) sometime around 2025. Technology is clearly expanding at a faster and faster pace and, by many accounts, most of us will be caught off guard.

But we may be blindsided by how quickly this “weak” intelligence turns into something we have no idea how to deal with.

But what happens when we apply the same Law of Accelerating Returns to artificial intelligence? Tim Urban walks us through the thought experiment:

“…so as A.I. zooms upward in intelligence toward us, we’ll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity — Nick Bostrom uses the term ‘the village idiot’ — we’ll be like, ‘Oh wow, it’s like a dumb human. Cute!’ The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range — so just after hitting village idiot level and being declared to be AGI, it’ll suddenly be smarter than Einstein and we won’t know what hit us.”

Excerpts from TechCrunch