Artificial intelligence is being hailed as the next digital frontier — and for good reasons.
The ability to think and act on the information at hand used to be the sole domain of human beings, but computers’ capabilities have grown to such a degree that they may soon join us. And they’re coming fast.
So fast, in fact, that an estimated 47% of jobs in the US will be automated within the next two decades, in a transformation dubbed the Fourth Industrial Revolution. The job destruction is already happening: earlier this year, J.P. Morgan used a program to do in seconds what took lawyers 360,000 hours.
Knowing a little bit about AI may not only help you identify which jobs are safe from automation, or industries that are poised for disruption, but it could also help you reconcile the conceptual differences between programs that can fight parking tickets for you and the machines that may cause the end of the world as we know it.
In this article I’m going to cover two broad categories of AI: what they are, why you should be keeping an eye on them, and how they fit into talk of AI in general.
What is Artificial Intelligence?
Artificial intelligence is a computer system capable of performing tasks and making decisions that normally require human intelligence.
AI can be broken down into two broad categories: Narrow AI and General AI.
Narrow AI is capable of completing one task. General AI is a completely different beast, with one program capable of completing many different tasks.
Narrow Artificial Intelligence
Narrow AIs have existed for a while. There’s a very high chance you interact with narrow AI every day. Examples include:
- Recommendations (Netflix, Amazon)
- Voice processing (Siri, Alexa or Google Home)
- Dynamic pricing (think Uber’s Surge prices)
- Spam filters (arguably the greatest AI ever made)
- Facial recognition (Facebook can identify your friends in photos)
- Chess bots
- Roomba robot vacuum cleaner
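To make “one task” concrete, here is a minimal sketch of the kind of narrow AI behind a spam filter: a naive Bayes classifier. The training messages and word choices below are invented for illustration; a real filter would learn from millions of labelled emails, but the principle is the same.

```python
# Toy naive Bayes spam filter: a narrow AI that does exactly one task,
# scoring how "spammy" a message looks. Training data is hypothetical.
import math
from collections import Counter

spam = ["win a free prize now", "free money click now", "claim your free prize"]
ham = ["are we still on for lunch", "see you at the meeting", "notes from the call"]

def word_counts(messages):
    # Count how often each word appears across a set of messages.
    return Counter(w for m in messages for w in m.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_score(message, counts, n_msgs):
    # Laplace-smoothed log-probability of the message under one class,
    # so unseen words don't zero out the whole score.
    total_words = sum(counts.values())
    score = math.log(n_msgs / (len(spam) + len(ham)))  # class prior
    for w in message.split():
        score += math.log((counts[w] + 1) / (total_words + len(vocab)))
    return score

def is_spam(message):
    # Classify by whichever class assigns the message a higher score.
    return log_score(message, spam_counts, len(spam)) > \
           log_score(message, ham_counts, len(ham))
```

The program is useful at its one job and utterly incapable of anything else, which is the defining trait of narrow AI.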
Probably none of these makes you worry that we’ll soon be subservient to robot overlords. In fact, every time a new use case is developed, people say, “Oh, that doesn’t really count, it’s obviously not ‘intelligent’.” They’re right, depending on how you define intelligent.
That doesn’t detract from their usefulness though.
It’s the advances in narrow AI that are really driving the job automation revolution. Software combined with global connectivity means a program can be copied and deployed almost without limit, automating jobs around the world at a scale impossible just a decade or two ago.
Artificial General Intelligence
Now we’re getting into more sci-fi territory. Artificial General Intelligence (AGI) is AI capable of doing any task a human being can, and then some. Think JARVIS, the digital assistant Tony Stark relies on to run things in the Iron Man films.
While experts think AGI may take decades to come to fruition, that hasn’t stopped technologists from tolling the doomsday bell.
Not everyone views AGI the same way. On one side, we have Bill Gates saying “we shouldn’t panic about AI”. On the other, the likes of Stephen Hawking and Elon Musk argue that AGI poses a fundamental risk to the existence of human civilisation, with Musk saying competition for AI superiority is the “most likely cause of WW3”.
One core fear experts have about AGI is the alignment problem: the challenge of building machines whose goals align with our values.
This issue is explained well in Nick Bostrom’s book Superintelligence, with a devilish example of an AGI: the Paperclip Maximiser.
The Paperclip Maximiser is a conceptual AGI whose sole goal is to maximise the number of paperclips it has in its possession. It devotes all of its energy and power to increasing the number of paperclips in existence and resists everything that tries to thwart it. In accordance with its aims, it may decide to improve itself in order to find new ways of generating paperclips. It may discover that human beings are incompatible with its aims, so it decides to get rid of us.
Slowly but surely, the Paperclip Maximiser would transform the Earth into a paperclip factory, before setting its sights on other planets and star systems.
It’s a quirky example of how an AGI with a benign function may end up operating in a way that isn’t in accordance with our aims and values. It also raises the question: how do we program our values into an AI when we might not know what they are ourselves?
What’s the immediate future of AI?
We’re at an exciting moment in time: we may soon be joined, in the cognitive niche we occupy in the world, by objects of our own creation.
The confluence of modern technologies may not only liberate many of us from menial work, but could also help us develop a better understanding of ourselves.