General vs narrow artificial intelligence

Anders Arpteg
Peltarion
Sep 26, 2018 · 9 min read

We have all heard about some of the amazing human vs machine achievements of recent years, such as the Watson Jeopardy challenge and DeepMind’s AlphaGo win. One of the most common (and most frustrating) questions is whether AI even exists today. Some claim that machines are not at all intelligent compared to humans, and they are both right and wrong at the same time.

We are still far from having machines with general levels of intelligence similar to humans. However, for narrow tasks, machines are winning the battle over humans in an increasing number of domains. This article aims to clarify whether machines really are intelligent today, and more specifically what the difference is between general and narrow artificial intelligence.

What is general and narrow AI?

Most AI applications today are built with a specific purpose in mind, e.g. playing chess, forecasting the weather, or predicting future sales numbers. This type of AI application is often referred to as “narrow AI”, also known as “special-purpose” or “weak AI”. For this type of application, humans have defined what data to use, what algorithm to apply, and what the model should look like. A machine is then built and trained that can automatically learn from data how to perform a specific task, but only that task and nothing else.
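To make the division of labor concrete, here is a minimal sketch of a narrow AI model in the sense above: a human chooses the data, the algorithm (ordinary least squares), and the model shape (a straight line), and the machine fits that one task. The sales figures are entirely made up for illustration.

```python
# A toy "narrow AI": fit a line to hypothetical monthly sales and forecast
# the next month. The model is good at exactly this one task and nothing else.

# months 1..6 with made-up sales numbers trending upward
months = [1, 2, 3, 4, 5, 6]
sales = [100, 120, 125, 140, 150, 165]

n = len(months)
mean_x = sum(months) / n
mean_y = sum(sales) / n

# ordinary least squares: slope = cov(x, y) / var(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(months, sales)) / \
        sum((x - mean_x) ** 2 for x in months)
intercept = mean_y - slope * mean_x

def forecast(month):
    """The trained 'narrow' model: a single learned line fit."""
    return intercept + slope * month

print(round(forecast(7), 1))  # forecast for month 7
```

Everything outside the fitted line (what counts as input, what is being predicted, how to measure error) was decided by a human; the machine only found the two numbers that minimize the squared error.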

For many of these specific tasks, machines can easily outperform humans and thus have ‘superhuman’ abilities, but only for that small specific task and not for general tasks. Some famous examples include DeepBlue defeating Garry Kasparov at chess in 1997, IBM Watson winning over humans at Jeopardy in 2011, and AlphaGo winning over a professional Go player in 2015.

A number of general-purpose applications of AI exist today, but they are still far from human levels of general intelligence. One famous example is when Google DeepMind built a machine that learned to play Atari games. This may not sound like much, but it was a huge milestone: such an important scientific achievement that it resulted in a research paper in Nature, one of the most cited scientific journals.

The big achievement was not learning to play a single Atari game, but rather the way the machine learned to play. It received no information about how each game worked; it only had the pixel input from the screen, the current score, and the set of valid actions. Click the video link below to see how, after only around four hours, it learned to play the game at an amazing level of performance.

https://youtu.be/V1eYniJ0Rnk

In this way, a single machine could learn how to play many games in a general way. This is still just in the domain of playing computer games, but it is one small step closer to achieving general intelligence in machines.
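The learning-from-score-alone idea above comes from reinforcement learning. The following is a minimal tabular Q-learning sketch of that family of methods, not DeepMind's actual DQN (which replaces the table with a deep network over raw pixels); the corridor task and all constants are hypothetical.

```python
import random

# Toy task: an agent in a 1-D corridor of 5 states must learn, from reward
# alone, to walk right to the goal (state 4). It is never told the rules.

N_STATES = 5          # states 0..4, goal is state 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# the Q-table: expected future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: reward 1.0 only on reaching the goal state."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q toward reward + discounted best future value
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# the learned greedy policy for each non-goal state
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

After training, the greedy policy steps right from every state, purely from trial, error, and score, which is the same principle, vastly scaled up, behind the Atari result.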

How is narrow AI being used?

Narrow AI is already a part of our daily lives, e.g. in search engines, voice recognition, and language translation. We believe that the AI we have today can help us solve many of the world’s most challenging problems by augmenting what humans do well, allowing us to do more, and better. This is why we at Peltarion believe that AI technology, especially the latest and greatest type of AI, should be usable and affordable for all companies and organizations, not only the big and powerful technology companies.

Narrow AI can be used in many fields such as medicine, financial trading, retail, marketing, and even creative applications where intelligent machines work together with humans to, for example, produce music.

Another example from Peltarion is using narrow AI to help radiologists detect and segment brain tumors. This is a costly and tedious task for humans, who manually have to go through hundreds of brain scan images for a single patient. Machines can learn to perform this task automatically and in that way help radiologists build treatment plans for cancer patients more efficiently. See the picture below for an example of a predicted brain tumor.

For more creative applications of AI, novel innovations such as the Google Magenta project and the WaveNet model enable machines to interpret audio data, distinguish between speakers, and even generate speech from text in the voice of an arbitrary person.

What has happened within AI in recent years?

The term AI was originally coined by John McCarthy in 1956 as “the science and engineering of making intelligent machines”. Back in those days, it was believed that intelligent behavior could be built by simply defining rules and programming a computer manually. This may work for simple tasks, but for more advanced tasks it is too difficult for humans to manually define all the necessary rules.

In subsequent years, scientists started to experiment with algorithms that could learn intelligent behavior automatically by using data as input. Around 1980, these techniques started to gain traction, partly thanks to the work of Hinton and others. This significantly improved the level of intelligence in machines, but it was still a very complicated and time-consuming process for humans to figure out how to extract and transform data to make the algorithm learn to make the right predictions.

In recent years, advances in neural network techniques (often referred to as “deep learning”) have significantly improved the accuracy of predictions. Deep learning can not only learn how to make predictions, but can also automatically learn how to transform and represent the data before making the predictions.
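A tiny sketch of that "learned representation" idea: a two-layer network trained by gradient descent on XOR, a task no single linear model can solve. The hidden layer learns a transformation of the inputs that makes the problem separable; no human hand-crafted that transformation. This is a toy illustration only, with all sizes and learning rates chosen arbitrarily, not a production deep-learning setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # forward pass: h is the learned representation of the raw inputs
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the squared error, chain rule by hand
    grad_p = (p - y) * p * (1 - p)
    grad_h = grad_p @ W2.T * (1 - h ** 2)
    W2 -= 0.5 * h.T @ grad_p;  b2 -= 0.5 * grad_p.sum(0)
    W1 -= 0.5 * X.T @ grad_h;  b1 -= 0.5 * grad_h.sum(0)

print(np.round(p.ravel(), 2))  # trained predictions for the four inputs
```

Earlier machine-learning practice would have required a human to invent the intermediate features by hand; here the network discovers them from data, which is the core shift deep learning brought.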

The recent success of deep learning is partly due to new algorithms, but also due to significant improvements in computational power from using graphics processing units (GPUs) instead of CPUs, and a large increase in the amounts of data that we store and process.

Building AI business solutions is still complex and expensive and requires a multitude of tools and specialized hardware. We have started to see improvements in tooling used to build and train these models, such as TensorFlow and PyTorch, which is extremely welcome. However, to avoid an increasing digital divide, there is a great need for simpler ways to access and make use of these advanced AI techniques. We expect significant improvements in coming years that will further democratize and industrialize the process of building and training AI models. New tooling and platforms will give access to state-of-the-art AI techniques for all, not just the large technology giants.

What can we learn from history?

In the 1950s and 1960s, people believed that intelligent machines would be easy to build and general AI would be “solved” in a matter of years. A famous example was the 1954 Georgetown experiment, which was able to partially translate around 60 sentences from Russian into English. The researchers believed that general machine translation would be solved within a few years. Obviously, this was not the case, and it turned out to be much harder than previously imagined.

Underestimating the difficulties of building intelligent machines led to many research proposals and projects being abandoned during so-called “AI winters”. There were two major AI winters, one in the late 1970s and another in the late 1980s. At times, many people believed it would be impossible to ever build intelligent machines.

A lot has happened since then, and recent advances have brought huge improvements in many areas, including machine translation. For example, Google Neural Machine Translation, released in 2016, can now translate between more than 100 languages, and the quality of the translations has improved significantly.

Why will it be different this time?

What we are seeing today is that AI applications have started to move from academia into industry and are yielding significant improvements in both the quality of the service and the return on investment. With the widespread dissemination of smartphones, computational power, and internet connectivity, more services can be provided digitally, and the incentives to automate and improve these services are bigger than ever.

Some companies are also starting to transform their business to prepare for a so-called “AI-First” future, where AI becomes a natural component of most products and services. In an AI-First future, customers and clients will expect services to have a high level of intelligence already built in. This includes companies such as Google and Microsoft, with Microsoft saying that they are “infusing AI into everything we deliver across our computing platforms and experiences”, and IBM saying “cognitive AI will impact every decision made”.

We are also starting to see a number of nationwide initiatives where countries are joining forces to maximize the benefits of AI. According to bibliometric data such as the number of publications, China is today leading AI research quantitatively and is also spending billions of dollars to increase AI efforts nationwide. The US is second, and a number of European countries, such as France, Germany, the UK, Finland, and now also Sweden, are starting to strategically reinforce their AI initiatives.

What about the future?

With the high level of narrow intelligence we are able to produce today, the investments in both academia and industry, and the fact that AI is already being used in real-world applications, AI will have a significant impact on our society. As mentioned at the World Economic Forum in 2017:

“AI is no longer about a machine playing chess. AI is on the streets driving our cars, call centres talking to customers, drafting and reviewing legal documents with immaculate precision, it is even trading using indices derived from satellite imagery”.

It is also important to make use of AI in the most beneficial way. Satya Nadella phrased it nicely saying:

“Ultimately, humans and machines will work together — not against one another. Computers may win at games, but imagine what’s possible when human and machine work together to solve society’s greatest challenges like beating disease, ignorance, and poverty.”

However, as AI becomes a natural component of more and more products and services, the number of unintended, but more importantly, unwanted effects or behaviors will also start to increase. We believe rules need to be in place to ensure public safety, and the AI industry needs to be proactive in ensuring responsible innovation and in working together to find practical solutions to the societal and ethical implications. At the same time, it is equally important that rules do not hamper AI innovation.

An essential component of AI is the availability of data. As we have recently seen with, e.g., the Cambridge Analytica data abuse scandal, it is essential that we have systems in place that ensure data safety and people’s privacy. As Mark Zuckerberg mentioned time and time again in the Congressional hearings, the sustainable and scalable solution is to make use of AI tools to protect people and safeguard data. Once again, AI comes to our rescue to solve challenging problems.

As the supposedly Chinese saying goes, “may you live in interesting times” (usually meant ironically), and these are certainly very interesting times we are living in. Hopefully, within our lifetime, we will also start seeing machines with high levels of general intelligence working together with and empowering humans to solve our greatest challenges.

This story was originally published on peltarion.com.

Author

Anders Arpteg is the Principal Data Scientist at Peltarion and Chairman of Machine Learning Stockholm. He has over 20 years of experience working with artificial intelligence in both academia and industry. He holds a PhD from Linköping University and previously headed a research group at Spotify that used machine learning and big data to understand user experience. He now works with the latest and greatest AI techniques at Peltarion, which has the ambitious goal of making deep learning and the latest AI techniques available to all, not just the large technology giants. He is also the founder of Agent Central AB, an AI adviser for the Swedish government, a member of the European AI Alliance, founder of the Machine Learning Stockholm meet-up group, and a member of several advisory boards.
