Inside the AI symphony: 4 movements to watch closely

Artificial Intelligence (AI) has recently become a hotly debated topic. Some call it “cognitive computing”, others “machine intelligence”. It seems difficult to pin down what AI really is.

This is partly because AI is not a single technology. It is a broad field made up of many disciplines: psychology (cognitive modelling), philosophy (philosophy of mind), and computer science.

To arrive at a definition, let’s think about objectives. The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. From a business point of view, one of the goals is to create machines able to help (some of us would say “replace”) humans in their day-to-day tasks.

First, it means being able to perform tasks. Second, it implies that machines need to understand us. Finally, the ultimate step would be for machines to learn these capabilities automatically instead of having each one explicitly programmed end-to-end. To reach that goal, we need to build cognitive functions into our machines.

It is amazing how much progress the field of AI has achieved during the last 15 years. We have been dreaming for a long time of self-driving cars, voice assistants and image recognition. Today, we can see “AI things” in our day-to-day life. This progress has made AI one of the most popular topics of conversation, and more and more companies see it as something that needs to be part of their long-term strategies.

Non-profit organisations and foundations have also been created to improve and promote AI. Here is a non-exhaustive list:

OpenAI, the Allen Institute for Artificial Intelligence, the Partnership on AI and Facebook AI Research all aim to promote Artificial Intelligence:
  • OpenAI is a non-profit artificial intelligence research company funded by Elon Musk, Sam Altman, Peter Thiel and Jessica Livingston, among others;
  • Partnership on AI is a technology industry consortium focused on establishing best practices for artificial intelligence systems and on educating the public about AI;
  • Allen Institute for Artificial Intelligence (AI²) is a research institute funded by the Microsoft co-founder Paul Allen to achieve scientific breakthroughs by constructing AI systems with reasoning, learning, and reading capabilities;
  • Facebook Artificial Intelligence Research (FAIR) seeks to understand and develop systems with human level intelligence by advancing the longer-term academic problems surrounding AI.

Moreover, governments want to dig deeper into the implications of Artificial Intelligence in our society.

AI definitely has a big impact on the entire economy, so it becomes crucial to understand this reality through data and facts. A good level of understanding matters when building or using AI systems, because it is far too easy to wildly extrapolate from published research results, tech press announcements and speculative commentary.

In this post, we will dig into four areas through which AI is shaping the future of our digital world. I will focus on what they are, why they matter and how they are used today.

1. Deep Reinforcement Learning

In the standard reinforcement-learning picture, the Agent is our AI agent acting on the Environment. After each action, the Agent receives a reward (positive or negative) and the new state of the environment, which it uses to choose its next action.

Deep Reinforcement Learning is a type of machine learning. It is very close to the way we humans learn: by trial and error.

In a typical setup, an AI agent observes a digital environment and takes actions in order to maximise a long-term reward. After many tries, it accumulates enough experience to succeed in the environment.

It is like when we learn to walk: we tried several times to put one foot in front of the other, and only after many failures and observations of our environment did we succeed in walking.
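This trial-and-error loop can be sketched in a few lines. Below is a tiny, hypothetical corridor environment and tabular Q-learning; the lookup table stands in for the deep network so the example stays self-contained, and all names and numbers are illustrative:

```python
import random

# Hypothetical toy environment: a 1-D corridor of 5 cells.
# The agent starts in cell 0 and earns +1 only when it reaches
# cell 4; every other step costs -0.01.
class Corridor:
    def __init__(self, length=5):
        self.length = length
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):  # action: 0 = left, 1 = right
        move = 1 if action == 1 else -1
        self.state = max(0, min(self.length - 1, self.state + move))
        done = self.state == self.length - 1
        reward = 1.0 if done else -0.01
        return self.state, reward, done

random.seed(0)
env = Corridor()
q = {(s, a): 0.0 for s in range(env.length) for a in (0, 1)}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    s, done = env.reset(), False
    while not done:
        # epsilon-greedy: mostly exploit the table, sometimes explore
        if random.random() < epsilon:
            a = random.choice((0, 1))
        else:
            a = max((0, 1), key=lambda act: q[(s, act)])
        s2, r, done = env.step(a)
        # Q-learning update: nudge Q(s,a) toward reward + discounted best future value
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
        s = s2

# After training, "right" should look better than "left" in every non-goal cell.
print(all(q[(s, 1)] > q[(s, 0)] for s in range(env.length - 1)))
```

In deep reinforcement learning, the table `q` is replaced by a neural network, but the observe-act-reward loop is identical.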

This approach became popular at the end of 2013 with Google DeepMind and its work on Atari games. More recently, they managed to build an agent able to play the ancient board game Go. In 2015, AlphaGo became the first program to beat a professional human Go player.

Google has also announced a 40% reduction in the energy used for cooling its data centers, achieved by applying deep learning to the task of optimising their energy consumption.

OpenAI has released Gym, a toolkit for developing and comparing reinforcement learning algorithms. It provides a simple interface for interacting with the environments of many classic video games (including a good collection of Atari games).

Deep Reinforcement Learning is now starting to be used in several industries.

Concrete applications

- Building skillful players in video games - An AI agent able to learn how to play CartPole;

- Describing photos - The AI agent generates sentence descriptions from images;

- Translation - Deep Learning has totally rewritten Google’s approach to machine translation;

- Saving Whales - Deep learning is helping researchers save the North Atlantic right whale by making it easier to monitor their health;

- Estimating solar savings potential - Project Sunroof is a solar calculator created by Google that helps you map your roof’s solar savings potential.

2. Generative Adversarial Networks (GANs)

Generative Adversarial Networks are a combination of two separate entities that train against each other with competing goals. A real-world analogy is the duality between counterfeiters trying to produce the most convincing fake dollar bills possible and police officers developing ever more sophisticated ways to catch them.

In the neural network world, this idea was originally proposed in 2014 by Ian Goodfellow when he was a student at the University of Montreal (he has since worked at Google Brain and OpenAI).

GANs solve a problem by training two separate networks with competitive goals:

- one network produces candidate answers (the generator);

- the other network distinguishes between real and generated answers (the adversary, or discriminator).

The concept is to train these networks competitively so that, after some time, neither network can make further progress against the other: the generator becomes so effective that the discriminator can no longer distinguish real samples from synthetic ones.

Adversarial training can be thought of as a game where the generator must iteratively learn how to create images from noise such that the discriminator can no longer distinguish generated images from real ones.

It is important to note that with GANs we don’t need to hand-program evaluation rules. The discriminator figures them out on its own.
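To make the game concrete, here is a deliberately tiny sketch of the adversarial loop: a one-parameter “generator” that outputs a single value tries to fool a logistic-regression discriminator on 1-D data. This is not a real GAN architecture; every number and the non-saturating generator loss are illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data lives at 4.0; the generator starts its fakes at 0.0.
real = 4.0
theta = 0.0        # generator parameter: it simply outputs theta
w, b = 0.0, 0.0    # discriminator: D(x) = sigmoid(w*x + b)
lr_d, lr_g = 0.1, 0.02

for _ in range(3000):
    # --- discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * theta + b)
    w -= lr_d * (-(1 - d_real) * real + d_fake * theta)
    b -= lr_d * (-(1 - d_real) + d_fake)
    # --- generator step: push D(fake) toward 1 (non-saturating loss) ---
    d_fake = sigmoid(w * theta + b)
    theta += lr_g * (1 - d_fake) * w

print("final generator output:", round(theta, 2))  # drifts toward the real data
```

Notice that the generator never sees the value 4.0 directly; it only receives the gradient of the discriminator’s opinion, which is exactly the point of adversarial training.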

Concrete applications

- Restoring colors in black-and-white photos and videos - Pix2Pix-style models can also turn your drawings into stunning photos;

- Pixel restoration - What if you could increase the resolution of your photos, like in the labs of the CSI TV show?

- Transferring style from famous paintings - project turns any photo into an artwork;

- Voice generation - Google’s WaveNet architecture directly generates raw audio waveforms and shows excellent results in text-to-speech.

3. Networks with memory

Most neural networks today operate as if they were brand new to the world, with no memory of past experience.

In order for AI systems to generalise in diverse real-world environments just as we do, they must be able to continually learn new tasks and remember how to perform all of them in the future. Traditional neural networks are typically incapable of such sequential task learning without forgetting.

When we train an agent to solve a task A, we adjust the weights inside the neural network. When the same network is then trained to solve a task B, those weights are overwritten.

This shortcoming is called catastrophic forgetting.
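Catastrophic forgetting is easy to demonstrate even with the smallest possible “network”: a single trainable weight. In this toy illustration (all numbers are made up for the example), we fit task A, then task B, and watch the task-A error come back:

```python
# A one-weight linear model y = w*x, trained with plain SGD on squared error,
# first on task A (target y = 2x), then on task B (target y = -2x).
def train(w, slope, steps=200, lr=0.1):
    for _ in range(steps):
        x = 1.0                    # a single fixed training input is enough here
        error = w * x - slope * x  # prediction minus target
        w -= lr * error * x        # gradient step on the squared error
    return w

def task_a_loss(w):
    return (w * 1.0 - 2.0 * 1.0) ** 2  # squared error on task A

w = 0.0
w = train(w, slope=2.0)       # learn task A
loss_after_a = task_a_loss(w)
w = train(w, slope=-2.0)      # learn task B with the SAME weight
loss_after_b = task_a_loss(w)

print(loss_after_a < 1e-6, loss_after_b > 1.0)  # → True True: task A was forgotten
```

The same mechanism plays out in full-size networks: nothing in plain gradient descent protects the weights that encoded task A.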

In 1994, a paper by Yoshua Bengio et al. on learning long-term dependencies defined three basic requirements for a recurrent neural network:

- That the system be able to store information for an arbitrary duration;

- That the system be resistant to noise (i.e. fluctuations of the inputs that are random or irrelevant to predicting a correct output);

- That the system parameters be trainable (in reasonable time).

Several specific neural network architectures have varying degrees of memory; for example, long short-term memory (LSTM) networks, which are capable of processing and predicting time series.
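A minimal sketch of a single LSTM cell step (scalar states and illustrative, untrained weights) shows how the gates serve those three requirements: the cell state `c` stores information for an arbitrary duration, the forget and input gates filter out irrelevant inputs, and every weight is an ordinary trainable parameter:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One LSTM cell step. x is the input, h the hidden state, c the memory cell;
# W holds the (illustrative) weights for each gate.
def lstm_step(x, h, c, W):
    f = sigmoid(W["f_x"] * x + W["f_h"] * h + W["f_b"])    # forget gate
    i = sigmoid(W["i_x"] * x + W["i_h"] * h + W["i_b"])    # input gate
    o = sigmoid(W["o_x"] * x + W["o_h"] * h + W["o_b"])    # output gate
    g = math.tanh(W["g_x"] * x + W["g_h"] * h + W["g_b"])  # candidate value
    c = f * c + i * g      # keep part of the old memory, write new information
    h = o * math.tanh(c)   # expose a filtered view of the memory
    return h, c

W = {k: 0.5 for k in ("f_x", "f_h", "f_b", "i_x", "i_h", "i_b",
                      "o_x", "o_h", "o_b", "g_x", "g_h", "g_b")}
h, c = 0.0, 0.0
for x in (1.0, 0.0, 0.0, 0.0):  # one informative input, then silence
    h, c = lstm_step(x, h, c, W)
print(c != 0.0, -1.0 < h < 1.0)  # the cell still carries a trace of the first input
```

In a real LSTM the states are vectors and the weights are learned, but the gating structure is exactly this.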

Recently, Google DeepMind published the Differentiable Neural Computer (DNC), which combines the learning and pattern-recognition strengths of deep neural networks with a read-write external memory.

They demonstrated how a DNC can be trained to navigate a variety of rapid transit systems, and then apply what it learned to get around on the London Underground.

A neural network without memory would typically have to learn about each different transit system from scratch.

Concrete applications

- Robotic arm control tasks - Recent trends in robot arm control have seen a shift towards end-to-end solutions, using deep reinforcement learning to learn a controller directly from raw sensor data;

- Time series prediction - An AI agent able to predict the number of international airline passengers from historical data;

- Natural language understanding - An AI Agent able to understand Natural Language;

- Video commentaries - An AI agent teaching computers how to give cricket commentary;

- Automatic writing - An AI agent able to generate Wikipedia articles, math papers or computer code from raw data;

- Predicting human behavior - Nowadays, the Google Street View AI scanner can predict how people will vote;

- Predicting earthquakes - The ability to forecast temblors would be a tectonic shift in seismology.

4. High-Performance Hardware

An AI agent built with a neural network needs to process enormous amounts of data in order to be trained. Hardware has become one of the cornerstones of making that training faster.

In the late 2000s, Graphics Processing Units (GPUs) made by NVIDIA emerged as a very good solution. Originally designed to give gamers rich visual experiences, these chips turned out to be 20 to 50 times more efficient than traditional Central Processing Units (CPUs) for deep-learning computations.

Unlike CPUs, which compute in a largely sequential fashion, GPUs offer a massively parallel architecture that can handle many tasks concurrently. They are also heavily optimised for matrix multiplications.

This makes training on GPUs much faster than on CPUs.
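A rough way to see why matrix multiplication parallelises so well: each output entry depends only on one row and one column of the inputs, so all entries can in principle be computed at the same time. A GPU assigns thousands of such independent computations to its cores; this plain-Python sketch just makes the independence explicit on the CPU:

```python
# C[i][j] depends only on row i of A and column j of B, so every call to
# cell() below is independent of the others and could run on its own core.
def cell(A, B, i, j):
    return sum(A[i][k] * B[k][j] for k in range(len(B)))

def matmul(A, B):
    return [[cell(A, B, i, j) for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(matmul(A, B))  # → [[19, 22], [43, 50]]
```

A neural network layer is essentially one big matrix multiplication per batch, which is why this kind of parallelism translates directly into faster training.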

This gold rush started after the publication of AlexNet in 2012, one of the first deep neural networks trained at scale on GPUs.

Today, NVIDIA continues to lead the way, ahead of Intel, Qualcomm, AMD and more recently Google.

In 2016, NVIDIA announced that the quarterly revenue for its data center segment had more than doubled year over year, to $151 million. Its Chief Financial Officer told investors that “the vast majority of the growth comes from deep learning by far.”

GPUs were not purpose-built for training neural networks, so they suffer from memory-bandwidth and data-throughput issues.

This has opened the playing field for start-ups, research projects and even large companies like Google and Apple to design new “AI chips”.

This is really exciting: if we can train agents faster and deploy AI models in more fields, we can bring more people into AI.

Concrete Applications

- Faster training of an AI agent - Apple is the latest company to create a dedicated AI processing chip to speed up AI algorithms and save battery life on its devices;

- Always-listening IoT devices - You can use Alexa to control your smart home with your voice;

- Self-driving cars - With a Tesla, all you will need to do is get in and tell your car where to go. If you say nothing, the car will look at your calendar and take you to the assumed destination, or home if nothing is on the calendar.


It’s amazing to see how many Artificial Intelligence applications surround us. They are already an important part of our daily life. What is even more exciting is trying to guess what AI will be able to do next, and being part of it.

Ten years ago, when our GPS gave us the best route based on traffic, we called it Artificial Intelligence. Now we reserve the term for a car driving a typical home-to-work route without any assistance from its human driver.

We are living through the Fourth Industrial Revolution, and Artificial Intelligence is one of the main actors of this change.

AI is moving very fast, partly due to new algorithms and new hardware.

Although it is difficult to foresee what Artificial Intelligence will do in the future, I find this field very exciting, and I intend to keep being part of it!

May the code be with You!

Gaëtan JUVIN