AI for Non-AI. A Glossary of Terms for Humans.

Machine vision, chatbots, deep learning. What does it all mean? Our Digital Producer, Pierre Poulard, gives us the lowdown in Part 2 of ISSUES: Human After All. So dive in and learn key terms and how they can be applied to a business or brand. In human speak.


Artificial Intelligence

Artificial intelligence is any task, effort or action carried out by a machine or computer that could be associated with human intelligence. These tasks can span planning, learning, reasoning and, in some cases, creativity. Even though this definition is broad, one key characteristic differentiates artificial intelligence from traditional software: AI is self-adjusting and learning, while in software every action is defined and programmed by a human. Artificial intelligence is composed of many layers, making it more or less “intelligent” when applied.

AI can be divided into two categories:

Narrow AI — Narrow AI is the current state of AI: it can achieve only the task for which it has been designed. Current AI is not able to define or develop its own actions, and for that reason it can’t act as a human in any way.

Artificial general intelligence — Artificial general intelligence is the type of AI seen in Ex Machina or Westworld, where it’s so advanced that it can develop actions that have not been pre-programmed.


Artificial intelligence can be applied in many ways, from customer service solutions and fraud detection to personal assistants.

Fraud detection: Stripe created a product called Radar that detects fraudulent transactions and has reduced fraud by 25%. Using machine learning, the AI has been trained on millions of data points across Stripe’s network to recognise the level of risk a transaction presents and block it if fraud is detected.

Personal assistant: At CES 2018 (Consumer Electronics Show), Jaguar Land Rover presented an AI personal assistant that will be embedded into their new cars. It will recognise who has entered the car and then personalise the driving experience, from the configuration of the car (mirror positions and heating levels) to driving assistance (such as predictive fuel refill notifications and profiled GPS locations based on the situation).


Machine Learning

Machine learning is the very first layer of artificial intelligence, and it’s what makes AI different from traditional software. It’s the ability of an AI to learn from a set of data and to carry out the tasks it has been designed for more precisely and effectively. Thanks to machine learning, an AI can be trained to recognise a form or a pattern, and it will then perform the task it has been designed for autonomously. The complexity in machine learning lies in the training and the quality of the data set used: if the data set isn’t perfectly clean, the AI will learn the imperfections in the data and won’t perform as well as it could.

What does it mean?

Machine learning can be compared to a kid: it learns from what it sees and is told. If you want your kid (or your AI) to recognise a dog amongst other animals, you will have to show them pictures of dogs. If a cat appears within that data, the kid (or the AI) will end up mixing up dogs and cats.
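To put the analogy in (toy) code: here is a minimal sketch of a classifier learning from labelled examples. Every animal, number and label below is invented purely for illustration, and real machine learning is far more sophisticated than this, but the moral is the same: one mislabelled example in the training data and the “kid” starts getting it wrong.

```python
from collections import Counter

# Each training example: (weight_kg, ear_length_cm) plus the label the "kid" is told.
# The animals and numbers are invented purely for illustration.
training_data = [
    ((30, 10), "dog"), ((25, 9), "dog"), ((28, 11), "dog"),
    ((4, 6), "cat"), ((5, 7), "cat"),
    ((27, 10), "cat"),  # a mislabelled dog: the "dirty" data the model will learn
]
clean_data = training_data[:-1]  # the same set without the mislabelled example

def classify(animal, data, k=1):
    """Label a new animal from the majority label of its k nearest training examples."""
    dist = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    nearest = sorted(data, key=lambda ex: dist(animal, ex[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(classify((27.5, 10), clean_data))     # -> "dog"
print(classify((27.5, 10), training_data))  # -> "cat": the bad label misleads it
```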


Machine learning can be used in the same way as AI, since it’s an inextricable component of it. Here is a specific example of machine learning:

— Smart Compose: During Google I/O 2018, Google released an AI/ML-powered feature in Gmail named Smart Compose. This Gmail feature learns from your writing habits and from the email’s context (who you are writing to, what the subject line says, etc.) and helps you write the email by giving you tailored suggestions.
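A drastically simplified sketch of the idea behind a feature like Smart Compose (not Google’s actual system; the example emails are invented): learn which word you usually write after a given phrase, and suggest it.

```python
from collections import Counter

# Past emails the feature has "learned" from (invented examples).
past_sentences = [
    "thanks for your help",
    "thanks for your time",
    "thanks for your help",
    "see you next week",
]

def suggest_next(prefix, history):
    """Suggest the most frequent next word seen after this prefix in past writing."""
    prefix_words = prefix.lower().split()
    candidates = Counter()
    for sentence in history:
        words = sentence.split()
        for i in range(len(words) - len(prefix_words)):
            if words[i:i + len(prefix_words)] == prefix_words:
                candidates[words[i + len(prefix_words)]] += 1
    return candidates.most_common(1)[0][0] if candidates else None

print(suggest_next("thanks for your", past_sentences))  # -> "help" (seen twice)
```

The real feature models far richer context (recipient, subject, whole sentences), but the principle is the same: suggestions are tailored because they come from your own writing habits.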


Deep Learning

Deep learning and neural networks are the very last layers of AI, and they are complementary. A neural network for an AI can be compared to a human brain, where neurones in different parts of the brain are interconnected to activate a specific action that requires many different tasks. It’s the same mechanism in an AI neural network: each layer of the intelligence is dedicated to a specific function, which allows the AI to learn a task from the aggregation of many different factors. This is known as deep learning: the ability of an AI to learn beyond the obvious and make more intuitive, new decisions.

What does it mean?

If we take our previous example of recognising a dog, each part of the neural network will be dedicated to recognising a different characteristic of the dog: the colour of the fur, the size, the weight, the height, etc. Every piece of information is then gathered, compiled and scored to define exactly what breed the dog is. This imitates human recognition and cognition.
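In (toy) code, that gather-compile-score step might look like this. The breeds, features and scoring formula are all invented for illustration; a real network learns its own features and weights rather than being hand-written.

```python
# Invented reference profiles: each breed described by simple features that
# different parts of the network might each detect (fur colour, height, weight).
breeds = {
    "labrador":  {"fur": "yellow", "height": 57, "weight": 30},
    "chihuahua": {"fur": "brown",  "height": 20, "weight": 2},
    "husky":     {"fur": "grey",   "height": 55, "weight": 22},
}

def score(observed, profile):
    """Compile each feature into one score: higher means a closer match."""
    s = 1.0 if observed["fur"] == profile["fur"] else 0.0
    s += 1.0 / (1.0 + abs(observed["height"] - profile["height"]))
    s += 1.0 / (1.0 + abs(observed["weight"] - profile["weight"]))
    return s

observed = {"fur": "grey", "height": 54, "weight": 23}
best = max(breeds, key=lambda b: score(observed, breeds[b]))
print(best)  # -> "husky": the aggregated feature scores point to one breed
```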


Deep learning and neural networks can be applied in many ways, from self-driving cars to image recognition and prediction.

— Prediction: Two students from Harvard designed an AI powered by neural networks and deep learning to perform “viscoelastic computations”, the computations used to predict earthquakes, improving calculation speed by 55,000% and allowing authorities to act as soon as possible in the event of an upcoming earthquake. See more here

— Self-driving cars: Tesla’s Autopilot is powered by neural networks and deep learning to perform the same tasks as the driver.


Natural Language Processing

Natural language processing (NLP) is the component of an AI that enables it to recognise, process and produce human language. NLP draws on many different disciplines to handle human language the way a human does: contrary to text-based interaction, voice interaction can vary widely, and the AI needs to be able to handle interactions in any context. NLP relies heavily on deep learning and neural networks to recognise patterns, categorise and contextualise them, translate them into text and carry out a voice-based interaction.

What does it mean?

In other words, NLP is what makes personal assistants able to interact with humans through voice.
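A toy version of the recognise-categorise-extract steps described above, using hand-written patterns (the intent names and phrasings are invented; a real NLP system learns these patterns from data instead of having them written by hand):

```python
import re

# Invented intents: each maps a name to a pattern that recognises it in text
# and pulls out the useful detail (the city, the number of minutes, ...).
intents = {
    "weather": re.compile(r"weather in (?P<city>\w+)", re.IGNORECASE),
    "timer":   re.compile(r"timer for (?P<minutes>\d+) minutes", re.IGNORECASE),
}

def parse(utterance):
    """Categorise an utterance as an intent and extract its details."""
    for intent, pattern in intents.items():
        match = pattern.search(utterance)
        if match:
            return intent, match.groupdict()
    return "unknown", {}

print(parse("What's the weather in London today?"))  # -> ('weather', {'city': 'London'})
print(parse("Set a timer for 10 minutes"))           # -> ('timer', {'minutes': '10'})
```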


One of the main applications of NLP is the Google Assistant, which can interact with many humans through different channels. See “Virtual Assistant” below for details.

Image Credit: Olay Skin Advisor

Machine Vision

Machine vision is the component of an AI that enables it to extract data from an image, process it and perform tasks based on that data. Thanks to machine vision, an AI is able to see and recognise things and make decisions based on the extracted data. Applications range from a pass-or-fail test for quality control, to facial recognition for CCTV cameras, to assisting surgeons during an operation.

What does it mean?

In other words, machine vision is what makes an AI able to see and recognise things: objects, shapes, photographs, videos and people.
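Here is the pass-or-fail quality-control idea from above as a toy sketch. The “image” is just a grid of brightness values (0 = black, 255 = white), and the thresholds are invented; real machine vision systems learn what a defect looks like rather than using a fixed rule like this.

```python
def passes_inspection(image, dark_threshold=50, max_dark_ratio=0.2):
    """Fail a part if too large a share of its pixels are dark (a possible defect)."""
    pixels = [p for row in image for p in row]
    dark = sum(1 for p in pixels if p < dark_threshold)
    return dark / len(pixels) <= max_dark_ratio

good_part = [[200, 210], [190, 205]]    # uniformly bright: nothing suspicious
flawed_part = [[200, 10], [15, 205]]    # half the pixels are dark: flagged

print(passes_inspection(good_part))    # -> True
print(passes_inspection(flawed_part))  # -> False
```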


Machine vision has many applications. It can be applied to retail and customer service, as in these examples:

Skin Advisor by Olay

The skincare brand Olay released a web tool, powered by machine vision and deep learning, that analyses a customer’s skin and recommends the right product for them. 9 out of 10 customers who used the tool purchased a product, and 88% still recommended the product 4 weeks after purchase. See more here


Smile to Pay at KFC

In China, KFC embedded Alipay’s Smile to Pay technology, powered by machine vision, into their kiosks, allowing customers to pay by looking at the kiosk. The kiosk matches the face to the customer’s account and the credits are deducted from their Alipay account.

Image Credit: Tech Emergence

Virtual Assistant

An intelligent virtual assistant is an AI designed to perform tasks and services for individuals in a human way. A virtual assistant can have many different types of interface:

— Text based: Virtual assistants using text-based interactions are usually referred to as ‘chatbots’.

— Voice based: A virtual assistant can also be powered by voice, such as smart speakers (Amazon Alexa, Google Home, etc.) or the assistants embedded in smartphones (Siri, Cortana).

— Hybrid: A virtual assistant can also combine voice- and text-based interactions. They can be used for a wide range of services, from weather forecasts to calendar management and the performance of automated tasks.

What does it mean?

In other words, an intelligent personal assistant can perform any task you wish, as long as it has been designed for it, and you can either write or speak to it.


Many intelligent personal assistants are currently available on the market, but the latest example is the Google Assistant:

— Google Assistant: During Google I/O 2018, Google released an update to their personal assistant that is now able to interact with another human to perform tasks such as booking a restaurant for you. In other words, the Google Assistant is no longer limited to a one-to-one interaction with its owner, but can now ‘talk’ to others to perform its task.



Chatbot

A chatbot is a conversational tool that can be powered by rules (scripted), by artificial intelligence (intelligent), or by both. It allows you to interact in different ways (voice, text, drawings, etc.) for different purposes (informational, transactional, conversational). When powered by AI, a chatbot can simulate the way a human would interact in a regular conversation by embedding different components of an AI, from machine learning and deep learning to natural language processing for voice-based chatbots and machine vision for e-commerce chatbots. A chatbot can be used for a wide range of purposes, from personal assistance to customer service interfaces and information acquisition.

What does it mean?

In other words, a chatbot interacts as a human would for a specific purpose or range of purposes. A chatbot can’t interact on a matter it hasn’t been designed for.
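A minimal sketch of a scripted (“rules”) chatbot of the kind described above. The rules and replies are invented; note the fallback when it’s asked something it was never designed for:

```python
# Invented rules: keyword -> scripted reply.
rules = {
    "opening hours": "We are open 9am-6pm, Monday to Friday.",
    "returns": "You can return any item within 30 days.",
}

def reply(message):
    """Answer from the script if a known keyword appears, otherwise fall back."""
    text = message.lower()
    for keyword, answer in rules.items():
        if keyword in text:
            return answer
    return "Sorry, I can't help with that. Try asking about opening hours or returns."

print(reply("What are your opening hours?"))
print(reply("Do you like jazz?"))  # outside its design: the bot falls back
```

An AI-powered chatbot replaces the keyword lookup with the machine learning and NLP components described earlier, so it can cope with phrasings its designers never scripted.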


There are many applications for chatbots. Here are a few examples for personal care assistance and customer service:

Alder Play

The agency Ustwo developed an app for Alder Hey Children’s Hospital, for kids on long hospital stays. The app helps these kids settle into their new medical environment through a combination of augmented reality and an AI chatbot, allowing them to interact with a virtual avatar within the hospital. The kids can access educational content about procedures, play around with the avatar, etc.

Interested in more? Download ISSUES Part 1: Human After All for an in-depth look at artificial intelligence and branding.