Artificial Intelligence explained to your grandparents

Published in STATION F · Oct 16, 2018 · 11 min read

This is a guest post coordinated by Fanny Deltruel, who works for the Microsoft AI Factory based at STATION F. Do you know grandparents who want to learn more about other tech topics? Follow our Medium tag “Techxplanation”.

Now that you’re an expert on Blockchain, let’s talk about another trendy topic: Artificial Intelligence! You hear the term almost every day, and yet you’re not sure what it means. When your grandparents think about it, they probably picture a little talking robot, if not the Terminator… But far from all the fantasy behind these buzzwords, there is a reality: AI is everywhere today, even if you don’t see it.

Defined as “intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals,” AI is often understood as machines displaying cognitive functions: machines acting like humans, reading, talking, learning. Keep calm and follow the entrepreneurs from the Microsoft AI Factory on the road to understanding what’s really behind the AI scene.

Are robots going to replace people?

That’s usually the first question people ask when AI comes up. And it’s typically the kind of question the startup Zelros has to answer, as they use AI to facilitate insurers’ everyday work.

“Just as the invention of the printing press by Gutenberg in the 15th century made scribes disappear, and the invention of the computer in the 20th century made typists (and many other jobs) disappear, we now fear AI replacing some of the functions humans fulfill today.

Will artificial intelligence (AI) one day replace all humans?

Before answering that, let’s look at the types of AI available today.

  • Narrow AI. These AIs are very specialized, limited intelligences. Yes, they can learn to solve one task better than any human expert (e.g. detecting tumors in X-ray images), but they have no ability to do anything else.
  • General AI. An AI that not only has logical intelligence, but can also reason, use intuition, and learn new tasks by itself. General AI does not exist today, and it is not clear whether it will ever exist or reach full consciousness.

Because General AI is so uncertain, many experts believe that AI will not replace people. Instead, they believe AI will “augment” humans, and that AI plus human intelligence will be stronger than either alone”.

Where are we in terms of artificial consciousness?

OK, so humans are safe for now. But for how long? DCBrain, which develops an AI solution to help optimize fluid networks, gives us an overview of the latest news in the field.

“Artificial intelligence is developing at a huge pace, leading us to imagine many sci-fi-like scenarios for the future of humanity and society! The most mind-blowing one? The singularity.

The singularity is the idea that machine intelligence will develop a consciousness that surpasses our own. Although we are still very far from this level of development, the theory rests on the exponential pace of technological advancement (Moore’s Law) and the current state of machine learning (ML). Indeed, machines already exceed human learning capacities on specific tasks, and, according to this theory, they will sooner or later exceed humans at everything.

A key breakthrough for artificial intelligence has been the development of reinforcement learning, a technique that lets a system cope with unexpected situations by learning from trial and error, for instance by playing games against itself. Its power was well illustrated in 2015, when the computer program AlphaGo became the first machine in history to beat a professional human player at Go, one of the most complex board games ever devised (a tiny sketch of the idea follows this quote).

Pretty scary, no? Not for Ray Kurzweil, who sees the singularity as a real opportunity for immortality! From a more skeptical perspective, figures like Jean-Gabriel Ganascia consider the idea a myth propagated by major companies. The debate still divides experts and stakeholders alike (especially consumers) and deserves a broad public discussion. Whatever the outcome, we have no doubt about the continuous improvement of artificial intelligence and its rapid integration into our everyday lives!”
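AlphaGo itself combines deep neural networks with tree search, but the core idea of reinforcement learning, learning from rewards by trial and error rather than from labeled examples, fits in a few lines of Python. Here is a minimal, hypothetical sketch (every name and number is invented for illustration): an agent in a five-square corridor learns to walk to the goal purely from the reward it receives.

```python
import random

# Toy world: states 0..4 in a corridor; reaching state 4 pays reward 1.
N_STATES, ACTIONS = 5, [-1, +1]   # actions: step left or right
GOAL = N_STATES - 1

# Q-table: the agent's estimate of future reward for each (state, action).
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Explore sometimes; otherwise pick the best-known action.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), GOAL)
        reward = 1.0 if s_next == GOAL else 0.0
        # Core update: nudge Q toward reward + discounted future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy should now always move right, toward the goal.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```

No one programmed the walk itself: the agent only ever saw rewards, which is exactly how game-playing systems improve by playing against themselves.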

Is an AI able to explain why it has refused my credit?

OK, so machines are not yet all that human. But can they make rational decisions? Craft ai, a specialist in cognitive automation powered by explainable AI, opens the black box for us.

“Enterprises make “business” decisions continuously: “should we accept this client’s credit application?”, “should a technician be sent to this building?”, “should we recommend this product to this visitor?”. While these decisions used to be made by white-collar workers, they are now assisted by machines: machines built yesterday using simple rules, and more and more using AI. Because each business decision has an impact on your everyday life as well as on the enterprise’s, the decision itself is not enough. You and the enterprise need to be able to understand it, trust it, and assess its compliance with enterprise policies and safety norms, as well as its bias on gender, origin or favorite Fanta flavor. Explainability is a must-have, everywhere!

While AI can be used to create better decision processes by taking past decision data and outcomes into account, losing explainability is not an option. That’s why Explainable AI (XAI) is stealing all the attention! Popular machine learning methods such as deep learning behave, more or less, as black boxes, while traditional fully explainable models are less powerful. Too many AIs couldn’t tell you why they refused your credit. To fix that, lots of researchers and startups are working on XAI, aiming to create powerful explainable models, or to explain black-box models, in order to foster trust and collaboration between humans and AI-driven systems”.
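To make the “fully explainable model” idea concrete, here is a minimal sketch using scikit-learn’s decision tree; the credit data is invented for illustration. A shallow tree’s prediction can be read back as a chain of if/else rules, exactly the kind of justification a refused applicant could be given.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicants: [income (k€), existing debt (k€), years employed]
X = [[30, 20, 1], [55, 5, 8], [42, 30, 3], [70, 10, 12],
     [25, 15, 0], [60, 40, 6], [48, 8, 5], [35, 25, 2]]
y = [0, 1, 0, 1, 0, 0, 1, 0]   # 1 = credit granted, 0 = refused

# A shallow decision tree is a classic "fully explainable" model.
model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Print the learned rules: a human-readable justification of every decision.
print(export_text(model, feature_names=["income", "debt", "years_employed"]))
```

A deep neural network would likely score better on real data, but it could not hand you this printout; that trade-off is exactly what XAI research tries to dissolve.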

Will a robot judge me in court someday?

Whew, that’s good to know: there are real people trying to make algorithms explainable and transparent enough that we can actually understand where a machine’s decision comes from; we only need to interpret their message. But could a machine weigh a case the way a judge must weigh so many records before handing down a sentence? The question is at the core of the work of Case Law Analytics, a startup that quantifies legal risk.

“Artificial intelligence is penetrating every field, but until recently the legal domain was still resisting the invasion of machines. With the availability of more and more judicial data and the recent, enormous advances in algorithms, people fear that soon a machine will be able to “understand” a situation and deliver a perfectly substantiated and fair “judgment”. Such a prospect is frightening: just as we are more shocked by a fatal motor vehicle accident caused by an autonomous vehicle than by a hundred caused by human error, we prefer to deal with a real judge, even if it means accepting judicial errors, rather than with a machine with which we cannot connect emotionally.

Scientists are currently far from being able to build artificial intelligence that can understand all the nuances and unique characteristics of a case before a court. But even if this were to happen, it is to be hoped that society will resist the temptation to rely on “robot judges”. At this stage, what can be (and has been) done is to use artificial intelligence to model the judicial decision-making process and reproduce the variety of possible outcomes of a given case in a given court. Legal AI can help lawyers better leverage information and make better decisions”.

AI in our factories: is there a ghost in the machine?

Banking, insurance, energy networks and now justice: AI is everywhere and we don’t even notice it. What about in the industrial world? Is there still a captain on board there?

Tellmeplus knows the situation well, as they focus on AI and big data for manufacturing, bringing predictive intelligence to the edge and inside industrial assets to increase operational performance.

“AI is a trend across nearly all industries today, and yet industrial robots have been part of factories since the 1960s. So what’s different now? AI is playing a significant role in factories: facilitating industrial automation, reducing operational costs and defects, optimizing process effectiveness, ensuring 24/7 production and guaranteeing equipment uptime.

How does AI make a difference? There is no ghost or magic spirit inside the machines! AI simply uses data to better organize factories: to make the supply chain, design team, production line and quality control more coordinated, better able to provide personalized products or services to customers, and to make sure the product or service is always available and delivered as expected.

This is now possible because factories, and all the machines inside them, are equipped with connected IoT devices (sensors, etc.) that collect data and act upon it. This data can be centralized and processed in the cloud, in IoT platforms like Microsoft Azure IoT… or directly inside the machines, at the “edge”, which makes the system more reliable and responsive. This is when AI truly becomes “the ghost in the machine”!”
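As a concrete illustration of intelligence “at the edge”, here is a minimal, hypothetical sketch of a machine-side check: the device flags abnormal sensor readings locally, without waiting for the cloud. All readings and thresholds below are invented for illustration.

```python
from statistics import mean, stdev

window = []                         # recent temperature readings (°C)

def on_sensor_reading(value: float) -> None:
    """Runs directly on the machine: flag readings far outside recent normal."""
    if len(window) >= 10:
        m, s = mean(window), stdev(window)
        # React locally, even if the network to the cloud is down.
        if s > 0 and abs(value - m) > 3 * s:
            print(f"Anomaly: {value:.1f}°C (recent mean {m:.1f}°C)")
    window.append(value)
    if len(window) > 50:
        window.pop(0)               # keep only a sliding window of readings

# Ten normal readings, then one the device should flag on its own:
for reading in [70.1, 70.3, 69.8, 70.0, 70.2,
                70.1, 69.9, 70.0, 70.2, 70.1, 95.0]:
    on_sensor_reading(reading)
```

Real deployments would use trained predictive models rather than a simple statistical threshold, but the division of labor is the same: heavy learning in the cloud, fast decisions at the edge.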

How does a machine learn?

We are now reassured: machines are not going to replace people straight away, as long as humans keep control of them. But practically, how does a machine really “learn”? AB Tasty explains the technique at the heart of AI: machine learning (ML).

“ML is a way of programming computers that is gaining strong traction. The classical way is algorithmic: a formal description of elementary steps for processing data. This has a strong limitation: a lot of tasks are not of this kind. Think about identifying a hateful comment in a forum, or recognizing objects in a picture.

Such tasks can’t be expressed as elementary formal subtasks. ML is here to solve that. The main idea is that the task is no longer described as elementary actions to perform, but as a dataset of examples: the input data to process, and the expected result, the output. For the forum moderation task, an input is the text of a post, and the output is the post’s status (hateful or not). After collecting such input/output pairs (usually done manually), one builds a model: a mathematical function able to process the input data and provide the expected answer.

At first the model is “empty”, unable to perform any task, like a student before attending their first course. We gradually show the examples to the machine and tune the mathematical model until it produces the expected outputs (see the little sketch after this quote). This is how machines learn. The nice thing is that this tuning process is an old engineering and mathematical problem with a lot of ready-to-use solutions.

Source: Mediapart

This paradigm is great for solving problems that cannot be solved any other way, but it has drawbacks. The main one is that the task’s objective is never made explicit. The model may learn unexpected ways to solve the problem, leading to locally wrong decisions. For instance, in the forum moderation task, if a lot of the hateful comments in the training dataset come from a specific country, the model may simply classify any comment from that country as hateful, regardless of its content.

That’s why, even if machines learn automatically, there will always be a need for a human teacher to catch and deal with such issues. It is worth it, since ML can deal with problems the classical approach can’t handle”.
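Here is the little sketch promised above: the forum-moderation idea in a few lines of Python, with a tiny invented dataset of input/output pairs, an “empty” model, and a tuning loop (a classic perceptron update) that adjusts it example by example. This is a toy illustration of the paradigm, not how AB Tasty’s production systems are built.

```python
# Invented input/output pairs: the "description" of the moderation task.
examples = [
    ("you are an idiot",      1),   # 1 = hateful
    ("great post thank you",  0),   # 0 = fine
    ("idiot go away",         1),
    ("thank you for sharing", 0),
    ("what an idiot comment", 1),
    ("great insights here",   0),
]

weights = {}        # one weight per word: the "empty" model knows nothing

def predict(text: str) -> int:
    score = sum(weights.get(w, 0.0) for w in text.split())
    return 1 if score > 0 else 0

# Tuning: each time the model is wrong, nudge the weights of the words
# it just saw toward the expected answer (the perceptron update rule).
for _ in range(10):                   # show the examples several times
    for text, label in examples:
        error = label - predict(text)
        for w in text.split():
            weights[w] = weights.get(w, 0.0) + 0.1 * error

print(predict("you idiot"))           # -> 1: learned, not programmed
print(predict("thank you"))           # -> 0
```

Notice the drawback described above, too: the model only knows the words it was shown, so a skewed dataset would skew its weights just as surely as a biased country label would.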

Are neural networks like a brain?

So machines are like students: a “teacher” gives them information, a problem to solve and a way to find a solution. At first they usually fail to reproduce the process; then they learn the lesson and avoid the mistake the next time. It’s as if they had a brain, no? Let’s ask Scortex, an expert in quality control through computer vision.

“You may have been told that computers have their own “brain”, just like human beings, and that we call it a “neural network”. Computer neural networks are inspired by biomimetics: you can think of an image’s pixels as the photoreceptor cells of the eyes, and of the neural network as synapses in the brain that may or may not activate. How did we get the idea of mimicking the brain’s neurons to understand images? Everything started when researchers correlated the excitation of a cat’s neurons with the orientation and thickness of the light patterns the cat was seeing. Knowing this, any camera (like your phone’s) can serve as the eyes. Now let’s dive into those “neural networks”: are they really comparable to a brain?

“Neural networks are a family of very interesting ML algorithms with the property of being able to approximate any mathematical function as closely as we want. You may compare this to a student’s capacity to learn all of their teacher’s knowledge (which doesn’t mean the student will 😅). So, like the brain, neural networks seem very versatile and can adapt to a very vast range of tasks.

Under the hood, neural networks are made of neurons which have similarities with biological neurons. See below:

Source: Quora

On top: a representation of a biological neuron. At the bottom: a representation of an artificial neuron in a neural network (sketched in code after this quote).

Beyond this structure, the comparison between biology and computers quickly breaks down. The tasks currently doable by neural networks are still very specific, and the way artificial neurons adapt to data is far less efficient than the human brain. Moreover, multitasking, learning new tasks, and capitalizing on already-known tasks to quickly learn new ones are still open questions in the research community.

In conclusion, neural networks are still quite different from brains, just as planes are different from the birds that inspired them, but they’re still very useful!”
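For the curious, here is a minimal sketch of the artificial neuron pictured above: a weighted sum of inputs passed through an activation function. The weights and inputs are invented for illustration; in a real network, “learning” consists of adjusting thousands or millions of such weights.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, like signals arriving at a cell body.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Activation: squash the sum into a 0..1 "firing" level (sigmoid).
    return 1 / (1 + math.exp(-total))

# Three inputs (say, three pixel brightnesses) feeding one neuron:
print(neuron([0.9, 0.1, 0.5], weights=[1.2, -0.8, 0.4], bias=-0.5))
```

A network stacks many of these in layers, the output of one layer feeding the next; that stacking, not any single neuron, is where the brain-like flexibility comes from.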

What is Natural Language Processing?

We said earlier that AI reproduces human cognitive functions. But how do machines actually understand, read and talk? The startup Prevision.io, which created an automated machine learning platform, has the answer: the secret is Natural Language Processing (NLP).

“NLP deals with the natural forms in which humans express themselves: speech, handwriting, sign language. Toddlers start understanding words by associating them with objects like a tree, a cat, or the sun. Thanks to artificial neural networks, it’s not that different for machines trying to understand human language.

Instead of teaching a baby the word for each object, we feed the algorithm texts labeled with objects, intents or sentiments. The algorithm is then able to extract information from new texts or documents and pass it to the machine, which can then reason about it. For example, if you enter “What is the status of my phone delivery?”, the AI will be able to extract {intent=”track delivery”, object=”phone”, sentiment=”neutral”}. The system has structured the data and can now manipulate it for other uses, here querying a database and answering you.

Yet, this doesn’t mean the machine fully understands a language and all its nuances. For example, it is still painful to have a seamless conversation with virtual assistants or bots, mostly because they still can’t thoroughly understand human intent… yet!”
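As an illustration of that extraction step, here is a toy sketch where simple keyword rules stand in for the trained models a real NLP system would use; every vocabulary below is invented. The point is the shape of the transformation: unstructured text in, structured data out.

```python
# Invented cue words per intent; a real system would learn these from data.
INTENTS = {"track delivery": ["status", "where is", "delivery"],
           "cancel order":   ["cancel", "refund"]}
OBJECTS = ["phone", "laptop", "headphones"]
NEGATIVE_WORDS = ["angry", "terrible", "awful"]

def parse(utterance: str) -> dict:
    text = utterance.lower()
    intent = next((name for name, cues in INTENTS.items()
                   if any(cue in text for cue in cues)), "unknown")
    obj = next((o for o in OBJECTS if o in text), None)
    sentiment = "negative" if any(w in text for w in NEGATIVE_WORDS) else "neutral"
    # Structured output the rest of the system can act on (query a
    # database, draft an answer, route to a human...).
    return {"intent": intent, "object": obj, "sentiment": sentiment}

print(parse("What is the status of my phone delivery?"))
# -> {'intent': 'track delivery', 'object': 'phone', 'sentiment': 'neutral'}
```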

Wow, that was a lot of information! But AI now seems less magical, thanks to the startups of the Microsoft AI Factory: basically, humans train machines with algorithms that compute what humans expect from them. Machines are not yet independent agents ready to take control of the world, and humans are still necessary to make the right decision in the end, now with more insights from data processing! We also feel a bit relieved to know that the people working in this field are questioning the future impact of AI on society. So the Terminator can wait to destroy our world!

STATION F

We are the world’s biggest startup campus. Opened in 2017 in Paris. An initiative by @Xavier75. Director: @RoxanneVarza