The AI revolution: distant dream or pending disruption?

Thibaut Corallo
Published in LinDA by LINAGORA
Aug 24, 2017
Photo by h heyerlein on Unsplash

Since the emergence of computer science, artificial intelligence (AI) has fueled countless fantasies. However far machines may be from matching the independence of the human mind, there is good reason to think that AI already has applications that could overwhelm entire parts of the economy. Which fields are likely to be disrupted soon?

Available in French

In a report published by Deloitte and Oxford University in 2015[1], it is predicted that about 35% of British jobs could be automated or taken over by robots within the next two decades. Indeed, task automation, which began at the end of the 19th century, now encompasses increasingly complex work. Lately, machines have become able to perform “smart” tasks, and even to outperform human capabilities at some of them. In 1997, the program Deep Blue beat Kasparov at chess. In 2016, another computer won a game of Go against one of the world’s strongest players. In January 2017, an AI triumphed in a poker championship in which some of the world’s best players took part.

Still, machines remain far from the independent minds depicted in fiction, such as Asimov’s novels, Matrix, or 2001: A Space Odyssey. Even though computers can now execute complex tasks, make decisions based on probabilities, judge, or even learn from their mistakes, as is increasingly common, they do not possess the self-consciousness that is part of human thought in the Cartesian sense.

In 1950, the British mathematician Alan Turing published his famous article Computing Machinery and Intelligence, in which he described a test that would make it possible to determine whether a machine is actually intelligent. The experiment consists in setting up two verbal exchanges: one between a person and a machine, and one between that same person and another human. The conversations are blind, and only their content allows the first person to determine which of the two partners is the machine. Should the “tester” be unable to tell for sure which conversation partner was a computer, the machine could be considered intelligent.

Turing’s Test — Source: https://upload.wikimedia.org/wikipedia/commons/e/e4/Turing_Test_version_3.png

Today, no computer can truly pass the Turing test (except in a particularly accommodating setting), as it cannot convincingly simulate feelings, genuine experience, or an opinion it has shaped on its own. If there is an AI revolution in the next few years, it is unlikely to involve an emotive or creative machine intelligence.

However, one should not overlook some of AI’s utilitarian functionalities that may disrupt the economy. At LinDA and LINAGORA, we think that numerous companies must take account of AI’s recent evolutions when defining their digital strategies. The point of this article is to describe the latest significant changes in the field, and the disruptions they may bring about. This involves first focusing on AI’s key notions, especially the trending ones, such as machine learning, deep learning, and bots.

AI — What are we talking about?

According to the website of the French dictionary and encyclopedia Larousse, artificial intelligence may be defined as “a set of theories and techniques implemented in order to create machines capable of simulating the human mind”. The definition is broad: “simulating the human mind” may mean performing the same tasks as it does (or some of them) in a similar way, or imitating its ability to do so through intrinsically different methods, which may look similar but are not.

In general, AI comes into play when a machine arbitrates between several hypotheses and makes a choice based on various factors of different natures. AI therefore implies analyzing data (input) gathered after its initial programming, and building a reaction (output) based on that analysis.

For instance, when your smartphone identifies your workplace and your home, it is actually artificial intelligence at work: the phone has recorded your commute between two places, as well as the substantial amount of time spent at each of them, at specific hours. All of this accounts for the input, while saving those places as “workplace” and “home” is the output, namely the action that occurs in reaction to the data analysis. At the start, your phone’s programming did not “mention” either of these specific places, nor the exact time you would spend at them, but it made the phone able to measure these data and “draw conclusions” from them.
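
As a rough illustration of the kind of inference involved, here is a minimal sketch, not the actual logic of any phone operating system, that labels a user’s two most-visited places as “home” or “work” based on the hours at which visits were recorded. The data and thresholds are invented for the example.

```python
from collections import Counter

# Each visit: (place_id, hour_of_day). In a real system these would come
# from location history; here they are made up for illustration.
visits = [
    ("place_A", 9), ("place_A", 14), ("place_A", 16),   # daytime visits
    ("place_B", 22), ("place_B", 7), ("place_B", 23),   # night/morning visits
    ("place_A", 11), ("place_B", 0), ("place_A", 15),
]

# Keep the two most frequently visited places.
top_places = [p for p, _ in Counter(p for p, _ in visits).most_common(2)]

def label(place):
    hours = [h for p, h in visits if p == place]
    night_share = sum(1 for h in hours if h >= 21 or h <= 7) / len(hours)
    # Heuristic: mostly night-time presence suggests "home", otherwise "work".
    return "home" if night_share > 0.5 else "work"

for place in top_places:
    print(place, "->", label(place))
```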

In this example, the machine acts intelligently, for it uses data it has gathered itself, analyzes it, and reacts accordingly. Nevertheless, the simulation here imitates only mental processes, and more specifically reasoning. But, as science fiction reminds us, AI can go further, by reacting to language and mastering it, or by taking a human shape: a bot in the first case, a humanoid robot in the second. According to Andrew Leonard, a bot may be defined as an “autonomous software which is assumed to be intelligent, endowed with a personality and that most of the time provides a service.”[2] Bots are often designed to interact with people through language. Nowadays, bots have more and more practical uses that might soon change the relationship people have with software tools. That point is dealt with further down in this article.

Machine learning, deep learning

Today, we hear a lot about machine learning and deep learning, because those processes are currently used for advanced AI. The two terms are often employed interchangeably in spite of their conceptual difference.

Let us take the example of a fairly simple program whose function is to forecast traffic in a city. This program uses data to establish peak hours. Its initial programming includes, as categories of elements to analyze, traffic density, the day, and the time. Throughout the week, traffic jams are observed at the same hours, which enables the program to define peak hours. However, twice a week (at the weekend), traffic is much lighter. In this precise case, machine learning allows the machine to take account of its mistakes (after five “normal” days, its predictions are wrong for two days) and to set a new rule: peak hours are radically different two days per week.

As a matter of fact, machine learning is what makes it possible for a program to improve on its own by re-evaluating the importance of some of the data it is programmed to analyze. Machine learning brings AI closer to the human mind, as the program becomes able to learn from its mistakes so as not to repeat them.
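
A toy sketch of the traffic example might look like the following: the program counts, for each (day-of-week, hour) pair, how often traffic was dense, and derives per-day peak hours, so that weekend days naturally end up with a different rule. All observations are fabricated for illustration and this is not any particular traffic system’s algorithm.

```python
from collections import defaultdict

# Fabricated observations: (day_of_week, hour, congested?) with 0 = Monday.
observations = [
    (d, h, h in (8, 9, 17, 18))            # weekday rush hours
    for d in range(5) for h in range(24)
] + [
    (d, h, h in (11, 12, 16))              # different pattern on weekend days
    for d in (5, 6) for h in range(24)
]

# "Learning" step: count how often each (day, hour) was congested and
# keep the hours that were congested in the majority of observed cases.
counts = defaultdict(lambda: [0, 0])       # (day, hour) -> [congested, total]
for day, hour, congested in observations:
    counts[(day, hour)][0] += int(congested)
    counts[(day, hour)][1] += 1

def peak_hours(day):
    return sorted(h for h in range(24)
                  if counts[(day, h)][1]
                  and counts[(day, h)][0] / counts[(day, h)][1] > 0.5)

print("Monday peaks:  ", peak_hours(0))    # weekday rule
print("Saturday peaks:", peak_hours(5))    # different weekend rule
```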

Traditionally, different types of machine learning are distinguished according to how they arbitrate and how they produce outputs (a short code sketch illustrating the first two follows the list):

· Supervised learning: the AI analyzes and sorts information based on a dataset that has already been ordered and labeled. For example, one first feeds a machine 500 pictures of cats and then provides it with 100 new, unknown pictures, asking it to determine which of them actually contain cats. The traffic software described above falls under supervised learning, as it analyzes the traffic at every hour and decides which hours belong to the category “peak hour”.

· Unsupervised learning: the categories are no longer all given in advance, and the program itself takes part in defining them. One usually distinguishes:

o Clustering: the program is given a dataset and is in charge of establishing the categories itself. For instance, it receives a dataset about a population (income, property, geographic location, etc.) and is expected to outline several social classes.

o Dimensionality reduction: the program is provided with a dataset and a variable to study, and it has to discard the data categories whose explanatory power is the lowest.

· Reinforcement learning: the machine has tools to make decisions, along with tools to measure their results (“was the decision appropriate or not?”). It can then adapt on its own the way it arbitrates and makes decisions. For example, a robot able to move autonomously would improve its ability to estimate distances simply by moving around, among other things by colliding with objects it had first assumed to be distant. In that case, the robot has its own means to evaluate the accuracy of its predictions; it does not need a human to validate or invalidate its output.

· Transfer learning: it consists in training a machine on a source task so that it can perform a target task of a somewhat different, but related, nature. This offers many possibilities for developing an AI for particular goals, in the same way as an athlete who does specific exercises to get better at certain aspects of his sport. Take the example of a piece of software whose purpose is to identify the breed of the animals shown in the pictures it is given. In such a case, transfer learning could amount to first training it to sort geometric shapes in pictures (the example is of course simplified, as recognizing animals would require training on cases far more complex than mere geometric forms).
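
As a concrete illustration of the first two categories, here is a minimal sketch using scikit-learn (the library choice is ours, not the article’s): a supervised classifier trained on labeled points, then an unsupervised clustering of unlabeled ones. The data are synthetic stand-ins for the “pictures of cats” example.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# --- Supervised learning: labeled examples, then predictions on new data ---
X_cats = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))   # "cat" features
X_dogs = rng.normal(loc=[3, 3], scale=0.5, size=(50, 2))   # "dog" features
X = np.vstack([X_cats, X_dogs])
y = np.array(["cat"] * 50 + ["dog"] * 50)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([[0.2, -0.1], [2.8, 3.1]]))   # expected: ['cat' 'dog']

# --- Unsupervised learning (clustering): no labels, groups are discovered ---
unlabeled = np.vstack([X_cats, X_dogs])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(unlabeled)
print(clusters[:5], clusters[-5:])              # two distinct group ids emerge
```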

Several decision-making algorithms may come on top of these different learning techniques. An algorithm is chosen for a machine depending on the tasks it is programmed for and on the nature of the data it has to analyze. One of them, the neural network, is often associated with the most complex forms of AI. Simply put, a neural network imitates the working of the human brain by analyzing information across multiple layers, which correspond to neurons and synapses. At each level, information is processed and then passed on to another layer. Only after this complex superposition of processing stages does the output emerge. Neural networks enable very advanced reasoning, which comes with high performance requirements.

Diagram of a neural network — Source: http://cs231n.github.io/assets/nn1/neural_net2.jpeg
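
To make the layered processing more concrete, here is a minimal forward pass of a tiny feedforward network in plain numpy, loosely matching the diagram above. The weights are random and purely illustrative; a real network would be trained on data.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0, x)

# Tiny network: 3 inputs -> two hidden layers of 4 "neurons" -> 1 output.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 4)), np.zeros(4)
W3, b3 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x):
    # Each layer transforms its input and passes it on to the next one,
    # mirroring the "layers of neurons" described above.
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return h2 @ W3 + b3        # final output

x = np.array([0.5, -1.2, 3.0])
print(forward(x))
```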

“Deep learning” is another term that comes up frequently in publications dealing with AI, although definitions vary from one source to another. It is commonly accepted that deep learning is an advanced form of machine learning which makes it possible to handle complex information across numerous processing levels, while simultaneously changing the way the information is analyzed and improving throughout the process. Programs working with deep learning are able to figure out on their own which sorts of data are important. They can test correlations and even define new ones, on which future predictions will rely. Incidentally, when a neural network is complex enough, it is often considered deep learning. This concept is particularly important for big data analysis, as deep learning programs are able to categorize on their own the information amidst the millions of pieces of data they handle.

As previously stated, one of the features associated with deep learning is the AI’s ability to become better without any external modification of its code. Regarding this feature, Google’s program AlphaGo was able to refine itself by simulating games of Go against itself in order to optimize its own algorithm.

The appearance of practical applications using these advanced forms of AI is rather recent, but this is not really attributable to improvements in cognitive science or in the theory of AI. Indeed, the changes brought about by AI are bound up with two main factors:

- The enhanced computing power of all our devices. As foreseen by one of Intel’s co-founders, Gordon E. Moore, the power of our processors has grown, and keeps growing, exponentially over the years. New, powerful AI programs require considerable computing resources, and machines from the early computing era were simply not powerful enough to run them.

- The emergence of decentralized computing thanks to cloud technology, together with the spread of broadband and of devices able to connect to the Internet from anywhere. Cloud computing enables any hardware to “run” any program, or more accurately to use it, since it is actually executed remotely, no matter how complex it is. A device using cloud computing, a smartphone for instance, just has to transfer an input to a more powerful server that does the computing and sends the output back. With cloud computing, any device can therefore have access to the smartest AI. In other words, a 7-ounce smartphone can currently tap into the same computing power as a 55-pound server.
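
The input/output round trip described above can be sketched in a few lines: the device sends its input to a hypothetical remote endpoint that runs the heavy model and returns the output. The URL and the payload format are invented for the example and do not correspond to any real API.

```python
import requests

# Hypothetical endpoint; in reality this would be a provider's inference API.
INFERENCE_URL = "https://example.com/api/predict"

def classify_remotely(image_bytes):
    # The device only uploads the input; all heavy computation happens
    # on the remote server, which sends back the result.
    response = requests.post(INFERENCE_URL, files={"image": image_bytes}, timeout=10)
    response.raise_for_status()
    return response.json()["label"]   # assumed response format

# Usage (would fail here, since the endpoint is fictional):
# print(classify_remotely(open("cat.jpg", "rb").read()))
```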

In today’s age of information overload, these programs are becoming extremely important wherever we hold data whose meaning and utility remain unknown. In due course, they will be able to use these data to discover on their own the relations between categories, and to improve as they work.

The fields of application are extremely varied. First, programs based on machine learning are increasingly able to use language as input and as output. The field of speech recognition is of course thriving, from smartphones to the IoT. Moreover, smart cars will only really spread when the AI that steers them is able to analyze all the elements needed for perfectly safe driving: other cars, various obstacles, road limits, signals, data from the roadmap, and so on. Care activities are also likely to be disrupted soon, and AI will help address demographic ageing with incremental innovations. One can also think of new technologies improving the daily life of disabled people, such as the Horus project, whose objective is to create a device that provides blind people with an audio description of their surroundings at any time. Finally, AI even touches the new electrical distribution networks, smart grids, which have to cope with increasingly decentralized and intermittent production, the intermittence resulting from renewable energy sources such as wind and solar.

Bots: towards a replacement of software programs and applications?

In early 2016, Facebook launched a “bot store”, a platform built into the Messenger application that hosts bots. To talk with one of them, you just have to select it. A standard discussion window then appears, as though you were about to chat with a real person. It is, however, not a human, but an artificial intelligence that is supposed to answer your questions as accurately as possible. As with Google’s Play Store and Apple’s App Store for applications, each company or brand can develop its own bot and make it publicly available. With a bot, users no longer look for a service’s functionalities in an interface, as they would with an app; they ask the bot for them in their own words. Concretely, someone can ask a “journalist bot” for details about a news story, or ask a “health specialist bot” for their body mass index, and so on.

Source: http://marketingland.com/wp-content/ml-loads/2016/04/facebook-bots-messenger2-1920.jpg

In practice, the intelligence level of current bots leaves a lot to be desired. CNN’s bot has trouble reacting to questions about a specific event, and, generally speaking, today’s bots operate on keywords rather than on a real understanding of sentences. As things stand, passing the Turing test remains, for these bots, an unthinkably distant dream. On second thought, however, Facebook may well have identified a major pending revolution…
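
The “keywords rather than understanding” behaviour mentioned above is easy to reproduce. Here is a minimal sketch of such a bot, with invented intents and answers, which shows both how little machinery is involved and why it breaks down on any phrasing outside its keyword list.

```python
# A deliberately naive bot: it maps keywords to canned answers and has no
# real understanding of the sentence. Intents and answers are invented.
RULES = {
    ("weather", "rain", "sunny"): "Today's forecast: partly cloudy, 21°C.",
    ("news", "headline"): "Top headline: markets steady ahead of earnings.",
    ("hello", "hi", "hey"): "Hello! How can I help you?",
}

def reply(message):
    words = set(message.lower().split())
    for keywords, answer in RULES.items():
        if words & set(keywords):
            return answer
    return "Sorry, I did not understand that."

print(reply("Hi there!"))                      # matched on "hi"
print(reply("Will it rain tomorrow?"))         # matched on "rain"
print(reply("Is an umbrella a good idea?"))    # no keyword -> the bot fails
```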

A software program is a tool capable of performing automated tasks: handling pieces of information, converting them, and returning some of them to the user. But traditional software takes no initiative: it first and foremost answers the user’s requests through its interface.

A program is more or less powerful depending on the number of functionalities it covers and on their complexity. In every case, the user can access these functionalities only through a computing interface. Moreover, the more possibilities a piece of software offers, the more time users are likely to spend looking for them, or the more they are forced to develop a know-how specific to that program. Admittedly, it is possible to make a program ergonomic, that is, complete yet relatively easy to use. Usually, however, the more full-featured a program is, the more complex it becomes. To see this, compare a car’s steering wheel with a plane’s cockpit: flying a plane requires more “functionalities”, which translates into a more convoluted interface. In the same way, a basic program such as Paint is far simpler than a more comprehensive equivalent such as Photoshop. Speaking of which, Photoshop would prove pointless in the hands of a beginner precisely because of its complex interface, even though that complexity is of course justified by the “power” of the software.

The disruptive step brought by bots consists in getting rid of this interface. Functionalities are neither listed nor displayed, and users are not supposed to spend time looking for them. They simply ask for what they want, without worrying about the means to achieve it or even about how to find those means, namely the on-screen buttons leading to the desired functionality. The whole reflection about the how is left to the bot. Users just formulate their requests; it is up to the bot to understand and execute them.

Mastercard’s bot — Source: https://timedotcom.files.wordpress.com/2016/10/mastercard.png?w=560

Nevertheless, there is a condition without which bots will never be effective: they must be capable of understanding all the sentences and requests of their users, which requires a thorough command of language, spoken or written. AI has already made significant progress in this domain, but there is still much room for improvement: it must be able to grasp any grammatical structure, to distinguish homonyms and double meanings, and so on. Furthermore, contextualization and a sense of timing will have to be mastered in order to get products and services that are fully effective. This capacity to figure things out in a given context is probably the most difficult feature to implement in an AI’s handling of language. In his time, the French philosopher René Descartes had already identified this faculty of the human mind as the most difficult to imitate for a non-human, animal or machine[3]. Bots with such comprehension capacities would require considerable computing resources, but thanks to cloud technology this is not a problem, as those resources can be consumed on remote servers rather than on the smartphone itself, which is already the case for Messenger’s bots.

AI for virtual office assistance

When it comes to thinking about the near future of AI applications, virtual office assistance cannot be bypassed. In the next few years, most employees are likely to work with such a tool. Virtual office assistants are the logical evolution of today’s collaborative platforms, which bring together a whole set of different functionalities in a single software environment.

Those platforms are, or will soon be, able to make connections between different applications and convert information from one service to another (for example, recognize a meeting proposal in an email and automatically save it as an event in the calendar). More and more tasks will become automatic in this way. As with bots, it will be possible to dictate tasks to one’s computer; the latter will have to “understand” and execute them.
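
The email-to-calendar example can be sketched with a simple pattern-based extractor. A real assistant would rely on far more robust language understanding; the regular expression and the sample email below are purely illustrative.

```python
import re
from datetime import datetime

def extract_meeting(email_body):
    # Looks for a very constrained pattern such as "meeting on 2017-09-12 at 14:30".
    match = re.search(r"meeting on (\d{4}-\d{2}-\d{2}) at (\d{2}:\d{2})",
                      email_body, re.IGNORECASE)
    if not match:
        return None
    return datetime.strptime(" ".join(match.groups()), "%Y-%m-%d %H:%M")

email = "Hi, could we schedule a meeting on 2017-09-12 at 14:30 to review the report?"
event = extract_meeting(email)
if event:
    # A real assistant would now create the event through the calendar's API.
    print("Event to add to the calendar:", event)
```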

Through an approach based on open source and open innovation, progress is being made very rapidly in this field. For example, LINAGORA is currently developing LinTO, a smart professional assistant. LinTO, as a “service” on open platforms and especially on OpenPaaS, aims at complementing a set of functionalities with AI (automatic meeting reports, email classification, etc.). Since one of the main contributions of virtual office assistants is to connect different services and convert information automatically, open source technologies turn out to be the most integrated and the most adaptable solutions for companies. Indeed, open source allows each company to incorporate features specific to its activity into its own PaaS and to combine “basic” services with services it intends to build itself.

LINAGORA’s virtual assistant, LinTO

Conclusion

For years, humans have been creating ever more intelligent technologies, which has led us to foresee a near or remote AI revolution. Science fiction gave rise to the myth that these changes would take the form of machines that are entirely autonomous, emotive, creative, and self-conscious. However, even though the AI disruption has already begun, it is expressed first and foremost through tools and useful features rather than through social and affective virtual intelligence.

Today, AI is capable of performing increasingly complex tasks, analyzing increasingly large databases, and making increasingly varied domains interact. Countless economic fields are about to be deeply affected, and the ever-growing tertiary sector is likely to experience a watershed moment through service automation, especially in care. In the same way, the bot revolution, though still very young, might well follow the app revolution.

Because it is a fundamentally disruptive and fast-growing technology, artificial intelligence must be taken into account by all current economic stakeholders when defining a digital strategy.

For that reason, LINAGORA endeavors to pave the way for this technology through an approach based on R&D and on the co-construction of digital strategies with our clients.

To us, AI is therefore a pending revolution that must not be missed.

— — — — —

[1] From brawn to brain — The impact of technology on jobs in the UK, 2015

[2] Andrew Leonard, Bots: The Origin of New Species, 1998

[3] As early as the 17th century, Descartes anticipated that the human sense of context would be the most difficult thing for machines to imitate: “even though such machines might do some things as well as we do them, or perhaps even better, they would be bound to fail in others; and that would show us that they weren’t acting through understanding but only from the disposition of their organs. For whereas reason is a universal instrument that can be used in all kinds of situations, these organs need some particular disposition for each particular action; hence it is practically impossible for a machine to have enough different organs to make it act in all the contingencies of life in the way our reason makes us act.” (Discourse on the Method, 1637, Part 5).
