Are we about to fail the Turing Test?
By Juarez Alvares Barbosa, Oracle Tech Sales Consultant & Arthur Paris, OD Tech Evangelist
Described by Alan Turing in 1950, the principle of the test is simple. A human evaluator judges a text conversation between a human and a machine. The evaluator knows that one of the two participants is a machine but does not know which one. If the evaluator cannot tell the human from the machine after five minutes of conversation, the machine has passed the test. The test does not measure a machine’s ability to answer a question correctly; it measures how similar its answers are to those a human would give.
The fields of application of artificial intelligence are numerous, including finance, the military, medicine, law, logistics, and robotics. AI has even come to imitate human traits and is expanding into creative domains such as art. Are we still able to distinguish human art from robotic art?
It was in 2017 in New Brunswick, New Jersey that a new form of the Turing Test was proposed.
Before you read on, have a look at the art pieces below and try to guess how many were created by a human and how many by a machine:
Answer: all of them were generated by CAN (Creative Adversarial Networks), a system introduced by Ahmed Elgammal and colleagues in the paper “CAN: Creative Adversarial Networks, Generating ‘Art’ by Learning About Styles and Deviating from Style Norms”.
When the works were presented to visitors during an exhibition, the subjects were unable to distinguish robotic art from art painted by a creature of flesh and blood. Moreover, the works generated by CAN were also considered more “novel” and more “aesthetically appealing” by the subjects.
Could the new algorithmic artist perhaps reveal the impossible: what goes on in an artist’s head while painting?
Artificial intelligence began, among other things, with the goal of allowing a computer program, typically an expert system, to replicate human intelligence and cognitive capabilities, so that it could perform actions that depend on a decision-making process, much as the human brain does.
In the same way that a human needs words to be understood and to develop reasoning, a typical computer program is based on a programming language, one with its own syntax and semantic rules. You can then create business and mathematical-logical flows and constructs in which the rules that translate your business needs are implemented and applied.
Artificial intelligence takes that logical approach to common problems one step further. Machine Learning allows computers to learn by themselves, meaning they can learn from large data sets instead of following hard-coded rules.
This type of learning takes advantage of the processing power of modern computers, which can easily process large data sets, what we currently call Big Data. Make such data sets available and, voilà, you are on the way to Machine Learning, Deep Learning and AI. The contrast with hard-coded rules is sketched below.
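To make that contrast concrete, here is a minimal sketch in Python (the article names no language or library; scikit-learn and the spam example are illustrative assumptions). The first rule is written by the programmer; the second is learned from example data.

```python
# Hypothetical spam check: the threshold 3 is hard-coded by the programmer.
def is_spam_hard_coded(count_of_word_free: int) -> bool:
    return count_of_word_free > 3

# Learned rule: the threshold emerges from labelled example data instead.
from sklearn.tree import DecisionTreeClassifier

X = [[0], [1], [2], [5], [6], [8]]   # feature: occurrences of the word "free"
y = [0, 0, 0, 1, 1, 1]               # label: 0 = not spam, 1 = spam
model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[7]]))          # the model has inferred the rule itself
```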
Let’s first consider Machine Learning. Learning can follow two different approaches: supervised and unsupervised learning.
Supervised learning uses labelled data sets that contain inputs and their expected outputs. If the output generated by the algorithm is wrong, a process iteratively readjusts its calculations on the same data set until the error is minimised.
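As a minimal supervised-learning sketch (scikit-learn and the Iris data set are illustrative assumptions, not choices made in the article), we fit a classifier on labelled examples and check its predictions against the expected outputs:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)   # inputs and their expected outputs (labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)   # iteratively readjusts its weights
model.fit(X_train, y_train)                 # training minimises the error
print("accuracy:", model.score(X_test, y_test))
```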
In unsupervised learning, you provide data sets that have no predefined labels, structure, or pattern to follow. You let the AI model or algorithm classify the data logically by itself.
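A matching unsupervised sketch (again an illustrative assumption): no labels are provided, and a clustering algorithm groups the data on its own:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)    # the labels are deliberately ignored
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
clusters = kmeans.fit_predict(X)     # the model classifies the data itself
print(clusters[:10])                 # cluster assignments it discovered
```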
Deep learning is where things get even more interesting, as it involves neural networks, an approach to AI and machine learning that is decades old but has recently returned to prominence. Neural networks constitute the actual brain of advanced AI systems. Like the human brain, the AI brain has neurons grouped in networks. Neural nets are typically composed of many densely interconnected processing nodes.
The neurons are grouped in several layers: the input layer, the hidden layers, and the output layer. In summary, the input layer receives the data and passes it to the hidden layers, which perform mathematical computations on it.
The main complexity in creating a neural network is deciding the number of hidden layers and the number of neurons in each hidden layer. That is where the “deep” in Deep Learning comes from: it refers to having more than one hidden layer. Finally, the output layer returns the processed output data.
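Here is a minimal sketch of that layer structure, with Keras as an illustrative framework choice (the article names none); data enters at the input layer, flows through the hidden layers, and leaves through the output layer:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),                      # input layer: 4 features
    tf.keras.layers.Dense(16, activation="relu"),    # hidden layer 1
    tf.keras.layers.Dense(16, activation="relu"),    # hidden layer 2 ("deep": more than one)
    tf.keras.layers.Dense(3, activation="softmax"),  # output layer: 3 classes
])
model.summary()   # prints each layer and its number of neurons
```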
Deep Learning is a machine learning method. It allows us to train an AI to predict outputs from a given set of inputs.
It also allows us to build artificial intelligence systems that can learn complex, critical real-world tasks, such as driving an autonomous vehicle or assisting a surgeon. This is why the most advanced AI services and products rely on machine learning and, ultimately, on deep learning.
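Tying the pieces together, here is a self-contained sketch of training such a network to predict outputs from inputs (the framework and the Iris data set remain illustrative assumptions):

```python
import tensorflow as tf
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)                    # inputs and expected outputs
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),                      # input layer
    tf.keras.layers.Dense(16, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(3, activation="softmax"),  # output layer
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=50, verbose=0)                # the network learns the mapping
print(model.predict(X[:3]))                          # predicted class probabilities
```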
In the same way that the social contract was forced to evolve during previous industrial revolutions, it must adapt again to cope with the growing challenges we face today. What about the good practices and principles that should frame artificial intelligence, the next great revolution?
Cities were rebuilt around railways and homes around electricity; today, the world is reinventing itself around digital innovation and, by extension, around the companies that provide it.
Companies are becoming more involved in regulatory debates related to their sector of activity. They work closely with governments to develop regulatory frameworks conducive to the development of new technologies. The recent death of a pedestrian hit by an Uber autonomous vehicle in Tempe, Arizona, is a reminder that the question of the responsibility of users and operators of artificial intelligence is far from settled.
Consumers place increasing importance on their relationships with businesses as those businesses’ influence on their lives grows. These relationships cover a company’s products, objectives, and values alike. The omnipresence of technologies in our daily lives reinforces the role companies play in them, but in return, governments and public opinion demand more responsibility from them. Jean-Paul Agon, CEO of L’Oréal, reminds us that in 10 years’ time ethics will not be an “asset” but a mandatory condition of a company’s licence to operate.
Currently, the body of legislation governing the use of the data behind artificial intelligence is insufficiently developed. However, states are making many efforts in this direction, the GDPR in the European Union being one example. Although companies and their managers rarely act with bad intentions, we are not immune to severe mistakes we would prefer to avoid, such as the Cambridge Analytica scandal.
Indeed, the reuse of research by Michal Kosinski and David Stillwell on a Facebook personality quiz put the full potential of machine learning at the service of manipulating the American elections, and perhaps even the Brexit vote. Developers who innocently published the results of their research gave malicious actors access to a great power that can no longer be controlled. Like Mary Shelley’s Frankenstein, the possibilities of such a technology have surpassed its creators.
“Just because technology allows us to do something does not mean that we have to do it.” Clare Dillon, CurioCity Dublin 2018
Thus, the question these companies must address is that of our trust. As companies and civil society, we must ask ourselves the right questions: what is the basis of that trust, and what are its development principles and transparency guarantees? To understand and answer these questions is to understand the very purpose of the technology. How is it used? When is it used? How is it trained?
Would you use a robot doctor instead of a human?
In a survey of 337 respondents conducted during registration for #OD-CurioCity, only 16% said they were ready to rely on a robot doctor. At a time when AI is already being used by doctors to diagnose their patients, shouldn’t this number be higher?
Is this due to a lack of knowledge about the technology, or to mistrust? Perhaps the answer lies somewhere in between. Transparency and education are the essential keys to building trust between society and corporations.
Currently, artificial intelligence, owing to the complexity of its development, is not a technology that everyone can make their own. The best technologies have always been the ones closest to their users. Paradoxically, the secrecy that large companies maintain around the development of their AIs does not help users approach and adopt them. Worse, it even contributes to creating biases.
Here is a typical example: a conversation in WIRED between Cathy O’Neil, American mathematician and author of Weapons of Math Destruction, and Tom Upchurch, Head of WIRED Consulting, shows the total disconnect that exists between the end users and the developers of artificial intelligence.
To address these biases, Joy Buolamwini, a Ghanaian-American computer scientist and digital activist based at the MIT Media Lab, founded the Algorithmic Justice League, an organisation that fights the human biases in artificial intelligence that exclude and discriminate against some of its users. The missions of this collective are to identify biases in algorithms and to give users the opportunity to interact with developers via a bias-notification platform on the AJL website, all in order to develop best practices for artificial intelligence.
Artificial intelligence is an emerging technology and a new subject for many people; however, it is already present in our lives and will be even more so in the years to come.
There are still many ethical implications that need consideration, and their importance grows rapidly as more services and products adopt artificial intelligence as part of their features.
We must be aware of our role as parents of this technology. Currently, AI is a kid that needs to be watched and educated. That means we should participate in its evolution by supporting discussions about its limits, capabilities and reach within the scope of our existing social rules.
*Views expressed are our own and not those of Oracle.
If you want to hear more about the #OD-CurioCity event, here is the podcast recorded with startup co-founder Andreea Wade and researcher Dr. Kevin Koidl, questioning the transformative impact and ethical use of AI.