Do we have to fear Artificial Intelligence? No, says Luc Julia, father of Siri
Luc Julia, one of Samsung’s strong personalities, co-creator of Siri, puts this revolution in perspective. AI is a powerful tool in all fields, including media — it can assist or augment one’s intelligence; it can bring overall progress. But it can also be harmful in many ways if one doesn’t grow aware of its potential power. Awareness and responsibility are some of the challenges we have to face.
Luc Julia strongly advocates recognising the difference between the AI promoted by Hollywood and real AI, which he prefers to call “augmented intelligence”. In the end, it’s all about us humans: how we’ll learn to use this new tool, how fast we’ll realise we are the ones in charge, and that we’ll have to focus on educating ourselves. The three keywords are stimulation, education, and information.
GEN: Artificial Intelligence is everywhere. It seems to produce good things and bad things, and it triggers a mixture of fantasy and real fear. You’ve taken an original stand on that issue in your recent book, “Artificial Intelligence Does Not Exist”.
Luc Julia, Vice President of the Strategy and Innovation Centre at Samsung Electronics: I wanted to put AI into perspective because the version of AI everybody is talking about today is not what AI really is. AI is not about Hollywood. It’s not about robots that are going to kill us. It’s not about bots or algorithms that can invent things by themselves. Bots and algorithms cannot do anything by themselves. They only do what we humans decide they should do.
We should rather be talking about “assisted intelligence” or “augmented intelligence”, because we are in charge, and AI is just a tool. Reaching even 1%, or 0.5%, of what we do with our brains is still not possible with AI today. It can only provide assistance in very specific domains, in which it does beat us in terms of speed. Robots are stronger than us at those narrowly defined tasks. But overall, we humans are much better than them, because we are good at many different tasks.
On March 15, 2019, part of the Christchurch mosque mass shooting, in which 50 people were killed in New Zealand, was streamed on Facebook Live, which was heavily criticised. The platform was only able to block the video after 12 minutes, allowing those images to be shared and eventually seen by millions of viewers. Could this have been avoided?
We shouldn’t systematically talk about AI when referring to social media. The fact that something allows information to be shared much more widely, with many more people than just the usual media outlets, has nothing to do with artificial intelligence. But AI can be a powerful tool to detect misinformation, potential forgeries, and harmful content.
In the case of the Christchurch mosque mass shooting, AI was unable to detect the violent content quickly enough. Facebook explained that its filtering tool first mistook the murderer’s deed for images of a video game.
I don’t believe a word of that! It’s always possible to detect! The power of AI, of the mathematics and statistics behind artificial intelligence, allows us to detect all kinds of problematic content. The tools are more powerful than we want to admit.
The truth is that Facebook has zero interest in stopping the spread of those kinds of videos. This is their business! It’s based on that kind of spreading. The more viewers they have, the more ads they can publish.
But such a scandal is very detrimental to their reputation.
For the past two years, Mark Zuckerberg has always seemed repentant in front of the media, in front of Parliament, in front of the people, promising to stop the spread of misinformation and violent videos, but he doesn’t want to stop it. Facebook claims that they hired 15,000 people to watch for and stop these kinds of videos, and that they have algorithms looking for them, but the reality is that they’re not giving their best effort. I talk about Facebook because in mid-February Mark Zuckerberg released his manifesto about the future of Facebook, “Building Global Community”, and about how its initial mission of connecting the world has become controversial. He pledges that they are going to change, that everything is going to be different by 2025. But I don’t believe it. They won’t change, because it would go against their business. Their business is about sharing! About spreading! Getting the maximum buzz!
AI has gained substantial power in influencing public opinion through misinformation and forged messages. How do you react to that reality?
There is a danger to it. It’s becoming easier and easier to create fake content and misinformation. With the help of AI, it’s very easy to produce a video of Obama saying that he loves Trump. And videos are powerful. Now, we humans need to understand what the power of AI is. If we understand what AI is able to create, we will become more cautious about any content we receive. We need to be aware of what is possible. We need to decide for ourselves what looks suspicious. We need to be always on the lookout for something that might not be true.
But this is not totally new. Newspapers back in the 19th century were already publishing fake news. It was printed and people trusted printed material. Today, AI is just a new and more powerful tool.
You’re saying that we have to train ourselves to be more critical?
It’s all about education. When you look at history, we humans are constantly gaining intellectual power and gaining more knowledge about the world. In order to keep any forgery at a distance, we must understand what those AI technologies can produce. There is no secret. It’s not easy to educate ourselves but it’s something we need to do in order to survive.
The challenge is that technology is moving at a much faster pace than ever before. As AI tools become more complex, there is a risk of a widening gap between those who master them and those who don’t. These inequalities can threaten the social fabric.
Let’s talk about filter bubbles: AI also allows platforms to sort out the information that is most relevant to us and expose us to things within our range of opinions, so we are locked into the information that has been selected for us. How can we break out of it?
Of course! Facebook and the like are feeding us this big flow of information that is chosen for us. It’s in their interest to turn us into submissive fools, unable to defend ourselves against these messages. This is why we need education so badly: to help us develop critical thinking. We cannot be fed like that. All social media works on the same mechanism: put us in a state of mind to receive as many ads as possible. There is a risk of triggering mistrust in all media content.
…and it’ll turn us into couch potatoes?
In my book on artificial intelligence, I refer to the movie “Idiocracy”, a stupid one, which tells the story of a stupid society. Joe Bauers, an American chosen by the Pentagon for an experiment that goes wrong, wakes up in the year 2500, where people are fed with Gatorade (the famous sugary drink), junk food, and false information. Plants have been watered with Gatorade and have died. People believe anything they’re told. This stupid society is controlled by a few people who do not drink Gatorade, who have knowledge and power. The others just do what they are told. I think this film points to a threat that looms over our contemporary societies.
We need to stimulate ourselves against that risk. The three keywords are stimulation, education, and information. That’s why I am a little mad at the media right now, about the way they speak about artificial intelligence. They should take a deep breath, talk more seriously to the people who have been working on AI for 30 years, and realise that AI is nowhere close to what they think. Journalists and writers need to become better informed.
But AI has real power over people’s minds, not to mention provoking murders. Cambridge Analytica was accused of using AI to influence the American vote during the 2016 presidential campaign, and the vote during the Brexit campaign the same year. Is AI putting democracy at risk?
AI is a tool that can be harmful, but it’s only a tool, like any other. Democracy can be in danger in front of tanks, just as it was in front of bows and arrows back in the 16th century. But the danger is not coming from AI itself; it’s coming from us. We are the ones who decide how we want to use this new tool. As for Cambridge Analytica, when it pushed messages against Hillary Clinton, that was someone’s decision. The technology is not responsible for the damage it causes.
Can ethics be brought to bear against the misuse of AI?
I am confident. Look back at history. For every tool that has appeared, from the knife to nuclear power, beyond the first dreams of progress, the misuses brought the need for regulation. As for social media, GDPR was adopted last year in Europe. It was a real step forward for people’s rights and for a better understanding of what sharing information means.
But it takes time. Regulations always come 10 or 15 years after the new step. GDPR took 12 years between when the issue came up and when the regulation was implemented.
I believe that there will be regulation on AI if people understand the real — not imaginary — potential danger of it.
AI also has a detrimental impact on the environment. Processing data through AI requires a lot of energy.
It’s another fact we have to be aware of. Information and Communication Technology (ICT) represents roughly one-fiftieth of the world’s greenhouse gas emissions, and that share is growing rapidly.
For instance, a blockchain transaction uses hundreds of thousands of times more energy than a VISA transaction: 767 kWh for the former, less than 2 Wh for the latter. The way AI is built today is not scalable.
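To put those two figures on the same scale, here is a minimal back-of-the-envelope calculation, using only the numbers quoted in the interview (767 kWh per blockchain transaction, about 2 Wh per VISA transaction); these are illustrative figures, not audited measurements:

```python
# Per-transaction energy use, converted to a common unit (watt-hours).
blockchain_wh = 767 * 1000  # 767 kWh per blockchain transaction
visa_wh = 2                 # roughly 2 Wh per VISA transaction

ratio = blockchain_wh / visa_wh
print(f"Blockchain uses about {ratio:,.0f}x the energy of a VISA transaction")
# → about 383,500x, i.e. hundreds of thousands of times more
```

The unit conversion is the whole point: kWh and Wh differ by a factor of 1,000, which is easy to miss when the two figures are quoted side by side.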
Imagine that taking a selfie can use as much power as a village in Africa. What is ethical about that? We have to be aware of this; it’s part of our education.
The brain is much more energy-efficient than any AI tool will ever be. Building an AI machine with a capacity close to 10% of our brain’s would use the equivalent of several nuclear plants!
That consumption is the main issue and has to be regulated.
Luc Julia has been Vice-President of the Strategy and Innovation Center at Samsung since 2012. A graduate of the École Nationale Supérieure des Télécommunications, he has led his career exclusively in the United States. He worked at SRI (Stanford Research Institute) on speech recognition, then spent 10 years creating and managing several specialised start-ups, before becoming a director at Apple, where he developed Siri, the voice assistant on Apple products. He has a reputation for being a free spirit.