First, there was Tamagotchi — the digital pet for children that took the world by storm in the late 1990s. Then came Aibo, the robotic pet dog, and Pepper, the semi-humanoid robot designed to read emotions. Now, Sophia, the digital humanoid, is touring the world.
All of which is to say, we’ve been around artificial intelligence (AI) in some form or another for years.
Even so, there remains widespread fear of an impending malevolent “AI revolution” — a fear that is misplaced. During a recent trip to Japan, I had the chance to interact with three Pepper robots. It was fascinating to watch the reactions of people I would never have pegged as tech enthusiasts: they would light up, suddenly and completely enamored.
I was watching their preconceived fears about AI shatter in real time.
The truth is, the potential for AI to improve the way humans conduct business and even engage in democracy is limitless. If we succumb to our misconceptions about the purpose or imminent danger of the technology, however, we’ll never realize its potential.
Here are the key points of confusion we must address:
1. AI is much more than robots.
Perhaps our most deeply entrenched notions about what AI looks like can be attributed to Hollywood. When we think of robots, we’re probably picturing Bicentennial Man, WALL-E, or I, Robot. Which is to say, we imagine robots that look like us and possess human characteristics we can identify with — like Sophia.
There’s also an aspect of comfort in this.
It’s easier to accept interacting with something that resembles us, as opposed to a strange and impersonal machine.
But AI is much more than just humanoid robots. It’s the technology behind them, and its applications are limitless. Today, AI is helping scientists predict earthquakes, cleaning our houses, pricing car insurance, and facilitating drug trials — among many other things. More generally, AI can increase efficiency and output, freeing up our time for other pursuits.
2. AI isn’t going to take away all our jobs.
Time and time again, I hear people express concern that AI will take their jobs.
And I understand — we’re frequently bombarded by depressing statistics. A recent UK survey, for example, estimated that automation could eliminate 4 million British private-sector jobs within the next decade. In the U.S., 47% of jobs are considered to be at “high risk” of displacement by automation.
But while negative predictions dominate the press, there are also plenty of reasons to feel hopeful.
AI makes clear how central human insight and expertise are to success. The technology is very good at acceleration and automation, but not at the things humans excel at — like empathy, judgment, life experience, and relationships. Many people confuse intelligence with consciousness: AI is great at problem-solving, but not at empathy.
And it’s for this reason that AI won’t take away from or encroach upon our humanity. In fact, it will create exponentially more opportunities.
As the number of machines and AI devices increases, so will the need for jobs surrounding them. AI needs regular human intervention at every stage to keep it running. There will always be a need for an editor or skilled writer to check a bot’s writing, for example, just as there will always be a need for a human to make nuanced decisions about personnel, hiring, and project management.
Instead of letting fear take control, we should see AI for what it is — an opportunity to do more with our time and leverage our skills to accomplish new things.
3. AI in the wrong hands is dangerous, but the real issue is platform responsibility.
As with any innovation, AI technology can be used for good or evil.
Take the 2016 election. Bad actors targeted vulnerable populations with fake news, political propaganda, and politically charged memes, ultimately moving them to vote against their interests. And they did it using machine learning technology.
But the problem is never actually the technology — it’s the people who wield it irresponsibly as well as the platform owners who fail to take the necessary precautions to ensure that their platform is not abused.
If nothing else, the fake news phenomenon shows that technology often progresses faster than the human brain can keep up. Now we’re playing catch-up. Facebook and other major social media companies have finally started taking the power of their platforms seriously, with teams dedicated to controlling bad actors.
California, meanwhile, recently introduced a law that requires chatbots to disclose that they’re not human — AI’s legislative debut.
In the end, our discomfort isn’t really about social media or technology, but rather about overcoming our preconceived notions and catching up.
Instead of fearing AI, industry leaders should focus on the metrics behind the tech.
According to Israeli author and historian Yuval Noah Harari, AI is only as good as the metrics used.
And humans control the metrics.
We define the criteria, then let AI make the best decision possible. Given our irrational emotions and biases, humans often make terrible decisions. That’s where AI can step in. It has a more realistic understanding of the world than we do.
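The division of labor described here — humans choose and weight the criteria, the machine optimizes against them — can be sketched in a few lines. The criteria names, weights, and options below are purely illustrative assumptions, not from the article.

```python
# A minimal sketch of "humans define the metrics, AI makes the decision."
# All criteria, weights, and options here are hypothetical examples.

def score(option, weights):
    """Combine human-chosen criteria into a single score."""
    return sum(weights[criterion] * value for criterion, value in option.items())

# Humans decide which criteria matter and how much (negative = penalize).
weights = {"cost": -1.0, "safety": 3.0, "speed": 1.5}

# The system then evaluates every option against those human-set metrics.
options = {
    "plan_a": {"cost": 4.0, "safety": 0.9, "speed": 2.0},
    "plan_b": {"cost": 2.0, "safety": 0.7, "speed": 3.0},
}

best = max(options, key=lambda name: score(options[name], weights))
print(best)  # → plan_b
```

The point of the sketch is that the optimizer never questions the weights; change `weights` and the “best” decision changes with it — which is exactly why controlling the metrics matters more than fearing the optimizer.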
Instead of some free-floating existential concern about AI rendering us obsolete, humans should instead be concerned with how to best control the metrics.
Professor Harari doesn’t anticipate an AI revolution fully materializing for at least 5–10 years. In the meantime, humans have an opportunity to build ethics into AI. We can also develop the higher-level skills that will be in demand once machines handle the more mundane tasks.
The march of technology is not without speed bumps, but the net good tends to outweigh the bad. When we combine AI with human intelligence, the result is unprecedented effectiveness. AI enables us to step back from repetitive tasks, freeing us to leverage our creativity, our experience, and our relationships to create new opportunities that better our world.
We’ve got to overcome our misconceptions, worries, and hang-ups about AI and redirect our energy toward establishing industry best practices and metrics that keep humans in control.
That, or fall behind.