In Defence of AI
Artificial intelligence (AI) exerts a powerful influence over humanity. Alongside great opportunities, AI poses profound risks. Most people acknowledge this, yet they remain reluctant to adopt a definitive position for or against the technology.
At Ananas, we are resolutely in favour of AI, and we want to explain why. But first, we need to characterise the debate. The social and political consequences of AI for subsequent generations will be profound. An open and inclusive conversation is essential if we are to overcome prejudice and widespread opposition to this technology and safeguard its critical role in the advancement of our species.
What is AI, really?
Artificial intelligence today is properly known as narrow AI, or weak AI. This simply means it’s designed to perform a narrow task, like recognising a face or driving a car. Smartphones, search engines, air traffic control systems, even financial markets are run by AI algorithms. When everything is working well, we don’t notice. But when there’s a problem — like the ‘Flash Crash’ which rocked financial markets in 2010 — we are jolted into confronting the brute fact that our lives are governed by AI, whether we like it or not.
Significantly, researchers are rapidly getting closer to Artificial General Intelligence (AGI), which would mean machines could outperform humans at every cognitive task. This poses huge risks to humanity, and equally huge opportunities.
Homo sapiens rules the planet because we have the highest level of intelligence. So what happens when another entity achieves a higher level of intelligence than us? Many experts believe this will happen sooner than previously thought, and some think it could pose an existential threat to humanity. Esteemed figures have put their names to this concern, including Stephen Hawking and Bill Gates. Elon Musk is on record saying that “AI is a fundamental risk to the existence of civilisation”, adding: “I have exposure to the most cutting-edge AI, and I think people should be really concerned by it.”
It is certainly true that AI can learn from itself, and learn quickly. Human genes encode knowledge accumulated over thousands of generations, but technology is now catching up because it has vastly bigger datasets to learn from. Expect the gap to close quickly.
In fact, we have reached a stage where the acquisition of knowledge is exponential. There are now algorithms building algorithms. Commentators are talking about the “singularity” — the point at which the invention of artificial superintelligence abruptly triggers runaway technological growth, resulting in unfathomable changes to human civilisation — but most experts seem to think this is a long way off.
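To make the phrase “algorithms building algorithms” concrete, here is a deliberately toy sketch, not any real AutoML system: an outer search algorithm repeatedly mutates and keeps improvements to the parameters of an inner prediction algorithm. Every name and number below is illustrative.

```python
import random

def inner_model(w, b, x):
    """The inner algorithm: a trivial linear predictor."""
    return w * x + b

def loss(w, b, data):
    """Mean squared error of the inner model on (x, y) pairs."""
    return sum((inner_model(w, b, x) - y) ** 2 for x, y in data) / len(data)

def evolve(data, steps=2000, seed=0):
    """The outer algorithm: random-mutation hill climbing over (w, b)."""
    rng = random.Random(seed)
    best = (0.0, 0.0)
    best_loss = loss(best[0], best[1], data)
    for _ in range(steps):
        # Propose a small random mutation of the current best parameters.
        cand = (best[0] + rng.gauss(0, 0.1), best[1] + rng.gauss(0, 0.1))
        cand_loss = loss(cand[0], cand[1], data)
        if cand_loss < best_loss:  # keep only mutations that improve the inner model
            best, best_loss = cand, cand_loss
    return best, best_loss

# Data generated by y = 2x + 1; the outer loop should recover w ≈ 2, b ≈ 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
(w, b), final_loss = evolve(data)
```

The point of the sketch is the division of labour: the inner model never “learns” anything itself; a second algorithm shapes it. Real systems replace random mutation with far more powerful search, which is what drives the exponential dynamic described above.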
The risks posed by AI
When evaluating how AI might become a risk to humans, experts consider two scenarios most likely. First, the AI is programmed to do something devastating, such as deploying autonomous weapons. Second, the AI is programmed to do something beneficial, but it adopts a destructive method to achieve it.
A Terminator-style ‘humans vs machines’ scenario is unlikely but must be considered a possibility. At the very least, AI promises to upend our lives in significant ways — for example, through automation, or at least optimisation, of processes currently performed by humans. This is already happening in industries such as finance, accounting and medicine. Many leading economists predict large global increases in unemployment as automation gains a deeper foothold in service industries that have traditionally relied upon human labour.
Of course, it’s equally possible to argue that automation will simply mean machines taking repetitive jobs off our hands, freeing us up to do more fulfilling and meaningful work. This has been true throughout economic history — Ford’s mass production of the automobile led to a fundamental restructuring of the American economy.
The impact of AI goes beyond the economic and commercial. It extends to the social sphere: the ways in which we connect with content, people and culture. The use of AI by Google, Amazon, Facebook and other large consumer-facing tech companies to prioritise and disseminate content is already affecting the way we discover and consume ideas and perspectives. The use of AI for campaigning purposes in the US election was an instructive example of this, but filter bubbles are now far more pervasive than anyone could have imagined in the early 2000s. These echo chambers are proliferating across the Internet, reinforcing entrenched prejudices and stifling the kind of pluralistic discourse we need to promote peace and understanding across, and even within, communities.
Some AI research is riskier than other work. For example, modelling automated responses to nuclear attack scenarios implies a greater threat than programming autonomous vehicles to avoid accidents. Some commentators have called for a ban on such research, but all this would do is force it into domains that are less regulated and less safe. We need to keep AI where we can see it, in academic, commercial and governmental environments that are (for the most part) regulated by ethical codes of practice.
The path forward
AI is developing and improving all the time. Deep Blue beat Garry Kasparov in 1997, and humans haven’t beaten the strongest chess machines since. In 2011, Watson beat human champions on the language-based game show Jeopardy! And in 2015, heads-up fixed-limit hold’em poker was essentially solved by Cepheus. But it’s also true that the best players in the world are ‘centaurs’: hybrids that exploit the complementary qualities of humans and machines.
We need to learn to live in harmony with the technology we are building, to work with it in order to improve the world, rather than destroy it. AI can be a force for good, provided the people who are building it have good intentions and we have the civic institutions to ensure it is disseminated and regulated in an ethical manner. As Mark Zuckerberg observes, “I’m really optimistic. Technology can always be used for good and bad, and you need to be careful about how you build it, and what you build, and how it’s going to be used.”
How do we solve the challenges that lie ahead? With power comes responsibility, and we have a responsibility to encourage engagement with the subject of AI. By educating others about how the technology works, and about the risks, rewards and everything in between, we can define a collective approach to AI that will advance human progress.
As humans, we enjoy strong social systems. The laws, civic institutions, political and economic systems we’ve devised over millennia are durable enough to withstand the challenge of AI — even benefit from it. They might require tweaking or even wholesale adaptation through the adoption of policies such as universal basic income. But we are an adaptive, imaginative bunch when we put our minds to it. After all, technology is just a tool. What matters is how we use it.
AI risks are about people
AI risk and human nature cannot be disentangled. Max Tegmark, President of the Future of Life Institute and a leading expert on AI, has touched on this:
“The concern about AGI isn’t malevolence, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem.”
Similarly, in a critical but lucid essay calling for a more realistic perspective on technology and society, the Oxford philosophy professor Luciano Floridi points out:
“We are and shall remain, for any foreseeable future, the problem, not our technology. So we should concentrate on the real challenges.”
These two quotes underline three key challenges:
- Figuring out how to imbue AI agents with values aligned with humanity at a collective level. We emphasise the collective because alignment with individuals leaves artificial agents vulnerable to internalising the malice or ignorance of those individuals.
- Designing a healthy and safe relationship between humans and AI, which is more urgent than making AI safe in itself. To achieve this, cognitive and neuroscientific research is key, for how can we align artificial and human minds if we don’t yet understand our own?
- Understanding human intelligence better, because we will unavoidably create in our own image, the only example of intelligence we have. It is likely we will know how to create powerful agents before we know how to create moral and responsible ones. That would be a particularly bad time to discover how many of the brain’s flaws we have incorporated.
At Ananas, we believe in AI as a force for good, but we also believe human nature plays a decisive role. We are using this technology to empower people to build communities that promote peace and understanding, and in turn to provide powerful tools that help us further explore our individual and collective identity.
In this mission, we are creating artificial knowledge for the purpose of educating and augmenting human intelligence. By using hyper-relational databases built with grakn.ai, and by guiding people to explore their beliefs, we will explore a common language for intelligence and for what makes us human at a structural level. We believe this has transformative potential, and could lay the foundation for a safe and healthy future relationship between human and artificial intelligence.