#2 Could a Global AI Treaty Be Key to Future Stability?
Hi from Lord’s, London’s historic cricket ground, Readers — check out the second post in my five-part Medium.com blog series exploring tech’s impact on global risk; it builds on my past HuffPost blog series explaining the ongoing global legitimacy crisis. It is inspired in part by discussions with my NYU grad students and key ideas in my award-winning political comic book, The Global Kid.
At this point, it should be clear to everyone — we are in the midst of a major crisis of legitimacy in our international system. What does this mean? Well, US hegemony is eroding to the point that it’s no longer clear who’s in charge. This is a major global risk we are likely to face for a while. Is it Zakaria’s post-American world, Bremmer’s G-Zero leaderless world, or simply a post-hegemonic world, as I’ve argued before? We just don’t know. What we do know is that the very nature of power is changing. Technology has a huge impact on our future — and the race between countries to have the best tech has already started. Which superpower will win this tech cold war — the US, Russia, China or another aspiring superpower? The potential for major superpower conflict over tech means it may be time to create a global tech treaty now, especially for AI weaponry.
The geopolitical status quo of US hegemony has shifted. Even if the US is still setting the tone in geopolitics (e.g. on North Korea), no one can deny President Donald Trump’s unilateral rhetoric emphasizing that the US is no longer responsible for the world. Nor can we ignore China’s repeated declarations of its global ambition; France’s growing influence via soft power; Russia’s dominant hybrid warfare; and so on. The international system appears to be headed in a more multipolar direction after 25 years of post-Cold War US hegemony. But it may be time to reevaluate what we mean by power today. A superpower is traditionally defined by its superior global position in terms of its economy, diplomacy, culture and, of course, military. Yet military strength is changing — it’s no longer just about conventional weapons or even nuclear weapons. Like with many issues today, it boils down to tech.
Tech is changing the very nature of power, especially military power — and will likely determine the next leading superpower. The consensus among the superpowers Russia, the US and China seems to be that AI will be key to their national security in the future. In fact, AI could revolutionize military power — and war — as much as nuclear weapons did, according to a 2017 Harvard report. Who will win the new AI weapons race? Russian President Vladimir Putin has openly said that the country with the best AI will “become the ruler” of the world. This may be why his country has already publicly declared it will build “killer robots” no matter what. China plans to dominate all types of AI by 2030, including weaponry, to beat out competition from the current leader — the US. Clearly, tech is going to be the game changer in securing power in the international system. How do we manage this?
An AI treaty is needed to secure future global stability. Thinkers like Anthony Giddens have argued for a Magna Carta to make sure tech companies don’t abuse their power; I’ve called for some kind of social contract to build a fair relationship between tech companies and citizens. But what about weaponized tech? Some regulation is needed. Companies like Google have recently declared they will not allow their AI tech to be used in weapons (at least not since Project Maven). That’s nice to hear. But then you have the US military itself offering cash prizes of $200,000 to startups that can develop new weapons technology, and defense contractors like Lockheed Martin developing such tech for the Department of Defense, which plans to spend $20 billion on this in 2019 alone. Where is this headed?
Last year, Elon Musk and 116 other leaders of robotics and AI companies signed a petition urging the UN to ban lethal autonomous weapons — or, more specifically, “killer robots” (which some companies are already developing, reportedly in the US, China, Russia and Israel). There has been limited progress in implementing the proposed UN ban since then. Yes, we have more immediate global risks — terrorism, economic crises, climate change and so on. But tech will be the game changer in geopolitics and war. For many years, we have worried about terrorist groups getting access to nuclear weapons — now experts warn of the risk of them gaining access to AI weaponry “in the very near future.” And some (like Musk) warn AI will spark World War III. For future stability, why not develop an AI weapons treaty now, alongside other weapons treaties (e.g. nuclear, biological, chemical)?