Why Elon Musk is Right About AI Regulation

Carlos E. Perez
Published in Intuition Machine · Jul 29, 2017
Photo credit: https://unsplash.com/@mazhari

I am not surprised that almost everyone who works in AI (specifically Deep Learning) has rejected Elon Musk’s suggestion that government needs to begin regulating AI. The key exchange at the moment is Mark Zuckerberg’s remark that Musk’s assertion was ‘pretty irresponsible’ and Elon Musk’s response that Zuckerberg’s understanding was ‘limited’.

Professional researchers hardly have a limited understanding of Deep Learning, yet very few of them have chimed in to support Elon Musk.

Elon Musk is not only a radical thinker, he is also a very disciplined one. There have been plenty of naysayers regarding his ventures like SpaceX and Tesla. However, he has remarkably proven the skeptics wrong and executed in a manner that almost nobody else in this world can replicate. His ventures build the most complex of machinery in a way that is not only technologically feasible but also economically viable. On his accomplishments alone, we should at least give Musk the benefit of the doubt on this one.

I am writing this blog entry so that I can explore Elon Musk’s reasoning in depth. Musk holds an opinion that is clearly in the extreme minority.

What did Musk actually assert? Here is what he clarified in a fireside chat after his remarks at the governors’ meeting:

Musk clarified that he envisions a government agency forming first and seeking to gain insight into AI and its use initially, without any kind of attempt to regulate by “shooting from the hip.”

The primary objection of anyone knowledgeable about this field is that there is nothing specific that requires regulation (one proposal: an automated system must never falsely pose as a human being). The field is still in its infancy (despite mastering Go and learning arcade games from scratch), and the closest thing we have to ethical rules are the “Asilomar AI Principles.” These principles, however, are abstract and not concrete enough to build laws and regulations around.

Musk’s fear, however, is reflected in his statement: “It’s going to be a real big deal, and it’s going to come on like a tidal wave.” Musk speaks of a ‘double exponential’: the acceleration of hardware compounding with the acceleration of AI talent (note: NIPS 2017 had over 3,500 papers submitted). This ‘double exponential’ means that our predictions of AI’s growth may be too conservative. Musk further remarks that researchers can get so engrossed in their work that they overlook its ramifications. His fundamental stance is that more effort should be placed on AI safety than on pursuing AI advances. He argues that if it takes a bit longer to develop AI, then that would be the right trade-off.
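To make the arithmetic concrete, here is a minimal sketch in Python. The yearly growth multipliers are invented assumptions for illustration (not Musk’s figures); the point is only that two compounding exponential factors, hardware and talent, quickly outrun a forecast based on either factor alone.

```python
# Toy illustration of the 'double exponential': capability compounding
# along two independent exponential axes (hardware and talent).
# The yearly multipliers below are invented assumptions, not real estimates.

hardware_rate = 1.5  # hypothetical yearly multiplier for compute
talent_rate = 1.3    # hypothetical yearly multiplier for research output

for year in range(0, 11, 2):
    single = hardware_rate ** year                     # hardware alone
    combined = (hardware_rate * talent_rate) ** year   # both compounding
    print(f"year {year:2d}: hardware alone {single:8.1f}x, "
          f"combined {combined:10.1f}x")
```

By year 10 the combined curve is more than an order of magnitude above the hardware-only curve, which is why a forecast that tracks only one factor will look far too conservative.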

What we know about governments and regulation is that they move at a very slow pace. Musk is proactively kickstarting the conversation about government regulation, calculating that by the time government is eventually ready, AI technology will have advanced enough for meaningful regulation. It is, admittedly, putting the cart before the horse.

Most experts will agree that it is premature to bring up AI regulation. However, government, society, and culture move at rates that are much slower than technological progress. Musk’s gamble here is that the cost of premature regulation is outweighed by the existential threat it guards against. Musk calculates that it is better to be early but wrong than late but correct.

The previous American administration published a report on AI last year. However, the anti-science leanings of the current administration may put a damper on any future government-funded studies of AI’s effects on society. US Treasury Secretary Steven Mnuchin even opined that the threat of job loss due to AI is “not even on our radar screen,” only to walk back his statement a few months later. In short, despite Musk’s statements, it is very unlikely that the current administration will make an effort in this area; it would prefer to let ‘market forces’ decide.

Musk’s sounding of the alarm will likely fall on deaf ears for the next four years. Perhaps that is why he brought it up at the governors’ meeting; like climate action, this threat may be taken up by US states instead. Unfortunately, Mark Zuckerberg’s remarks and the objections of many other researchers only give governments additional ammunition to do nothing.

Unfortunately, the examples that Musk gave at the governors’ meeting to motivate regulation were threats of cybersecurity and disinformation, which are not threats that only AI can pose. (On reflection, Musk may have deliberately avoided any use case that would give malicious actors ideas!) Musk’s most apt analogy is that it is easier to create nuclear energy than to contain it.

We are indeed heading into dangerous times in the next four years. It is difficult to imagine what Deep Learning systems will be capable of by then. It is likely that Artificial General Intelligence (AGI) will not have been achieved, but something very sophisticated in the realm of narrow AI may well be developed, specifically weaponized AI in the domains of disinformation and cyber-warfare. The short-term threats are job destruction and cyber-warfare. These are clear and present dangers that do not require the development of AGI.

Toby Walsh of the University of New South Wales, however, has a different take:

We are witnessing an AI race between the big tech giants, investing billions of dollars in this winner takes all contest. Many other industries have seen government step in to prevent monopolies behaving poorly. I’ve said this in a talk recently, but I’ll repeat it again: If some of the giants like Google and Facebook aren’t broken up in twenty years time, I’ll be immensely worried for the future of our society.

Rachel Thomas of Fast.AI writes about similar concerns:

It is hard for me to empathize with Musk’s fixation on evil super-intelligent AGI killer robots in a very distant future. (snip) … but is it really the best use of resources to throw $1 billion at reinforcement learning without any similar investments into addressing mass unemployment and wealth inequality (both of which are well-documented to cause political instability)?

Both opinions revolve around inequality. AI ownership has been confined to a few elite companies. Musk was concerned enough about this that he co-founded OpenAI. This, however, brings up a concrete regulatory issue: should AI be owned by a few private companies, or should it be a public good? And if it is indeed a public good, how shall it be protected?

Coincidentally, we are exploring a few of these ideas in our Intuition Fabric project.

Update: I believe Musk is aware of A.I. technology that already exists today, can be extremely disruptive, and requires a serious discussion about regulation. It is certainly an application of Deep Learning, but he has deliberately not been specific about what it is. Suffice it to say that it is in the realm of network intrusion and disinformation.

Update #2: Musk’s OpenAI just announced the creation of a Dota 2-playing AI that is beating professional players (see: https://blog.openai.com/dota-2/ ). His remarks about regulation are apparently not new; he said the same thing in January at the Future of Life conference, but it wasn’t picked up by the press. Musk has now gone on Twitter with some more interesting remarks:

with an even more ominous warning:

Denny Britz has a post that explores the OpenAI Dota2 achievement in more detail:

Britz concludes that Dota 2 1v1 play has a limited exploration space, and that the bot has advantages over a human player: access to more detailed game-state information and quicker reaction times. This leaves me wondering whether OpenAI is overhyping its achievement.

More coverage here: https://gumroad.com/products/WRbUs
