Should AI be regulated?

Martin Dinov
Maaind
Sep 20, 2017

I find it fitting that my first post on this platform is on a topic that is in many ways central to my life, and increasingly to all of ours: artificial intelligence, or AI. This is a repost of my response to a Quora question to which a number of other people, including the famed Professor Andrew Ng, have responded. The topic of AI regulation is complex and multi-faceted, and it touches people from many (all?) fields. However, as someone who has used AI extensively in his work, and as a long-time technologist and programmer, I am puzzled by some of the responses this question has received in the wider media, particularly from the AI community and other technologists. The most common view (and I exaggerate and simplify here) is that technological progress is good and must continue as fast as possible. Yet most of the rest of the technological and scientific world operates under a variety of monitoring and regulatory systems. Below are my thoughts, which I shared as a response to the question on Quora:

I am generally pro-AI, but here I will give a somewhat contrarian answer to most of those already presented and justify it briefly with some examples. It surprises me that most fellow technologists default to a 'no regulation' position, choosing, it seems, to ignore the great variety of successful and arguably net-positive regulations we already have. Regulation does not mean stopping progress, and technological progress is not automatically a net positive for society and individuals.

We already have regulations in most technical and non-technical fields. A few obvious examples: which side of the road you drive on, speed limits, which medications are prescription-only and which are over-the-counter, and who can call themselves an MD (those who have completed sufficient training). We have regulations on drug development and sales. On the IT side, we have strong regulations on which electromagnetic frequencies you may transmit on, so as not to interfere with medical, industrial or military devices that might be receiving or transmitting on the same frequencies. We even have export controls and regulations on cryptographic technologies and algorithms. And of course, we have strong regulations on the use and distribution of nuclear materials.

Now, not all of these are equally easy to monitor and enforce (e.g. crypto algorithms or nuclear material). The details of some of them are important and certainly up for discussion. But even the most anti-regulation person would not generally suggest that most or all of these should not exist; the debates are usually about the fine points, not about whether we should regulate technological and other processes at all. The more likely a technology is to affect a large number of people moderately, or even a small number of people strongly, the more it needs 'regulation'. This obviously does not mean halting progress. An analogy I like to use in this context is the car brakes metaphor. Why do cars have brakes? Not to make sure the car goes slowly. Brakes allow the car to go much faster, and to slow down where and when needed. You would not be able to go very fast (and thus far) without them. The brakes are the regulation on the car's speed. Regulation can ultimately allow safer and faster progress than if the brakes did not exist.

However, many of these regulations hinge on context-specific details, so we probably cannot have a single, simple regulation that covers 'all of AI'. That said, the difficulty of monitoring or controlling a process is not a reason to avoid doing so. Again, nuclear power production is quite difficult to control, yet we can now steer it in a net-positive direction to generate incredible amounts of comparatively very clean energy. Controlling nuclear material (non-proliferation) is very difficult (to wit: the situation in North Korea today, among other examples), but few would argue against such controls on the use and distribution of nuclear material.

AI has the potential to impact the world far more widely and powerfully than nuclear materials have. Even without invoking Elon Musk's and others' recent warnings about existential risks, which certainly deserve to be heeded, there are many non-existential issues around AI that could benefit from some kind of monitoring and regulation.

I would love to hear other people's views on this. It seems most people have jumped on one bandwagon or another, repeating either 'we don't need any large-scale AI regulation, hands off (holy) technological progress!' or 'AI is an existential risk, we must do something or we'll all die!'. In between lies an incredible number of nuances and details, and that is where future discussions should be.
