Why Elon Musk is Right About AI Regulation
Carlos E. Perez

I think Musk is worried about the wrong things. We do need to regulate AI, but not because it will get so smart that it will kill us. The problem is that AI is increasingly going to be given control of things: operating machinery, selecting workers, allocating resources, and so on. When people do these things, there is a clear legal chain of agency — the proximate worker, the chain of command, the corporation itself, the industry. With AI, we haven't worked that out yet.

If an AI is operating machinery and damages something or harms someone, it isn't clear who is at fault. Is it the user, the party benefiting from the operation of the machinery, the supplier of the machinery, the supplier of the software, the party responsible for training the software, or some other party? If the problem is systemic, how do we assign fault? If no one really understands how the AI makes its decisions, how can we enjoin it from making undesirable decisions? Even now, corporations and individuals have been developing mechanisms for deflecting responsibility. Is AI going to be the ultimate cop-out?

We had a similar problem with corporations in the 19th century. They were originally just legal entities for limiting liability, but they took on a life of their own. Even now we find corporations hard to regulate, which is why they are so politically active in support of laissez-faire policies. I doubt we'll have exactly the same problems with AI, but Musk is right: AI does need regulation, and it makes sense to start early. I just think Musk is worried about the wrong problems.