How should we regulate AI algorithms?
Is a self-driving car responsible for a traffic accident? Should a medical AI go to jail if it subjects a patient to a deadly dose of radiation? If a stock trading algorithm causes a market crash, should it be fined? Are we heading into an apocalypse and summoning computerized demons, as some headlines quoting people like Elon Musk seem to be claiming?
Without underplaying the damage that mistakenly applied or badly designed automation could cause, the short answer is: no. We are wielding ever more powerful technology, but it is still we who wield it. We should not humanize AI to the point of writing regulation that targets the algorithms themselves. Despite their complexity and seemingly emergent behavior, these are ultimately mathematical constructs created by people and organizations. Human laws apply to human actors. Algorithms are not people, and complexity does not make a system into a person.
If a self-driving car causes an accident, the responsibility lies with either the driver or the vehicle maker, not with the vehicle. If a medical AI misbehaves, the company that made it must bear the cost. If an algorithm wielded by a hedge fund goes haywire, the hedge fund must compensate the victims. If punishing corporations isn’t enough, then we punish people. Algorithms need neither money nor time, so we cannot make them repent by fining or jailing them. By letting the discussion veer toward regulating the algorithm, we put ourselves on a slippery slope that shifts responsibility away from people.
We’ve already made that mistake many times. The bigger the organization, the easier it seems for it to get away with mistakes, and even intentional harm, without meaningful punishment. That won’t get better if we pretend to regulate their computers instead of the organizations themselves. We should learn from those mistakes and make sure it is the real decision makers who bear the downside of the risks they create.