Generative AI and Regulations

Peter
Published in AIoD
Feb 20, 2024 · 5 min read

Killing Innovation Before It Starts

DALL·E 3 generated image

“Virtue is more to be feared than vice, because its excesses are not subject to the regulation of conscience.”

Adam Smith

“If you put the federal government in charge of the Sahara Desert, in 5 years there’d be a shortage of sand.”

Milton Friedman

When the word blockchain was becoming a large part of our vernacular and popping up in virtually every journal and newspaper in the world, several esteemed former and retired US statesmen wrote an interesting editorial on a potential use. They described how the blockchain could solve land-rights ownership problems in Latin America. Clear land rights are foundational to any free society, so their goal was noble, but it was deeply flawed. I found it somewhat curious, since the authors were certainly not technologists, that a digital ledger was supposed to solve centuries of deeply corrupt government. It didn't, nor can it, but my point is that professional politicians projected their ideas onto a technology they barely understood. Sound familiar? Every time a new technology arrives that could revolutionize our lives, politicians and the chattering class seem to want to stop it.

I read the AI Bill of Rights recently published by the White House's Office of Science and Technology Policy. It made for an entertaining read, but suffice it to say it did little to guide the ongoing dialogue on regulating AI. I have wanted to compose an article on the regulation of AI for quite some time, but I have been reluctant because, like many topics these days, it is potentially controversial: it touches on the word 'bias'. Let me state my position up front: I am anti-regulation in general, and when it comes to new technologies I am vehemently opposed to regulation. Quite simply, politicians don't understand what they would even be regulating. Ask most people to define machine learning and artificial intelligence in concise terms and you will receive either a blank stare or total nonsense. Invariably the conversation turns to reducing bias in AI, and my response has always been a look of bewilderment and a follow-on question: you mean math is biased? Silence.

For the record, when I am asked what AI is, I generally respond that it is sophisticated pattern matching. Of course that is a simplification, but it gets the point across effectively without diving into all the nuances. As humans we are remarkably efficient at pattern recognition, and it is something AI does remarkably well too.
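To make "sophisticated pattern matching" concrete, here is a minimal sketch of one of the oldest machine-learning ideas, nearest-neighbor classification: a new input is labeled by the stored examples it most closely resembles. All names and data here are illustrative, and a real system is vastly more elaborate, but the core intuition is the same.

```python
from collections import Counter
import math

def nearest_neighbor_label(point, examples, k=3):
    """Classify a point by the majority label of its k closest examples.

    `examples` is a list of ((x, y), label) pairs. This is a toy sketch
    of pattern matching, not a production classifier.
    """
    by_distance = sorted(examples, key=lambda ex: math.dist(point, ex[0]))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# A handful of "remembered" patterns: points near the origin are "a",
# points in a distant cluster are "b".
training = [
    ((0.0, 0.1), "a"), ((0.2, 0.0), "a"), ((0.1, 0.3), "a"),
    ((5.0, 5.1), "b"), ((4.8, 5.2), "b"), ((5.3, 4.9), "b"),
]

print(nearest_neighbor_label((0.2, 0.2), training))  # matches the "a" cluster
print(nearest_neighbor_label((5.0, 5.0), training))  # matches the "b" cluster
```

The model "knows" nothing; it recognizes which stored pattern a new input most resembles, which is the essence of the simplification above.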

The word 'bias' has unfortunately crept into the conversation on regulating AI and machine learning and has become the cudgel for imposing regulations. AI is unique in the sense that it is a general-purpose tool that will likely alter a large swath of knowledge-based industries as it matures. It has the capacity to replace or augment many high-earning professions. So much of our knowledge is encoded or written down that there is certainly no lack of training data, and it is not hard to envision a world where AI replaces, or makes significantly more productive, medical imaging, medical diagnostics, and legal work.

This technology, which is in its infancy, would be inhibited by poorly defined regulations, and regulations about complex technology will always be poorly thought through. Like most regulations, they will use a sledgehammer when a ball-peen hammer will do fine. Our politicians are masters of regulating fictional problems and, in the process, causing a long list of unintended consequences. An attempt to regulate AI and other algorithms with a one-size-fits-all, heavy-handed approach, before they are even mature, will destroy innovation. Our adversaries will happily exploit any drag on innovation that regulation causes. They will do everything in their power to win the battle for AI leadership. Do you think our adversaries are going to forgo AI for military purposes on humanitarian grounds?

My bias is not your bias. The media has promulgated the view that all bias is bad. It's not. Any attempt to remove regulated biases from training data will, in many applications of the technology, make it far less beneficial. Virtually every article I see about regulating AI is about removing biases or making the models fair. My definition of fair might not be the same as yours, and imposing my version of fairness on a model makes it inherently unfair to someone else. Bias and fairness are highly subjective concepts that depend on one's perspective, and trying to encode them into a model will only invalidate it.

There are many applications where having data that represents reality is critical, even if it doesn't fit a politician's simplistic, regulated view of the world. If, for instance, a company using AI for medical diagnostics followed the AI Bill of Rights to the letter, I believe it would be committing medical malpractice. The company is obligated to use as much data as it can acquire to train the model for the best diagnostic decisions, regardless of whether the data appears to have biases. Data will always have biases, and stripping it of all of them (it is highly unlikely you can) will only create false biases that dramatically limit the use of these formidable tools. For accuracy, models must capture all the uniqueness and nuance of human nature. That's not to say there shouldn't be some controls in place; that's just being smart. But those controls should be highly domain-specific. I wouldn't want a model that replaces my general practitioner to make a false diagnosis and prescribe a medication that would kill me. There should be constraints to prevent major errors, and I would want doctors, not politicians, coming up with those constraints.
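A toy sketch can show why stripping a genuinely predictive attribute degrades a diagnostic model. The data and threshold below are entirely made up for illustration (this is not medical advice): when an attribute such as age actually correlates with an outcome, a model forced to ignore it can do no better than guessing the base rate.

```python
# Hypothetical (age, has_condition) records: in this synthetic set the
# condition appears mostly in older patients.
patients = [
    (25, 0), (30, 0), (35, 0), (40, 0), (45, 0),
    (55, 1), (60, 1), (65, 1), (70, 1), (75, 0),
]

def accuracy(predict):
    """Fraction of patients the given prediction rule labels correctly."""
    hits = sum(predict(age) == label for age, label in patients)
    return hits / len(patients)

# Model allowed to use age: predict the condition above a threshold.
with_age = lambda age: 1 if age >= 50 else 0

# "Debiased" model stripped of age: it can only predict the majority label.
majority = 1 if sum(label for _, label in patients) > len(patients) / 2 else 0
without_age = lambda age: majority

print(accuracy(with_age))     # 0.9 on this toy data
print(accuracy(without_age))  # 0.6 on this toy data
```

The age-aware rule is imperfect (it misclassifies the 75-year-old without the condition), but it is far better than the rule blinded to age; real diagnostic data behaves the same way when an informative attribute is removed.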

AI didn't just show up overnight. The basic ideas have been around for decades, but advances in compute and storage hardware have made training these large-scale models possible. It is those massive models, trained on petabytes of data with enormous amounts of compute, that panic politicians, yet it is also those models that can have the biggest net positive impact on humanity. We should have a thoughtful conversation about the impact these models will have on our lives and the ethics around them. There are deep nuances that need to be brought to light and discussed, not simplified and boiled down to a single word. These tools truly can transform our lives and make services available to sections of society that previously lacked access. Just imagine having the work of a world-class doctor encoded in a model and making that knowledge available to everyone for a tiny fraction of the cost.

AI will democratize knowledge and decision making in a way that the internet only started. Do we really want to kill it with bad regulations?
