What If Everything You’ve Heard about AI Policy is Wrong?

Adam Thierer
Feb 21, 2023


Most of what you’ve been told about artificial intelligence (AI) policy is wrong, and Orly Lobel’s new paper, “The Law of AI for Good,” will set you straight. I believe it is the most important article yet written about AI governance, and if you care about creating a better and safer future, you should read every word of it.

Orly Lobel is a Professor of Law at the University of San Diego School of Law and the author of the new book, The Equality Machine: Harnessing Tomorrow’s Technologies for a Brighter, More Inclusive Future. That book is also worth your time, but in this essay I will focus primarily on her new paper because it is squarely focused on the algorithmic governance issues that I write about here regularly.

Lobel calls for a paradigm shift in the way we currently think about algorithmic systems. She wants to flip the script about AI governance by moving us away from what she calls the “AI-as-Wrongs approach” and toward a conception of “AI for Good” that advances the goals of equality and justice that AI critics say are threatened by the rise of modern computational technologies. “Law-of-AI-for-Good would capture the vast potential of AI, while restraining its downsides,” she argues. “It would replace the prevailing absolutist approach that pervades contemporary policy debates with a comparative analysis of the costs and benefits of AI.”

The “absolutist approach” that has Lobel worried is visible in almost all prominent media, academic, and government coverage of AI policy issues today. “In the past decade, legal policy and scholarship have focused on regulating technology to safeguard against risks and harms,” she notes. “Far less attention has been given to the ways in which the law should direct the use of digital technology, and in particular artificial intelligence (AI), for positive purposes.” Used properly, she argues, algorithmic systems and applications can improve public welfare along multiple dimensions:

“Already digital technology is gaining comparative advantage over humans in detecting discrimination, making more consistent, accurate, nondiscriminatory decisions, and addressing the world’s thorniest problems: climate, poverty, injustice, literacy, accessibility, speech, health, and safety. The role of public policy should be to oversee these advancements, verify capabilities, and scale and build public trust of the most promising technologies.”

This is not to say there aren’t real risks of algorithmic systems being misused. But the current conversation is far too one-sided, she argues. “The issue is whether the concerns are unpacked, nuanced, concrete, and balanced — or whether they are bundled, blunt, abstract, at times overstated, and shaping the conversation in distorted and counter-productive ways.”

To restore some sanity to the AI policy debate, Lobel begins by explaining the problems with the current tenor of algorithmic governance discussions. She then does a deep dive into the two most common proposals set forth by regulatory advocates: maximizing “humans-in-the-loop” at all times while also minimizing data collection. Both approaches go too far and are in need of reassessment and balance, Lobel argues.

AI’s Academic & Media Mono-Culture

For many, Lobel’s thesis will seem provocative — even radical — but it is actually rooted in common sense. It only seems radical because we’ve been fed a steady diet of Chicken Little-ism about AI. Everywhere you look today, dystopianism dominates discussions about algorithmic processes, with many critics making AI out to be the most nefarious technology ever. Lobel highlights some of the most popular texts in the field, which have titillating titles like: Weapons of Math Destruction, Automating Inequality, Technically Wrong, The New Jim Code, and Algorithms of Oppression.

And she’s only just scratching the surface of how hysterical things get with flamboyantly titled papers and books. The AI academic community today has become a closed-minded mono-culture. If you dare suggest AI can do any good, you’re practically chased out of the room or accused of being an unthoughtful oaf. AI scholarship is now basically doom porn, with scholars trying to one-up each other’s tales of a looming techno-apocalypse. There’s almost no bottom to it.

Lobel notes that, throughout this literature, “risks of using available AI tools loom larger than the failure to use them.” “The most significant overarching fallacy that the techlash lens presents is demanding AI perfection rather than comparative advantage over human decision-making and the status quo,” she argues. Lobel identifies many specific fallacies seen in much of the critical academic and media writing about AI. They include:

* absolutism versus comparison;

* demanding perfection or lack of failure;

* engaging in the wrong comparisons;

* thinking of AI as static;

* ignoring scarcity and scale;

* privileging the status quo;

* thinking in binary solutions — adopt or ban; and

* making false distributional assumptions.

She’s exactly right, especially about critics’ unwillingness to make sensible comparisons and acknowledge trade-offs, and about their tendency to ignore the learning process that accompanies all technological change and has repeatedly helped humanity muddle through.

Doom-and-gloom academics also play a sort of two-sided ‘gotcha’ game on AI policy. On one hand, they’ll identify what they claim is a troubling problem with a particular algorithmic process. “Look at how stupid this AI is!” they declare. But they act like that’s the end of the story instead of the beginning of a learning process. It certainly is the case that algorithmic systems have flaws, just like all technological processes before them. No system is perfect, especially right out of the gate. But, by their very nature, algorithms are recipes that we are constantly tweaking to improve the final meal. It is a process of endless trial-and-error, incessant technical improvement, and constant societal adjustment. “Technology provides opportunities to learn and correct over time,” Lobel rightly says. And that is equally true of political, economic, and social systems — they learn and correct over time, too. This is the story of human progress. It happens in fits and starts and can be extraordinarily messy, but we have done it before and we will do it again and again. Algorithmic systems will improve over time because humans will constantly adjust them, and the systems around them, to make them work for society. And we already see that at work in real time all around us as algorithmic systems evolve and improve with each new iteration.

However, when confronted with such potential algorithmic improvements, the doom-and-gloom academics quickly play the opposite ‘gotcha’ card: “These systems are too good!” they now lament. Like a modern Goldilocks tale, they want their AI just right. Not too flawed, but also not too powerful. Hey, who can blame them? But, once again, this is a never-ending process of tinkering and improvement. We only get better and safer technological systems with constant experimentation and learning. And part of that learning process entails figuring out how to keep accidents and mistakes to a minimum without derailing progress entirely.

The doom porn crowd doesn’t care to hear any of this, preferring instead to engage in what Virginia Tech technology historian Lee Vinsel calls “criti-hype,” which is essentially the reverse of what some companies do when they over-hype products or capabilities. “It’s as if [tech critics] take press releases from startups and cover them with hellscapes,” Vinsel points out. In his essay, “You’re Doing It Wrong: Notes on Criticism and Technology Hype,” he says that criti-hype academics and activists “play up fantastic worries to offer solutions, and as we’ll see, often they do this for reasons of self-interest — including self-interest as in $$$$$$$$$$.”

He argues that “criti-hype became an academic business model” in recent years and that AI “is the area of technology that has likely experienced the greatest amount of criti-hype.” “I have watched people in ‘critical AI studies’ give conference presentations in which they spun out elaborate and frightening dystopian futures based on no other evidence than a few Google Image searches,” he says.

“But it’s not just uncritical journalists and fringe writers who hype technologies in order to criticize them. Academic researchers have gotten in on the game. At least since the 1990s, university researchers have done work on the social, political, and moral aspects of wave after wave of ‘emerging technologies’ and received significant grants from public and private bodies to do so. … at the worst, what these researchers do is take the sensational claims of boosters and entrepreneurs, flip them, and start talking about “risks.” They become the professional concern trolls of technoculture.”

Dan Castro of ITIF has written about the rise of such “concern trolling” around AI, and I elaborated on that trend in an essay about calls to “have a conversation about AI.” Basically, bad news sells. It sells books, gets you media hits and conference invitations, and helps you get respect (and tenure) from the rest of your doom porn colleagues and critical studies departments.

Meanwhile, in my essay last week about Microsoft and ChatGPT, I went through some of the most recent sensationalistic examples of criti-hype writing about generative AI systems. I also mentioned this excellent YouTube presentation by technopanic expert Nirit Weiss-Blatt on “The Media Coverage of Generative AI,” which nicely documents the media hysteria over artificial intelligence.

After I published that recent essay, I found this rather astonishing example of criti-hype from a Tufts University neuroscientist that just has to be read to be believed. The professor writes that the time has come for panic and radical action against AI innovators. “Panic is necessary because humans simply cannot address a species-level concern without getting worked up about it and catastrophizing,” he claims. “We need to panic about AI and imagine the worst-case scenarios, while, at the same time, occasionally admitting that we can pursue a politically-realistic AI safety agenda.” He fantasizes about “a civilization that preemptively stops progress on the technologies that threaten its survival” and rounds out his call to action by suggesting that anti-AI activists vandalize the Microsoft and OpenAI headquarters “because only panic, outrage, and attention lead to global collective action.”

Not all AI academics are this hot-headed or hostile toward progress, but we increasingly live in a world in which this sort of lunacy is being normalized in academic departments. Some days when I’m reading through all the criti-hype AI literature out there, I find it almost indistinguishable from passages in “The Unabomber Manifesto.”

Challenging the Dominant Policy Prescriptions

Eventually, these relentlessly negative, fear-based narratives come to influence public policy discussions and proposals. In fact, they already have. “Governments are poised to double down on regulatory strategies that nearly exclusively address the risks of AI, while paying short shrift to its benefits,” Lobel notes.

In a recent paper, Neil Chilson and I documented “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations,” and highlighted the rapid growth of AI regulatory proposals at the federal, state, and even local level in the United States. In another recent essay, I also highlighted seven major fault lines driving AI policy advocacy today. Bottom line: AI regulation is on the march both here and abroad.

Lobel is specifically interested in some of the common regulatory mechanisms being proposed for algorithmic systems. Her thesis has profound ramifications for many different current and proposed digital policies, like data minimization mandates and “humans-in-the-loop” requirements. She says, “these approaches are increasingly deeply unrealistic and normatively flawed.” The key to safer algorithms, and a safer world, she argues, could be more data and fewer humans in the loop.

Problems with Data Minimization

First, let’s consider her pushback against expansive privacy regulation and data minimization requirements. “The right to privacy can conflict with the right to fair, unbiased, and accurate decision-making,” Lobel argues. She points to evidence suggesting that “to fight against discrimination and to ensure that algorithms are more inclusive, more data is usually needed.”

The problem is that critics focus only on the potential discrimination harms that might come from data collection, and laws like the GDPR and other comprehensive privacy regulations demand data minimization for that reason. But there is an equally serious danger of not being able to identify and remedy deeper injustices without more data. As Lobel puts it, “Those risks arising from not collecting information are not contemplated.”

Lobel feels so strongly about this that she wants to see the law protect a right to data maximization in certain instances. She suggests that “a complementary bundle of rights — co-existing, and at times competing with, privacy rights — must include a duty to collect fuller information and a corollary right to be included in data collection.” Most existing and proposed privacy regulations rest on the assumption that the less data collected, the better. “But what if the very fact that data is collected brings more health, safety, equality, accuracy, and socially valuable innovation? In other words, what if the tradeoffs are not simply between individual rights and cheaper services, but are also between different fundamental rights?”

Unfortunately, she notes, “both in Europe and in the United States, setting defaults that privilege privacy combined with a techlash policy mindset make such a balanced construction unlikely. Even when it comes to one of the most clearly acceptable exceptions to privacy, that of scientific research, the GDPR has already begun to present challenges. For market innovation and competition, an unbalanced application of the GDPR and any imbalanced privacy law may result in a range of unintended regressive effects.”

If anything, Lobel is understating the severity of the situation. Recently, I posted a long literature review summarizing the impact of the GDPR. [See: “GDPR & European Innovation Culture: What the Evidence Shows,” Feb. 5.] As noted there, all the academic evidence points to an undeniable conclusion: the GDPR has been a disaster for European business formation, competition, investment, and global competitive standing. It has become difficult to even name any major EU-based digital technology leaders. “The only thing Europe exports now on the digital-technology front is regulation,” I noted in The Wall Street Journal recently. And this situation is about to get much worse for Europeans as the E.U. looks to impose mountains of additional data restrictions under the AI Act. The root of the problem lies in the law’s data minimization requirements, as well as the significant compliance burdens its data collection rules impose more generally.

And consider what data minimization means for our ability to use AI and machine learning to address pressing public health needs. Last year, I wrote about my trip to the Cleveland Clinic to meet with doctors and scientists there who were tapping the power of algorithmic systems to address cancers, heart attacks, strokes, and degenerative brain diseases (Alzheimer’s, dementia, Parkinson’s). It was remarkable to hear about how AI for good was already becoming a reality. But there is so much more to be learned.

I noted how, during his opening remarks, Dr. Tom Mihaljevic, CEO and President of the Cleveland Clinic, said that when he was getting started in medicine in the 1980s, medical information doubled roughly every 7 years. Today, by contrast, medical information is doubling every 73 days! The only way to take full advantage of all that knowledge is with the power of artificial intelligence and machine learning, he said. Importantly, he estimated that the Cleveland Clinic is able to reach only about 1.5% of Americans using traditional means of care, but he hoped that through improved data sharing across facilities, the Clinic could expand its reach to patients and work collaboratively with other doctors and scientists to use the power of data to solve various public health needs. But the key to all of this is data — data collection, data modeling, and database cross-referencing to achieve better science. Data minimization mandates would work at cross-purposes with these public health objectives, just as Lobel fears. Our algorithmic capabilities can only be as good as the data we feed into them.

Problems with Humans-in-the-Loop Requirements

Next, let’s consider Lobel’s concern about humans-in-the-loop requirements. Again, this generally refers to efforts to encourage — or even mandate by law — that humans be involved at critical stages of algorithmic processes so they can continue to guide and occasionally realign systems and practices as needed. It’s generally a wise principle and one that I have endorsed in some of my past writing. [See, “How the Embedding of AI Ethics Works in Practice & How It Can Be Improved,” Sept. 22, 2022.] While one can favor humans-in-the-loop as a generic best practice for all algorithmic systems, most AI policy wonks go further and suggest that the principle should take on the force of law (even if they regularly fall short on specifics about how to do that).

Lobel insists that this principle can go too far. In many instances, she wants to get humans out of the loop altogether. She suggests that “under certain circumstances there should be a right to an artificial decisionmaker, alongside a corollary duty to automate. Put differently, there should be a prohibition on humans entering the loop when such entrance would diminish the benefits of automation and bring error and bias.”

But what is the test for when humans in the loop “diminish the benefits of automation and bring error and bias”? Lobel is a bit short on specifics here. On one hand, that is understandable because this can be a hard line to draw in terms of specific policy prescriptions that can advance AI for good. Unfortunately, however, many other academics on the other side of this debate have no problem drawing that line in the opposite direction. For example, law professors Gianclaudio Malgieri and Frank A. Pasquale call for “unlawfulness by default” to be the legal standard for algorithmic systems. Other critics propose the creation of a new “FDA for algorithms” (perhaps in the form of a new National Algorithmic Technology Safety Administration), or a new AI Control Council. These proposals embody a precautionary principle approach to algorithmic policy that would tightly limit automation of all varieties. Humans wouldn’t just be required to be in the loop, but in many instances no “loop” would even be allowed to form until some regulatory agency got around to approving new algorithmic innovations.

Lobel is extremely worried about what such mandates could mean for life-enriching AI innovation. She abhors the hubris of those who fail to understand how such regulation “may come at a serious cost” for society. She continues:

“The hubris is even worse when we take a public policy perspective. Requiring humans to be the final decision-makers in high stakes processes is not only a flawed solution in contexts where AI has clearly reached comparative advantages, but it also risks perpetuating irrational fears about AI instead of helping debias citizens about the comparative risks of technology. Most troubling, such hubris ironically risks legitimizing the use of flawed algorithms rather than working to make the algorithms better because it continues the legacy of automation fallacies.”

She offers up autopilot in aviation as an example of why it is sometimes crucial to get humans out of the loop to achieve safety goals. “The international aviation industry in its entirety operates with the gold standard of autopilot when weather conditions are harsh.” But it’s not just when conditions are bad that such automated flight systems are important. Virtually every facet of flight today involves some degree of automation, and it has enabled the aviation industry to steadily improve its safety and efficiency over time. Thanks to automation, pilots can focus on other tasks and also ensure that fatigue and other human errors are less likely to affect the safety of the aircraft. Would we really be better off if more humans were in the loop in airplanes today? Almost certainly not, and this is why Lobel suggests we may even have a right to demand such automation. I’m not sure we’d need a formal right in that sense, but the public has already come to understand and expect that automated flights are important to getting them from point to point safely. And hopefully that standard expands, Lobel notes: “If consumers are comfortable with this standard, there is no reason to believe that we cannot learn to love “a lot safer” autonomous cars — as well as fully autonomous commercial planes.”

Summary of Policy Prescriptions

Lobel makes a powerful case for “a right to data collection” and for greater automation in service of important societal goals. She deserves praise for identifying the dangers of overly precautionary regulation. “This protective regulatory stance exemplifies a bias in favor of inaction and the status quo that, in some instances, likely no longer serves us,” she rightly argues.

However, I wish Lobel had spelled out in greater detail the exact sort of test she believes could help achieve these policy goals. If we really want lawmakers to formalize a “law of AI for good” and give it some real teeth, then we’ll need something akin to “The Innovator’s Presumption” to be written into law. As I have explained elsewhere, this concept would enshrine in law the demand that “Any person or party (including regulatory bodies) who opposes a new technology or service shall have the burden to demonstrate that such proposal is inconsistent with the public interest.”

This standard reverses the burden of proof in regulatory proceedings and forces opponents of technological change to meet a higher bar when seeking to block progress. To make the Innovator’s Presumption even stronger in specific instances and within technocratic agencies, it should be accompanied by high evidentiary standards of review so that criti-hype junk science and threat inflation cannot be used to block the advance of algorithmic innovations.

Toward Better Understanding: The Need for AI Literacy

One sensible thing we can do to address what Lobel calls the “human-AI trust gap” is to advance better public understanding of algorithmic systems. “Digital literacy — and improving digital rationality — should be a national strategy,” she says. “The aim should be the right mix of trust and skepticism… based on accurate assessments and acceptable trade-offs.”

She is exactly right. For many years, I have written about the important role that media literacy and digital literacy can play in helping to better inform public discussion of complex issues. We definitely need to do a better job of this, but where to start? That’s the hard question. Schools today have a hard enough time teaching basic literacy. Asking them to take on media and digital literacy as well is necessary, but so are the resources and plans needed to achieve that objective.

What about other steps, such as transparency requirements that would encourage — or potentially even force — AI innovators to reveal more about how their algorithmic systems work? In recent years, many scholars have suggested that transparency requirements can help society peer “inside the black box” of algorithmic systems and gain a better understanding of their inner workings.

Lobel is skeptical of transparency-based remedies, however, such as laws that would mandate that the public always be informed whenever automated systems are being utilized. She claims that “research shows that this right to know about automation may have inadvertent harms.” She makes a fair point. After all, we don’t always know when automated systems are being used while we are flying on planes, using financial applications, or getting medical exams. Algorithmic capabilities already power countless technologies in the background of our lives today, yet we usually don’t see a big, bright sign explaining that to us. Similarly, I don’t have a complete understanding of how the lane-departure warning and correction systems in my car work, but all I really care about is that they do work. I don’t need to have any transparency right against my automaker; I just need to know that the system will be recalled if it doesn’t work as advertised, or that I can sue them later for damages. This sort of remedial administration of justice is the best way to address most algorithmic harms without upsetting the natural advance of important innovations.

That being said, I think transparency-related regulatory requirements may be the easiest thing for innovation defenders to compromise on. Depending on how it is structured, a simple notice requirement can be harmless enough. For example, imagine a law mandating that a notice accompany new AI or robotic systems that reads: “Algorithmic systems have been used in the delivery of this service.” Then, if people aren’t comfortable with that fact, they can opt not to use the system or service.

However, if such algorithmic transparency notices take the form of frightening warnings that are not rooted in sound science, then such transparency requirements will definitely backfire in the fashion that Lobel fears. They would also lead to endlessly silly squabbles about how detailed such notices should be. (Think of how dumb some medical disclosures in ads already are today.) Going back to the example of autopilot in planes, should a giant sticker be affixed to the side of every plane you board explaining how all the automated systems work, or even just the percentage of time that autopilot is used while flying? If so, one would hope that the next disclosure line would mention how much safer travelers are as a result of that automation! But can you imagine the political fight that will take place at regulatory agencies once such notices start getting written? It’ll be a mess, and opponents of AI innovation will use that process as a sort of “heckler’s veto” to slow the advance of new systems.

Conclusion

I beg everyone in the AI policy space to read Orly Lobel’s paper immediately and be willing to entertain the thought that the doomsaying that dominates public discussion today has gone much too far. It is essential we start looking at things differently. If we want better health care, safer transportation, a cleaner environment, and less discrimination, then we need more and better algorithmic capabilities, not fewer. Orly Lobel has shown us the path forward.

