Policy Ramifications of the ChatGPT Moment: AI Ethics Meets Evasive Entrepreneurialism

Adam Thierer
Feb 14, 2023 · 17 min read


ChatGPT is in the news a lot these days, and seemingly everyone has a hot take regarding the societal impact of this AI-enabled large language model (LLM). Few people had even heard of ChatGPT or its creator, OpenAI, until recently. But now that this chatbot is surprising everyone with its remarkable capabilities, a predictable wave of hand-wringing has ensued from the chattering classes. This level of interest accelerated significantly when Microsoft announced a whopping $10 billion of additional investment in OpenAI (after already investing $3 billion earlier). Then those concerns kicked into hyperdrive once other tech companies announced they were accelerating the launch of rival products in response. This all follows a similar round of hand-wringing a couple of months ago about AI text-to-image art generators like DALL-E (also from OpenAI), Midjourney, and Stable Diffusion.

Make no mistake about it, this period may well be remembered as a watershed moment in the history of AI and the rapidly unfolding Computational Revolution. It could also be a turning point in the governance of emerging technologies more generally. With an eye toward those policy implications, this essay discusses three important takeaways that I think are being overlooked about “the ChatGPT moment”:

1) First, OpenAI’s launch of ChatGPT (and the responses from other firms) serves as another important reminder of how quickly disruptive innovation can shake up technology markets in our modern economy — even as pundits and policymakers tell us that such competition is unlikely. In fact, many of the people who are most critical of ChatGPT and AI innovation today are the same ones who have for years complained about supposedly unassailable Big Tech “monopolies.”

2) Second, the ChatGPT moment highlights the ongoing tension between “evasive entrepreneurs” and the many advocates of “anticipatory ethics” and/or preemptive regulation of AI and other emerging technologies. This tension will grow more acute as more algorithmic products and platforms launch.

3) Finally, American digital technology innovators are once again at the forefront of remarkable breakthroughs in an emerging technology field because they enjoy a more friendly innovation culture and policy environment not found in other countries. It is important that the U.S. protect and extend that positive innovation culture, particularly as competition with China intensifies on the AI front.

This essay will focus primarily on the second of these issues and discuss how a surprising player — Microsoft — has shaken up the technological marketplace and the state of AI governance discussions. And we should be happy that they have done so.

ChatGPT Launches without a Permission Slip

With ChatGPT setting a record for the fastest-growing user base ever, major media stories worry about “How ChatGPT Kicked off an A.I. Arms Race” (New York Times) and the way that “AI rockets ahead in vacuum of U.S. regulation” (Axios). What these and other columns decry is the way that “Big Tech was moving cautiously on AI. Then came ChatGPT” (Washington Post). More flamboyant headlines breathlessly proclaim that “AI-generated text is poisoning the internet” (MIT Technology Review) and wonder, “Is this the Start of an AI Takeover?” (The Atlantic). Not to be outdone, still others warn of how ChatGPT foreshadows a new “dystopian world” to come, or that we’ve already reached “AI’s Jurassic Park moment,” with algorithmic capabilities racing ahead faster than our ability to control them. On ITIF’s panic cycle, generative AI systems have already moved past the point of panic and are rapidly approaching the height of hysteria.

Source: ITIF

The “arms race” that has these pundits worried refers first to Google quickly announcing it was accelerating efforts on the AI front “to stem the threat of the Microsoft-OpenAI alliance” with a ChatGPT rival of its own called “Bard.” Other tech leaders are also readying responses. “Meta, Long an A.I. Leader, Tries Not to Be Left Out of the Boom,” says the New York Times in a headline that would have seemed unthinkable not long ago. Meanwhile, Apple and Amazon are sitting on mountains of compute (massive supercomputing infrastructure and server farms) that will eventually yield more consumer-facing AI services beyond just Siri and Alexa. Countless other companies are crafting competing AI products of their own, as the astonishing accompanying images from Antler and Matt Turck illustrate.

Source: Antler
Source: Matt Turck

We should pause for a moment and appreciate the irony of these and other news reports about the ascent of ChatGPT and Microsoft’s bold investments in OpenAI. For years, pundits and politicians have told us that “Big Tech” was an unstoppable juggernaut, but now the launch of ChatGPT has left those same critics wondering how major players like Google, Meta, Amazon, Apple, and IBM could have let OpenAI seemingly blow right past them in the AI race. One answer is that tech markets are far more dynamic and competitive than those critics imagined, and major innovations have continued all along behind the scenes without capturing front-page news.

In another sense, however, the major tech players really did get leapfrogged by ChatGPT, but for another reason that those critics won’t care to admit: Those large companies have been cowed by years of condemnation from ethicists, politicians, and media critics who all claim that tech firms innovate too fast. Many academic tech critics have long called for more “friction” in technological design, which is just another way of saying that innovation should be slowed to a crawl and entrepreneurs should be forced to seek permission slips from various regulatory authorities before launching new services. In the AI space, there has been an explosion of academic literature calling for such heavy-handed algorithmic regulation.

These critics worry that some important values or concerns will take a backseat to unbridled innovation now that the AI genie is out of the bottle. We can think of this as the tension between anticipatory ethics and evasive entrepreneurialism. The ChatGPT moment has thus become the latest case study in how almost all digital innovation works today.

A Clash of Values (and Policy Defaults)

First, let’s define some terms. Anticipatory ethics refers to efforts to “bake in” certain values and safeguards up front in the technological design process. For algorithmic systems, this means embedding various ethical best practices preemptively so that those systems and applications benefit humanity. As will be noted, anticipatory ethics does not necessarily require legal or regulatory action, but most AI ethicists and other pundits believe it should.

But what is sometimes referred to as AI “value alignment” or “ethically aligned design” is challenged by the fact that humans regularly disagree profoundly about many ethical issues. And one of the most important things that people disagree on is whether to allow innovation to happen relatively freely (i.e., in a “permissionless” fashion) or whether it should instead be constrained through preemptive controls. I wrote a book about this never-ending battle between two generalized policy defaults: permissionless innovation and the precautionary principle. As a policy vision, permissionless innovation refers to the idea that experimentation with new technologies should generally be permitted by default. By contrast, the precautionary principle treats most new innovations as “guilty until proven innocent” and locks them down by design. More recently, I explained how this same tension between these policy defaults is playing out in the world of artificial intelligence.

Which brings us to the idea of evasive entrepreneurialism. As I noted in a 2020 book on the topic, evasive entrepreneurs are innovators who push back against social or legal norms through their innovative activities because they believe that innovation can profoundly change the world for the better. Sometimes, evasive entrepreneurs use new technological capabilities to put pressure on public policymakers to reform or selectively enforce laws and regulations that are outmoded, inefficient, or illogical. In cases where regulations do not yet exist, evasive entrepreneurs will sometimes move aggressively to roll out new technological capabilities and build public demand for them, thus blunting the eventual political pushback.

In my book on the growing prevalence of evasive entrepreneurialism, I describe the “dance” — or even a sort of cat-and-mouse game — that often happens between innovators and regulators on many digital fronts today:

By acting as entrepreneurs in the political arena, innovators expand opportunities for themselves and for the public more generally, which would not have been likely if they had done things by the book. Ironically, by pushing up against social and legal norms in that fashion, innovators also often increase their chances of getting a fair shake from policymakers, who are forced to acknowledge a clear public interest in the fruits of expanded innovation opportunities. (p. 7)

This dance happens not only between innovators and regulators but also between innovators and ethicists and media critics. When law lags well behind market developments, as it frequently does on the digital technology front, various social norms and pressures — as pushed by non-governmental actors — can become an important short-term “regulator” of technological innovation.

How Innovators are Addressing Ethical Concerns

This sets the stage for the battle playing out today between ChatGPT and various AI critics in media and academia. American innovators are pushing out increasingly sophisticated algorithmic technologies and being confronted with an avalanche of criticism as they do so. The critics focus on all the theoretical things that could go wrong if new AI applications are released into the wild without first being properly vetted by someone — namely, themselves or some new or existing regulatory body. Some of these critics say that “unlawfulness by default” (i.e., a very strict version of the precautionary principle) should be the regulatory default for all algorithmic systems, requiring that all their concerns be somehow addressed before new AI technologies are pushed to market.

“We’re on it,” the AI innovators generally respond, highlighting all the steps they are taking to address these concerns. In several long essays here last year, I detailed the extensive work being done by algorithmic innovators to address worries raised by AI critics by baking various best practices into their design processes preemptively. Those essays are:

· “How the Embedding of AI Ethics Works in Practice & How It Can Be Improved” (9/22/22)

· “Running Code and Rough Consensus for AI: Polycentric Governance in the Algorithmic Age” (9/1/22)

· “AI Governance ‘on the Ground’ vs. ‘on the Books’” (8/19/22)

After surveying the many developments in this area, I concluded that, “contrary to the complaint by some AI critics that we aren’t having enough serious conversations about AI risks today, a good argument could be made that we have too many conversations going on about AI currently and the biggest problem is really one of coordinating those conversations, not just calling for more of them.”

And few companies have done more to address concerns about algorithmic systems than Microsoft. As I noted in those earlier essays, Microsoft has been on the cutting edge of both AI development and ethical frameworks for addressing algorithmic concerns. In a recent blog post, Microsoft Vice Chair & President Brad Smith laid out the company’s vision for responsible AI innovation and detailed the leading role the firm has taken on this front:

For six years, Microsoft has invested in a cross-company program to ensure that our AI systems are responsible by design. In 2017, we launched the Aether Committee with researchers, engineers and policy experts to focus on responsible AI issues and help craft the AI principles that we adopted in 2018. In 2019, we created the Office of Responsible AI to coordinate responsible AI governance and launched the first version of our Responsible AI Standard, a framework for translating our high-level principles into actionable guidance for our engineering teams. In 2021, we described the key building blocks to operationalize this program, including an expanded governance structure, training to equip our employees with new skills, and processes and tooling to support implementation. And, in 2022, we strengthened our Responsible AI Standard and took it to its second version. This sets out how we will build AI systems using practical approaches for identifying, measuring and mitigating harms ahead of time, and ensuring that controls are engineered into our systems from the outset.

Smith could have gone even further and noted all the important work that Microsoft officials have done with the various professional associations they work with to address these concerns. Through the Microsoft Research program and other initiatives, Microsoft has also supported many academics and university centers that study AI ethics and technology policy issues. Finally, few firms have worked more closely with policymakers here or abroad to preemptively address their concerns before product launches than Microsoft has over the past two decades.

Is There Any Way to Satisfy the Critics?

And yet, for many critics, all these efforts will never be enough. Fearing only the worst, many critics want seemingly endless foot-dragging to become the baseline whenever bold new algorithmic technologies are set to be released. This passage from a recent Kevin Roose New York Times column cuts to the heart of how this dance is playing out in real time with ChatGPT and its critics. More specifically, Roose highlights how Microsoft and OpenAI are aware of those concerns but will look to address them in an iterative fashion:

[F]ixating on the areas where these tools fall short risks missing what’s so amazing about what they get right. When the new Bing works, it’s not just a better search engine. It’s an entirely new way of interacting with information on the internet, one whose full implications I’m still trying to wrap my head around. Kevin Scott, the chief technology officer of Microsoft, and Sam Altman, the chief executive of OpenAI, said in a joint interview on Tuesday that they expected these issues to be ironed out over time. It’s still early days for this kind of A.I., they said, and it’s too early to predict the downstream consequences of putting this technology in billions of people’s hands. “With any new technology, you don’t perfectly forecast all of the issues and mitigations,” Mr. Altman said. “But if you run a very tight feedback loop, at the rate things are evolving, I think we can get to very solid products very fast.”

So, here we see the fundamental tension between AI ethicists and AI innovators on full display. ChatGPT is trained using reinforcement learning from human feedback (RLHF) to constantly refine and improve its results. It is a technologically iterative learning process. But the process of instilling ethics in technological systems is also an iterative learning process. This is what Scott and Altman mean when they say they expect these issues to be ironed out over time. During the launch event for the new ChatGPT-powered Bing and Edge, both Microsoft CEO Satya Nadella and Sam Altman stressed the importance of embedding ethics by design, but doing it through such iterative learning. As Altman summarized during his remarks, “The two companies share a deep sense of responsibility in ensuring that AI gets deployed safely. It’s very important to us. We’re eager to continue learning from real-world use so they will create better and better AI systems. You’ve got to do that in the real-world, not in the lab.”
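To make that iterative-learning point concrete, here is a deliberately simplified toy sketch of an RLHF-style feedback loop. It is not OpenAI’s actual training pipeline (real RLHF fine-tunes a large neural network against a learned reward model), but it illustrates the core dynamic the innovators are describing: sample an output, collect a human rating, and nudge the system toward the responses people approve of.

```python
import random

# Toy illustration of an RLHF-style feedback loop (not OpenAI's actual
# pipeline): a "policy" holds preference weights over a few canned candidate
# responses, simulated human raters score sampled outputs, and the weights
# shift toward higher-rated responses across many iterations.

candidates = {
    "helpful, sourced answer": 0.9,     # hidden "true" human approval rate
    "confident but wrong answer": 0.4,
    "evasive non-answer": 0.2,
}

weights = {response: 1.0 for response in candidates}  # uniform starting policy

def sample_response(weights):
    """Draw a response in proportion to the current policy weights."""
    responses, w = zip(*weights.items())
    return random.choices(responses, weights=w, k=1)[0]

def human_feedback(response):
    """Simulate a rater: thumbs-up with the response's approval rate."""
    return 1.0 if random.random() < candidates[response] else -1.0

LEARNING_RATE = 0.1
for step in range(2000):
    response = sample_response(weights)
    reward = human_feedback(response)
    # Multiplicative update: reinforce approved responses, dampen the rest.
    weights[response] *= 1 + LEARNING_RATE * reward

total = sum(weights.values())
for response, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{w / total:.2f}  {response}")
# The policy converges toward the responses humans rate highly.
```

Scale that loop up by several orders of magnitude, with a learned reward model standing in for the simulated raters, and you have the basic shape of the “very tight feedback loop” Altman describes.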

In other words, AI alignment is not a static process but an evolutionary one that will require constant trial-and-error and real-time learning. When algorithmic systems are found to have faults or show bias, as Altman noted, those problems will be addressed in real time through vigilant oversight, risk-mitigation strategies, and rapid-fire remedial steps. Here is how I summarized this feedback process in a previous essay:

it is a mistake to think of AI safety or algorithmic ethics as a static phenomenon that has an end point or single solution. Incessant and unexpected change is the new normal. That means many different strategies and much ongoing experimentation will be needed to address the challenges we confront today and the many others to come. The goal is to continuously assess and prioritize risks and then formulate and reformulate our toolkit of possible responses to those risks using the most practical and effective solutions available.

Alas, far too many ethicists and media pundits take an absolutist perspective and just say “NO!” to everything. To be clear, the socio-technical concerns they raise deserve serious consideration and appropriate governance steps to ensure that these systems are beneficial to society. But there is an equally compelling public interest in ensuring that AI innovations are developed and made widely available to help improve human well-being across many dimensions. Far too often, critics of algorithmic systems assume that such innovations will just magically come about. They then jump ahead to ponder all the ways they think we’ll need to control the future. But there is no need to worry about the future if inventors can’t even create it first.

Too many AI ethicists and media critics regularly ignore this point, or at best pay it only lip service. For many of them, fear-driven worst-case thinking is the name of the game and all that matters. They live in a perpetual state of technopanic about most emerging technologies, and AI-enabled technologies in particular. [Watch this important presentation by Nirit Weiss-Blatt on “The Media Coverage of Generative AI” highlighting the growing media hysteria over artificial intelligence, especially in the wake of the ChatGPT rollout.]

This is why it is so important that OpenAI launched ChatGPT and that Microsoft made its bold investment in the technology in the hope of bringing greater algorithmic capabilities to the masses. We’d never get any innovation like this unless innovators were willing to push the horizons of what is possible.

The Line in the Sand

But this now means that Microsoft will be at the center of an epic debate over the future of computational governance, which puts the firm in a position it has not found itself in for many years. In the 1990s, Microsoft was viewed as one of America’s leading disruptive innovators, and most of the rest of the world was playing catch-up. But lengthy antitrust battles here and abroad, plus endless regulatory harassment across the globe, beat the firm down and made it somewhat more conservative in both its business strategy and its policy posture.

Microsoft is now back in a big way, and it is actually being more disruptive than many of its rivals on the AI front, at least at this moment. Google, Meta, Amazon, Apple, IBM, and others have seemingly become wary about rolling out some of their algorithmic technologies, not because they lack them, but because they are likely worried about the subsequent wrath of regulators and ethicists. “Google, Meta and other tech giants have been reluctant to release generative technologies to the wider public,” the New York Times says. “But newer, smaller companies like OpenAI — less concerned with protecting an established corporate brand — have been more willing to get the technology out publicly.” Again, it is good that these and other firms try to work out the kinks before launch and seek to address as many concerns as they can. But one can’t help but feel that many of today’s leading tech innovators have grown overly cautious after the endless browbeating they’ve been taking in the midst of an ongoing “techlash.”

While Microsoft and OpenAI note that they also have taken these issues under consideration pre-launch, there is no doubt that they have moved to leapfrog the competition and act as a disrupter with the bold rollout of ChatGPT — even as the critics moan about it. “It’s a new day in search. It’s a new paradigm for search,” Microsoft’s Nadella said during that recent Microsoft launch event. “Rapid innovation is going to come. In fact, a race starts today in terms of what you can expect and we’re going to move. We’re going to move fast. For us, every day we want to bring out new things. Most importantly, we want to have a lot of fun innovating in search because it’s high time.” That’s the sound of a company that has got its mojo back in a big way and is ready to take on the world. And with the financial resources that Microsoft has and the astonishing compute power they possess with Azure, the sky is really the limit here.

But what happens next as the critics grow louder and calls for regulation intensify? Will Microsoft be forced to somehow pull back? Brad Smith’s recent blog post made it clear that Microsoft is moving forward with AI innovation and integrating OpenAI technology into its product stack, including Bing and Edge. The firm has also said it will offer ChatGPT functionality to other companies to help them create their own tailored chatbot offerings. In other words, Microsoft is embracing the opportunity of an AI-enabled future in a bold, holistic fashion. It is a potential game-changing moment for the modern tech economy, and it positions the company well for the algorithmic future.
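For readers wondering what “offering ChatGPT functionality to other companies” might look like in practice, here is a hypothetical sketch of how a business could wrap a hosted chat-completion service in its own tailored assistant. The endpoint URL, payload fields, and response format below are illustrative placeholders, not a documented Microsoft or OpenAI API; the tailoring happens in the system prompt.

```python
import json
import urllib.request

# Hypothetical sketch of a company consuming a hosted ChatGPT-style service.
# The endpoint, payload fields, authentication scheme, and "reply" field are
# illustrative placeholders, not a documented Microsoft or OpenAI API.

API_URL = "https://example.com/v1/chat"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"                  # issued by the service provider

def ask_chatbot(user_message: str, system_prompt: str) -> str:
    """Send one conversational turn to the hosted model and return its reply."""
    payload = {
        "messages": [
            {"role": "system", "content": system_prompt},  # brand/tone tailoring
            {"role": "user", "content": user_message},
        ],
    }
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["reply"]  # placeholder response field

if __name__ == "__main__":
    print(ask_chatbot("What are your store hours?",
                      "You are Contoso Retail's friendly support assistant."))
```

The design point, if this is roughly how the offering works, is that most of the customization lives in that system prompt and the surrounding guardrails, which is why even smaller firms could field branded chatbots without training models of their own.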

That’s not to say that Microsoft and other firms are not going to come to the table and talk about AI concerns with policymakers, ethicists, or various other concerned parties. In his blog post, Smith says that, “effective AI regulations should center on the highest risk applications and be outcomes-focused and durable in the face of rapidly advancing technologies and changing societal expectations. To spread the benefits of AI as broadly as possible, regulatory approaches around the globe will need to be interoperable and adaptive, just like AI itself.”

These are good principles, and some of them can help guide policy. But the statement lacks precision regarding the ultimate question of where the general default for algorithmic systems is to be set, and it also suggests that Microsoft is writing as much for a European audience as an American one.

Quietly, however, Microsoft and other tech companies know all too well how unwelcome their algorithmic innovations are across the Atlantic. The reason the ChatGPT moment isn’t happening in Europe is that the European Union’s policy regime makes such innovation virtually impossible. Europe’s leading export on the digital technology front is now regulation, not world-beating products. And things are about to get much worse. The European Union is advancing a new regulatory regime through its AI Act, which will decimate algorithmic innovation across the European continent, just as previous data regulations did for the first generation of online digital services.

That is the wrong AI governance regime for America because it would deny us the countless life-enriching and even life-saving applications that algorithmic systems have to offer, while also leaving our country at a competitive disadvantage relative to China. But we should also want more algorithmic innovation to happen in the U.S. not only because it improves our global competitive standing and expands the range of life-enriching technological services, but also because it helps ensure that important values really do get baked into the technological design process in the way that AI ethicists desire. America’s leading AI innovators do take these ethical issues seriously and will work to advance AI for the common good because that is the best way to build public trust in algorithmic systems. But we can accomplish that worthy goal without heavy-handed mandates and suffocating red tape like the Europeans are set to impose once again.

Source: Special Competitive Studies Project


Adam Thierer

Analyst covering the intersection of emerging tech & public policy. Specializes in innovation & tech governance. https://www.rstreet.org/people/adam-thierer