AI Eats the World: Preparing for the Computational Revolution and the Policy Debates Ahead

Adam Thierer
12 min read · Sep 10, 2022


Over the past decade, tech policy experts have debated the impact of “software eating the world.” In coming years, the debate will turn to what happens when that software can be described as artificial intelligence (AI) and comes to have an even greater influence. Every segment of the economy will be touched by AI, machine learning (ML), robotics, and the Computational Revolution more generally. And public policy will be radically transformed along the way.

This process is already well underway and accelerating every day. AI, ML, and advanced robotics technologies are rapidly transforming a diverse array of sectors and professions such as medicine and health care, financial services, transportation, retail, agriculture, entertainment, energy, aviation, the automotive industry, and many others.

As every industry integrates AI, all policy will involve AI and computational considerations. The stakes here are profound for individuals, economies, and nations. It is not hyperbole when we hear how AI will drive “the biggest geopolitical revolution in human history” and that “AI governance will become among the most important global issue areas.”

Unfortunately, many policy wonks — especially liberty-loving, pro-innovation policy analysts — are unaware of or ignoring these realities. They are not focusing on preparing for the staggering constellation of issues and controversies that await us in the world of AI policy.

Innovation defenders need to keep the Gretzky principle in mind. Hockey legend Wayne Gretzky once famously noted that the key to his success was, “I skate to where the puck is going to be, not where it has been.” Yet today, many analysts and organizations are focused on where the tech policy puck is now rather than anticipating where it will be tomorrow.

Because of its breadth, AI policy will be the most important technology policy fight of the next quarter century. As proposals to regulate AI proliferate, those who are passionate about preserving the freedom to innovate must prepare to meet the challenge. Here’s what we need to do to prepare for it.

The Coming Computational Revolution

Thomas Edison once spoke of how electricity was a “field of fields.” This is even more true of AI, which is ready to bring about a sweeping technological revolution. In Carlota Perez’s influential 2009 paper on “Technological Revolutions and Techno-economic Paradigms,” she defined a technological revolution “as a set of interrelated radical breakthroughs, forming a major constellation of interdependent technologies; a cluster of clusters or a system of systems.” To be considered a legitimate technological revolution, Perez argued, the technology or technological process must be “opening a vast innovation opportunity space and providing a new set of associated generic technologies, infrastructures and organisational principles that can significantly increase the efficiency and effectiveness of all industries and activities.” In other words, she concluded, the technology must have “the power to bring about a transformation across the board.”

Contemporary AI technology fits the bill. AI combines computation, data, algorithms, machine learning, networks, and advanced mathematics to drive a broad range of powerful applications and surface new techno-economic and techno-social paradigms. It will transform everything.

Several underlying realities make AI both the most important technological revolution and the most all-encompassing technology policy issue of our lifetime. First, AI reflects the power of combinatorial innovation, in which new technologies symbiotically build on top of one another, holistically accelerating each technology’s development and sophistication over time. AI and machine-learning capabilities build upon knowledge and capabilities developed through many other important technologies and sectors, including microprocessors, the Internet, high-speed broadband networks, sensors and geolocation technologies, and data storage and processing systems. In turn, AI and ML will become the building blocks for a great many other innovations going forward.

Indeed, AI/ML is set to become the “most important general-purpose technology of our era.” This general-purpose nature is the second complicating reality of AI. AI will improve analytics, marketing, customer service, and sales at almost every organization. It will also affect many other business practices, social norms and interactions, artistic endeavors, governmental operations, and much more. This broad scope of application is what makes AI so important for future innovation and growth, but it also complicates AI’s governance.

Much like consumer electronics, computing, and the internet before it, AI is difficult to govern in the abstract; it is much easier to imagine how to govern individual applications. This is one reason that governance frameworks for driverless cars, drones, and robotics are developing more rapidly than overarching regulation for general AI.

Finally, AI also raises special governance challenges because it is a dual-use (and often open-source) technology that, like chemical and nuclear technologies before it, has beneficial peaceful uses as well as many potential military and law enforcement applications. This fact is particularly important when discussing so-called existential risks.

Expanding Our Skillset

Thus, AI (and AI policy) is multi-dimensional, amorphous, and ever-changing. It has many layers and complexities. This will require public policy analysts and institutions to reorient their focus and develop new capabilities.

Market-oriented research centers tend to have lots of lawyers and economists on staff today. Those skillsets will remain essential. But AI opens new policy horizons and demands new capabilities. “Machine learning is at the intersection of statistics and computer science, occasionally also taking inspiration from cognitive science and neuroscience,” notes computer engineer Ethem Alpaydin. Those skillsets are rare in think tanks or policy advocacy shops today. But AI and ML also intersect with many softer sciences, including philosophy, sociology, anthropology, and countless others. The most important policy battles over the future of AI will be about ethics, and yet few think tanks have philosophers on staff.

Yet advocates of aggressive AI regulation can be found in many of these academic fields, and many think tanks and policy advocacy organizations on the Left are already gearing up to handle AI policy in a more holistic fashion, incorporating perspectives from many of these disciplines. This leaves individuals and groups who defend markets and innovation woefully behind. When one examines the major conferences on AI and robotics today, there is very little ideological balance. That’s not just because the conference sponsors are guilty of a pro-regulatory bias, but also because the pipeline of talent on the other side of the fence is dry. There are not many obvious free-market scholars who could bring greater balance to such programs.

Thus, the first order of business for our movement is clear: We need to get serious about developing new talent and bringing new skillsets to better address AI policy issues and defend innovation and progress in this arena. The free-market movement has done a good job developing young talent in economics programs and law schools, but it’s far behind in developing a diverse new generation of scholars in all those other fields mentioned. As AI eats the world, it also eats the academy, and it won’t be just the lawyers and economists weighing in on policy matters.

Mapping the AI Policy Terrain: Broad vs. Narrow

Beyond talent development, the other major challenge is issue coverage. How can we cover all the AI policy bases? There are two general categories of AI concerns, and supporters of free markets need to be prepared to engage on both battlefields.

These categories correspond to the two different forms of AI: strong and weak. Strong AI refers to broad, machine-based cognitive capabilities, the strongest form of which is sometimes called artificial general intelligence (AGI). An AGI would have cognitive traits as broad and as plastic as human cognition and would be able to engage many different types of problems, perhaps even surpassing human ability.

The vast majority of AI experts agree that such AGI and especially so-called superintelligence claims are wildly overplayed and that there is no possibility of machines gaining human-equivalent abilities any time soon — or perhaps ever. “In any ranking of near-term worries about AI, superintelligence should be far down the list,” says AI expert Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans.

Nonetheless, superintelligence claims attract considerable public attention because they conjure up gloom-and-doom scenarios. In popular culture, AGI figures prominently in the plots of many dystopian depictions of artificial intelligence, including many science-fiction books, movies, and television shows. Thus, while over-hyped and unrealistic, techno-panicky fears about superintelligent robots or AI systems will need to be addressed to ensure policy discussions are not dominated by sensationalism and irrational speculation about machines attaining human-level capabilities. This broad concern about AI is particularly acute in discussions about what governments should do to address existential risks or global catastrophic risks around “killer robots” or other military or law enforcement uses of AI. These issues will demand more serious consideration, but few free-market analysts are paying attention to them today.

Most of the AI policy action today involves various classes of narrow or “weak” AI applications. Weak AI executes a specific task extraordinarily well. Examples include mundane, behind-the-scenes functions such as playing games, image or language translation, and powering recommender systems on various websites. But narrow AI is also used in the other sectors already identified, such as medicine, fintech, and energy.

In one area of weak AI, transportation, free-market analysts and organizations are relatively well represented. A growing number of analysts and organizations are participating in the debate over autonomous driving and flying systems. These are critical policy areas that will continue to require attention. Unfortunately, the involvement of free-market analysts and organizations here is an outlier. Such views are not well-represented in many of the other sectors that computational science and machine learning are set to revolutionize.

This is why far more specialized knowledge — of both the technologies and the nuanced issues and policies surrounding them — will be essential. Tech analysts will need to work their way up entirely new learning curves and master a much wider array of topics and skills if they hope to defend AI innovation in targeted fields and sectors.

As research organizations get more involved in AI policy, they will need to consider whether they have the capacity to take on AI issues both generally and specifically. For many organizations, their core competency will likely lie in the targeted policy topics that they’ve long covered and for which they have skilled analysts who can expand their portfolio to include AI-related matters as AI enters those fields.

This is entirely logical, but a good grasp of the big picture — and some policy focus on it — is essential. Again, AI is amorphous and increasingly ubiquitous; technologies and related policy issues will collide and entangle as combinatorial innovation accelerates. Previously distinct technologies or issues will no longer be easily compartmentalized the way they were in the past.

For example, the safety, security, and privacy considerations surrounding data-driven and connected “smart” devices or applications in one sector (say, health care) could intersect with similar concerns in other fields (like financial technology) when their underlying technologies interact. Likewise, the regulation of algorithmic processes for one purpose (e.g., online content moderation on social media sites) could end up creating a variety of unanticipated policy issues (e.g., cybersecurity vulnerabilities or privacy considerations).

In sum, attempts to regulate computational capabilities in one way could have profound implications for many other technologies, sectors, and fields of science. Thus, when critics blithely suggest that “we should take steps to control AI,” slow its development, or even shut it down, they are (perhaps unknowingly) recommending that we take steps to control or influence a wide swath of economic activity and innovation.

Confronting the Formidable Resistance to Change

Finally, free-market analysts and organizations must prepare to defend the general concept of progress through technological change as AI becomes a central social, economic, and legal battleground — both domestically and globally. Every technological revolution involves major social and economic disruptions and gives rise to intense efforts to defend the status quo and block progress. As Perez concludes, “the profound and wide-ranging changes made possible by each technological revolution and its techno-economic paradigm are not easily assimilated; they give rise to intense resistance.”

So too for AI. Resistance to AI-driven technological change will grow far more intense. The challenge for innovation defenders will be to craft constructive responses to the many arguments and policies set forth by those who oppose further progress in our computational capabilities.

One thing should be clear: China and many other nations understand the stakes in the global AI race, and they are looking to greatly enhance their computational capabilities to ensure they possess the talent, firms, capital resources, and policies to challenge America’s early leadership in AI. That early lead is an outgrowth of our success with the Internet, e-commerce, and digital technologies. That stunning success story was enabled by wise policy choices that promoted a dynamic culture of creativity and innovation and rejected calls to settle for past technological, economic, or legal status quos.

Will America rise to the challenge once again by adopting wise policies to facilitate the next great technological revolution? Will our nation’s governance vision be rooted in the power of permissionless innovation, giving entrepreneurs the green light to find new and better ways of improving the human condition with life-enriching and life-saving technologies? Or will our governance vision shackle innovators with precautionary-principle-based regulatory mandates that put up red lights or endless prior restraints on creative activity?

It’s time for liberty-loving analysts to step up and prepare to do their part to ensure the benefits of this revolution come to fruition. I personally plan to spend every waking moment of my life in coming years making the continuing case for the importance of innovation and progress in this crucial technological sector. Will you join me?

[I thank Neil Chilson for feedback on this essay.]

__________


Adam Thierer

Analyst covering the intersection of emerging tech & public policy. Specializes in innovation & tech governance. https://www.rstreet.org/people/adam-thierer