The Disruptive Economics of AI

The effects of AI on business are unfolding slowly but steadily.

MIT Initiative on the Digital Economy
Dec 10, 2022



Irving Wladawsky-Berger

AI is emerging as the defining technology of our era. AI technologies are approaching or surpassing human levels of performance in vision, speech recognition, language translation, and other domains that not long ago were viewed as exclusively human. Over the next few years, major advances in deep learning and foundation models will lead to even more impressive AI-based applications.

At the same time, the opportunities for AI to improve the quality of business decisions and create economic value are boundless, note University of Toronto professors Ajay Agrawal, Joshua Gans, and Avi Goldfarb in From Prediction to Transformation, a recent Harvard Business Review article based on their new book, Power and Prediction: The Disruptive Economics of Artificial Intelligence. “But because decisions in one area of an organization usually have an impact on decisions in other areas, introducing AI often entails redesigning whole systems. In that way, AI is similar to groundbreaking technologies of the past, like electricity, which initially was used only narrowly but ultimately transformed manufacturing.” (Goldfarb also spoke recently at the MIT IDE about his new book; watch video of the seminar here.)

In other words, despite its human intelligence connotations, the widespread deployment of AI systems across the economy will follow a life cycle similar to those of previous historically transformative technologies and will require a fundamental rethinking of organizations, processes, business models, and talent.

The View from 2017

This understanding wasn’t always widespread, of course. As recently as 2017, I attended a University of Toronto seminar where professor Goldfarb spoke about the economic value of AI. He explained that the best way to assess the impact of a potentially transformative technology is to look at how it reduces the cost of a widely used function. Computers, for example, have reduced the cost of digital operations like arithmetic by several orders of magnitude.

As a result, we’ve learned to define all kinds of tasks in terms of digital operations, e.g., financial transactions, inventory management, word processing, photography. Similarly, the internet has reduced the cost of communications and the Web has reduced the cost of access to information, which has led to a huge increase in applications based on communications and information, like music and video streaming, and digital media.

Viewed through this lens, AI is essentially a prediction technology, and its economic impact is to reduce the cost of prediction and thereby expand the number and variety of applications that rely on predictions. A key finding of Stanford’s 2022 AI Index report was that AI is becoming much more affordable and higher performing, leading to the widespread commercial adoption of AI-based applications. “Since 2018, the cost to train an image classification system has decreased by 63.6%, while training times have improved by 94.4%,” said the report.

Goldfarb and his U of T colleagues published their original research in the 2018 book Prediction Machines: The Simple Economics of Artificial Intelligence. “When we published Prediction Machines in 2018, we thought we had said all we needed to on the economics of AI,” wrote the authors in the preface of their new book. “We were wrong. We laid out a framework for the economics of AI in that book, which remains useful today. However, the Prediction Machines framework only told part of the story, the point solutions part. In the years since, we discovered that another key part of the AI story had yet to be told, the systems part. We tell that story here.”

“While we had been focused on the economic properties of AI itself — lowering the cost of prediction — we underestimated the economics of building the new systems in which AIs must be embedded,” they added. “Had we better understood that then, instead of assessing the landscape of prowess in the production of state-of-the-art machine learning models, we would have instead surveyed the landscape for applications focused on prediction problems where the systems in which they would be embedded were already designed for machine prediction and would not require displacing human predictions. We would have looked for enterprises that already employed large teams of data scientists who had integrated predictive analytics into the organization’s workflow.”


Finance Deployments Lead the Way

In their search for such enterprises, they found that financial institutions were among the most ready to deploy AI at scale across their organizations because they already employed large teams of analysts to predict a variety of criminal behaviors, including fraud, money laundering, and sanctions noncompliance. So were e-commerce organizations, whose success depended on key data-driven decisions such as personalized product recommendations and smart inventory management. A number of other industries had embraced AI in specific areas, such as automated drug discovery in pharmaceuticals and underwriting decisions in insurance, but while these were promising point solutions, they could not be considered transformational, system-wide solutions.

As we’ve learned over the past two centuries, there’s generally been a significant time lag between the initial acceptance of an exciting new technology and its transformative impact across economies and societies.

It takes considerable time — often decades — for these new technologies to be widely embraced. Historically transformative technologies have great potential from the outset, but realizing that potential requires a fundamental rethinking of organizations and major complementary investments, including business process redesign; innovative new products, applications and business models; and the re-skilling of the workforce.

For example, U.S. labor productivity grew at only 1.5% per year between 1973 and 1995. This period of slow productivity growth coincided with the rapid growth in the use of IT in business, giving rise to the Solow productivity paradox, a reference to Nobel Prize–winning MIT economist Robert Solow’s 1987 quip: “You can see the computer age everywhere but in the productivity statistics.” But, starting in the mid-1990s, U.S. labor productivity surged to over 2.5% per year, as fast-growing internet technologies and business process re-engineering helped to spread productivity-enhancing innovations across the economy.

Similarly, productivity growth did not accelerate until some 40 years after the introduction of electric power in the early 1880s. It took until the 1920s for companies to figure out how to restructure their factories to take advantage of electric power, with new manufacturing innovations like the assembly line.

Key Challenges

Let me summarize a few of the key challenges in the deployment of AI-based systems that were discussed in the HBR article.

AI-driven systems change takes time. The article argues that while language translation, medical image analysis, and financial fraud detection are impressive AI advances, they’re hardly transformational. “They slot into existing businesses without much fuss, precisely replacing the humans who traditionally made predictions. In all other respects, the businesses are unchanged.” At this point, the impact of AI is nowhere near the transformative impact of electricity or IT.

AI is changing decision making. “Most decisions require two things of the decision-maker: the ability to predict the possible outcomes of a decision, and judgment. Prediction is largely based on data. Judgment is basically a subjective assessment of contextual factors that are not easily reduced to data.”
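To make that split concrete, here is a minimal sketch (my illustration, not the authors’; the scenario and all numbers are made up): an AI model supplies the prediction as a probability, a human supplies the judgment as payoffs for each action/outcome pair, and the decision is simply the action with the highest expected payoff.

```python
# Minimal illustration of "prediction + judgment" in a decision.
# All names and numbers are hypothetical, not from the article.

# Prediction (from an AI model): probability that it rains tomorrow.
p_rain = 0.3

# Judgment (from a human decision-maker): payoffs for each
# action/outcome pair, reflecting contextual factors that are
# not easily reduced to data.
payoffs = {
    ("carry umbrella", "rain"): 80,     # stayed dry, minor hassle
    ("carry umbrella", "no rain"): 90,  # carried it for nothing
    ("go without", "rain"): 0,          # soaked
    ("go without", "no rain"): 100,     # best case
}

def expected_payoff(action: str) -> float:
    """Combine the AI's prediction with the human's judged payoffs."""
    return (p_rain * payoffs[(action, "rain")]
            + (1 - p_rain) * payoffs[(action, "no rain")])

best = max(("carry umbrella", "go without"), key=expected_payoff)
for action in ("carry umbrella", "go without"):
    print(f"{action}: expected payoff {expected_payoff(action):.1f}")
print("decision:", best)
```

A sharper prediction changes the decision only through this expected-payoff calculation; the payoffs themselves remain a subjective, contextual call, which is exactly the judgment the authors describe.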

AI shifts uncertainty. “The value of AI comes from improving decisions by predicting what will happen with factors that might otherwise be uncertain. But a consequence is that your own decisions become less reliable for others. Introducing AI into the value chain means that your partners in it will have to coordinate a lot more to absorb that uncertainty.”

For example, if restaurants adopt AI for accurate demand forecasting, they’re likely to waste less, sell more, and become more profitable. However, their suppliers now face increased unpredictability because restaurant orders are likely to change from week to week. As a result, suppliers need to embrace AI to better predict their customers’ orders, and their demand uncertainties are now passed on to the next levels in the supply chain.
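A toy simulation can make this shift in uncertainty visible. In the sketch below (my illustration with made-up numbers, not from the book), a restaurant that orders a fixed safety-stock quantity every week gives its supplier a perfectly stable order stream; once it orders against an AI demand forecast instead, its waste drops but its orders start to vary from week to week.

```python
# Toy simulation of AI demand forecasting shifting uncertainty
# upstream. All numbers are made up for illustration.
import random
import statistics

random.seed(42)
weeks = 52
demand = [random.gauss(100, 15) for _ in range(weeks)]  # true weekly demand

# Without AI: the restaurant orders a fixed safety-stock quantity.
fixed_orders = [120.0] * weeks

# With AI: the restaurant orders its forecast (here, true demand
# plus a small forecast error) with a thin safety margin.
ai_orders = [d + random.gauss(0, 5) + 5 for d in demand]

def waste(orders):
    """Units ordered but not sold, summed over the year."""
    return sum(max(o - d, 0) for o, d in zip(orders, demand))

print(f"fixed orders: waste={waste(fixed_orders):.0f}, "
      f"order stdev={statistics.stdev(fixed_orders):.1f}")
print(f"AI orders:    waste={waste(ai_orders):.0f}, "
      f"order stdev={statistics.stdev(ai_orders):.1f}")
```

The restaurant is better off, but the order stream its supplier sees goes from zero variability to roughly the variability of final demand, and that is the uncertainty the next level of the supply chain must now absorb.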

AI systems require coordination with modularity. “The adoption of AI will often involve a system that finds an optimal balance of modularity and coordination. Modularity insulates decisions in one part of the organization from the variability — the ripple effects — that AI creates in others. It reduces the need for reliability. Coordination, in contrast, counters the lack of reliability that comes alongside AI adoption.”
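Continuing the toy supply-chain example (again illustrative, with hypothetical numbers), the supplier facing those newly variable orders can respond in the two ways the article names: with modularity, producing at a steady rate and letting a buffer stock insulate its operations from the ripples, or with coordination, receiving the restaurant’s forecast so its production can track each week’s order.

```python
# Two hypothetical ways a supplier can absorb the order variability
# created by a customer's AI forecasting (illustrative only).
import random
import statistics

random.seed(7)
weeks = 52
orders = [random.gauss(105, 15) for _ in range(weeks)]  # variable AI-driven orders

# Modularity: produce at a steady rate and let a buffer stock
# absorb the week-to-week swings in orders.
buffer, production = 100.0, 105.0
shortfalls = 0
for o in orders:
    buffer += production - o
    if buffer < 0:  # buffer ran out; the order is partially unfilled
        shortfalls += 1
        buffer = 0.0

# Coordination: the customer shares its forecast, so the supplier
# produces to (a noisy view of) each week's actual order.
coordinated = [o + random.gauss(0, 3) for o in orders]
tracking_error = statistics.mean(abs(p - o) for p, o in zip(coordinated, orders))

print(f"modularity:   steady production, {shortfalls} weeks with shortfalls")
print(f"coordination: production tracks orders, mean error {tracking_error:.1f} units")
```

Neither response is free: the buffer occasionally runs dry, while coordination demands information sharing across firms. That trade-off is the balance between modularity and coordination the authors argue AI adoption forces organizations to strike.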

“[T]he promise of AI’s prediction technology is similar to that of electricity and personal computing,” they conclude. “Like them, AI began by resolving a few immediate problems, creating value in isolated, tightly bounded applications. But as people engage with AI, they will spot new opportunities for creating solutions or improving efficiency and productivity. … As these opportunities are realized, they will create new challenges that in turn provide more opportunities. So as AI spreads across supply chains and ecosystems, we will find that all the processes and practices we took for granted are being transformed — not by the technology itself but by the creativity of the people who are using it.”

This blog first appeared on November 17 here.
