The Artificial Intelligence Productivity Paradox

Ryan Khurana
7 min read · Aug 13, 2018


Unlike narrowly impactful innovations, artificial intelligence (AI) is a general purpose technology (GPT): it has the ability to integrate into nearly every sector of the economy. GPTs, such as electricity, automobiles, and the internet, tend to make most of their impact in the earlier stages of the production chain, resulting in transformations in economic and social organisation. AI clearly follows the trend of previous GPTs: advancement in the field has not only driven an increase in demand for data scientists and engineers, but has also led to substantial resources being dedicated to the ethical and social impacts of the technology. The impact of AI on governance and fears of technological unemployment serve as indicators of the profound change that AI is set to bring. It is surprising, then, that such a technology has made little impact on the wider economy.

Since interest in the field surged after deep learning was used to dramatically reduce error rates in image recognition, AI has become ubiquitous in daily life. Computer vision identifies images on smartphones, while natural language processing enables realistic conversations with virtual assistants. The number of AI-related startups and the amount of venture capital invested in the field have skyrocketed since 2010, with little sign of slowing in the near future. According to a recent study by Jason Furman, former Chairman of the Council of Economic Advisers, and Robert Seamans of NYU, both overall funding of AI and the number of patent applications are increasing year on year.

Despite the field’s growing reputation and groundbreaking contributions, global productivity growth has remained near stagnant since the Great Recession. The United States has consistently underperformed its historical productivity growth rates since 2008, which raises the question of where all the gains from AI are going. If AI is a GPT, its impact should be most apparent in productivity, since it would increase the output every sector is able to produce. This type of contradiction is known to economists as the Solow Paradox, after Nobel Prize-winning economist Robert Solow’s 1987 observation that “you can see the computer age everywhere but in the productivity statistics”. The AI version of the Solow Paradox, the gap between an apparently accelerating rate of innovation and stagnant productivity, presents a challenge for policy makers who want to manage AI’s development so that its benefits are significant and widespread.

There are three main schools of thought to explain the current discrepancy between innovation and productivity. The first is known as the mismeasurement hypothesis, which holds that technology is making a real impact and improving people’s lives, but that the statistical tools being used fail to capture its full impact. Productivity statistics look at output per hour or output per worker, but fail to incorporate non-pecuniary gains in the quality of output. Smartphones may be manufactured at the same rate as 10 years ago, and sold at a similar price, but they are of far greater quality than older models. This improvement in quality, which has increased user satisfaction and enabled a wide range of digital capabilities that were impossible before, goes uncaptured, distorting policy analysis. The marginal cost of browsing Facebook more or Googling the answer to another question is zero, but AI is improving the quality of these experiences in a way that is not easily measurable. On this view, the lack of adequate economic metrics should not obscure the real effects of innovation.

While popular, this view is not without its flaws. The mismeasurement hypothesis was tested in 2017 by University of Chicago economist Chad Syverson, who analysed the relationship between productivity growth and the information technology (IT) gains believed to be missing from the statistics. He found no relationship between the size of a country’s IT industry and the rate of its productivity slowdown, nor between broadband penetration and the slowdown. If gains were merely being mismeasured, the gap should be starker where the IT industry is larger, and if consumers were benefitting regardless of measured output, productivity growth would still be expected in places with low broadband access. Since neither relationship exists, the compelling narrative of mismeasurement lacks empirical support.

The second view claims that it is not the productivity slowdown that’s illusory, but innovation. On this account, most prominently associated with economist Robert Gordon, the rate of innovation has been in marked decline for years, with the appearance of progress being the result of minor incremental improvements, mainly in consumer technologies. Gordon uses the thought experiment of taking someone from the 1800s to the 1900s and comparing their experience to the jump from the 1900s to the 2000s. While the first time traveller would see a radically different world with electricity, indoor plumbing, and cars, the second would see a world that simply had much better versions of previously existing technologies. The radical innovations that drove historical growth, he argues, are no longer occurring. This pessimistic view, which sees AI as simply an extension of past gains in computing, denies that it will have general purpose effects in the way electricity did.

The problem with this view is that it discounts the differing historical rates of technological diffusion around the world. Many of the world-changing technologies introduced during the first two industrial revolutions were not rapidly disseminated everywhere. Today, developing countries that lacked the infrastructure of legacy technologies adopt the latest breakthroughs faster than developed nations with infrastructural lock-in. A country like Romania has internet speeds far faster than those in the United States, while China’s payment infrastructure puts Europe and North America to shame. The apparent stagnation is more likely a consequence of the way new technologies are deployed in rich countries than an indication of declining innovation.

The final view argues that there is a lag between innovation and implementation. In this view, the potential for productivity growth already exists, but the techniques needed to implement new innovations, and the understanding that allows a technology to diffuse across an economy, are still missing. The researchers behind this interpretation point out that it “allows both halves of the seeming paradox to be correct,” permitting pessimism about measured productivity in the short run alongside optimism about the long run. They find that in each historical technological revolution, early leaders build the technology and profit from its creation, but it takes time before others can adequately develop their own versions; only then does market competition intensify, leading to widespread productivity and economic growth.

These findings are corroborated by another recent study, which finds that across various industries there exist certain “superstar firms” that are far more productive than their competitors. These market leaders tend to have smaller workforces and rely more heavily on patents, suggesting that if the rest of the market adopted their management practices, the missing productivity would appear as the innovations diffuse. On this analysis, the productivity gains predicted for AI would require no new tools to measure, nor would they be any milder than those of previous technological revolutions. The issue lies firmly in the domain of policy.

Greater communication between academic researchers in the field and the public at large, improved understanding of the pace of technological development, and more political discussion of the capabilities of new technologies would go a long way towards improving the general economic impact of AI. Certain initiatives, such as the UAE’s Ministry of Artificial Intelligence and the UK’s recently announced AI Council, are steps in the right direction, though they do not go far enough to fill the gap. While it is likely that market forces will incentivize communication and understanding in the long run, accelerating this process will benefit the nations that make these investments earliest.

A model that ought to be revived is that of the US Office of Technology Assessment, a congressional office that existed from 1972 to 1995, at the height of the computer revolution. It provided technology assessment in a way that made complex innovations understandable to policy makers and the general public. Not only did the office supply non-partisan evidence for technology-related decision making, but its mandate encouraged publications that were readable for non-technical audiences. The current conversation around AI, by contrast, is split between academic research and sensationalist journalism, with few steps in between. A revived mandate of this kind would help industry develop AI practices that are forward thinking, yet realistic.

A world with low productivity faces many challenges, especially one whose populations are ageing as fast as they are today. Artificial intelligence provides an opportunity to resolve these issues, creating widespread opportunity and prosperity. To bring this about, the right investments need to be made in encouraging the dissemination of this transformative technology.

— — — — — — — — — — — — — — —

Ryan Khurana is the Executive Director of the Institute for Advancing Prosperity. You can follow him on Twitter and connect with him on LinkedIn.

Thanks for reading! If you enjoyed the article, we would appreciate your support by clicking the clap button below or by sharing this article so others can find it.

Want to read more? Head over to Politics + AI’s publication page to find all of our articles. You can also follow us on Twitter and Facebook or subscribe to receive our latest stories.

