The Financialization of Innovation in the 21st Century

This post is by Margaret B. W. Graham, professor (retired), Desautels Faculty of Management, McGill University

Much under-represented in the conversation about the emerging 4th Industrial Revolution are the consequences of the financialization of innovation. (See Mike Collins, “Wall Street and the Financialization of the Economy,” Forbes, Feb. 4, 2015.) I am convinced that there is a direct line between the financialization of the American economy in the last third of the 20th century and the reckless rapidity with which new technologies have been introduced in the open market. The push to get to market first, the emphasis on speed, not to mention the sense of inevitability about the potential negative consequences of these technologies for well-being at so many levels, all relate to the overriding demand for new financial products offering high yields in a low-interest-rate environment.

Certain aspects of the transition to a 4th Industrial Revolution are of course reminiscent of previous industrial revolutions. In earlier eras, though the underlying technologies were different, the monetization of knowledge and the pressures to realize short-term returns to invention also distorted or negated positive features of emerging technologies, unnecessarily magnifying their negative consequences for society. This aspect of innovation, now disingenuously termed “unintended consequences,” goes back a long way. The Venetian city-state, which attached a premium to attracting advanced technology, nevertheless routinely held formal hearings before granting its coveted 20-year patent monopolies to inventors. Venice went out of its way to reward inventors, foreign and domestic, but it was not naïve about unintended consequences. No doubt as a result of bitter experience, it anticipated, and wanted to prepare for, what might be in store for the more vulnerable parts of its population. As Catherine Fisk has shown in her book Working Knowledge, individual inventors have also suffered, at least since the early 19th century, as attempts to monetize technology have deprived them of the once inalienable rights to the fruits of their knowledge work.

Continuities notwithstanding, what is new in the transition between the 3rd IR and the 4th IR is the way the organization and financing required to deploy the new knowledge have been dispersed and routinized, to the point that even start-ups have been commoditized. This has been possible in part because 3rd and 4th Industrial Revolution technologies are information based, and much of their value is intangible.

________________________________

“The demand for technology is a derived demand, that is, it depends on the demand for goods and services that technology helps produce; there is little or no demand for technology for its own sake.” Joel Mokyr, The Lever of Riches

For most of the twentieth century, and in most industrial countries, this declaration by a leading historian of technology, writing in 1990, would have been regarded as a truism, underscoring the obvious materiality of technology. For over half a century technological innovation had been financed and performed mainly by governments working through large companies, meeting needs that were vital for the military or essential for the public good. This way of organizing innovation channeled and concentrated the necessary resources, and sometimes faced up to responsibility for the impacts on labor and society. Since the turn of the 21st century, however, it has not been obvious that the development of innovative technologies is intended to fulfill a genuine need for goods and services, at least not tangible ones. Nor is it clear that the market allocates resources effectively or is prepared to deal with unforeseen drawbacks of technological deployment. In fact, the primary “need” is for a way to monetize the imagined potential of an emerging technology before it actually manifests in goods and services that meet a social, or even an economic, need.

There are innumerable reasons to develop technology that do not depend on the demand for the goods and services the technology helps produce. They may include, but are not limited to: an inventor’s desire to cash out her invention rather than defend or develop it; the necessity for universities, or other research-performing institutions, to maintain their flow of research funding; the strategic intent of patent-production houses (often known as patent trolls) to lie in wait until an organization with deep pockets commits so much to a new technology that it can be trapped into infringing on the troll’s property; and, as already mentioned, the need for the financial industry, especially the subset comprising investment banks and venture capital firms, to maintain a steady supply of potentially high-yielding financial products in which their clients can invest.

Of course, the demand for investment products in technology rests on the assumption that actual products and services based on the technology will eventually reach a real market. In the meantime, though, the idea of the technology’s potential applications, together with the insatiable appetite for risky investments that can produce higher yields than more ordinary ones, supports a profitable stream of earnings for underwriters and preferred early investors, not to mention the journalists and consultants who focus on new technologies. The Gartner Hype Cycle charts this phenomenon, first observed in the last quarter of the 20th century, when investors were eager to put money into technologies like robotics long before any commercially useful robots had been deployed. Gone is the previously standard S-curve along which any major inventive concept could be expected to travel, from fluid form factor and uncertain economics to increasingly settled specifications and specific value. Replacing the slow, gradual climb along the S-curve is a dramatic early spike in journalistic coverage, leading to investor excitement that diverts capital from established enterprises with conventional needs for financing. Such piling on has resulted in more than one financial crash (e.g., telecoms, dotcoms), destroying billions in investments for tertiary investors who arrived too late to reap the rewards of an Initial Public Offering (IPO) but far too early to hold on through the slog to eventual product introduction. Moreover, the inopportune timing of the Great Recession, combined with the looming retirement of the “baby boom” generation, has intensified the demand for higher-yielding investments. The financial industry has responded, while finding ways to take very little of the risk upon itself. Whether this trend will continue if and when interest rates rise to historically more normal levels, and as stock prices become increasingly volatile in the face of uncertainty, remains to be seen.

If the annual number of viable start-ups no longer exceeds the number of company failures (a ratio often cited as an indicator of innovation in an economy), it is surely because, in the age of the “Unicorn” (a high-growth start-up valued at or above $1 billion), technological entrepreneurs have derived more fulfillment, and experienced less financial misery, by selling out to large companies or private equity investors than by taking a company public and trying to maintain control of it out in the wild. Can we foresee the consequences, intended or not, of the financialization of technology-based innovation in this new era?

I ponder this question as I note repeated references to Artificial Intelligence in Ignites, a daily newsletter that serves the asset management industry. AI, it seems, is something that many investors, both institutional and retail, want represented in their portfolios. I recall a similar demand for robotics in the 1980s, and how it led to many investments in bogus companies that hardly got further than buying expensive office furniture. A similar discussion may have taken place in corporate boardrooms in the early years of the 3rd Industrial Revolution about the importance of investing in research in electronics, but I cannot imagine anyone at that time saying they wanted to invest in Physics.

For much of the twentieth century technological innovation in the U.S. was channeled through and financed by the aforementioned nexus of large institutions (government agencies, technology-based companies, and universities) that had the largest claim on tax dollars and constituted the primary source of funding for innovation at the time. Access to leading-edge technology during and after World War Two was achieved primarily through government channels, often protected by government classification and mediated by government laboratories and prime contractors. It was strictly reserved for companies willing to accept government contracts and play by government rules. During the Cold War that followed, large research-performing companies funded much of their own R&D, but they could not operate entirely independently. Indeed, while National Science Foundation records show that 50% of all research was privately funded, the informal penalties paid by companies that refused government funding, for fear that the government would lay claim to their intellectual property, were costly and debilitating. The complaint at the time was that this approach was unnecessarily costly and time-consuming. Towards the end of the century, as with the race between the Human Genome Project and privately funded genetic sequencing projects, pressures grew to release technology funded by the government and developed in large government and corporate laboratories into the open, fully competitive market.

Though much of what the U.S. government contributed to technological innovation was invisible to the general public, it had been providing many innovation services that would not be so readily picked up by private industry or backed by private funding. Governments had provided early markets, paying uneconomic prices, and had operated what amounted to the first test-beds for many innovations. Through its various military arms, the U.S. federal government not only funded the R&D but controlled the intellectual property and built the human capital needed to make radical new technologies economically viable. In both the universities and the industrial laboratories where much of the relevant research was actually performed, the scarcity of research personnel and the burden of educating a new generation of scientists slowed the momentum of research until that new generation took over. The cost of providing these services increased along with the funding. From the 1940s through the 1960s, physicists in particular demanded and received special treatment that eventually impeded the more successful collaborative process of innovation that had prevailed under wartime conditions. What is often forgotten is that the perceived delays had a beneficial effect: manufacturers, users, and funders had time to adjust to the demands of the new technologies, to train for their special needs, on the part of both manufacturers and managers, and to become acquainted with their economics.

As late as the 1980s in the U.S., and to a greater extent in other leading industrial countries, the development of new technology-based innovation still followed a simple sequence. “High-tech” products were presumed to develop in a regular order, serving military, institutional, and consumer markets in turn, starting with the military market, which had the highest performance demands and the greatest capacity to pay. Although many serious technological developments took place outside the purview of the Military-Industrial Complex, federal and state governments, both military and civilian, played an important initiating and stabilizing role that affected the climate for innovation in the private sector as well. In the course of acquiring and building its technological capacity in two world wars against scientifically superior opponents, the U.S. military built the factories and trained generations of experts, from technicians to researchers. After World War Two the well-equipped factories and skilled manpower needed to compete in the Cold War contest with the USSR were available because of these investments. Challenged by competitors like Germany and the USSR, the U.S. military also provided the initial market for many “high-tech” products in their early “buggy” stages. These included products as varied as radio, light metals, numerically controlled machine tools, transistors, radar systems, audio recorders, antibiotics, and later global positioning systems, voice recognition, new materials, and computers. For much of the twentieth century, use in wartime conditions not only proved new systems’ feasibility but prepared industry and society for their wider commercialization. “Learning by doing” for military markets gave producers further down the chain plenty of time to gather information about the issues the new technologies were likely to raise for their different customers: the skills and investments required to design and produce them, the training and collateral equipment required to use them, and finally, the displacement of certain skills needed to supply their predecessors.

Initial conditions matter, of course, and the weaknesses of this common development sequence started to emerge in the 1970s, when former enemies Germany and Japan, which lacked military markets of their own and therefore designed directly for mass markets from the start, proved more adept than their American counterparts at making new technologies affordable and profitable. Most people are familiar with the competitive difficulties that American products like small cars and machine tools encountered when they ran up against foreign goods that were cheaper, easier to operate, and of higher quality in fit and finish. Few were aware of the accompanying problems related to the financialization of American industry. Technologies that had been developed barely past the point of feasibility were sold off in their infancy: to engineer profits, to cash in on a more certain, if cumulatively smaller, stream of returns through licensing, and to avoid the further, larger investments required to commercialize them. Few people are aware to this day of the virtual government subsidies that American companies formerly enjoyed by aiming their initial products at buyers whose end-users already had collateral expertise acquired in wartime. While the U.S. expressed outrage and contempt for nations that had “industrial policies,” like Japan Inc., the list of technologies and industries funded by the U.S. Department of Defense amounted to something very similar.

The standard development sequence described above changed with the intensification of the Vietnam War. A significant portion of the scientific research community inside universities, led by graduate students who opposed the war and feared the draft, balked at doing military research, and even at doing product development in industrial laboratories. One reason that so many large, formerly innovative companies failed in major attempts to innovate had to do with the cultural barriers erected when the post-war generation of scientists made technical choices that put their own professional interests ahead of those of the broader company. In time the research migrated, or was outsourced, to related institutions linked to universities, such as the Stanford Research Institute, the Jet Propulsion Laboratory, or Lincoln Labs. But as changes in tax law and regulation made more investment money available from non-governmental sources, new pockets of privately financed invention popped up around many universities. New forms of inter-institutional R&D were enabled by the very information technologies that formed the backbone of the 3rd Industrial Revolution. By the 1980s the industrial side of the formerly closed and secretive U.S. innovation system had opened up and spread out to the point that international researchers formed a significant part of the U.S. R&D workforce, both domestically and eventually in their own countries. With many qualified migrants ready to do the work, the more industrialized countries saw little urgency in training domestic workforces, which would inevitably demand higher pay and better conditions.

In the face of a more competitive innovation landscape, populated by both new companies and foreign competitors, large companies that had formerly performed the lion’s share of industrial R&D in many fields found it harder to sustain their investments in the long-range research that leads to major innovation. Poor innovation track records, difficulty retaining scientific research staff, and pressure from Wall Street to smooth earnings all forced them to trim their sails on long-term research programs aimed at producing “blockbusters” in less than a decade. With the increase in private investment money available outside government channels, both personal and corporate, the success of a few visible escapees from large laboratories (Fairchild, Shockley, Intel) and of start-ups with famously humble origins like Microsoft and Hewlett-Packard inspired many others to launch their own ventures based on a spate of emerging technologies. Of course, all was not as open or above-board as the public picture painted it. Vast sums of money, and a significant segment of the available expertise, have been devoted to research and development in and for the intelligence sector, which makes the classified nature of earlier military technologies seem transparent by comparison. The old development sequence, which involved learning from military applications, transferring to institutional users, and then developing for consumer applications, no longer functions. Indeed, in many areas the U.S. military and other government agencies must wait until products have been developed for the commercial market before adopting them for government use.

What aspects of financialization could cause problems for the technologies now emerging? In the first two decades of the 21st century, cheap money, ever-decreasing tax burdens, and an appetite for risky, hence high-yielding, investments, combined with global technology competition in leading sectors and government commitments to being first in certain technologies, have supported very rapid development of several different core technologies. But certain financial practices and trends seem likely to slow the pace going forward. Among these: the consolidation of large research-performing companies, each buying up smaller concerns with potentially competing technologies; the amassing of intellectual property to satisfy venture capitalists’ demands for tangible collateral; and high-stakes patent cases between companies with very large financial reserves. All of these are reminiscent of the conditions that brought about the closing of the U.S. innovation system in the 1930s. The staggering increases in compensation for technical specialists, especially in fields like AI, combined with a blurring of the boundaries in universities between research for the sake of knowledge and research for patents, point to another problem with historical precedent: the scarcity of a trained workforce, including advanced researchers in AI, mathematically trained practitioners able to work with big data, and qualified technicians. Combined with increasing needs for information security, these conditions seem likely to portend a reclosing of the U.S. innovation system. This time, however, several other research-performing countries have the intellectual capacity and the motivated personnel to provide serious and effective competition. These people, often trained in the U.S. and Europe, have access to global research networks and the ability to absorb and build on foreign intellectual property gained legally or not. In addition, their governments are prepared and committed to making the kinds of investments in human capital that the U.S. government made during the Cold War.

What these new government-supported efforts may not have, however, is a concern for the wider impacts of the new technologies on society.

Will the private sector play the role going forward that government has played in the U.S. during the 3rd Industrial Revolution? So far the evidence suggests that it is much more narrowly focused on collecting what appear to be certain short-term returns. Unlike firms in Germany, Canada, and some Scandinavian countries, where corporations have a tradition of building human capital and where the European Union and other quasi-socialist governments provide healthcare and guarantee safety nets, U.S.-based global firms and private equity alike show more signs of demanding concessions from governments, even playing government entities off against each other to generate the best returns for their shareholders. This suggests that, in the countries where major technology-based innovation has had the most traction for the past century, the trend toward financialization will almost certainly strengthen. Even if technological momentum weakens, fears about foreign competition are likely to divert attention and resources from efforts to mitigate the impacts of technological change. As a consequence, the outlook for learning, adapting, and mitigating those impacts in the 4th Industrial Revolution is even less positive than it was in prior industrial revolutions.
