Second Machine Age or Fifth Technological Revolution? (Part 4)
This is the fourth instalment in a series of posts that reflect on aspects of Erik Brynjolfsson and Andrew McAfee’s influential book, The Second Machine Age, in order to examine how different historical understandings of technological revolutions can influence policy recommendations in the present. You can also read the first three instalments:
The historical patterns of bounty and spread
A central idea in The Second Machine Age is the definition of the consequences of the new technologies as a combination of ‘bounty’ and ‘spread’. With the concept of ‘bounty’, Brynjolfsson and McAfee refer to the wealth-creating capacity of information and communications technologies (ICT), particularly in terms of significantly increasing productivity. In discussing the ‘spread’ of this bounty, they refer to increasing inequality and to the ‘winner-takes-all’ polarisation of the wealth created. These are indeed two key characteristics of what has happened up to now in the ICT revolution. In this section I will argue that both elements correspond to a pattern that has repeated itself with each surge. And rather than accept such consequences as being a peculiar feature of the current new technologies, I will hold that they are in the nature of the capitalist system and of the specific manner in which it assimilates major technical change. Nevertheless, the fact that those typical features are taken to an extreme by digital technologies does pose new challenges that merit special attention. In this post, I will discuss the historical recurrence and, in the next, the uniqueness and its implications.
The recurrence of bounty: successive leaps in productivity and paradigm shifts
Brynjolfsson and McAfee rightly reject the pessimistic view of the ‘new wave of worries about the “end of growth” by economists, journalists, and bloggers’, which interprets the slow increase in productivity in the 1970s and 1980s and/or the reduction in its growth after 2005 as a sign of the end of the impact of ICT on the economy. Considering the paradox noted by Solow, who claimed that he saw ‘the computer age everywhere except in the productivity statistics’, they counter with studies that show the much greater productivity of those who produce or intensively use ICT compared with those who do not. In addition, they draw a historical parallel between the lagged impact of electrification from the end of the 19th century and that of ICT now. Using a study by Chad Syverson, they show that it took thirty years in the US for the impact of electricity on productivity to become truly significant. I agree with this point, but would argue that it was not electricity alone, but a whole system of technologies and infrastructures, involving cheap steel, transcontinental railways and telegraph (and eventually the telephone) together with heavy engineering (metallurgical, electrical, chemical, civil and naval), that came together to gradually provide the general leap in productivity observed by Syverson.
Indeed, that is how it has always happened. As discussed in the previous post, each of the five technological revolutions to date has featured a combination of new all-pervasive infrastructural networks, new interrelated industries, products and technologies, as well as a new cheap source of energy and/or a new material. It takes a long time not only to install all the new technologies, and for the process of Schumpeterian creative destruction to replace or modernise what existed before, but also for the new technological paradigm to propagate and be assimilated.
The canals and turnpike roads, water wheels and machinery of the first surge began to diffuse across Britain in the 1770s, but only made a real difference to the economy after the turn of the century. Similarly, the railway mania of the 1840s made its most significant impact on the British economy during the years known as the Victorian boom, after the Great Exhibition of 1851. As noted in a previous post, the 1880s and 1890s saw the start of the first globalisation, as feats of naval and civil engineering inter-connected the whole world, while one industry after another mechanised and integrated from supplies to sales. However, the overall impact of this on productivity, and in the forging ahead of the American and the German economies, did not occur until after 1900. And then, from 1913 in the USA, mass production, cheap oil, petrochemicals and the automobile provided another technology system for a leap in productivity, which only revealed all of its wealth-creating power during WWII and in the post-war boom. This pattern, I claim, is the same one we have been experiencing with ICT and with its still unused potential for transforming all other industries and activities. Thus, while I agree with Brynjolfsson and McAfee in seeing the great bounty of ICT as being still ahead of us, my point is that there is not one historical parallel from which to learn about the lag in the productivity impact on the economy — but four.
The other argument given by the authors for expecting a delay in the productivity benefits is the need for complementary innovations. They cite the path-breaking work of Paul David to argue that successful adoption involves important ‘business process changes and organisational co-inventions’. The example provided by David refers to the adoption of electricity, which originally maintained the centralised model of power typical of the steam engine. It took another generation to understand that, with electricity, each machine could be powered individually. More generally, this is the idea that Chris Freeman and I have been raising since the 1980s: a technological revolution is ‘a combination of interrelated product and process, technical, organisational and managerial innovations, embodying a quantum jump in potential productivity for all or most of the economy and opening up an unusually wide range of investment and profit opportunities […] Such a paradigm change implies a unique new combination of decisive technical and economic advantages’. We understand this complex combination as a ‘techno-economic paradigm’; a major change in managerial and organisational common sense that evolves as the new technologies propagate. In my 2002 book, I attempt to identify the different sets of principles that have guided innovation, investment and organisation in each of the five surges (see table 1).
Table 1: A different techno-economic paradigm for each technological revolution: 1770 to 2000s
The irony is that, when the innovation potential of the prevailing revolution has been exhausted, and its markets saturated, it is the original success in implementing that paradigm which ends up becoming an inertial force that delays the diffusion of the next revolution and the reaping of its full benefits. As Paul David suggests, a generational shift may be required for achieving the full paradigm change, because the difficulty resides in giving up the sources of success in the previous revolution and adopting a different set of principles for innovation across the board. It is with those new principles that what Brynjolfsson and McAfee call ‘combinatorial innovations’ are made. Each new paradigm opens avenues for innovation across the whole economy, transforming every single industry and activity, with a potential leap in productivity and quality, probably with radically different business models. Furthermore, social and institutional innovations are also driven and shaped by the new paradigm. Thus, today, the bureaucratic pyramids, abandoned by the big corporations, but still prevalent in government, education and other organisations, are ripe for transformation with the networked, flexible, continuous improvement paradigm emerging from the ICT revolution. I shall return to this in the next post.
The recurrence of spread: inequality
An aspect that particularly concerns Brynjolfsson and McAfee is the ‘spread’ of the wealth potential brought about by ICT, seen currently in the polarisation of income and intense concentration of wealth in very few hands. They seem, however, to ascribe this phenomenon to the special nature of the ‘second machine age’, rather than see it as something that happens again and again with successive revolutions.
We have all become familiar with the figures of US inequality provided by Piketty and Saez, who remark on the current concentration at the top of the income distribution, but see the improvement in the 1950s and 1960s as an exception in a secular trend. Here, I would like to provide a different interpretation. Because of the creative destruction process that occurs during the installation period of each technological revolution, every revolution eliminates skills and industries, devastates previously industrial regions and polarises incomes between winners and losers. And in every case, there will be new millionaires, with the new wealth concentrating in the hands of the novel entrepreneurs and of the bold and ruthless investors and speculators. Then, when the inevitable bubble collapses, governments ‒ either for humanitarian reasons or for the sake of social peace ‒ tend to step in to reverse some of the worst social consequences of the ‘creative destruction’ process and to regulate the negative behaviour of the financial world. And that is clearly what the Piketty and Saez figure shows. Both in the installation period of the mass production revolution — in the 1910s and 1920s — and in that of the current ICT one — from the 1980s to the 2000s — the income share of the top 1% of taxpayers tends to reach 25%. This inequality was reversed by government policy in the post-war period — 1950s and 1960s — in what has been the most successful positive-sum game in the history of capitalism (see figure 1).
Figure 1: Percentage of income earned by top 1% of taxpayers in the USA
The policies that led to that result would now seem quite impossible to even consider. Just to give an idea of the magnitude of the change, we can look at the top and bottom tax rates during that golden age (see figure 2). Such high rates are historically associated with wartime (as was the case in the US in both WWI and WWII) and not with a peaceful and prosperous period.
Figure 2: US income tax spread
In this case, not only does it seem astonishing in today’s policy climate that a rate of 90%+ was accepted, but also that it was held steady by Eisenhower, a Republican president. The explanation is simple. Business learnt two crucial things during the war: one, that taxes went through the hands of government and returned as demand for their products and, two, that, given the new mass production methods, the greater the demand for identical products, the lower the unit cost and the higher the potential profit. The taxes that turned into massive and uninterrupted demand for the products of the ‘American Way of Life’ and for the Cold War were the best profit-making opportunity.
In a much less ambitious way, each of the previous revolutions saw government try to overturn the inequality brought about by the bubble years. During the Victorian boom, the improvement was enough to lead Engels, during the recession that occurred at the end of the 1850s, to write to Marx warning him not to expect the working classes to rebel because ‘The masses will have become damned lethargic as a result of this prolonged prosperity’. Indeed, the abject poverty he had written about in The Condition of the Working-Class in England in 1844 had been sufficiently overcome to move workers from the rebellions of the 1840s to organised trade union movements. The deployment period of the next surge in the 1900s brought the Progressive Era in the US, during which various social movements and the government tried to reduce unacceptable poverty and counteract bad living and working conditions, as well as to protect consumers and smaller producers from the excess power of trusts and monopolies. In Britain, the Liberal Chancellor of the Exchequer, Lloyd George, laid the foundations of an early Welfare State with the ‘People’s Budget’ of 1909. It was explicitly aimed at diminishing the appalling dereliction that had been described in reports by Booth and Rowntree. Bismarck had pioneered such measures in Germany decades earlier, and most European countries set up similar social security policies in the 1900s. We are now in equivalent times, facing the inequality consequences of another technological transformation, and needing to set up a form of government intervention effective for bringing fairness and prosperity with the ICT revolution.
The recurrence of spread: winner-take-all processes
Brynjolfsson and McAfee express concern that the new bounty may not bring a shared prosperity because of two related phenomena: the rise of the superstars and the winner-take-all processes leading to monopolies.
The superstars are the top earners in music, sport, software development and so on, along with CEOs and other top managers in business. Indeed, it is astonishing that the few at the top can earn in a few months what a worker in the same society will earn in a lifetime. The historical parallel with previous surges is not very strong in this case. There have always been a few millionaires emerging from the bubbles or from the revolutionary industries that each time replace the engines of growth, but I must agree that the current superstar culture is unique. However, whilst I do not deny that some of the causes for this phenomenon can indeed be attributed to technology, in my view, much of the context behind this extreme polarity is socio-political. For that reason, I will refer to the superstar phenomenon in the next post, when discussing the social shaping of each revolution.
By contrast, the concentration of each industry in the hands of a few companies after the bubble collapses, whether monopolies or oligopolies, is a characteristic of the recurring pattern, and especially notable in the third and fourth surges. It was not as evident in the first and second surges in Britain, as the process of industrialisation was still scattered in separate cities and regions. Most of the ‘giants’ of the time were at a local scale, even if seen as super-powerful in their context. Yet, even during those periods, we saw the rise of the railway kings; the growth of the banks and other financial institutions, which used the infrastructure of each revolution to move into the national, European and even global scale of operation; and the last days of dominance of traditional monopolies such as the East India Company.
The US until the 1870s was a developing country, which therefore would not be expected to follow the diffusion patterns typical of the core countries of each revolution. However, after the Civil War, when the new engineering industries emerged and the US entered the race to forge ahead (being the ‘China’ of those times), the giants did win and take all. The typical market competition of the installation period — the so-called Gilded Age — eventually results in the formation of monopolies or oligopolies. J.P. Morgan united all the steel makers to form US Steel; he did the same with agricultural equipment to create International Harvester. In these and in many other cases, he did it with the explicit goal of ending what he called ‘ruinous competition’ (cited in Morris 2005). In the electrical industry, a multiplicity of firms consolidated into only two, General Electric and Westinghouse. Ironically, anti-trust legislation, designed to avoid collusion for price-fixing among the many, actually encouraged the creation of near-monopolies such as the National Biscuit Company, the American Tobacco Company and other such mergers, typically with ‘National’, ‘American’, ‘General’, ‘International’ or U.S. in their name. A similar process was happening in the dominant European countries: powerful cartels were formed and major companies, such as Siemens and Krupp, emerged to control each industry.
In the subsequent revolution, the US automobile industry, which in the 1910s had as many as forty companies, ended up in the post-war boom with an oligopoly of only three. And the same happened, in one way or another, in oil, chemicals, electrical appliances, retail chains and so on. Indeed, the very nature of the way that technological revolutions diffuse inevitably leads to this winner-takes-all process, which implies also that the engines of growth of each surge are regularly replaced by the new giants.
The process of concentration in a few hands is each time seen as necessary. As the new industries develop, the winners in the competition find that they need to reach economies of scale and control of the market. This means reducing excess competition to gain strategic leeway as well as, whenever possible, amassing enough profits to fund their further expansion from retained earnings rather than having recourse to the financial sector. The result is that towards the end of the installation period, one or a few companies reach enormous proportions, creating barriers to entry that tend to discourage and exclude potential competitors. And this process has occurred not only in manufacturing, but in materials, services and, especially, infrastructural networks. Box 1 summarises the history and characteristics of a typical giant of the third surge in the US, the Diamond Match Company, as described by Chandler in The Visible Hand.
This does not deny that the giants of today’s information economy find it particularly easy to reach control through network power, nor that they may be of a significantly different nature from those of other surges, though it remains to be seen how this will play out. But the recurrence of such trends towards imperfect competition and monopoly or oligopoly does mean that the capitalist system manages, in one way or another, to move from open competition in the installation period to some form of monopoly or oligopoly in the new industries, once the new technologies are consolidated. The companies might do it through mergers and acquisitions or by eliminating the others through winner-takes-all competition.
Examining these phenomena in his time, Schumpeter expected giant companies and oligopolies to eventually form in each sector and to lead to ‘imperfect competition’. In this he explicitly agreed with Chamberlin and Robinson, who defined it as based on product differentiation, quality, strategy, advertising and other non-price factors. For Schumpeter, this meant that the original form of creative destruction, based on new entrepreneurs displacing the incumbents, would tend to morph, as companies grew larger, into innovation made within the company, improving or displacing its own products and competing with other large companies doing the same. Schumpeter held that excess price competition would restrict profit margins and would limit the freedom to invest large sums in R&D, training or new equipment, when technology and scale required them.
Moreover, he held that ‘perfect competition’ rarely existed and, he controversially added, that the greater power of large companies in ‘monopolistic competition’ was the basis for achieving higher standards of living for the masses. For these reasons, he confronted the anti-trust laws that concentrated on avoiding price-fixing and insisted that promoting the best conditions for innovation was a more important goal for government in pursuit of growth.
However, given the different nature of each set of new technologies, I hold that the types of policies that can contain abuse, promote innovation and guarantee the best social outcomes are likely to be different at each turn. The denunciation of the monopolistic and unfair practices of the Standard Oil Co. (which at one point controlled 90 percent of the industry) led the US Supreme Court in 1911 to order its dissolution into 34 independent companies. John D. Rockefeller had by then become the richest man in the world. By contrast, American Telephone and Telegraph (AT&T) was allowed to function as a highly regulated monopoly because it was considered optimal to have a single operator for a national communications network (a so-called ‘natural monopoly’). The conditional agreement is said to have resulted in the creation of Bell Labs. Knowing exactly what to do and when to do it is a major challenge for governments at the threshold of each deployment period. To arrive at the most effective policy framework for the best possible results requires a deep understanding of the new technologies, along with clarity and determination regarding the policy aims.
The recurrence of spread: technological unemployment
Brynjolfsson and McAfee are right to question the complacency of many economists about future technological unemployment. Following this concern, they engage in a thorough discussion of the relevant elements of accepted theory and of the view held by many economists who trust that new jobs will appear because they have historically done so.
When analysing technological unemployment and whether destroyed jobs will be replaced by new jobs, they look at three possible factors according to current theories: elasticity of demand; the pace of social adaptation in relation to the pace of technical change; and the possible equalisation of wages by globalisation. Their analysis of each leads them to a pessimistic outlook. And indeed, if left to market forces alone, that is the only realistic expectation. In my view, following the historical precedents, the outcomes of all three depend on the socio-political choices that are made when the problem becomes acute — which is precisely now, at the turning point of each revolution, between installation and deployment. Let us look at each in turn.
Will demand be elastic enough in the industries in question — or in the economy as a whole — for new employment to be greater than job losses? There is nothing intrinsic about the reaction of society-wide demand and it is only in rare cases that demand is truly inelastic by nature. Elasticity depends on the socio-political choices that define income distribution, taxation, government spending and so on. Thus in the 1930s, during the turning point of the previous surge, the market for the new technologies of the day, automobiles and electrical appliances, appeared saturated and already quite inelastic. Without the Welfare State and the policies that enabled the spread of home ownership in the post-war years, demand would not have increased at the swift pace characteristic of that boom. As Henry Ford had predicted, the workers were now able to buy cars as well as homes, and appliances to fill those homes. Later, when markets again became saturated in the US at the end of the 1960s, when planned obsolescence and the second car could not increase sales beyond the rate of population growth, the import substitution model — a brilliant institutional innovation at the time, despite the ahistorical criticisms levelled against it now — came to the rescue of business through the practice of protected final assembly work in the developing ‘Third World’. This North-South positive-sum game created demand through the creation of new incomes, which spread the consumption model of the American Way of Life across the globe. But the fact is that, when manufacturing employment moved abroad, the ICT revolution — at that stage of early diffusion — did not create new jobs at home at the same rate as they had moved away. Demand was made elastic, indeed, but by spreading it abroad, not at home. Worse still, the socio-political choice made at that point was to reduce the Welfare State and abandon the positive-sum game that had brought the previous prosperity.
Business no longer required increased incomes in the advanced world; incomes could stagnate and still be sufficient for the purchase of cheap goods imported from Asia.
The rhythm of adaptation is also down to socio-political choices. Can society adapt quickly enough, however fast technical change transforms the skills required, for people to keep pace, or will unemployment result from a lagged response? In the astonishingly fast pace of technical change of the Age of Heavy Engineering, from the 1870s onwards, both the American and German governments provided technical education for the great number of skilled workers required by the new industries. At the same time, private and public universities gradually developed new specialised engineering degrees (chemical, electrical, mechanical, civil), while private and public schools trained the workforce-to-be in the various complementary skills, especially the clerical ones required for the new office tasks. However, the UK did not react at the same pace. Its universities, locked into old traditions, did not adapt to the new times, while the government was late to fund training for the new industries. As the US, Japan, Germany and other countries concentrated on racing ahead into the new industrial age, Britain and its financial sector engaged in further developing imperial dominance and neglected innovation at home.
Brynjolfsson and McAfee’s third area of analysis involves wage levels. Their question is: given globalisation and the likely reduction of demand for human work, due to robotics and artificial intelligence, is it likely that wages would equalise down in a race to the bottom, to a level unacceptable for workers in the advanced world? Once again: if markets are not steered in the interest of society, if their profits are not somehow linked to the well-being of the citizens of the country, that is indeed a possibility. The post-war experience gives us a glimpse of how government policy can change the context in such a way that the interests of business and society coincide. Trade union protests in the US were often met with violent repression. It all changed when, under Franklin D. Roosevelt, Congress passed the Wagner Act of 1935, which legalised and encouraged their action. Each company wanted the others to raise wages. The only way to get everybody to do so, and therefore increase demand, was to back the unions. Organising by industry of employment, rather than by trades, the new unions facilitated industry-wide collective bargaining, which fitted the non-price competition practices of the oligopolies in the high productivity sectors. The resulting agreements influenced wages across society, so that, rather than widely separating wage levels between high and low productivity sectors, they narrowed the gap, establishing a relative standard for the whole range, which increased in accordance with the rates set by the unions in the leading industries. This was an adequate institutional set-up to guide markets in a positive-sum direction at a time when national economies were relatively closed. Markets alone would never have achieved it. The current globalised context will require different ‒ but equally adequate ‒ institutional innovations.
The authors also briefly question the standard optimistic view based on historical data about new job creation: is the confidence of economists in the creation of other jobs following the historical record really justified? As they say: ‘For most of the two hundred years since the Luddite rebellion technology has boosted productivity enormously, but the data show that employment grew alongside productivity up until the end of the twentieth century.’ That sentence could have been written in 1938: observing the depression and unaware of the great growth spurt yet to come, one could lament that employment had grown with productivity only until the crash of 1929. Treating historical data with an emphasis on long-term trends while only analysing the present in the context of short-term data can lead to serious misinterpretations. A more meaningful and useful way to examine historical data is to look at it in the light of the relationship between surges of technical change and their process of assimilation by the economy and society. In that way, we will recognise pendular swings going from periods of technological unemployment and deskilling to periods of re-employment and back. Each unavoidable process of ‘creative destruction’ involves the emergence of the new along with massive destruction of industries, skills and jobs, painfully affecting some professions, regions and countries. Each deployment golden age brings the opposite beneficial effects, though not necessarily to the same people, industries or regions. The installation period sets the stage for completely new opportunities for ‘combinatorial innovation’, transforming the whole economy and adding new sectors. Yet, as reiterated above, the solution does not come automatically, nor is it found by markets on their own, but rather results from socio-political choices. And such choices can have profound consequences, for better or for worse.
Can we trust history to provide valid lessons?
At the end of their discussion on the future of employment, Brynjolfsson and McAfee express doubts about the historical argument of economists who claim that the jobs lost to technology in one area have always appeared in other areas of the economy. They ask:
Which history should we take guidance from: the two centuries ending in the late 1990s, or the fifteen years since then? We can’t know for sure, but our reading of technology tells us that the power of exponential, digital, and combinatorial forces, as well as the dawning of machine intelligence and networked intelligence, presage even greater disruptions. (p.179)
In terms of the discussion in these posts, that is perhaps the most important passage in their whole book. It lies at the core of their vision of a clean break with the past; it is the essence of their view of the ‘second machine age’ as unique.
Surely, they are right to mistrust an argument that simply counts on markets to perform the desired miracle automatically. Mainstream economists, having fallen in love with mathematics, and desiring their object of study to be a natural science with predictable models akin to physics, no longer bother ‒ as the classical economists did ‒ to seriously study history, technology, politics or institutions. They thus ignore the influence of these factors on the economy and cannot explain bubbles, unemployment or re-employment beyond markets and monetary policy.
My answer to Brynjolfsson and McAfee’s question about from which history to learn, the long term or recent years, is: from both! The two and a half centuries since the first technological revolution in the 1770s provide the recurring patterns; the features visible in the last fifteen years (and in the thirty since the microprocessor) provide the uniqueness. That combination should offer us useful criteria for purposeful socio-economic action.
This blog series first appeared on the website of Carlota’s research project, Beyond The Technological Revolution. The project is funded by the Anthemis Institute, where Carlota is Academic-in-Residence.
Read the next instalments:
Sign up to the UCL Institute for Innovation and Public Purpose’s mailing list to hear about our latest research, news and events. You can follow us on Twitter: @IIPP_UCL.