The Bifurcation is Near

@auerswald
the code economy
May 19, 2015


When the United States was thrust into World War II in December of 1941, all types of inputs into the war effort were suddenly in short supply: rubber, coal, iron ore, and, of course, human computers.

Human computers were just people doing math. They were needed to perform computational work of a military variety, particularly the generation of firing tables, which artillerymen used to aim at enemy targets. At the Ballistic Research Laboratory, the U.S. Army’s primary weapons testing facility in Aberdeen, Maryland, the demand for this specialized labor was so great that the military established a secret unit called the Philadelphia Computing Section (PCS) at the University of Pennsylvania’s Moore School of Electrical Engineering. The Army recruited one hundred human computers, mostly women, from the University of Pennsylvania and neighboring schools.[1]

The PCS turned out to be far from adequate to the task.[2] By August 1944, Herman Goldstine, a mathematician and Army lieutenant who acted as the liaison between the Ballistic Research Laboratory and the PCS, lamented: “The number of tables for which work has not yet started because of lack of computational facilities far exceeds the number in progress. Requests for the preparation of new tables are being currently received at the rate of six per day.”

Goldstine had not, however, placed all his bets on the PCS. A year earlier, a colleague had prompted him to seek out John Mauchly, a physics instructor at the Moore School, who had written a memorandum proposing that the calculations being performed by PCS workers could be completed thousands of times more rapidly by a digital computer built with vacuum tubes. Goldstine obtained funding for Mauchly’s proposal. Shortly thereafter, he brought the great mathematician John von Neumann into the project as a consultant.[3] The effort to construct the world’s first general-purpose digital computer was underway. On February 14, 1946, less than six months after the end of World War II, the Army announced the completion of the Electronic Numerical Integrator and Computer, or ENIAC.

The fact that digital computers can outperform humans at mental tasks should thus come as no surprise: they were designed to do just that. The ENIAC set into motion a transformation in work and life on a global scale that continues to accelerate today. But that first general-purpose digital computer also very directly eliminated about 100 jobs: those of the Philadelphia Computing Section. Those initial victims of digital disruption were caught up in larger events, a world war and its aftermath, that obscured the historic nature of their particular circumstances. Yet as digital computers become ever more powerful, they will inevitably outperform humans in an expanding range of tasks, challenging the viability of an ever-growing list of occupations, of which “human computer” was only the first.

Two decades ago the futurist Jeremy Rifkin published a book titled The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era, in which he argued that we should be deeply concerned about the social impact of digital technologies. “We are entering a new phase in world history — one in which fewer and fewer workers will be needed to produce the goods and services for the global population,” he cautioned. “In the years ahead, more sophisticated software technologies are going to bring civilization ever closer to a near-workerless world.”[4] Others who have taken up a similar line of argument include Erik Brynjolfsson and Andrew McAfee, Tyler Cowen, and, most recently, Martin Ford. From an economic standpoint, their argument is straightforward:

The power of technology is growing at an exponential rate.

Technology nearly perfectly substitutes for human capabilities.

Therefore the (relative) power of human capabilities is shrinking at an exponential rate.

If Rifkin and others are correct, we should be deeply worried about the process of digital disruption.

In sharp contrast, Ray Kurzweil’s 2005 best-seller, The Singularity Is Near: When Humans Transcend Biology, argued that the exponentially increasing power of technology (particularly, though not exclusively, digital computing technologies) will trigger an epochal discontinuity in the human experience. From an economic standpoint, Kurzweil’s argument is comparably straightforward:

The power of technology is growing at an exponential rate.

Technology nearly perfectly complements human capabilities.

Therefore the (absolute) power of human capabilities is growing at an exponential rate.

Like many others, Kurzweil argues that “only technology can provide the scale to overcome the challenges with which human society has struggled for generations.”[5] But he goes further, tracing the arc of technologically enabled progress forward into the immediate future to sketch the outlines of “The Singularity,” which “will result from the merger of the vast knowledge embedded in our brains with the vastly greater capacity, speed, and knowledge-sharing ability of our technology, [enabling] our human-machine civilization to transcend the human brain’s limitations of a mere hundred trillion extremely slow connections.” When it comes to algorithmically empowered robots taking our jobs, Kurzweil’s prescription is straightforward: if you can’t beat ’em, join ’em (maybe even literally, in cyborg fashion).

So which is it to be for humanity: Rifkin’s dystopian near-workerless world or Kurzweil’s bright Singularity?

As you may have guessed, I am going to propose that a third line of argument is possible:

The power of technology is growing at an exponential rate.

Technology only partially substitutes for human capabilities.

Therefore the (relative) power of human capabilities is shrinking at an exponential rate in those categories of work that computers can perform, but not in others.
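Stated that way, the difference between the three arguments is easy to put in numbers. The sketch below is purely my own illustration, not a model from any of the works cited here: it assumes machine capability doubles every two years while human capability stays flat, that humans and machines are perfect substitutes on tasks computers can perform, and that machines contribute nothing to tasks they cannot.

```python
# Toy illustration (my own, not a model from the works cited here): exponential
# growth in machine capability pushes the human share of work that computers
# CAN perform toward zero, while leaving work they cannot perform untouched.

def human_share_routine(year, human_capability=1.0, machine_capability_now=0.001,
                        doubling_period_years=2.0):
    """Human fraction of output on computer-performable ("routine") tasks,
    assuming humans and machines are perfect substitutes on those tasks and
    machine capability doubles every `doubling_period_years` (illustrative
    assumptions, not estimates)."""
    machine_capability = machine_capability_now * 2 ** (year / doubling_period_years)
    return human_capability / (human_capability + machine_capability)

for year in range(0, 41, 10):
    print(f"year {year:2d}: human share of routine work = {human_share_routine(year):6.1%}; "
          f"share of non-routine work = 100.0% (machines can't do it, by assumption)")
```

Run it and the split appears within a few decades: the human share of computer-performable work collapses toward zero, while the human share of everything else does not move at all. That divergence, rather than a single economy-wide trajectory, is the bifurcation.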

The best evidence for this line of argument comes from the labor market studies of the MIT economists David Autor, Daron Acemoglu, and Frank Levy and the Harvard economist Richard Murnane, set out in dozens of papers and one book written over the past dozen years in various combinations and with other co-authors. In a seminal 2003 paper published in the Quarterly Journal of Economics, Autor, Levy, and Murnane summarize their findings as follows:

We contend that computer capital (1) substitutes for a limited and well-defined set of human activities, those involving routine (repetitive) cognitive and manual tasks; and (2) complements activities involving non-routine problem solving and interactive tasks. Provided these tasks are imperfect substitutes, our model implies measurable changes in the task content of employment, which we explore using representative data on job task requirements over 1960–1998. Computerization is associated with declining relative industry demand for routine manual and cognitive tasks and increased relative demand for non-routine cognitive tasks.[6]

From the Autor, Acemoglu, Levy, and Murnane (or “AALM”) perspective, the impact of digital disruption on the future of work depends critically on the nature of the work itself: in other words, on the “how” of production, and not just the “what.” Tasks that are routine and can be easily encoded will be performed by computers, while those that are not will continue to be performed by people. The jobs of the human computers at the Philadelphia Computing Section are a case in point: because the human computers were literally performing the rule-based logical computations that are the essence of computer “programs,” they were also the first people to lose their jobs to digital computers. That process is ongoing.
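To see how literal that is, consider the arithmetic behind a single entry in a firing table: stepping a shell’s trajectory forward through time in small increments, one explicit rule applied over and over with a desk calculator. The sketch below is a deliberately crude stand-in with made-up constants (a single drag coefficient, no wind, no variation in air density), not the Ballistic Research Laboratory’s actual method; it is here only to show that the task is nothing but encodable rules.

```python
# Schematic of the rule-based arithmetic behind one entry in a firing table:
# stepping a shell's trajectory forward in small time increments. Constants are
# illustrative toys, not Ballistic Research Laboratory methods or data.
import math

def trajectory_range(muzzle_velocity, elevation_deg, drag_coeff=1e-4, dt=0.01, g=9.81):
    """Horizontal range (meters) of a point mass with quadratic drag,
    integrated with simple Euler steps."""
    vx = muzzle_velocity * math.cos(math.radians(elevation_deg))
    vy = muzzle_velocity * math.sin(math.radians(elevation_deg))
    x = y = 0.0
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        # The same explicit rule, applied over and over: update velocity, then position.
        vx -= drag_coeff * speed * vx * dt
        vy -= (g + drag_coeff * speed * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

# A few rows of a (toy) firing table: range at each elevation for one charge.
for elevation in (15, 30, 45, 60):
    print(f"elevation {elevation:2d} deg: range ≈ {trajectory_range(760.0, elevation):8,.0f} m")
```

A human computer repeated steps like these thousands of times per trajectory, which is why a single trajectory took about twelve hours by hand[2], and why Mauchly could credibly promise a speedup of thousands of times.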

Where Kurzweil talks about an impending technologically-induced Singularity, the reality looks much more like one technologically-induced Bifurcation after another. Furthermore, the answer to the question “Is there anything that humans can do better than digital computers?” turns out to be fairly simple.

Humans are better at being human.

[1] George Dyson (2012), Turing’s Cathedral: The Origins of the Digital Universe, New York: Vintage, p. 69.

[2] As George Dyson reports in his comprehensive history of the digital computer, “A human computer working with a desk calculator took about twelve hours to calculate a single [ballistic] trajectory … To complete a single firing table still required about a month of uninterrupted work.” (Dyson, pp. 69–70)

[3] Goldstine did not know that von Neumann was part of another secret military team, working in the desert of New Mexico on the design for the atomic bomb.

[4] Erik Brynjolfsson and Andrew McAfee (2011), Race Against The Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy, Digital Frontier Press, Kindle edition, p. 6.

[5] Ray Kurzweil (2005), The Singularity Is Near: When Humans Transcend Biology, New York: Viking Press, p. 371.

[6] David H. Autor, Frank Levy, and Richard J. Murnane (2003), “The Skill Content of Recent Technological Change: An Empirical Exploration,” Quarterly Journal of Economics 118(4): 1279–1333. See also Todd Gabe, Richard Florida, and Charlotta Mellander (2012), “The Creative Class and the Crisis,” Martin Prosperity Research Working Paper Series: “The economic crisis contributed to sharp increases in U.S. unemployment rates for all three of the major socio-economic classes. Results from regression models using individual-level data from the 2006–2011 U.S. Current Population Surveys indicate that members of the Creative Class had a lower probability of being unemployed over this period than individuals in the Service and Working Classes, and that the impact of having a creative occupation became more beneficial in the two years following the recession. These patterns, if they continue, are suggestive of a structural change occurring in the U.S. economy — one that favors knowledge-based creative activities.”
