The Futility of (Narrow) Speculation About Machines and Jobs

Adam Elkus
Oct 19, 2015 · 10 min read


One of our favorite pastimes is speculating about which jobs will be left over after an allegedly imminent deluge of technological unemployment. This speculation is, of course, pointless. It defines future employability in terms of “can’t be automated.” But, for a variety of reasons, that is a fairly useless criterion. In reality we have very little idea what kinds of skills will be in demand in this century, and the only person I see who really gets this is my school’s economist Tyler Cowen (author of a very good book called Average Is Over).

I shall explain some of the reasons militating against the utility of current speculations.

How do we measure progress in science and technology?

David Autor hits the nail on the head when he talks about basic problems of perspective in measuring technological progress:

You have to distinguish between qualitative and quantitative change. My computer can run Microsoft Word 1,000 times faster than my computer could 20 years ago, but it doesn’t make it 1,000 times more productive; maybe it’s 20% more productive. The point is there’s this false equivalence drawn between computing processor cycles and productivity or output, and it’s really diminishing marginal returns.

To give you an example of this, I was at a conference and an executive from McKinsey got up and said, “Your washing machine today has more processing power than the entire Apollo moon project.” He meant this to demonstrate the great rate of change and the fantastic progress, and to me that just said, “Diminishing marginal returns.” My washing machine is not going to the moon.
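To make the shape of Autor’s point concrete, here is a minimal, purely illustrative sketch. The function and the numbers are my own toy assumptions, not anything from Autor: if productivity grew only logarithmically with raw compute, a thousandfold jump in processor cycles would buy only a modest productivity gain.

```python
# Illustrative sketch (toy assumptions, not Autor's model): if productivity
# grows roughly logarithmically with raw compute, a 1000x jump in processor
# cycles buys only a modest gain -- diminishing marginal returns.
import math

def productivity_gain(compute_multiplier: float, elasticity: float = 0.03) -> float:
    """Hypothetical log-shaped relation between compute and productivity."""
    return elasticity * math.log(compute_multiplier)

print(f"1,000x compute     -> +{productivity_gain(1_000):.0%} productivity")      # ~ +21%
print(f"1,000,000x compute -> +{productivity_gain(1_000_000):.0%} productivity")  # ~ +41%
```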

Autor may be right or wrong, but he also points to something very problematic in the assumption that technological progress is linear and monotonic. More narrowly, in 1983 Lisanne Bainbridge wrote about the “ironies of automation”: technological innovations designed to increase productivity and human performance may actually amplify existing problems with industrial processes by deskilling the workers who supervise and manage the new automata. In 1962, the arms control analyst Sir Solly Zuckerman observed that in nuclear and conventional command and control, reducing the role of human judgement also reduced military flexibility and political control.

We should be very careful about suggesting that X or Y technology is going to put an entire class of people out of business, because the translation of theory into practice is usually the point at which such assumptions become badly complicated. We should at least have some consistent and decently well-founded idea of what progress means before making such assumptions, and that is still an open problem.

Creativity will not save us

Speculations about which skills we Homo sapiens will need to keep our jobs usually center on “soft” skills such as creativity or social skills. What is the problem with this? First, it is difficult to measure creativity and other intangibles, so in practice each industry does it differently. Second, there is little evidence of an unmet societal demand for such skills and capabilities even if they could be operationally defined.

No one will give you an entry-level job because you are “creative.” Creativity and social skills are not measurable. Creativity will not get you an entry-level job; five-plus years of work experience and familiarity with an arbitrarily chosen tool will. I have never seen a job ad that any of my friends have applied for that specifically mentions “candidate must have social skills” and features, say, a cotillion ballroom dance as part of the job interview.

What is far more likely is that you will get a job because of your preparation for the specific arbitrary variables and tests that employers select in a heroic effort to measure these amorphous latent variables. Some of these tests are roughly as scientific as reading the entrails of dead animals. But employers can stay convinced that their hiring measures are valid measures of individual worth for longer than you can remain financially solvent.

The idea that any of these things is domain-independent is also dubious. Does anyone believe that Martin Scorsese would be a brilliant music composer simply because he made amazing movies? Ulysses S. Grant may have been a brilliant general, but he failed at almost every other occupation he tried. Don DeLillo wrote novels around the same time Steve Jobs made computers, but it would be nonsense to suggest that writing White Noise transfers easily into designing universally beloved computer products.

This is not to say that we are all doomed to specialize, but true Renaissance men and women are not mass-producible. At best, a liberal arts education prepares someone to work in an interdisciplinary team, which is quite a different skill from a domain-independent ability to be creative.

There is little evidence that our society takes such skills seriously. Employers perpetually lament that employees lack basic intangible skills, but if these skills were really so important to the economy one wonders why the American educational system structurally does not produce them. For example, take this paragraph:

“When it comes to the types of skills and knowledge that employers feel are most important to workplace success, large majorities of employers do NOT feel that recent college graduates are well prepared. This is particularly the case for applying knowledge and skills in real-world settings, critical thinking skills, and written and oral communication skills — areas in which fewer than three in 10 employers think that recent college graduates are well prepared. Yet even in the areas of ethical decision-making and working with others in teams, many employers do not give graduates high marks,” the AACU report says.

Hmm, is that so? If that were the case, then why are the humanities and “soft” social sciences being systematically gutted by our university system for being “impractical” (STEM is all that matters!)? And if the existing market is failing to provide employers what they need, why aren’t they paying to get it? After all, these companies are certainly not powerless in this regard. They could pay via direct transfers (students could have tuition paid to take coursework that theoretically should produce these skills) or indirectly through political lobbying to ensure that non-engineering coursework doesn’t vanish. Hell, they could even demand completion of MOOCs in topics such as philosophy, history, and other similar fields as résumé proof of smarts for the job.

But it would be unfair to focus only on companies. I, like most people of my generation, spent much of my childhood cramming for standardized tests. I was fortunate enough to come from a background that could give me the SAT prep classes, the tutors, and so on. I was still beaten out by people even more driven and capable of cramming, so that they could get into the universities that are key to the alumni networks that actually determine economic and professional futures. People can say a lot of flowery words, but the acid test lies in how resources are actually spent. The evidence suggests that our society believes it can do without those skills, and it is allocating resources on that assumption.

The basic knowledge and criteria are unreliable

The belief that there is a hard separation between what machines can do and what only humans can do is common. Hubert Dreyfus made a career out of grandiose and easily disproven pronouncements about it. Still, it has been part of the debate ever since Alan Turing’s seminal paper on the imitation game. I am here, however, to tell you that this line cannot be divined from a purely technical point of view. This is a complex subject, but I will take it in small slices.

What can’t be automated is a moving goalpost. The so-called “AI effect” holds that things which were once considered AI become regarded as commonplace and thus no longer AI. One can cycle through a long series of basic innovations (for example, list data structures) that were once considered “AI” and are no longer regarded as such. There are several reasons why this may occur.

The goalposts change as we change. We co-evolve with our tools, which suggests that as we grow more technologically powerful we will perpetually revise our conception of what it means to have the necessary and sufficient means to be successful in the modern world. Moreover, ideas of education, competence, and worldly acumen also change. We used to think that every well-educated Western man ought to be familiar with the classics, speak Greek and Latin, and have the effortless capacity of the gentleman amateur in some area of interest (some of which were pretty weird). Today, being able to recite Virgil or Homer, for example, doesn’t matter. No, everyone must Learn to Code (TM).

We will move the goalposts after each goal is met. It is possible to simply program computers to pass well-structured tests. And then humans will say that the tests don’t matter, and that the real test is something else. And so on. It’s no wonder that science fiction tends to dictate how we talk about computers, because ultimately science fiction, like our ideas about intelligence, is a model of how we regard ourselves as distinctive beings. But in reality we are not as different from other animals as we may believe. We will, however, do whatever it takes to protect our fragile species-wide egos.

No one really knows how to formalize intelligence. It is not as if science provides us with much insight on this question either, to be fair. In the 21st century, the academic battle lines and factions in the debates over cognitive science and artificial intelligence are virtually the same as they were at the first conference on artificial intelligence, held at Dartmouth in the 1950s. Seriously. Take a look for yourself and you’ll see familiar topics such as “how can a computer be programmed to use a language,” “neuron nets,” “self-improvement,” and other mainstays. Beyond the idea that it’s OK to regard the mind as a kind of machine, there is little real agreement. In some ways we have actually regressed as well. Who is today’s Herbert Simon or Norbert Wiener?

The logic of automation is inconsistent and even incoherent

Yes, there are some things that we would obviously prefer to have a machine do. And there are certain tasks that machines objectively can’t do yet. For example, there is a clear economic logic to mass production over handcraft by specialized individual laborers working in guilds. Machines could take that over, and most have come to accept that they should. But very few believed that AI should have taken over control of a proposed Reagan-era integrated missile defense and military command and control system, because the technical requirements for such an omniscient machine strategist were impossible to meet. Underlying this discussion, however, is a strange belief that there are certain things that are simply destined to be automated and others that will always remain the province of the human. This is sheer nonsense. As Harry Collins and Martin Kusch observed, artificial intelligence is human-like when we regard being human as mechanical; or rather, when we regard what we do as mechanical in its execution.

For example, take customer service. Very few consumers are satisfied with an automated voice interface; most prefer to speak to a live human. But if firms raised prices to hire more humans, consumers would likely balk. Likewise, self-driving cars look a lot less impressive in light of the fact that we opted for inefficient and deadly cars over mass transit. Had we invested in a mass transit system instead of the automobile, we might have been able to automate much of American transport already, as Hong Kong does with automated planning and scheduling algorithms.

In general, there are many existing things that could be automated tomorrow if we so desired. And forget about advanced computing systems or deep neural nets. Basic technology, the kind of automated checkout stations seen at CVS stores, could replace a large swath of American jobs if employers made the attempt. Yet we see automation in some commercial areas and not in others. Why? And there are countless professions in which Americans would not mind talking to a robot instead of a live human being, and might even prefer it. Does anyone really believe, for example, that creativity and people skills are necessary to work at the DMV? Finally, to be crude, men and women “automate” one of the most basic of human interpersonal interactions every day with a variety of battery-operated sexual devices and life-size sex dolls.

Automation is a choice

Automation can certainly be said to follow from the instrumental technical logic of science and engineering. But as the previous section implies, automation is also a social choice, and one made according to a dizzying array of economic, legal, political, cultural, and even cognitive-affective imperatives. Defining future employability in terms of “can’t be automated” is tautological: the jobs of the future can’t be automated because they can’t be automated. Never mind that what can and cannot be automated is a question that would require the political scientist, psychologist, lawyer, economist, sociologist, anthropologist, historian, organizational theorist, etc., to be in the room alongside the computer scientist and the electrical engineer to produce even a remotely useful answer.

That is because we are making two wagers whenever we make predictions about the future employment landscape:

  1. “I can successfully predict the course of future basic and applied research and development in the underlying science and technology.”
  2. “I can successfully predict how science and technology will change society and how society will change the underlying science and technology.”

People are paid obscene amounts of money to develop theories, analyses, and hard predictions about the intersection of these two factors. And for what it is worth, Clayton Christensen, despite his flaws, at least made the attempt. Nonetheless, futurists seem to have a basic problem: they hold some variables constant while predicting enormous changes in others. Worse, they often fail to recognize which things actually should be held constant.

Conclusion: No One Knows Nothing?

Returning to the Cowen book, I suppose you have two options:

  1. Be prepared, as Cowen says, for an unstable future in which you will have to constantly train and retrain yourself based on changes in economic fortunes. It’s an interesting book, and I can’t quite summarize it here beyond that basic point.
  2. Try to find, as my friend Tdaxp often notes, the right confluence of what you love, what you can be good at, what other people will pay you to do, and what preserves your optionality in general if you are wrong or change your mind.

What I wouldn’t do is pin your hopes on arbitrary predictions and platitudes about what machines can and cannot do. To put stock in them is both to expose yourself to danger should you be wrong and to close off opportunities you might otherwise have gained with a less rigid idea of your own professional worth. By all means, pay attention to the debate over machines and technological unemployment (frequent correspondent Miles Brundage is doing his PhD on it), but on the whole, understand the stakes inherent in predicting whether or not a robot will take your job.

