Is the Future “Unimaginable?” [intro]

Mindful musings on Homo Deus and crucial considerations on the future of technology, work, and society

Terralynn Forsyth
20 min read · Aug 4, 2017

Mowing by Robert Frost

THERE was never a sound beside the wood but one,
And that was my long scythe whispering to the ground.
What was it it whispered? I knew not well myself;
Perhaps it was something about the heat of the sun,
Something, perhaps, about the lack of sound —
And that was why it whispered and did not speak.
It was no dream of the gift of idle hours,
Or easy gold at the hand of fay or elf:
Anything more than the truth would have seemed too weak
To the earnest love that laid the swale in rows,
Not without feeble-pointed spikes of flowers
(Pale orchises), and scared a bright green snake.
The fact is the sweetest dream that labour knows.
My long scythe whispered and left the hay to make.


When you have a question, where do you go to find the answer?

I’d argue the majority of us in 2017 will type our question into Google via some kind of mobile device, leading us to our favourite forums or highest-ranked website. If you’re reading these words, you’re an avid user of the internet and open an HTML-enabled browser on a daily basis. Whether it’s to check the weather, stock market, email, or news, most of us will begin some part of our day with instant access to the information we seek.

Interestingly, the common answer to this question throughout history — where we go for information — has been dependent on the technology available to us, our General Purpose Technology.

Writing served as one of the first General Purpose Technologies // Jean Le Tevenier via WikiCommons

Today, we get (mostly) instant answers to everyday questions, accompanied by a plethora of irrelevant information. But this is a very recent phenomenon. Traditionally, we may have ventured into a library for an encyclopedia, reached for our religious text of choice, or even looked up into the sky for a sign from the gods.

But, chances are, you don’t do this.

Le Penseur (The Thinker) // WikiCommons

Information has always surrounded us, but inference has usually been a slow process.

A defining feature of history is that sources of authority over and ease of access to information change with technology. Technology shapes the world and our place within it. The long scythe whispers to us, and leaves the hay to make.

“New technologies kill old gods and give birth to new gods. That’s why agricultural deities were different from hunter-gatherer spirits, why factory hands fantasize about different paradises than peasants and why revolutionary technologies of the 21st century are far more likely to spawn unprecedented religious movements than to revive medieval creeds… Religions that lose touch with the technological realities of the day lose their ability even to understand the questions being asked.”

— Harari via Homo Deus, p. 269

Ok Google, what gives my life meaning?

Her (2013) // Something that “feels”

With this in mind, “technological disruption” takes on a whole new meaning.

It doesn’t just revolutionize how we get work done, how we access information, our business models, or our social and communication methods — it alters the very essence of human “being” itself. These turn out to be surface-level effects of a larger phenomenon at play that makes Homo sapiens truly unique — new technological paradigms redefine the edges of our storytelling in an attempt to 1) understand the world and 2) give us meaning.

This was just one bite-sized proposition I’ve been parsing since reading Yuval Noah Harari’s Homo Deus… twice. It’s one of many socio-philosophical chunks presented in the book, but a few in particular have interesting implications for common questions we have today about the history and future of technological impacts on society — and particularly how the future may diverge from the history we draw understanding from.

The following is my attempt to draw out some of the arguments made in the book, relate them to some short- and medium-term socio-economic questions that currently pique my interest, and conclude by placing them in a framework constructive to finding some answers.

Our current era… in a nutshell

The basic premise of Harari’s book (related to its precursor, Sapiens) is that humans organize as collective entities to survive and do so by means of storytelling — we are the only animal that can believe in imagined ideas, such as gods, states, money, and human rights.

“Our first invention was the story: spoken language that enabled us to represent ideas with distinct utterances… neurology gave rise to technology. It is only because of our tools that our knowledge base has been able to grow without limit.”

— Ray Kurzweil, How to Create a Mind (2012), p. 3

The success of our stories in helping us organize, connect, and act is largely dependent on the technology of the era. While technology has (mostly) freed Homo sapiens from the chains of famine, plague, and war, this does not mean that technology will persist as a positive agent of change in the future.

With rapid changes in technology come rapid changes in our storytelling or belief systems.

Copernicus and Galileo discovered this first-hand — progress in science and technology gives us otherwise invisible worlds. But these invisible worlds take time to integrate into the rest of society. In 1610, Galileo published surprising observations that he had made with the new telescope, namely the phases of Venus and the moons of Jupiter, and he was later condemned by the Roman Catholic Church for promoting heresy.

What Galileo was essentially doing was Data Science — recognizing patterns and information around him, but at a higher level, with new technology at his fingertips. Using a telescope, he counted over forty fainter points of light scattered between the six bright stars of the Pleiades. He recorded the positions of 36 stars in his sketch of the cluster and drew outlines around the stars that had been known since ancient times.

The telescope extended the capabilities of his mind.

Galileo’s “Data Pattern Analysis” — sketch of the Pleiades // Octavo Corp./Warnock Library
The Galileo Affair 17th century // WikiCommons

Technology has brought humans closer to information, extended our senses, allowed us to create and preserve a knowledge base, and enabled us to combat our common enemies as a collective entity — transforming the stories that motivate us to work together in the process. Sometimes our stories don’t align, which can mean trouble. The current era of widespread democracy, liberalism, and globalization is a very recent experiment with only a few challengers, but most of the world agrees on the ideals they promote. Outside of religious or political variations, most large-scale collective agents (aka nation states) will work together on the shared belief that famine, plague, and war are bad and economic growth and human rights are good.

These stories are imagined, but they are important. They give us meaning to waking up and doing something with our day — whether it’s favouring the gods, saving up for college, or getting that promotion at work. I get up each morning to carry out the story that gives my life meaning.

The authority over information that technology provides — whether it’s the Roman Catholic Church, a telescope, or Google — and the vital role it plays in giving our lives this meaning is significant. This cannot be overstated.

With great data, comes great responsibility // “Data is the new oil” via the Economist

Like any leap in general purpose technology (GPT), the Information Age powered by the internet has enabled greater access to information. But for the first time, our technology has enabled other forms of intelligence to advance and become more capable, both cognitively and physically, than we are. The authority over information is undergoing another significant shift never seen before.

We have access to the most abundant forms of information processing the world has ever known, so where does that leave our sense of meaning — as redundant information processors? When advanced forms of Artificial General Intelligence (AGI)* replace most humans at almost everything, how will this affect our societal narratives and systems? And who gets to benefit from this shift of authority over information?

*Also referred to as strong AI or full AI. I don’t go into technical details of AI or machine learning — see a brief primer here.

Life in an Invisible World: the Age of Abundance or Redundancy?

Harari’s book is full of thick and abstract arguments that could have very serious implications for the future of technology’s impact on society — particularly AI — so they are worth parsing out. Thinking in terms of the current state of technological change, the main question I keep coming back to is the following:

“What will happen to the job market once artificial intelligence outperforms humans in most cognitive tasks? What will be the political impact of a massive new class of economically useless people?” (p. 269)

This can be interpreted as: what if technology makes the ‘collective’ or the ‘masses’ economically useless? We need to ask not only how education and social systems should evolve in this scenario, but also what happens to the meaning that work gives to my life.

The fact is the sweetest dream that labour knows.

Based on the pace and scale of our current era of technological change, I’d argue that what happens to the future “job market” will mean more than just dealing with short term unemployment or structural transitions as a certain class of workers transitions from one set of jobs to another.

To put things a bit more bluntly, Harari states:

“The most important question in 21st-century economics may well be what to do with all the superfluous people.” (p. 318)

This could be viewed as a sensationalist statement, but it’s been made elsewhere by other smart people:

“The development of superintelligent AI…. would rank among the most important transitions in history… [it] will be associated with significant challenges, likely including novel security concerns, labor market dislocations, and a potential for exacerbated inequality.”

(Bostrom, Dafoe, and Flynn, 2016)

Now, the challenge: “Won’t technology create new jobs? Hasn’t this happened before?” Great question.

Lessons from history: why is this era of technological change any different?

First, the political resistance is largely the same

Indeed, Harari also clarifies that ever since the Industrial Revolution, people have feared that mechanization might cause mass unemployment… but it never really did. People always found a use for their labour one way or another, given enough time to transition. Outbreaks of concern over technological change include the Luddites, protests in the 1920s, John Maynard Keynes coining the term “technological unemployment” in the 1930s, and President Kennedy declaring that the major domestic challenge of the 1960s was to “maintain full employment at a time when automation… is replacing men.” A brief look at history shows a somewhat unnecessary anxiety over automation in pure economic terms — technology created more jobs than it destroyed.

But what we should also take away from a historical perspective is that people don’t necessarily hate technology, but they certainly don’t like feeling redundant. Despite their modern reputation, the Luddite protests against the introduction of machines in new factory systems in the early 19th century weren’t about machines taking their jobs. Most protesters were skilled machine operators, and the technology they destroyed wasn’t particularly new either.

Manufacturing protest before you could blame China for stealing “our jobs” // WikiCommons

Rather, it was a culmination of unfortunate political, economic, and technological factors that ran parallel to each other — widespread unemployment, economic upheaval, and war against Napoleon’s France. On March 11, 1811, growing frustrations with the system erupted into a crowd of protesters demanding more work and better wages, smashing machines, and spreading throughout Nottingham, a textile manufacturing center. But the most common machine they attacked was the stocking frame, a 200-year-old knitting machine. The spark that fueled their fire wasn’t the introduction of the machine itself.

It’s probably more or less obvious to state that organized systems of human beings at scale require that those humans experience some form of meaning or general incentive for cooperation. When they don’t, they fight back.

“Never until now did human invention devise such expedients for dispensing with the labour of the poor,” said a pamphlet at the time.

A more compelling story for the Luddites // WikiCommons

The fear that machines might take over our jobs isn’t new. But, the fear that machines might take over the bulk of what humans are capable of doing in our jobs is an understandable and relatively new one. The Luddites belonged to a somewhat idolized group of people defending a pre-technological way of life, because that life — that narrative — had meaning in it for them.

They didn’t necessarily hate the machine itself, but the new story of work they were being forced into. Getting past the myth of the Luddite resistance and seeing their protest more clearly serves as a timely reminder that it’s definitely possible to live in peace with technology, but only if we question the ways in which it’s shaping our lives. Whether protesters are operating under a Luddite fallacy or blaming China or NAFTA — the future of work and the narrative that people believe in is important.

While it makes for nice political sentiment, manufacturing jobs won’t be what they once were // New York Times

The neoliberal experiment that calls on nation states to cooperate in a large system at a global scale (aka globalization) hasn’t exactly had a great rep lately. But, while globalization was partly responsible for a recent, sizable investment in Wisconsin, announced July 26th, for a high-tech manufacturing plant from Taiwan’s Foxconn, the deal was welcomed with open arms. Why? Jobs, jobs, jobs. What was largely left out of this announcement is that this manufacturing center will likely be the most high-tech of its kind — the manufacturing work there won’t even resemble the manufacturing work done in the past.

Whether or not technological change creates jobs after a transition, people feel tension and will act to preserve the story of work that gives their life meaning. Work means more than production, and the nature of what work looks like changes constantly — so too should our means of 1) training and 2) compensating our workforce.

And to assume that each era of technological change is equivalent is just naive. Several experts now devoting their lives to the study of AI and its potential argue that, as innovation grows even more complex, it is increasingly difficult to evaluate the effects or dangers that lie ahead.

“Everything is connected — change the weapon and you change the war — and the connections tighten when they’re made explicit in computer networks. At some point, automation reaches a critical mass… People see themselves and their relations to others in a different light, and they adjust their sense of personal agency and responsibility to account for technology’s expanding role.”

— Nicholas Carr, The Glass Cage (2014), p. 193

What is clear is that our current political system of democracy still relies heavily on the masses — the people — but our economic system, maybe less so. Here’s why…

Second, the nature of the technology may be entirely different

Here’s a common claim: “Artificial intelligence is a technology like any other — an extension of human capabilities. It will create new jobs and make others obsolete.”

This view makes two key assumptions:

  1. Artificial intelligence is a substitute for and/or disruption of routines, not of the worker him/herself (displacement effect)
  2. Workers who are displaced by artificial intelligence can transition into other occupations that are created over time (productivity effect)

So, the real question is which of these two effects will dominate in the AI era, in both the short term and the long term. Looking at past cases, it appears that the displacement effect largely dominates in the short term, while the productivity effect leads to a positive impact on employment in the longer term. From automatic looms to cars to recent examples of software capability and automated teller machines (ATMs), this trend seems to hold up.

But should we expect the impact of AI on employment to follow a similar pattern? Again, to assume that each era of technological change and its subsequent effects are the same is naive, and may be wildly incorrect when considering our highly connected and expansive modern world. Compared with the Industrial Revolution, the McKinsey Global Institute estimates that AI’s disruption of society could happen ten times faster, at 300 times the scale, and mean roughly 3,000 times the impact.

So, two questions remain:

  1. Are we adequately questioning the ways in which rapid technological change is shaping our lives at scale (socially, economically, politically, individually, etc.), not just disrupting them?
  2. Should we expect this change to follow the patterns of the past?

To answer this in full and think about the most crucial considerations here is complex. I start with a simple framework: outlining important concepts, challenging common assumptions, and citing key trends so far. This will be followed by a series of posts that will explore related dimensions in understanding the impact of AI on the future of work.

Considering the “Unimaginable” Future of Work

In terms of technology and its effect on the nature of work, a few important concepts are helpful to keep in mind:

General Purpose Technology: Advanced forms of AI are now considered a “general-purpose technology” (GPT), with the transformative power to reshape the economy and boost productivity across all sectors and industries. It is “general purpose” because it can be applied across a wide range of domains without many limitations and can drastically increase performance. GPTs are traditionally thought to be the “engines of growth” in a macroeconomic sense. Think of electricity, the steam engine, and the automobile and their effects on the Industrial Revolution — it couldn’t have happened without them.

GPTs often require a large-scale effort at redesigning infrastructure, business models, and “cultural norms.” When the Economist reported in 2012 on the ICT revolution and the penetration of smartphones as GPTs, it described the future as evolving into “a very different and dramatically more productive place,” noting that smartphones, or “mini computers,” placed everywhere would have unprecedented implications for businesses and economies at large.

Main Point: Advanced AI and applied machine learning across industries as a GPT will most likely be more disruptive than anything we’ve seen in the past.

Skill-Biased Technological Change: Unlike the Industrial Revolution, the computer age has favoured skilled workers, not unskilled workers. It was the skilled artisans and craftsmen of the time who destroyed machines in protest against their introduction and the concept of the factory. With mass production possible, unskilled workers, mostly previously employed in farm work, were hired to carry out routine jobs alongside new machines, leaving the artisans with less opportunity (and compensation) for their work. The Industrial Revolution favoured — needed — the masses for military, economic, and political purposes. Our current era does not.

The Economist (2012)

Unlike most technology that preceded it, AI’s promise is not a mere extension of human capabilities — it is human capability, and perhaps a higher form of it. There is no reason to believe that AI will not be able to substitute for human labour as a whole, even at advanced levels — not just act as a complement to it.

Main Point: Skill-biased technological change is different this time around than it was in the Industrial Revolution. And this means widespread disruptive effects across our political, economic, and social systems.

Job Polarization: Many advanced economies have been undergoing important long-run changes in the nature of the jobs that make up their employment base. This involves two trends: 1) the decline of middle-skill occupations, such as manufacturing and production (routine work), and 2) the growth of both high- and low-skill occupations, such as managerial and professional occupations on one end, and assisting or caring for others on the other (non-routine work).

(To be fair, the most likely drivers of job polarization are automation AND offshoring/outsourcing.)

Main Point: Middle skill and middle income areas of the economy are likely to be even more disrupted in the future. This will pose significant challenges for developing economies in the middle of their industrialization that are still highly reliant on manufacturing for economic growth. (See the effects of premature deindustrialization.)

Now, let’s challenge some common assumptions.

Battle of Brain vs. Brawn

Assumption #1: AI is a substitute for routines and tasks, not the worker.

Again, in the short term, yes. Looking at how the nature of jobs has changed, with the U.S. as an example, the data shows a clear trend of job polarization, with the majority of the fastest growth in jobs in non-routine, cognitive work (the brain).

The picture is clear: Employment in non-routine occupations — both cognitive and manual — has been increasing steadily for several decades. Employment in routine occupations, however, has been mostly stagnant. (FRED, 2016)

So far, the human brain is winning, but…

Brain is the new black / FRED 2016

…AI technology, and a fully developed AGI system in particular, is defined by its replication of human psychology. The advent of machine learning means it is not only a replication of algorithms created by a human, but also a way for the machine to update and improve (i.e. “to learn”) on its own.

The trajectory of AGI may be to replace the human worker, not just what he or she can physically do. It becomes the worker as a whole, not a complement to his or her capabilities. Right now, this is taking the form of most routine jobs, whether high-skill or low-skill, because these require little learning and involve easy-to-identify repetition (written rules that are “easy” to express in code). Multitask learning and general-task AI still lag behind human cognitive ability and performance. Nevertheless, the rapid improvement in the performance of machines through learning has accelerated since 2012, when deep learning with neural networks took off. Technological advances have increased the rate at which machines improve their function, further accelerating the progress of AI.
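To make that distinction concrete, here is a minimal sketch in plain Python, with invented toy data (not drawn from any source above), contrasting a routine rule written by a person with the same rule recovered by a machine that repeatedly adjusts its own parameters from labelled examples:

```python
# A toy contrast (illustrative data, not a real spam filter):
# routine work can be captured by a written rule; machine learning
# instead adjusts the parameters of a rule from examples.

# 1) A routine task as an explicit, human-written rule:
def flag_by_rule(suspicious_words):
    return suspicious_words >= 3      # a person chose this threshold

# 2) The same decision learned from labelled examples with a
#    perceptron-style update: the program corrects its own parameters.
examples = [(0, 0), (1, 0), (4, 1), (6, 1)]   # (count, is_spam)

w, b = 0.0, 0.0                # parameters start out knowing nothing
lr = 0.1                       # learning rate

for _ in range(1000):          # "learning" = many small corrections
    for count, label in examples:
        prediction = 1 if w * count + b > 0 else 0
        error = label - prediction
        w += lr * error * count
        b += lr * error

print(flag_by_rule(5))                   # True (hand-written rule)
print(1 if w * 5 + b > 0 else 0)         # 1    (rule recovered from data)
```

The hand-written rule never changes unless a person edits it; the learned parameters keep moving as long as new examples keep arriving, which is part of why the capability curve can climb so quickly.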

For example, in 2015, machines managed to achieve an accuracy of 96% at the ImageNet Large Scale Visual Recognition Challenge, which evaluates algorithms for object detection and image classification at large scale. Humans, on average, label an image correctly 95% of the time. Since 2012, this competition has largely been regarded as a benchmark in measuring human vs. machine intelligence, sparking an explosion in AI investment ever since. Its focus on the quality and quantity of data (rather than the algorithm itself) has played a large part in AI’s ability to surpass human performance.
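For readers unfamiliar with how the headline ImageNet number is calculated: the commonly reported metric is the top-5 error rate, where a prediction counts as correct if the true label appears among the model’s five most confident guesses. A minimal sketch, with made-up labels purely for illustration:

```python
# Sketch of a "top-5 error rate" calculation: an image is a miss only
# if its true label is absent from the model's five best guesses.
# The labels and guesses below are invented for illustration.

def top5_error(true_labels, ranked_guesses):
    """Fraction of images whose true label is NOT in the top five guesses."""
    misses = sum(
        1 for truth, guesses in zip(true_labels, ranked_guesses)
        if truth not in guesses[:5]
    )
    return misses / len(true_labels)

truths  = ["cat", "scythe", "snake", "telescope"]
guesses = [
    ["cat", "lynx", "dog", "fox", "tiger"],                    # hit
    ["rake", "hoe", "scythe", "shovel", "axe"],                 # hit
    ["rope", "hose", "eel", "lizard", "stick"],                 # miss
    ["telescope", "camera", "lens", "tripod", "binoculars"],    # hit
]

print(top5_error(truths, guesses))   # 0.25 -> a 25% top-5 error rate
```

The chart below shows how this kind of error rate has fallen for the competition’s systems over time.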

As with humans, instant access to vast amounts of information is enabling machine intelligence to excel — and quickly.

Humans are still superior in performing general tasks and in using experience from one task to perform another, but it is clear that machine capabilities are catching up. So, what happens when humans no longer maintain comparative advantage as producers and processors of information?

Error rates on ImageNet Large Scale Visual Recognition Challenge / Image-net.org

Riveters need not apply… ever again

Assumption #2: More and/or better jobs will be created than are displaced.

In the short term, this is possible. But the transitional nature from old work to new work is less certain. And considering the possibility that assumption #1 does not hold, the average human worker can be considered replaceable. How many blue-collar manufacturing workers from just ten years ago will be employable at a brand new, high-tech Foxconn facility? It’s highly probable that not many will. There will be jobs. But they will be different. The reality is that the combination of tax breaks, proximity to a high-spending consumer market, and more advanced engineering expertise constitutes more of an advanced manufacturing base than a large, cheap workforce. Why? Because labor isn’t the core of the equation — technology is.

“Most jobs will not be on the factory floor but in the offices nearby, which will be full of designers, engineers, IT specialists, logistics experts, marketing staff and other professionals. The manufacturing jobs of the future will require more skills. Many dull, repetitive tasks will become obsolete: you no longer need riveters when a product has no rivets…

Offshore production is increasingly moving back to rich countries not because Chinese wages are rising, but because companies now want to be closer to their customers so that they can respond more quickly to changes in demand.”

(The Economist, 2012)

During previous waves of automation, workers could switch from one kind of routine work to another, but this time most workers will have to switch from routine, unskilled work to non-routine, skilled jobs to stay ahead of automation. Previous waves of automation also took decades to transfer labour from agriculture to industry — whereas software systems can now be deployed much more quickly.

And there is nothing to say that more advanced forms of AI won’t be able to replicate the white-collar skills of engineering, design, and marketing in the future. In some aspects, they already can. Recent advances show an exponential trend in two things that are important to keep in mind when considering what AI will and won’t be able to do, and the new jobs it will create: 1) computational resources and 2) data.

The following chart is just one example of the exponential growth (plotted on a logarithmic scale) of computational resources.

Exponential computational capacity over time / Koomey, Berard, Sanchez, and Wong (2011)

Data follows a similar trend — today, as we enter the ‘third wave’ of data, humanity produces 2.2 exabytes (2,300 million gigabytes) of data every day; 90% of all the world’s data has been created in the last 24 months.

Most data today is transmitted via the internet, so internet traffic serves as a proxy for the enormous increase in humanity’s data production. Collectively, we transferred approx. 100 GB of data per day in 1992, and by 2020 we will be transferring an estimated 61,000 GB per second.
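As a rough sanity check, a few lines of arithmetic show the sustained growth rate those two figures imply (a back-of-the-envelope sketch using only the numbers quoted above):

```python
# Back-of-the-envelope check on the growth implied by the figures above:
# approx. 100 GB/day in 1992 vs. an estimated 61,000 GB/second by 2020.

gb_per_day_1992 = 100
gb_per_day_2020 = 61_000 * 86_400        # GB/s -> GB/day (~5.3 billion)
years = 2020 - 1992

growth_factor = gb_per_day_2020 / gb_per_day_1992
annual_growth = growth_factor ** (1 / years)

print(f"{growth_factor:,.0f}x more data per day")   # ~52,704,000x
print(f"~{annual_growth:.2f}x per year")            # ~1.89x, roughly doubling annually
```

That is roughly a doubling of daily traffic every year for nearly three decades — the kind of compounding that makes comparisons with past transitions so slippery.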

(More fun data charts on technological progress here.)

So, machine intelligence as a replacement for labour as a whole is possible. And the creation of jobs for the masses in its place isn’t certain. This differs from previous eras. And certainly requires a different response.

Regardless of where one stands on these issues, one thing is fundamentally clear: companies and governments will need to be more agile in their approach to education and training as AI systems catch up to human cognitive abilities. And we should also remember that technology plays a larger role in our lives than just our means of production. Implications for the “future of work” mean more than economic outcomes.

Final Note: Technology is more than production

What is the compelling urgency of the machine that it can so intrude itself into the very stuff out of which man builds his world?

– Joseph Weizenbaum, Computer Power and Human Reason (1976)

The fact is the sweetest dream a labourer knows

I began this post by questioning the role that technology plays in both our understanding of the world and our place within it through the mechanism of information — the “fact.” While historical reflection offers a necessary exercise in contextual understanding, imagining how the role of technology and advanced forms of machine intelligence may change in the future is just as important. When technology transitions from extending the human hand (/brain) to becoming the human hand (/brain), extra pause is needed.

The fact is the sweetest dream that labor knows.
My long scythe whispered and left the hay to make.

Frost was a poet of labour. The mowing, as an act of the labourer, matters more in the poem than the hay.

The action of work is central to both living and knowing as humans. Work brings us into the world and into an understanding of our place in it. Technology is as vital to the efforts of knowing as it is to the efforts of production. The scythe makes the mower, and the mower’s skill in using the scythe remakes the world for him.

Driven by interests in the history of technological change, policy, and the economics of development, I can’t help but be extremely optimistic about the opportunities that lie ahead for AI’s role in social impact.

But. What I’ve come to realize is that the next set of opportunities and challenges of embedding technology as a form of intelligence into our world requires a more comprehensive, interdisciplinary approach in order to truly understand the implications for 1) the evolutionary self and 2) our organized, collective units. What started as a simple reflection exercise on summer reading (so far) has expanded beyond a rigid economic way of thinking and evolved into exploring a larger set of research areas I now consider both important and connected. These include evolutionary anthropology, history, neuroscience, political economy, computer science, and science and technology policy, to name a few.

This series of posts will be the beginning of an attempt to 1) better understand the current set of challenges, 2) explore these research areas and their intersections, and 3) add to the discussion of policy solutions concerning technology and social impact.

Next: Is the Future “Unimaginable?” Part I: Recent research and trends on the future of work


Terralynn Forsyth

founder, product, design // workforce tech // FutureFit AI