So what happens when robots make jobs themselves obsolete?

Is education and training enough? Or do we need an entirely new economy?

Ryan Holmes

--

This summer, humanity came one tiny step closer to a reality straight out of Terminator.

In the hit Schwarzenegger franchise, mankind is imperiled when the Skynet computer network — made by the menacingly named Cyberdyne Systems — becomes “self-aware” and turns against its makers. In real life this July, another company called Cyberdyne (coincidence?), based in Tsukuba, Japan, unveiled a brand-new line of artificial intelligence-aided robots. The wheeled automatons are set to be deployed as cleaners and porters at Tokyo’s Haneda Airport. For now, they’re hardly a threat to humanity — unless, of course, you happen to be a cleaner or porter put out of work.

Rest assured the robots will be coming for more of our jobs in the years ahead. We’re on the cusp of a “Second Machine Age,” one powered not by clanging factory equipment but by automation, artificial intelligence and robotics. Self-driving cars are expected to be widespread in the coming decade. Already, automated checkout technology has replaced cashiers, and computerized check-in is the norm at airports. Just like the Industrial Revolution more than 200 years ago, the AI and robotics revolution is poised to touch most every aspect of life — from health and personal relations to governance and, of course, the workplace.

But there’s one important difference this time around. The Industrial Revolution ended up being a net creator of jobs, on a massive scale. There’s a real possibility the AI revolution, by contrast, will be a job killer — and on an equally vast scale. This won’t happen all at once, of course. But considering that the pace of change only stands to accelerate, is it too soon to start asking: How do we prepare for a future where jobs themselves may be in short supply?

Sci-fi fantasy or real threat?

The idea of robots taking our jobs turns out to be far from a fringe theory. Of the 1,896 prominent scientists, analysts and engineers surveyed for a recent Pew report on the future of jobs, 48 percent “envision a future in which robots and digital agents have displaced significant numbers of both blue- and white-collar workers.”

Among the most vulnerable groups: professional drivers like truckers and taxi drivers. By 2020, GM, Mercedes, Audi, Nissan, BMW, Renault, Tesla and Google all plan to be selling autonomous vehicles in some form. Uber CEO Travis Kalanick has already floated plans to one day replace all of the company’s drivers with self-driving cars. Other fields where displacement is imminent (or already happening) include low-skill jobs in customer service, healthcare and home maintenance.

It doesn’t end there, however. White-collar roles once thought to be the exclusive domain of people may also end up on the chopping block. The first to go, according to experts surveyed, include paralegals, bookkeepers, transcriptionists and medical secretaries. The widespread use of DIY tax and finance software and automatic transcription tools like Siri only hints at changes to come in these sectors. Importantly, these jobs aren’t just repetitive, mechanical functions. They require an ability to learn and adapt to new information. And this is precisely why the coming AI revolution is so scary.

In a small way, I’ve seen how quickly new job roles can appear (and disappear) even in my own sector: social media. Just a few years ago, “social media manager” was one of the most in-demand job functions on the career site Indeed.com. Then social media management tools — including those made by my company, Hootsuite — became more widespread and easy to use. Social media use has increased exponentially since then but demand for dedicated social media managers hasn’t kept pace. This is still a critical role in large organizations. But for many businesses, ever-more-sophisticated technology has transformed social media from a discrete job to something that people across an organization can do.

Clearly, there’s room for debate on the jobs issue. Won’t the automation of low-tech roles ultimately lead to more high-tech ones? Just as in the past, new jobs — and entirely new sectors — will no doubt emerge. What’s unclear is whether these new positions will offset the loss of the old ones. Take Uber drivers. At the end of last year, Uber had more than 160,000 active drivers. When robots ultimately take the wheel, someone is still going to have to handle coordination, programming and servicing for Uber. But that workforce will presumably be tiny compared to the one currently employed.

Extend that kind of downsizing across entire industries, and the scale of the problem becomes apparent. A 2013 University of Oxford study, in fact, concluded that advances in computers, automation and AI will put 47 percent of US jobs at risk within the next two decades. Roles that require an advanced skill set will still be safe. And there will still be jobs at the bottom of the economic ladder, those that require little training and involve non-routine, service-type tasks. But whole sectors of the economy — in particular those that employ the middle class — stand to be hollowed out.

Cue the latest Hollywood dystopian blockbuster of your choice — Hunger Games, Snowpiercer, District 9, Elysium, etc. With good-paying jobs scarce and unemployment widespread, a whole Pandora’s box of social ills could open: from gross income inequality and social unrest to increasingly repressive governments and the formation of a permanent, marginalized underclass, excluded from participating in the economy.

Or not. Right now it seems premature to start evoking scenes like these, and history has no shortage of pessimists whose dire predictions now look pathetically wrong. Writing in 1798, Thomas Malthus famously predicted that since population multiplies “geometrically” while food supply grows “arithmetically,” the human race faced an imminent future of famine and disease. What he failed to take into account, of course, was how new technologies would lead to exponential increases in crop yields and advances in medicine.

So should we panic or sit back and let the robots do their thing? What seems abundantly clear is that the nature of work is changing. The same jobs that support millions of people today may not be here in 10 or 20 years. The most sensible option would seem to be to start taking steps now to prepare for future job displacement. But how?

The education option

The traditional answer has been to invest in developing skills machines can’t replicate — creativity, problem-solving, ingenuity and other higher functions. Interestingly, embracing these skills means taking a step back: from the idea of man that emerged in the Industrial Revolution — a cog in a machine, interchangeable and reproducible — to an older Renaissance notion of man, possessed of unique gifts to create and innovate.

The problem is that public education in the U.S. and much of the world is, in many ways, a byproduct of the Industrial Revolution. Education came to be standardized just like production, with students lined up in neat rows of desks and taught a uniform curriculum. An emphasis on memorization and rote learning helped produce a uniform citizenry — literate, compliant, interchangeable — to fill standardized roles in industry, offices and government.

None of that cuts it in an age when intelligent machines can do anything rote or repetitive far better than we can. Cultivating some of our last uniquely human abilities — namely creativity and social intelligence — requires reimagining education as a means not of reproducing uniformity but of nurturing exceptionalism, i.e. the ability to do things that can’t be codified or systematized. The kind of lateral thinking, autonomy, imagination and creativity privileged in alternative education models like Waldorf and Montessori would need to be brought to the forefront. A focus on accepting facts and internalizing codes would need to be replaced by emphasis on questioning, theorizing and, well, dreaming.

On one level, this sounds great. Let’s leave the drudgery for the machines. Let’s take back ideas, art and creativity in a kind of modern Renaissance. But here’s the thing. It might not be enough.

Promoting creativity and encouraging independent thinking may help us stay ahead of job losses in the short run. But in the long run, advanced robots may well be able to execute even some of these uniquely “human” functions better than we can. Here we’re getting into the realm of “strong” or “full” AI — machines that aren’t just able to learn basic tasks but can master pretty much anything. If you’re a futurist, this is when talk of the “singularity” comes into the picture: the moment when computers can make themselves smarter, leading to capabilities that match, then quickly exceed, our own.

Estimates for when we’ll approach this kind of capacity vary widely. But we’re creeping closer all the time. The day when robots replace high-skill human jobs may well be centuries off. Or it could be, relatively speaking, just around the corner. “The central question of 2025,” insists GigaOM lead researcher Stowe Boyd, “will be: What are people for in a world that does not need their labor, and where only a minority are needed to guide the ‘bot-based economy?”

To that, I’d add a few corollaries. How do we keep the economy humming when jobs themselves have grown obsolete? How do people support themselves? And what does it mean to be a productive member of society in a post-job world?

Considering radical job solutions

The scale of this problem may require some radical, even counterintuitive solutions — like giving money away. A growing chorus of tech cognoscenti, from all-star investor Marc Andreessen to Barack Obama’s one-time director of analytics Jim Pugh, has espoused the idea of living income. Not welfare or charity, living income is a stipend — roughly enough to live on with few frills — paid to every adult in the country, whether they’re working or not. In the U.S., the numbers thrown around have ranged from $15,000 to $20,000 per adult per year.

Let’s get past the obvious reactions to this idea: that giving away money is crazy; that the whole scheme would permanently warp the economy; and so on. Why might the concept of living income actually make sense? For starters, in a world where AI and robotics have made unemployment the norm, not the exception, people still need to eat. They still need to support families. Deeper still, they need reason to remain invested in the idea of society. Leaving the masses displaced by new technology to their own devices — jobless and destitute — is hardly a recipe for a bright future.

Living income also allows us to keep the wheels of the economy — and innovation — turning. “A fundamental insight of economics is that an entrepreneur will only supply goods or services if there is a demand, and those who demand the good can pay,” writes Center for Internet and Society expert Andrew Rens. In the new millennium, technology has generated enormous wealth for innovators and entrepreneurs. This has fed a virtuous cycle, with returns invested in developing newer and better technologies. (This same cycle, it should be said, has also had the not-so-virtuous effect of concentrating wealth in ever fewer hands.) But the whole process grinds to a halt in the absence of consumers. Progress depends, in no small way, on people buying stuff. And that depends on them having an income.

Interestingly, the living income concept has its adherents on both sides of the political spectrum. Back in the day, both Martin Luther King Jr. and Richard Nixon supported variations on the idea. Today, corporate-friendly libertarians — of the Charles Koch variety — see it as a way to replace myriad government handouts with one flat, transparent payout. Progressives, meanwhile, view living income as a means to level the playing field and safeguard basic rights and dignities. Funding it, of course, could get a bit tricky. One estimate pegs the cost of providing living income in the U.S. at $4.38 trillion — more than the entire $3.5 trillion federal budget. Shifting resources from other social welfare programs could help, as could taxes on income earned in excess of the minimum.
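To see where a headline number like that comes from, here is a rough back-of-the-envelope sketch. The adult-population count and per-adult stipend below are illustrative assumptions drawn from the $15,000-$20,000 range mentioned above, not the actual methodology behind the $4.38 trillion estimate.

```python
# Back-of-the-envelope sketch of the gross cost of a U.S. living income.
# The population count and stipend are illustrative assumptions,
# not the basis of the $4.38 trillion estimate cited above.
adults = 230_000_000   # approximate number of U.S. adults (assumption)
stipend = 19_000       # annual stipend per adult, near the top of the $15K-$20K range

gross_cost = adults * stipend        # roughly $4.37 trillion per year, before any offsets
federal_budget = 3_500_000_000_000   # the ~$3.5 trillion federal budget cited above

print(f"Gross cost of living income: ${gross_cost / 1e12:.2f} trillion per year")
print(f"Total federal budget:        ${federal_budget / 1e12:.1f} trillion per year")
```

Note that this is the gross figure: the offsets mentioned above, such as consolidating existing welfare programs and taxing income earned above the minimum, would bring the net cost down from that headline number.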

It won’t be easy, by any measure, but living income isn’t completely without precedent. During the 1970s, a five-year basic income program called Mincome in the Canadian province of Manitoba showed promising results. Mothers spent more time raising children. Students showed higher test scores and lower dropout rates. Hospital visits, mental illness, car accidents and domestic abuse cases all declined. And in the end, total working hours only slipped by a few percentage points. In other words, having a basic income didn’t lead to sloth or indolence. It let people spend time on the things that mattered: family, education, health, personal fulfillment. If the robots do take our jobs one day — but give us back some of those things in return — it might not be such a bad trade after all.

--

Ryan Holmes

Entrepreneur, investor, future enthusiast, inventor, hacker. Lover of dogs, owls and outdoor pursuits. Best-known as the founder and CEO of Hootsuite.