Analysts, academics and amygdala: How Gartner, Oxford U, others feed on our fear of machines

Vinnie Mirchandani
6 min read · Oct 2, 2016


I heard Peter Diamandis, of X-Prize fame and author of Abundance, present at a tech event this week. It was vintage Peter, talking about how human development has moved to an age of the “exponential and global” after 150,000 years of the “local and linear.” He has an optimistic view of how technology helps turn scarcity into abundance.

As he presented on trends in robotics, machine learning, 3D printing and other areas, I correlated them to what I had learned in interviews for my new book, Silicon Collar. My interviewees were pragmatic and positive, like Peter.

Then he joked that President Clinton once asked him, “Why do you have such a positive outlook? Don’t you watch the news?”

He dove into neuroscience and talked about the amygdalae, the two almond-shaped structures of the brain said to have evolved with our ancestors’ survival instincts. The modern amygdala is said to be always on high alert: we filter out positive news and focus on the negative, which the news media are only too happy to provide.

I was tempted to stand up and shout, “The news media have been joined in the negativity by analysts and academics!”

In the book, I have a chapter titled “Sum of All Fears.” I point out the pessimism of Gartner, Oxford U and many other “brands” about machines killing hundreds of millions of jobs.

Peter Sondergaard, Head of Research at Gartner, told the audience at the firm’s 2014 Symposium/ITxpo: “By 2025, three out of 10 jobs will be converted to software, robots or smart machines,” and “By 2018, digital business will require 50% fewer business process workers.”

More recently, Gartner has projected that by 2018, more than three million workers globally will be supervised by “robobosses.”

Two Oxford researchers, Carl Benedikt Frey and Michael Osborne, offered a similarly pessimistic assessment: “According to our estimates, about 47% of total U.S. employment is at risk.”

WEF, McKinsey, MIT and many other thought leaders have similarly large and scary projections about job losses.

I was a Gartner analyst from 1995 to 2000. Gartner usually assigns a probability to its planning assumptions as an indicator of its confidence in a prediction, but the Sondergaard statements carried no such hedge.

While Gartner at least had a timeline for its projection, the Oxford professors did not attempt one. Nor did they appear to reality-check the job categories they analyzed. The researchers assigned heavy truck and tractor-trailer drivers a high “susceptibility to computerisation” score of 0.79 (with 1.0 being the highest). This, when the U.S. trucking industry says driver shortages could reach 175,000 positions by 2024 (and even if the industry adopts autonomous trucks, regulations will likely require a driver as a backup). They assigned an even higher score of 0.84 to cartographers and photogrammetrists (who deduce measurements from images), an occupation the Bureau of Labor Statistics projects to be among the fastest growing over the next decade. And they assigned a still higher score of 0.94 to accountants and auditors, even as hiring at U.S. public accounting firms reached record levels in 2013–2014.

What about new jobs created by the automation and by new digital businesses? The Oxford profs apparently did not think that worth quantifying.

Here’s what is concerning: Oxford is the oldest university in the English-speaking world, with archives going back centuries.

Did they look at those archives and factor in research, as I did, showing that automation erodes jobs only gradually, often over decades? Why are there still 90,000 bank branches in the U.S. alone, each with several teller and other jobs, after decades of ATMs and mobile banking? Why do we still have over 600,000 U.S. postal jobs in the face of all kinds of digital communications, even as the USPS has automated with kiosks and logistics technology? Why do we still have so many grocery checkout jobs when the UPC code and scanner were patented 65 years ago and self-checkout has been available for years? Why were half the cars sold globally last year manual-transmission, five decades after Playboy magazine proclaimed “Bye bye, stick shift”?

Gartner has been issuing “technology hype cycles” for decades. Did it factor in that AI has gone through multiple hype cycles since the 1950s? That is when Alan Turing defined his famous test of a machine’s ability to exhibit intelligent behavior equivalent to a human’s. In 1959, we got excited when Allen Newell and his colleagues coded the General Problem Solver. In 1968, Stanley Kubrick sent our minds into overdrive with HAL in his movie 2001: A Space Odyssey. We applauded when IBM’s Deep Blue supercomputer beat Grandmaster Garry Kasparov at chess in 1997. We were impressed in 2011 when IBM’s Watson beat human champions at Jeopardy!, and again in 2016 when Google’s AlphaGo showed it had mastered Go, the ancient board game. Currently, we are so excited about Amazon’s Echo digital assistant/home-automation hub and its ability to recognize the human voice that we are saying a machine has finally passed the Turing Test. Almost. Yann LeCun, director of AI research at Facebook, has commented: “Despite these astonishing advances, we are a long way from machines that are as intelligent as humans — or even rats. So far, we’ve seen only 5% of what AI can do.”

It’s the same hype with robots. The first humanoid robot appeared in Japan in 1928; it could perform simple motions, such as moving a pen with its right hand. Today, Japan is the leading maker and consumer of robots, accounting for half the world’s production, and it naturally has the world’s largest concentration of robot engineers. Yet in the five years since the Tohoku earthquake, these world-leading experts have tried to use robots to clean up the radiation at the Fukushima nuclear plant. So far, every robot sent into the reactors has failed to return.

With their brands, every word these organizations say gets amplified. The Oxford study has been cited in over 400 other academic papers, without any of the questions I raise here. Gartner issues hundreds of similar predictions each year and rarely goes back to audit them for accuracy.

Business executives know to take such analysis with a pinch of salt; they even have a term for it: FUD, for fear, uncertainty and doubt. But the average citizen feels paranoid, and the alarm is showing up in government policy.

Switzerland was the first of several countries planning referendums on “universal basic income,” a payment to each citizen irrespective of work status. It is being justified on the assumption that we are moving to jobless societies. If Gartner can project a third of jobs gone by 2025, and Oxford even more, you can see why people are panicked.

It’s time to ask the pessimists for the data that justifies their dire predictions.

Predicting the future has always been risky. The pushback I get is that the past is a poor indicator of the future. If that were true, we should quit teaching regression analysis in Stats 101. Trend lines across a time series of data (in my case, going back a century) should count for something. My analysis over that century showed only a gradual impact on jobs; I saw jobs being transformed more than destroyed.
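To make the trend-line point concrete, here is a minimal sketch of that kind of regression in Python. The employment figures are hypothetical placeholders, not data from Silicon Collar; the point is only that a fitted trend extrapolates to gradual change, not a cliff.

```python
# A minimal sketch of trend-line reasoning over a long time series.
# The numbers below are hypothetical placeholders, not data from the book.
import numpy as np

# Hypothetical employment in an automation-touched occupation (thousands),
# one observation per decade over roughly a century
years = np.array([1920, 1930, 1940, 1950, 1960, 1970, 1980, 1990, 2000, 2010])
jobs = np.array([110, 130, 160, 210, 280, 350, 480, 540, 560, 550])

# Least-squares linear trend: jobs ~ slope * year + intercept
slope, intercept = np.polyfit(years, jobs, 1)
print(f"trend: {slope:+.2f} thousand jobs per year")

# Extrapolating a decade ahead shows gradual drift, not a cliff
print(f"2025 estimate: {slope * 2025 + intercept:,.0f} thousand jobs")
```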

The reality is that there are compute evolution curves, and then there are adoption curves. The former are exponential (and have been since Gordon Moore wrote about his famous law in 1965); the latter, not so much. As I analyze in the book, there are societal “circuit-breakers to over-automation” which stop technology from being absorbed at dystopian rates.
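A small illustration of that contrast, with made-up parameters: compute capability compounding on a Moore’s-law-style doubling schedule, while adoption follows a slow, saturating S-curve. Both parameter choices are illustrative assumptions, not measurements.

```python
# Contrast exponential compute growth with a saturating adoption curve.
# Doubling period, midpoint and steepness are illustrative assumptions.
import math

def compute_capability(t, doubling_years=2.0):
    """Relative compute capability after t years, doubling every 2 years."""
    return 2 ** (t / doubling_years)

def adoption(t, midpoint=25.0, steepness=0.2):
    """Logistic (S-curve) share of society using the technology after t years."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

for t in (0, 10, 20, 30, 40, 50):
    print(f"year {t:2d}: compute x{compute_capability(t):>12,.0f}, "
          f"adoption {adoption(t):6.1%}")
```

Even as the first column explodes, the second crawls toward saturation; that gap is the circuit-breaker argument in miniature.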

Our thought leaders should be more responsible in their pronouncements. And we should not be afraid to challenge them to justify their pessimism before we set up a new wave of social programs that could cost us trillions and potentially destroy our work ethic.
