Robots Can Do Our Jobs? No: That’s Algorithmic Pseudoscience at Work
Ironically, algorithms are telling us that machines will soon be able to do most of our jobs, but those conclusions perfectly illustrate what’s non-scientific about turning human reasoning over to computers.
Barely a month passes without a news story about how robots and artificial intelligence (AI) are going to devastate a significant number of jobs, sweeping away solid, salary-paying employment for factory workers and white-collar clerks alike. Just this month, the UK Parliament reported on the future of work in the face of automation, declaring that over 70% of jobs were at medium to high risk of displacement.
While the report draws on a “more optimistic” study by the ONS to arrive at this prediction, the ONS used methods inspired by the even-more-frightening results in a 2013 paper by Oxford economist Carl Frey and machine learning expert Michael Osborne, which found that almost half of jobs were at high risk, and two-thirds at medium risk. The paper was so influential that, in addition to shaping the methodologies of later reports (like those of the ONS, the OECD, and others), it informed a speech by the Bank of England’s Chief Economist to the Trades Union Congress in 2015, prompted the “Fourth Industrial Revolution” theme of the 2016 Davos Forum, provided the basis of the WEF “Future of Jobs” report, and generated a subsequent sea of articles by journalists who rarely questioned the numbers.
Unsurprisingly, this has made economist Carl Frey’s latest work The Technology Trap: Capital, Labor, and Power in the Age of Automation a popular book, topping the AI chart on Amazon. The book offers a fascinating history of technology’s effects on employment from the Industrial Revolution to today and attempts to tackle how we might avoid a repeat of past social ills, as the Computer Revolution sweeps away a majority of human jobs. The “trap” suggested in the title of Frey’s book is the disaffection of the modern middle class, who, on losing their secure employment to machines, could turn all Luddite and suppress technological innovation before society can reach the promised land, where new technologies deliver even greater economic prosperity for all.
However, there is also another “trap,” one induced by the widespread belief in these job-displacement predictions in the first place. Anyone who pauses to examine these analyses will find that their uncritical acceptance is a prime example of algorithmic pseudoscience, whereby a computer program draws numerical conclusions that obscure underlying, non-scientific biases, and people accept those conclusions as scientific facts.
The nature of such results is unsurprising, as the statistical techniques employed in today’s algorithms inevitably reduce complex human data via broad generalisations drawn from a few simplified features. Those of us who work in AI explicitly engineer feature extraction and generalisation as algorithmic goals. This is how algorithms can parse otherwise incomprehensible “Big Data.” To demonstrate how this works in practice, let’s step through the process which led to these predictions about jobs.
Firstly, in Frey and Osborne’s study, the base data representing “all” human jobs came from O*NET, a US government source that lists hundreds of jobs alongside dozens of “features” required to perform them (for instance, manual dexterity, persuasiveness, or mathematical skills), each assigned a numerical rating according to its importance in performing the job in question. These “feature ratings” came from surveying employers and employees, so the opinions, views and biases of those Americans are already embedded in the base data.
Secondly, only nine features from the large set available in O*NET were used in the Frey and Osborne algorithm. Such data reduction is commonplace and essential for the efficient functioning of any algorithm, but it undoubtedly represents a huge simplification of the complexity of the real-world problems we are trying to understand, in this case, the future of all human employment.
The nine (human-selected) features focused on dexterity (finger, manual, the ability to work in cramped spaces), creativity (originality, fine arts), and social intelligence (perceptiveness, persuasion, negotiation, caring for others). All other possible features that might be required for specific jobs were ignored in the evaluation.
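To make the shape of this reduction concrete, here is a toy sketch in Python. The feature names echo the nine listed above, but every rating value is invented for illustration, and real O*NET records contain many more descriptors than the two extras shown here:

```python
# Toy illustration of O*NET-style records: each job carries dozens of
# feature-importance ratings. All values below are invented.
jobs = {
    "Surgeon": {
        "finger_dexterity": 0.9, "manual_dexterity": 0.8,
        "cramped_work_space": 0.3, "originality": 0.6, "fine_arts": 0.1,
        "social_perceptiveness": 0.8, "persuasion": 0.5,
        "negotiation": 0.4, "assisting_caring": 0.9,
        # ...dozens of other O*NET descriptors, discarded by the study:
        "mathematics": 0.6, "equipment_maintenance": 0.2,
    },
    "Dishwasher": {
        "finger_dexterity": 0.4, "manual_dexterity": 0.5,
        "cramped_work_space": 0.4, "originality": 0.1, "fine_arts": 0.0,
        "social_perceptiveness": 0.1, "persuasion": 0.0,
        "negotiation": 0.0, "assisting_caring": 0.1,
        "mathematics": 0.1, "equipment_maintenance": 0.3,
    },
}

# The nine hand-picked features; everything else is thrown away.
NINE_FEATURES = [
    "finger_dexterity", "manual_dexterity", "cramped_work_space",
    "originality", "fine_arts", "social_perceptiveness",
    "persuasion", "negotiation", "assisting_caring",
]

def to_vector(ratings):
    """Reduce a full rating record to the nine-feature vector."""
    return [ratings[f] for f in NINE_FEATURES]

print(to_vector(jobs["Surgeon"]))  # nine numbers stand in for a whole job
```

Everything the algorithm later “knows” about a job is contained in those nine numbers; mathematics, equipment maintenance, and every other descriptor simply vanish.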
Thirdly, the nine features were then used to “map” 702 selected jobs from O*NET to a “probability” of computerizability. This was done by an algorithm based on Bell Curve probabilities. Why this particular bit of mathematics? Probably because the Bell Curve is often assumed to be a sort of “natural law,” applicable across a range of complex phenomena (whether that assumption is conscious or not). A more informed perspective, however, is that the Bell Curve is simply an artefact of looking at complex phenomena through the lens of statistics.
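The Bell Curve step can be pictured as pushing a job’s score through the Gaussian cumulative distribution function, which squashes any real-valued number into something between 0 and 1 that can then be reported as a “probability.” This is only a minimal sketch of the general idea, not the study’s actual classifier, and the scores fed in here are arbitrary:

```python
import math

def gaussian_cdf(x):
    """Cumulative distribution function of the standard normal (Bell Curve)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Any "automatability score", however it was produced, comes out as a
# tidy-looking number between 0 and 1 with two decimal places of
# apparent precision.
for score in (-2.0, 0.0, 2.0):
    print(f"score {score:+.1f} -> 'probability' {gaussian_cdf(score):.2f}")
```

The smooth S-shape guarantees plausible-looking probabilities regardless of whatever assumptions produced the underlying scores in the first place.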
Fourthly, the data used to “train” the algorithm was a set of just 70 jobs selected by the programmers. Half of these — including Judicial Law Clerks, Truck Drivers, Dishwashers, Clerks, and others — were deemed to be fully computerizable, while the other half — including Surgeons, Clergy, Chief Executives, and Economists — were judged to be impossible for computers to do. This is yet another place where human bias enters the algorithmic process, largely undetected.
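To see how a handful of hand-labelled examples drives everything downstream, here is a deliberately simple stand-in classifier: a nearest-centroid rule in plain Python, not the study’s actual method, trained on invented three-number feature vectors. The point is only that whatever function gets learned, it is learned entirely from the programmers’ prior verdicts:

```python
# Hand-labelled training data (all values invented): feature vectors
# with the programmers' verdict attached. Every later "probability" is
# an extrapolation of these human judgements.
labelled = [
    ([0.1, 0.1, 0.2], 1),  # deemed computerisable (e.g. Dishwasher)
    ([0.2, 0.1, 0.1], 1),  # deemed computerisable (e.g. Truck Driver)
    ([0.9, 0.8, 0.9], 0),  # deemed safe (e.g. Surgeon)
    ([0.7, 0.9, 0.8], 0),  # deemed safe (e.g. Clergy)
]

def centroid(vectors):
    """Average the vectors in one hand-labelled class into a prototype."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Learn one prototype per class, purely from the labelled examples.
auto_centroid = centroid([v for v, y in labelled if y == 1])
safe_centroid = centroid([v for v, y in labelled if y == 0])

def p_computerisable(vector):
    """Score a new job by which human-built prototype it sits closer to."""
    d_auto = distance(vector, auto_centroid)
    d_safe = distance(vector, safe_centroid)
    return d_safe / (d_auto + d_safe)  # closer to "auto" -> nearer 1.0

# A job the programmers never labelled still inherits their judgements.
print(round(p_computerisable([0.2, 0.2, 0.2]), 2))
```

Swap in different verdicts for those 70 seed jobs and the “probabilities” for the other 632 shift accordingly; the subjectivity is baked in before any mathematics runs.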
Finally, the “probabilities” of job computerizability offered by the algorithm have at best a tenuous connection to any technical or everyday meaning of that word. Most of us would read a 98 per cent probability of a job being computerised as a virtual certainty, or as an indication that 98 of every 100 such jobs will be computerised. In reality, the numbers in the Frey and Osborne paper (and in the studies that followed in its wake) express neither a logical certainty nor a frequency of occurrence. Rather, they are subjective ratings with no truly objective justification, masking all manner of biases.
Furthermore, the presentation of results as a “probability distribution” and a colourful infographic obscures numerous peculiarities in the detailed, job-by-job analyses. For instance, in Frey and Osborne’s study, Fashion Models are categorized as having a 98% probability of being replaced by computers, no doubt because the job was not considered to require dexterity, creativity or social intelligence (though the latter two are very debatable, and likely indicate biases in the original O*NET employer survey results).
In a recent talk at Google headquarters, Dr Frey referred to this glaring anomaly and presented images of AI-generated avatars wearing runway fashion. This is indeed an interesting development; however, fashion drawings, photographs and mannequins have always existed, and it is highly doubtful that computer-generated images present a real threat to the jobs of Gigi Hadid and Kendall Jenner.
All this is not to say that many jobs will not succumb to computerisation in the near future. Far from it. Ironically, the uninformed repetition of the bare statistics from these studies, as in the Parliamentary report, makes this outcome even more likely. Just as with broad misinterpretations of scientific papers in the past (for instance, race science, or the sexist exploitation of the I.Q. test), the acceptance of a pseudoscientific “fact” based on a set of statistical conclusions often serves to justify what is convenient for those with power.
It is not surprising, therefore, that tech billionaires (like Jack Ma and Elon Musk), tech companies (Amazon, Uber), political leaders, and the World Economic Forum (funded by 1,000 member companies, typically global enterprises with more than $5 billion in turnover) also propagate this vision of the future. This is because there is a long history of technology replacing people not because it can do their jobs so well, but because it is economically expedient for employers. Such was the case with the frame-breaking Luddites in the early 1800s, who, as Frey points out, weren’t simply ignorant technophobes, but workers desperately campaigning against a major threat to their livelihoods and communities, just as workers are today.
However, if we set the dubious numbers in these reports aside, it is still within our power to decide how technology and automation should best fit into the future workplace. That workplace will need to balance the best of technology with workers’ rights, growing sustainability concerns, and emerging metrics of success such as health and wellbeing, alongside economic efficiency. This means carefully considering every job from a human perspective, rather than via numbers generated by pseudo-scientific algorithms.
(Read further about this topic, and more, in Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All)