Photo by Fauzan on Unsplash

The hidden cost of artificial intelligence

Carl Dawson · Published in The Startup · 7 min read · Jul 26, 2019

If you ask the average machine learning professional what her algorithms actually do, you’ll notice that you have to do quite a bit of digging before you understand their precise workings. The first layer of explanation will include the word learn as a given, an attempt to employ an experience all humans understand in the abstract. Data (or more specifically, training data) will also feature. Probe again and you’ll get to the idea of a loss function — some metric of correctness that can be optimised via college-level mathematics. At this point the well runs dry. Her algorithms, like most, detect patterns simply by finding a representation of the incoming data that is least wrong. This is how machines learn.
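
To make that loss-function idea concrete, here is a minimal sketch (in Python, with made-up data and an arbitrary learning rate, chosen purely for illustration) of what “being least wrong” amounts to: a straight-line model nudged repeatedly in whatever direction shrinks its mean squared error.

```python
import numpy as np

# Toy illustration of "learning" as loss minimisation: fit a line y = w*x + b
# by nudging w and b in the direction that reduces the mean squared error.
# The data and hyperparameters are made up for this sketch.

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)  # the "training data"

w, b = 0.0, 0.0
learning_rate = 0.1

for step in range(500):
    predictions = w * x + b
    error = predictions - y
    loss = np.mean(error ** 2)        # the metric of (in)correctness
    grad_w = 2 * np.mean(error * x)   # college-level calculus: d(loss)/dw
    grad_b = 2 * np.mean(error)       # d(loss)/db
    w -= learning_rate * grad_w       # become a little less wrong
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, final loss={loss:.4f}")
```

That loop, scaled up to millions of parameters and billions of examples, is essentially the whole story the first explanation hides.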

The common-sense follow-up question, of course, is: Is this how humans learn?

The answer is an unequivocal no.

Despite their name, neural networks have little in common with biological neurons. Neurons in the human brain are cells, the fundamental units of biology. Electrical charge notwithstanding, they are awash with strange chemicals, subject to natural death and cancerous growths, and are reliant on oxygenated blood. Neurons in neural networks, though inspired by the biological units, are abstract data types, divisible still into the more commonplace objects of computer science, which exist only in binary representations on a substrate of silicon.

These differences notwithstanding, the name neural network is an optimistic one, a moniker that suggests the secret to true intelligence is the mere amassing and arranging of these violable units. This is a very attractive suggestion indeed. In the world of Moore’s law, where the number of transistors per square inch, hard-drive capacity, and data-transfer speeds are increasing along a near-exponential curve, what answer to general intelligence could be more promising than continuing with the plan we’ve already set in motion?

Just as we humans have chimpanzees and evolution to thank for our present position and cognitive abilities, modern-day computers also have their precursors. The problem, however, is that there have been only minor changes in the areas that matter for intelligence. Similar or derivative programming languages are used, the raw materials remain mostly the same, and the internal architectures are simple refinements of those from bygone eras.

If you take a moment to consider the computers you used before high-speed internet became ubiquitous, you’ll perhaps remember the PCs in libraries that helped you locate books, the large, unattractive machine in the corner of the living room that was used to manage personal finances, or the workstation at your office, connected via a hideous interface to a tightly controlled, company-wide file-sharing network. Although it may seem otherwise, today’s computers have not meaningfully changed from these examples, and they are the very same machines that a large number of academic and industrial scientists are now trying to make intelligent.

Framed in this way, the journey towards artificial intelligence can seem destined for failure. Memories of Clippy, the infamous MS Word helper program, will quickly deflate the optimism of anyone who suggests that such an endeavour may be possible. And yet a series of incredible developments have given us the ability to concoct at least an illusion of intelligence.

The first of these developments is, of course, the algorithms. Neural networks are generalisations of a machine learning algorithm known as the Perceptron. Though the Perceptron was invented by Frank Rosenblatt, it was Marvin Minsky, working with Seymour Papert, who developed an in-depth mathematical understanding of the abilities and limitations of these learning algorithms.
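
For readers who want to see the lineage, here is a hedged sketch in the spirit of Rosenblatt’s Perceptron: a weighted sum, a hard threshold, and a correction applied only when the prediction is wrong. The toy task (the logical AND of two inputs) is chosen purely for illustration.

```python
import numpy as np

# A minimal Perceptron: weighted sum, hard threshold, and an update
# applied only on mistakes. Illustrative sketch with a toy dataset.

def predict(weights, bias, x):
    return 1 if np.dot(weights, x) + bias > 0 else 0

# A linearly separable toy problem: the logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

weights = np.zeros(2)
bias = 0.0

for epoch in range(10):
    for xi, target in zip(X, y):
        error = target - predict(weights, bias, xi)
        weights = weights + error * xi   # correct only when wrong
        bias = bias + error

print([predict(weights, bias, xi) for xi in X])  # -> [0, 0, 0, 1]
```

Minsky and Papert’s point was that this unit, on its own, cannot represent problems as simple as exclusive-or; stacking and connecting such units is where the real questions begin.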

Minsky, often described as a father of artificial intelligence, chose to walk a different path than most world-renowned researchers. As he progressed in his career, he resisted the urge to search for governing principles and a small number of fixed laws. Individual algorithms became less important than the structures of the mind they were intended to simulate, and discovering the overall architecture of how these pieces connected occupied the majority of his career as a research computer scientist. As Minsky saw it, intelligence was a complex phenomenon at the intersection of several fields, each with its own web of interdependencies. To him, then, the search for unifying equations was an unrewarding exercise in hubris.

Today’s computer scientists, or at least the ones held in high esteem by tech companies, are not heeding the full meaning of Minsky’s work and are instead continuing to deploy piecemeal statistical methods rather than integrative approaches.

Even people as well informed as Sam Altman, one of the founders of OpenAI, believe that the laws that govern intelligence, should they be discovered, will be short enough to be printed onto a T-shirt. The reasons someone like Altman may want to hold such a view, despite the evidence to the contrary, are twofold.

Most innocently, this search for governing laws benefits recruitment. To computer science graduate students surviving on stipends and bursaries, what could sound more attractive than a six-figure salary and a shot at solving some of the most complex problems in computer science? The reality of eighty-hour work weeks, shared living in outrageously expensive cities, and stock options turning to dust all notwithstanding.

More problematically, though, this approach also allows companies to use any means necessary to collect vast amounts of data. The less that is expressed by the laws, the more that remains to be learned. And what is a little privacy invasion or exploitative employment compared to the curing of cancer or the rectification of a humanitarian crisis?

And so it is the availability of data that is the second and most crucial factor in the development of modern artificial intelligence. But not for the reasons you’ve commonly heard.

Entrepreneurs and investors like to cast data as a natural resource, that is to say pre-existing and readily available, but it remains true that its production is the work of individuals: at best a by-product of other activities, at worst an exploitative, poorly compensated endeavour.

Data doesn’t naturally accumulate; it results from the dedicated effort of the billions of people online.

Today’s broadband-enabled computers have created a massive network of cheap, unskilled (or worse, unknowing) labour, and this alone is the difference between the computers of old and the ‘intelligent’ systems of today.

The datasets used by researchers are collected, collated, and codified by a slew of poorly paid workers (graduate and undergraduate students, people working on platforms such as Amazon’s Mechanical Turk, and so on) and generate economic returns far beyond the remuneration received by those who compile them.

An object detection task, for example, requires at least 500 annotated images (and preferably many, many more) of a particular object class (a dog, a species of plant, a cancerous lesion) before the algorithms in question are accurate enough for industrial applications. What’s more, these algorithms, once tuned for a particular dataset, are of very little use in other applications, resulting in a near-infinite stream of low-skill, low-value tasks to be completed.
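
To make the unit of that labour concrete, here is a hypothetical sketch of a single object-detection annotation. The record is loosely modelled on common formats such as COCO; the field names, the payment figure, and the timing figure are assumptions for illustration, not any platform’s actual schema.

```python
# A hypothetical annotation record: one person, one image, one box.
# Field names, payment, and timing are illustrative assumptions only.
annotation = {
    "file_name": "dog_0001.jpg",
    "category": "dog",
    "bbox": [204.0, 31.1, 254.6, 324.6],  # x, y, width, height in pixels
    "annotator_id": "worker_0042",
    "time_spent_seconds": 38,
    "payment_usd": 0.02,
}

# Multiply one record by the images needed per class and the classes a
# production system needs, and the scale of the hidden work appears.
images_per_class = 500
boxes_per_image = 3
classes = 1000
print(images_per_class * boxes_per_image * classes, "annotations")
```

Every “intelligent” prediction rests on a pile of records like this one.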

Remunerating such workers fairly may well be economically intractable, but this approach to AI becomes more exploitative when you realise that the existence of these people is hidden from view.

Many Silicon Valley companies will dangle the carrot of six-figure incomes to graduate students, hoping to recruit others to join them on their questionable search for the foundational equations of intelligence, all while obfuscating their reliance on crowds of toiling workers.

It is not Moore’s law that is responsible for this explosion of artificial intelligence, despite what insiders would have you believe; it is the ever-increasing availability of low-cost labour, which has itself been created by the very technologies these workers are helping to build.

This disguising of labour requirements is typical of an industry that systematically misclassifies employees as temporary contractors. After all, including all of these people in a pitch deck would decimate valuations.

The artificial intelligence of today is an illusion. It is the averaging of the labour of many. And we should expect as much, given what we know about computers and the rapid rise of platform-economics businesses. These algorithms have only one use: the statistical codification of niche knowledge, which is liable not only to require large amounts of low-skill work but also, eventually, to make swaths of the middle class redundant.

True artificial intelligence may well bring about utopia, but this lurching half-step will displace workers, consolidate power in large technology companies, and tighten the web’s grip on our productivity and attention long before it delivers scientific or humanitarian breakthroughs.

Treating human brains and neural networks as equivalent may help to raise a funding round but it also allows you to disregard the people who generate these datasets. And this dull, repetitive busywork will become commonplace unless we curb our appetite for statistical learning and the associated venture capital.

The hive-mind won’t get us to artificial intelligence; we need better ideas — ones that prioritise creativity and dynamism and the other traits we consider most human. Our current best ideas reduce people to mechanistic workers and restrict our imagination about what could be.
