Human Machine Interlace
The impact of AI on society is typically posed in terms of how it will replace humans, as pundits draw up lists of jobs that are at risk and those which are ‘AI proof’. While some tasks — and even careers — will be replaced, a more useful way to think about the future is how we will interlace the strengths of machines with those of humans in new ways.
Before he left Google to head up AI at Apple, John Giannandrea made it clear that he had little time for the inflated claims made about his field. Stating his preference for the term ‘machine intelligence’ over artificial intelligence, he told audiences at TechCrunch Disrupt in 2017 that: ‘there’s just a huge amount of unwarranted hype around AI right now… [much of which is] borderline irresponsible’. His aim, he said, was not to match or replace humans but to make ‘machines slightly more intelligent — or slightly less dumb’. This approach does not dismiss the potential of computers to radically alter the way we work. It simply reflects the more nuanced ways in which they will do so.
The more we learn about AI and human psychology, the more we understand how differently people think and machines calculate. Unlike machines, we typically lean on a variety of mental rules of thumb that yield narratively plausible judgments. The psychologist and Nobel laureate Daniel Kahneman calls the human mind ‘a machine for jumping to conclusions’. On the other hand, machines using deep-learning algorithms must be trained with many thousands of photographs to recognize kittens — and even then, they have formed no conceptual understanding of cats. In contrast, even small children can easily learn what a kitten is from just a few examples. To paraphrase Michael Polanyi, the father of the idea of tacit knowledge, ‘We know more than we can code’. Not only do machines not think like humans, they apply their ‘thinking’ to narrow fields, and cannot associate pictures of cats with stories about cats.
One of the fundamental insights AI researchers have made is that tasks humans find hard, machines often find easy — and vice versa. Cognitive scientist Alison Gopnik summarizes what is known as Moravec’s Paradox: ‘At first, we thought that the quintessential preoccupations of the officially smart few, like playing chess or proving theorems — the corridas of nerd machismo — would prove to be hardest for computers.’ As we have discovered, however, these are the very things that computers find easy, whereas understanding what an object is and handling it — something a child can do — is much harder for a computer. The conundrum is, in Gopnik’s words, this: ‘it turns out to be much easier to simulate the reasoning of a highly trained adult expert than to mimic the ordinary learning of every baby’. When IBM’s Deep Blue beat Garry Kasparov at chess in 1997, it didn’t know it was playing chess, never mind know that it had beaten a grandmaster.
AI casts new light on what makes us human, not as distinct from animals, but from machines. This poses the question of what kind of relationship we should seek with smart things. If we can get beyond thinking of them as malevolent, or as possessing superintelligence, and instead see them as having advantages complementary to our own, new possibilities emerge. What if we could combine our human strengths of inspiration, judgment, sense-making and empathy with computer strengths of brawn, repetition, rule-following, data recall and analysis?
The term Artificial Intelligence was coined by the cognitive scientist and inventor John McCarthy in 1955. McCarthy’s mentor was the psychologist and computer scientist JCR ‘Lick’ Licklider, who had graduated with a triple degree in physics, math and psychology in 1937. Rather than speculate on computers achieving human-style intelligence, Licklider argued with remarkable prescience that humans and computers would develop a symbiotic relationship, in which the strengths of one would counterbalance the limitations of the other. Lick said: ‘men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking. … the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.’
Developments in both psychology and AI suggest that Licklider’s vision of human-computer symbiosis is a more productive guide to the future than speculations about ‘super-intelligent’ general AI. As Steve Jobs put it, ‘that’s what a computer is to me … it’s the most remarkable tool that we’ve ever come up with; it’s the equivalent of a bicycle for our minds’. Predictions of a robot apocalypse may grab the headlines (see Rage against the machines, page 30) but AI is just the latest in many phases of automation, each of which has begun with fear and ended with more jobs, economic growth and prosperity. It is worth bearing in mind the words of the philosopher and cognitive scientist Daniel Dennett: ‘The real danger … is not machines that are more intelligent than we are. The real danger is basically clueless machines being ceded authority far beyond their competence.’
More enlightened managers are starting to imagine what AI-enabled work might be like, instead of fearing it. The goal is subtly shifting from building machines that think like humans to designing machines that help humans think and perform better. Most work, after all, comprises a mix of tasks: some better suited to us, and some that could one day be done better by machines. As the capabilities of these machines grow, managers will redesign work to take advantage of the strengths of both their human workers and their automated assistants.
The challenges of designing this hybrid type of work should not be underestimated. Recent fatalities during the test driving of autonomous vehicles are a good example. The tests themselves reveal how difficult it is for humans to focus on monitoring full automation, and suggest that designing heavily automated systems which require only occasional human input is folly. It will take a lot of human ingenuity and experimentation to construct and nurture these new working relationships — but the potential gains in productivity and job satisfaction are vast, as machines take on more mundane tasks.
It’s time to change our perspective. The rise of AI and automation isn’t a conflict. It isn’t a case of ‘man vs. machine’, but of man and machine complementing one another, allowing deeper collaboration. In an age of automation that tends to overestimate computers and underestimate people, let’s embrace the potential of AI, while championing human strengths.
Originally published at Plan.