Two Men from King’s and their Quest for Artificial Intelligence

Matthew G. Johnson
Published in DataSeries
Mar 20, 2019 · 6 min read

Humans have been the most intelligent beings on the planet for the last million years. This may soon change. While the quest for Artificial Intelligence (AI) began over sixty years ago and attracted many of the brightest researchers, it almost ended three decades later. The initial results did not meet expectations and an “AI winter” took hold for the following two decades. The winter might never have ended had it not been for a few researchers with the temerity and tenacity to follow a different path. The story of the quest for AI is as compelling as it is fascinating.

Our story begins over a century ago, on the eve of the first world war, when Alan was born in North London in 1912. His parents met while working in British India, where they remained for most of his childhood, placing him in foster care so that he could pursue his education in England. He developed an early fascination with science, which provided some respite from his challenging circumstances. His passion, however, was far from well received, with his headmaster commenting, “if he is to be solely a scientific specialist, he is wasting his time at a public school.” Nonetheless, his brilliance prevailed and he won a place to study mathematics at King’s College, Cambridge.

His academic star rose quickly and two years before the outbreak of the second world war, he published a seminal paper in which he defined an abstract computational machine that executed logical instructions in sequence, laying the theoretical foundation for the modern computer. During the war he designed the Bombe, an electro-mechanical device built to break German military codes, which played a critical role in the outcome of the conflict. After the war, he took up the quest for AI, writing one of the first chess-playing programs and devising a unique test for assessing the intelligence of computers.
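
To make the idea concrete, here is a minimal Python sketch of my own (an illustration, not anything from Turing’s paper) of such a machine: a tape of symbols, a read/write head, a current state, and a table of rules saying what to write, where to move and which state to enter next. This toy rule table simply flips every binary digit on the tape and then halts.

```python
# A minimal sketch of a Turing machine: a tape, a head, a state,
# and a rule table mapping (state, symbol) -> (write, move, next state).
# This toy machine flips every bit on the tape and then halts.

def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Rule table: in state "start", flip the bit and move right;
# on reaching a blank, halt.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("010110", rules))  # prints 101001_
```

Everything a modern computer does can, in principle, be reduced to steps of this kind, which is why the paper is still regarded as a foundation of computing.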

Inspired by Alan’s work, researchers in the second half of the twentieth century made phenomenal progress in the development of information technology. It became a major industry and ultimately changed the way we live. Nonetheless, by the end of the century progress was slowing and the AI winter had taken hold. Computers had achieved superhuman performance in many domains, but in others, such as vision, they could not match an insect, let alone a human. It was becoming increasingly evident that the quest for AI would have to find a new path.

The path was far from evident at the beginning. As game designers in the 1990s tried to build realistic virtual worlds, they found that central processing units (CPUs) could not keep up: either the image quality was too low or the frame rates were too slow. To create high-quality interactive games, dedicated graphics processing units (GPUs) were developed. GPUs were not built to advance computer science, but in solving a fundamental problem in computer graphics they gave rise to a different architecture, one that favoured speed and efficiency over versatility. By carrying out many similar computations in parallel, GPUs could achieve far better performance. It might have been easy to dismiss GPUs as child’s play, but gaming was serious business and enormous investments drove rapid innovation. It was not evident at the time, but GPUs were to have a pivotal impact on the quest for AI two decades later.
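
To give a feel for the difference in computing styles, here is a small Python sketch of my own. It uses NumPy on an ordinary CPU rather than a real GPU, but the contrast between processing one element at a time and issuing a single bulk operation over all the data captures the spirit of how a GPU keeps thousands of parallel units busy.

```python
import time
import numpy as np

n = 2_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Sequential style: process one element at a time, as a simple CPU program would.
start = time.perf_counter()
total_loop = 0.0
for i in range(n):
    total_loop += a[i] * b[i]
print(f"element by element: {time.perf_counter() - start:.3f} s")

# Parallel style: express the whole computation as one bulk operation.
# This is the shape of work a GPU excels at - the same simple operation
# applied to many data elements at once.
start = time.perf_counter()
total_bulk = float(np.dot(a, b))
print(f"one bulk operation: {time.perf_counter() - start:.3f} s")

assert abs(total_loop - total_bulk) < 1e-3 * abs(total_bulk)
```

On typical hardware the bulk version runs orders of magnitude faster, and on an actual GPU (for example via CuPy or PyTorch) the same style of code is what unlocks massive parallelism.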

In 1947, as Alan was publishing his first papers on AI, and just seven miles from his birthplace, Geoffrey began life in South London. He was born into an academic family and, unlike Alan, his scientific interests were warmly encouraged. In 1967, Geoffrey followed in Alan’s footsteps through the gates of King’s College, Cambridge and on to the quest for AI. However, inspired by his studies in experimental psychology, he was to take a fundamentally different path. Instead of focussing on logic as an output of intelligence, he focussed on the human brain as the source of intelligence.

By the time he moved to Edinburgh in 1972 to start his doctoral studies, the computer revolution was in full force. Geoffrey developed his research in vision, parallel processing, learning and artificial neural networks. However, unlike Alan’s, his early career was far from exceptional. After Edinburgh he took up posts in Sussex, San Diego and CMU before finally settling in Toronto. While he continued to produce groundbreaking work for over four decades, its significance remained largely unrecognised. That is, until 2012, when he was 65 and two of his students linked his research in artificial neural networks and vision with the power of a GPU for the ImageNet competition. Ironically, they took a computing component optimised for producing images from symbolic representations and reversed the direction, producing symbolic representations from images. Their entry, AlexNet, beat every other entrant by an enormous margin. Like the first flower of spring, their win marked the end of the AI winter that had descended two decades earlier.

The life of Alan Turing ended tragically in 1954, shortly before his forty-second birthday. His brilliance nonetheless created an enduring legacy which extends far beyond his short years. The abstract computing framework he defined in 1937, the Turing Machine, continues to be cited as a foundation of modern computing. The imitation game he devised in 1950 for assessing AI, now known as the Turing Test, remains the gold standard to this day. Meanwhile, at the age of seventy-one, Geoffrey Hinton is seen as the godfather of Deep Learning and continues to publish exciting new research.

Alan Mathison Turing OBE FRS (1912–1954)

For Turing and those who followed him, machines were to be programmed by sequences of mathematical statements processed in order. This powerful idea fundamentally changed the way we live, but it was ultimately insufficient on its own to complete the quest for AI. By constraining computers to think in the way we communicate, we had limited their potential. We had nonetheless developed electronic circuits that were miraculously fast and small, and amassed vast quantities of electronic data. Both would prove essential in supporting the next stage of the quest.

Geoffrey Everest Hinton CC FRS FRSC (1947– )

For Hinton and those who worked with him, machines were not to be programmed but, like humans, were to use neural networks to learn from experience. By migrating from CPUs inspired by language and logic to GPUs inspired by vision and play, they had tapped into a more elemental and powerful source of intelligence. Focussing on the structure of the brain as the source of human intelligence, rather than on language and logic as an output of human intelligence, he and his students detonated a veritable “neuron bomb” whose shockwave continues to rip apart the world of classical computing and will most certainly change the course of history.
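
As a concrete, deliberately tiny illustration of that difference, here is a sketch of my own in Python (not anything from Hinton’s papers): a single artificial neuron that is never told the rule it should compute. It only sees examples of the logical AND function and nudges its weights to reduce its error, which is the essence of learning from experience rather than following a programme.

```python
import numpy as np

# Training examples for logical AND: the neuron is given only
# input/output pairs, never the rule itself.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # two weights, initially random
b = 0.0                  # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Learn from experience: predict, measure the error, and adjust the
# weights a little in the direction that reduces it, over and over.
learning_rate = 2.0
for _ in range(10_000):
    pred = sigmoid(X @ w + b)
    error = pred - y
    w -= learning_rate * (X.T @ error) / len(y)
    b -= learning_rate * error.mean()

print(np.round(sigmoid(X @ w + b), 2))  # probabilities close to [0, 0, 0, 1]
```

Stack thousands of such neurons in layers, train them on millions of labelled images rather than four rows of a truth table, and run the arithmetic on GPUs, and you arrive at systems like AlexNet.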

References

[1] Turing, A.M. (1936). “On Computable Numbers, with an Application to the Entscheidungsproblem”.
https://londmathsoc.onlinelibrary.wiley.com/doi/abs/10.1112/plms/s2-42.1.230

[2] Turing, A.M. (1950). “Computing Machinery and Intelligence”.
https://www.csee.umbc.edu/courses/471/papers/turing.pdf

[3] Krizhevsky, A.; Sutskever, I.; Hinton, G.E. (2012). “ImageNet Classification with Deep Convolutional Neural Networks”.
https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf

Acknowledgements

Many thanks to Peter, Hardeep, Dmitry, Ravi and Donald for your thoughtful reviews and encouragement.

I am an informatician, fine arts photographer and writer who is fascinated by AI, dance and all things creative. https://photo.mgj.org