The Deep History of Deep Learning… Part III
War Games, Exclusive-Ors, and Winter is Coming…
When we last left our discussion of the history of deep learning, we had arrived at Checkers (or Draughts for you Brits out there) — that ever-exciting board game made far more interesting by Arthur Samuel. Machines were learning… or at least collecting useful feedback and optimizing.
In truth, and as is often the case, Samuel was neither alone nor even first in his thinking. Ultimately, he was the most successful, but such notables as Alan Turing (mentioned in the last article) and John Von Neumann (whom we had some fun with here) were just as busy playing games. Both died a few years before Samuel’s success, but their work on game theory, computing, and information theory was critical to our story.
Neurons & Perceptrons
No, we are not about to discuss Transformers; we are about to discuss transfer functions. Along with threshold logic, these topics introduced the world to neural networks. They were first popularized in the 1940s (yes — we took a small step back) and were introduced by McCulloch and Pitts. If you will excuse the oversimplification, this logic created functions that accepted input but only produced output once a certain threshold was reached. Imagine a teapot or one of those giant water buckets at a water park.
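If you will indulge a small detour into code, here is a minimal sketch of that threshold idea (the function name and the AND-gate example are my illustration, not McCulloch and Pitts'): the neuron stays silent until its inputs fill the bucket, then it tips.

```python
def mcculloch_pitts_neuron(inputs, threshold):
    """Fire (output 1) only once the summed input reaches the threshold;
    otherwise stay silent (output 0) -- the water bucket tipping over."""
    return 1 if sum(inputs) >= threshold else 0

# With a threshold of 2, two binary inputs behave like an AND gate:
print(mcculloch_pitts_neuron([1, 1], threshold=2))  # -> 1
print(mcculloch_pitts_neuron([1, 0], threshold=2))  # -> 0
```

Lower the threshold to 1 and the same unit becomes an OR gate, which is the whole charm of threshold logic: simple units, composable behavior.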
Soon came Rosenblatt. His work developed in the 1950s, not so far from Samuel's. Rosenblatt was at Cornell, once again funded by the US Navy (see Tainter — PT II), and working on image recognition (not quite optics, but pretty close). His Perceptron is regarded as the first Artificial Neural Network (ANN). Again simplifying — the perceptron was designed to evaluate two-dimensional inputs and divide them into two distinct regions.
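A rough sketch of what that looks like in practice (the training loop, learning rate, and toy OR-style data set are my own illustration, not Rosenblatt's original formulation): the perceptron nudges a line around the plane until the two classes land on opposite sides of it.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: on each misclassified point,
    nudge the weights and bias toward the correct answer."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:  # label is 0 or 1
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# A linearly separable toy set (logical OR): one straight line splits it.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)

correct = all(
    (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == label
    for (x1, x2), label in data
)
print(correct)  # -> True
```

For data a single line can split, the rule is guaranteed to converge. Hold that thought, because it matters for what comes next.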
From his work, Ivakhnenko and Lapa would originate Deep Learning. It was essentially just layered perceptrons. And so, after a very long history, Deep Learning was born in 1965. By 1971, it ran eight layers deep. Only that was two years after it had already died.
Enter the Contrarians
In 1969, Jobs & Wozniak… oops sorry… I mean Minsky & Papert stepped onto the scene. They really sort of stomped onto it. The perceptrons, too — stomp! Their book, Perceptrons, basically ended the party.
They declared connectionism (the broader term that perceptrons fell under) dead on arrival. One issue: the computing world of the late '60s wasn't capable of delivering the necessary parallel processing. That would eventually remedy itself. The larger issue: these networks, at least the single-layer variety, were incapable of such necessary functions as the exclusive or (XOR). If they couldn't replicate such basic logical functions, what good were they?
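You can see the problem for yourself. A single linear threshold unit draws one straight line through the plane, and no straight line puts the XOR points (0,1) and (1,0) on one side with (0,0) and (1,1) on the other. A quick brute-force sketch (the random search is my illustration; Minsky and Papert's argument was analytical, not a search):

```python
import itertools
import random

def predicts_xor(w1, w2, b):
    """Check whether a single linear threshold unit with these
    parameters reproduces XOR on all four binary inputs."""
    for x1, x2 in itertools.product([0, 1], repeat=2):
        pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        if pred != (x1 ^ x2):
            return False
    return True

# Try many random parameter settings: none reproduce XOR, because the
# four points are not linearly separable.
random.seed(0)
found = any(
    predicts_xor(random.uniform(-5, 5),
                 random.uniform(-5, 5),
                 random.uniform(-5, 5))
    for _ in range(100_000)
)
print(found)  # -> False
```

The fix, of course, is a second layer of units, which is exactly what the deep networks of Ivakhnenko and Lapa had. But that subtlety got lost in the stomping.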
In 1969, Artificial Intelligence died. It lasted ten years longer than ‘the music’, which had died in 1959. Well, not really… just figuratively. The death of AI was much more real, at least for a while. The AI Winter had begun!
Thanks for reading. Part IV… coming soon.