John Launchbury of DARPA has an excellent video that I recommend everyone watch (viewing just the slides will give you the wrong impression of the content). The video distills the current state of AI into 3 waves:
1. Handcrafted Knowledge — Where programmers craft sets of rules to represent knowledge in well-defined domains.
2. Statistical Learning — Where programmers create statistical models for specific problem domains and train them on big data.
3. Contextual Adaptation — Where systems construct contextual explanatory models for classes of real-world phenomena.
It’s a bit of a simplified presentation because it lumps all of machine learning, Bayesian methods, and Deep Learning into a single category. There are many more approaches to AI that don’t fit within DARPA’s 3 waves.
Pedro Domingos, author of “The Master Algorithm”, talks about the 5 Tribes of AI: Connectionists, Symbolists, Evolutionaries, Bayesians and Analogizers (I discuss something like 17 tribes of AI).
But let’s give DARPA the luxury of simplifying their presentation of the current state of the field.
They do cover some of the known problems of Deep Learning, such as adversarial examples. Unfortunately, this is the point where the press took their presentation and ran with it, creating titles like “Understanding the Limits of Deep Learning” (VentureBeat).
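To make the adversarial-example flaw concrete, here is a minimal sketch of my own (not taken from the video) using a toy linear classifier. The weights and input are made-up numbers; the point is the FGSM-style idea that a small perturbation aligned with the gradient can flip a model’s decision:

```python
import numpy as np

# Toy linear "model": score = w . x, classify as positive if score > 0.
# Hypothetical weights and input, chosen only for illustration.
w = np.array([0.5, -0.3, 0.8])
x = np.array([1.0, 1.0, -0.5])

score = w @ x  # 0.5 - 0.3 - 0.4 = -0.2, so the model says "negative"

# For a linear score, the gradient with respect to x is just w.
# Nudge each input feature a little in the sign of the gradient.
eps = 0.3
x_adv = x + eps * np.sign(w)

adv_score = w @ x_adv  # the score rises by eps * sum(|w|) = 0.48, to 0.28
print(score, adv_score)  # a tiny, targeted change flips the classification
```

The perturbation is small in every coordinate, yet the decision flips, which is why such inputs are so hard to guard against with the model alone.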
The key slide of the presentation, though, is this:
DARPA’s third wave model takes a lot of inspiration from some of their previously announced research initiatives, such as Explanatory Interfaces and Meta-Learning. I write about these two in previous articles (see: “The Only Way to Make Deep Learning Interpretable is to have it Explain Itself” and “The Meta Model and Meta Meta Model of Deep Learning”). DARPA’s presentation nails it by highlighting what’s going on in current state-of-the-art research. If anyone is seeking a short explanation of what’s going on in the field, then this is the video to watch.
The main problem that we face today is bridging the semantic gap between what I would call Artificial Intuition and rational (symbolic) machines. Deep Learning systems have flaws, just as our own intuitions have flaws. When you have cognitive processes with limits on memory and time, in a context of information overload and lack of meaning, then you are bound to have flaws. These flaws, however, can be caught by logical systems. That’s why bridging the gap can have some profound effects.
One reason that the semantic gap wasn’t addressed as vigorously before was that Connectionist systems (i.e. Artificial Neural Networks) did not historically work well. With the advent of Deep Learning, there’s a new emphasis on finding a solution that melds Symbolic and Connectionist systems. That’s where a lot of research is chipping away at the problem. The exciting part is that the researchers appear to be making outstanding progress!
Just to recap, here’s the roadmap that I have (explained here):
1. Classification Only (C)
2. Classification with Memory (CM)
3. Classification with Knowledge (CK) ← This is what DARPA is talking about.
4. Classification with Imperfect Knowledge (CIK)
5. Collaborative Classification with Imperfect Knowledge (CCIK)
One disclaimer, though, regarding this roadmap: it’s a Deep Learning roadmap and does not cover developments in other AI fields. One point I do want to make here, however, is that achieving the third wave is likely to be an evolution of how we do Deep Learning.
I suspect the solution to this problem will have something to do with “Late Binding”. It is very impressive that DARPA chose such a precise phrase to describe this (i.e. Contextual Adaptation), which does imply adapting behavior depending on context.
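As an analogy of my own (not from DARPA’s material), here is a minimal sketch of what “late binding” means in the programming sense: the concrete behavior is not fixed when the code is written, but selected at call time from the observed context. The handler names and contexts below are hypothetical:

```python
# Two hypothetical behaviors a system might choose between.
def formal_reply(msg):
    return "Dear user, " + msg

def casual_reply(msg):
    return "hey! " + msg

# Registry mapping a detected context to a behavior.
HANDLERS = {"formal": formal_reply, "casual": casual_reply}

def respond(msg, context):
    # The binding to a concrete behavior happens *late*, at call time,
    # based on the current context -- not early, at definition time.
    handler = HANDLERS.get(context, casual_reply)
    return handler(msg)

print(respond("your order shipped", "formal"))
print(respond("your order shipped", "casual"))
```

The analogy to Contextual Adaptation is loose but useful: the system’s response is parameterized by a model of the situation rather than hard-coded in advance.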
A very important question to ask is: “How far away are we from bridging the semantic gap?” Given the brisk pace of Deep Learning development, consider for a moment that some of the techniques mentioned in the video (i.e. few-shot learning and generative models) have only been refined within the last year or so. I would not be surprised if the gap were bridged within 2–3 years! That’s just how crazy the developments in Deep Learning are.