Hybrid AI and why we never want a system that says “1+1 probably equals 2”

With OpenAI’s release of its full “general purpose language model”, GPT-2 1558M, fun creative writing tools like Adam King’s Talk To Transformer are more “realistic” than ever. But the technology has its limits.
Deep learning is probabilistic technology. That is, it predicts with a certain level of confidence, never with certainty. This is useful when looking for patterns and suggestions, but less useful when a decision has to be exactly right.
The simplest example is “1 + 1 = 2”. As language generation improves, it’s tempting to think that because a model got a date wrong or got numbers wrong, “it’s a simple fix”. But the way current deep learning systems operate, they have no intrinsic concept of “number”: the numbers in current systems are no different to any other “word” (or token).
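To make the contrast concrete, here is a minimal sketch (the probabilities are invented for illustration): to a language model, “2” is just the most likely next token after “1 + 1 =”, while ordinary arithmetic has no notion of confidence at all.

```python
# Hypothetical next-token distribution a language model might assign
# after the prompt "1 + 1 =". The digits are tokens like any other word.
next_token_probs = {"2": 0.91, "3": 0.04, "11": 0.03, "two": 0.02}

prediction = max(next_token_probs, key=next_token_probs.get)
confidence = next_token_probs[prediction]
print(f"model says {prediction!r} with {confidence:.0%} confidence")
# Probable, not guaranteed -- the model has no concept of "number".

# A rule (plain arithmetic) has a hard edge: it is simply correct.
assert 1 + 1 == 2
```

Even a 91% confident answer is the wrong kind of answer for arithmetic.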
Rules have their place when you need a hard edge. In the same manner as above, it’s unlikely you want a system that determines the user is “probably over 13” when they explicitly said they are, nor do you want a system that determines they “probably smoke” when the user specifically says they do.
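A minimal sketch of that hard edge (field names are hypothetical): when the user has explicitly declared a fact, a rule takes precedence, and the probabilistic estimate is only a fallback.

```python
def is_over_13(profile: dict, model_estimate: float) -> bool:
    """Rule first, model second (illustrative only)."""
    # Hard edge: trust an explicit declaration outright.
    if "age" in profile:
        return profile["age"] > 13
    # Only fall back to the model's probability when nothing was declared.
    return model_estimate > 0.5

# The declaration wins even when the model disagrees.
print(is_over_13({"age": 15}, model_estimate=0.2))  # True
```

The point is the precedence, not the threshold: no amount of model confidence should override what the user explicitly stated.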
Rules are a form of machine reasoning, which is still the subject of early AI research. Many argue that while it may conceivably be possible to build practical reasoning systems by example (deep learning), it is sometimes vastly more effective to have some rules in there too. I think the real answer will be a mix, so expect to see a growing number of hybrid architectures.
My current favourite hybrid AI system is Snorkel: instead of experts labelling examples, they write rules, and even if people give conflicting rules, Snorkel uses machine learning to decide when to apply them.
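The flavour of that idea can be sketched in a few lines of plain Python (this is not the real Snorkel API): several hand-written rules label the same text, each free to abstain, and some aggregator resolves their conflicts. Here a simple majority vote stands in for Snorkel’s learned label model.

```python
# Simplified illustration of Snorkel-style labelling functions.
# Rule names and the spam task are invented for this example.
from collections import Counter

ABSTAIN, NOT_SPAM, SPAM = -1, 0, 1

def lf_contains_free(text):
    # One "expert" rule: promotional wording suggests spam.
    return SPAM if "free" in text.lower() else ABSTAIN

def lf_short_message(text):
    # Another rule, which may conflict with the first.
    return NOT_SPAM if len(text) < 20 else ABSTAIN

def lf_has_link(text):
    return SPAM if "http" in text else ABSTAIN

LFS = [lf_contains_free, lf_short_message, lf_has_link]

def combine(text):
    """Majority vote over non-abstaining rules (Snorkel instead
    learns how much to trust each rule from their agreements)."""
    votes = [lf(text) for lf in LFS if lf(text) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]

print(combine("Click here for FREE prizes http://spam.example"))  # 1 (SPAM)
```

The machine-learning step is what makes Snorkel a hybrid: the rules provide the hard edges, and a model learns how reliable each rule is and how to weigh them against each other.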

Expect to see more tools like Snorkel appear that benefit greatly from deep learning, but are not solely reliant on it.
