Beyond the symbolic vs non-symbolic AI debate
There has recently been a resurgence of interest in the old debate of symbolic vs non-symbolic AI. The latest article by Gary Marcus highlights some successes on the symbolic side, points out some shortcomings of current deep learning approaches, and advocates for a hybrid approach. I am myself a supporter of a hybrid approach, trying to combine the strengths of deep learning with symbolic algorithmic methods, but I would not frame the debate along the symbol/non-symbol axis. As Marcus himself has pointed out for some time already, most modern research on deep network architectures is in fact already dealing with some form of symbols, wrapped in the deep learning jargon of “embeddings” or “disentangled latent spaces”. Whenever one talks of some form of orthogonality in description spaces, this is in fact related to the notion of symbol, which can be opposed to entangled, irreducible descriptions.
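To make the link between disentanglement and symbols more concrete, here is a minimal toy sketch (not from the article; the generative factors, dimensions, and numpy setup are my own illustrative assumptions). It contrasts a disentangled code, where each latent dimension tracks a single factor of variation and can thus be read off almost like a discrete symbol, with an entangled code obtained by rotating the same representation, where no single dimension is interpretable on its own.

```python
# Toy illustration: disentangled vs entangled codes for the same underlying factors.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical generative factors: shape (0..2), color (0..4), size (0..1)
factors = np.stack([rng.integers(0, 3, 1000),
                    rng.integers(0, 5, 1000),
                    rng.integers(0, 2, 1000)], axis=1).astype(float)

# "Disentangled" code: one coordinate per factor (identity map plus small noise),
# so each dimension behaves like a readable symbol for one factor.
disentangled = factors + 0.01 * rng.normal(size=factors.shape)

# "Entangled" code: the same information, but mixed across dimensions
# by a random rotation; no single coordinate stands for one factor anymore.
rotation, _ = np.linalg.qr(rng.normal(size=(3, 3)))
entangled = disentangled @ rotation

def factor_dim_correlation(codes, factors):
    """For each factor, the best |correlation| achieved by any single code dimension."""
    return [max(abs(np.corrcoef(codes[:, j], factors[:, i])[0, 1])
                for j in range(codes.shape[1]))
            for i in range(factors.shape[1])]

print("disentangled:", factor_dim_correlation(disentangled, factors))  # close to 1.0 for each factor
print("entangled:   ", factor_dim_correlation(entangled, factors))     # typically noticeably below 1.0
```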
There is actually something deeper at play here than this surface difference about symbols, and it has to do with the learning experience itself. There are mainly two axes along which I see a spectrum of variation, and which should help us think about the right way to go beyond the current dominant paradigm:
- The way new information internally modifies the system, in other words: the learning mechanism itself.