Neuro-symbolic/hybrid AI or just Deep Learning towards AGI?

Jordi Linares
2 min read · Aug 22, 2022


Gary Marcus is perhaps the strongest advocate of building more innate capabilities into AI, or of hybridizing it with symbolic techniques (more aligned with classic AI, or GOFAI), in order to achieve AGI. He doesn’t think deep learning alone is enough. This approach is also known as neuro-symbolic or hybrid AI.

Yann LeCun, by contrast, favors the simpler approach. His position has always been that systems should learn as much as possible on their own, avoiding, as far as possible, higher-level constraints that regulate or control AI models.

The debate has been open since the explosion of neural networks, and I recommend watching Marcus and LeCun confront each other to get more context (a fantastic face-to-face here).

Will deep learning alone be able to solve all the challenges on the path to AGI, as Geoffrey Hinton has also claimed here?

Currently, I would say that all deployed AI solutions hybridize in one way or another. Any implemented AI solution is almost certainly managed by a piece of traditional software, be it the user interface, some high-level logic, small details and adjustments, or complex algorithms that supply the necessary deductive parts.

As we well know, induction is the essence of machine learning and deduction is the essence of traditional software. Both are combined in almost any current solution.
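To make this induction/deduction mix concrete, here is a minimal Python sketch. The loan-approval scenario, the feature names, and the thresholds are all hypothetical, chosen only to illustrate a learned model wrapped by hand-written rules:

```python
# Hybrid pattern: a learned model (induction) wrapped by hard-coded
# business rules (deduction). Scenario and numbers are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Induction: fit a model on (synthetic) historical data.
X = np.array([[25, 20_000], [40, 80_000], [35, 50_000], [22, 15_000]])  # [age, income]
y = np.array([0, 1, 1, 0])  # 1 = repaid loan, 0 = defaulted
model = LogisticRegression().fit(X, y)

def approve_loan(age: int, income: int, amount: int) -> bool:
    # Deduction: rules that no learned score is allowed to override.
    if age < 18:              # legal constraint
        return False
    if amount > 10 * income:  # policy constraint
        return False
    # Induction: otherwise defer to the model's probability estimate.
    p_repay = model.predict_proba([[age, income]])[0, 1]
    return p_repay > 0.5

print(approve_loan(age=30, income=40_000, amount=20_000))
```

The deductive shell is ordinary software; the inductive core is the trained model. Almost every production system mixes the two in some proportion.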

But this is not the debate. The debate is whether or not deep learning will be able to handle problems now addressed in a symbolic, deductive way, such as reasoning and planning (reinforcement learning in particular). The debate is whether deep learning will be able to manipulate symbols, perform symbolic reasoning, and support the much-discussed ‘common sense’ that allows humans to bring solutions to so many different problems.

Each of us can hold a particular view at this point, since there is not yet a definitive answer to the problem.

LeCun has outlined his proposal for how to approach AGI in this paper here (a must-read), and also here in a more accessible piece.

Gary Marcus discusses it here.

Perhaps my admiration for Yann LeCun’s work does not allow me to be objective, but I find his reasoning much more solid and his work much more formal and grounded than Gary Marcus’s, which strikes me as blurrier and shorter on clear solutions.

Gary Marcus gives me the feeling of defending the easier position: always in comfortable, continuous opposition, even ridiculing deep learning in tweets with absurd and totally unnecessary examples (whose possible solutions are often obvious).

Is that necessary, Marcus? Really?

Gary Marcus also shows an irritating obsession with LeCun, whom he even reproaches for not citing him in some of his works. And his constant references to his own books, like ‘The Algebraic Mind’, ‘Rebooting AI’, etc., strike me as pure marketing and an excess of ego.

Time, we hope, will put everyone in their place and settle this debate.


Jordi Linares

Spain AI — ALCOI — Leader of VertexLit group at Valencian Research Institute for Artificial Intelligence — Professor at Universitat Politècnica de València