“Neuro-Symbolic” AI

Nandhini Swaminathan
Published in The Research Nest
3 min read · Mar 24, 2022


Where deep learning meets traditional rule-based coding

In 2018, Uber’s self-driving car struck and fatally injured a pedestrian who was pushing her bike across the street. It was later discovered that the car’s core system had been trained on countless hours of driving data, but that data included neither jaywalking pedestrians nor pedestrians pushing a bike across a street. The car’s system did not classify the victim as a pedestrian and thus did not stop.

This incident highlighted one of the most fundamental problems in deep learning: it is impossible to train a deep learning system on every possible scenario. Even if a system has been trained to handle 99.9% of cases, there will always be the 0.1% where the algorithm has no idea what is happening, and the results can be disastrous.

This incident was proof to the AI community that pattern recognition alone could not be the source of all intelligent behavior. When humans encounter an unfamiliar situation, we use our experience to guide us. Even simple bits of knowledge such as “fire is dangerous” make a big difference. A self-driving car must have the common sense to know what a bike is and why seeing one in the middle of a highway would be unusual.

Gary Marcus, the former CEO of Geometric Intelligence, suspects that this sort of intelligence can be achieved through a “hybrid” approach — neural nets to learn patterns, guided by old-fashioned, hand-coded logic. This approach is already being attempted by companies like IBM…
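To make the hybrid idea concrete, here is a minimal sketch of what such a layered system might look like. Everything here is hypothetical — the function names, the confidence threshold, and the stubbed-in “neural” classifier are illustrative assumptions, not code from any real self-driving stack:

```python
# Hypothetical neuro-symbolic decision layer (illustrative only).
# The "neural" half is stubbed as a function returning class
# probabilities; the "symbolic" half is hand-coded safety logic.

CONFIDENCE_THRESHOLD = 0.9  # assumed tuning parameter

def neural_classifier(sensor_input):
    """Stand-in for a trained perception network: returns a
    (label, confidence) pair for an object detected ahead."""
    known = {"car": 0.97, "pedestrian": 0.95}  # fake learned classes
    label = sensor_input if sensor_input in known else "unknown"
    return label, known.get(label, 0.4)

def symbolic_rules(label, confidence, on_road):
    """Hand-coded logic layered over the net's output.
    Rule: any unknown or low-confidence object on the road
    means 'brake', no matter what the net predicts."""
    if on_road and (label == "unknown" or confidence < CONFIDENCE_THRESHOLD):
        return "brake"
    if on_road and label == "pedestrian":
        return "brake"
    return "proceed"

def decide(sensor_input, on_road=True):
    label, conf = neural_classifier(sensor_input)
    return symbolic_rules(label, conf, on_road)
```

The point of the symbolic layer is exactly the Uber failure mode: even when the classifier has never seen a person pushing a bike, the rule “unknown object on the road → brake” produces a safe default instead of silence.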
