Write for Towards NeSy

Share Neural-Symbolic concepts, ideas, and tools one at a time

Jesse Jing
Towards NeSy
5 min read · Mar 14, 2023



Menu for new Towards NeSy authors:

  1. What is Neural-Symbolic AI?
  2. Why become a contributor?
  3. Guidelines
  4. FAQs

What is Neural-Symbolic AI (in my opinion)?

Industry and academia are deeply divided right now amid the hype around ChatGPT. On one hand, some top scholars have stated that the remaining shortcomings of NNs are obvious and that we should NOT massively deploy AI to the general public [1][2]. On the other hand, venture capital in tech hubs like Silicon Valley, Bangalore, and Zhongguancun has poured into imperfect AI startups on the basis of falsely promised futures, at the cost of misinformation and structural bias spreading uncontrollably at an unprecedented scale.

Despite the remarkable performance of deep neural networks (DNNs) on many tasks, they still face several limitations and challenges:

  1. Lack of Interpretability: Deep neural networks are often referred to as black boxes because it can be difficult to understand how they arrive at their decisions. This lack of interpretability can be problematic, especially in applications where transparency and accountability are important.
  2. Overfitting: Overfitting occurs when a model learns to fit the training data too well and fails to generalize to new data. This can happen when the model is too complex or when there is not enough training data.
  3. Limited Robustness: DNNs can be vulnerable to adversarial attacks, where an attacker intentionally perturbs input data to mislead the model into making incorrect predictions (a minimal sketch follows this list).
  4. Data Dependence: DNNs are highly dependent on the quality and quantity of training data. If the training data is biased, incomplete, or unrepresentative, the model may fail to generalize to new data.
  5. Limited Transfer Learning: Transfer learning, which involves using a pre-trained model to solve a new task, is a popular technique for reducing the amount of training data required. However, transfer learning is limited by the similarity between the pre-trained and target tasks.
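
To make point 3 concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch. The `model`, `x` (an image batch scaled to [0, 1]), and `y` (integer labels) are hypothetical placeholders for illustration only.

```python
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial inputs with the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss w.r.t. the true labels
    loss.backward()                       # gradients flow back to x
    # Nudge each pixel in the direction that increases the loss most.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()     # stay in the valid pixel range
```

A perturbation this small is usually imperceptible to humans, yet it can flip the predictions of an otherwise accurate classifier.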

Generally, NeSy approaches try to introduce reasoning ability into current DNNs, either as a single module or in more sophisticated ways. Symbolic AI relies on logic and rules to manipulate symbols and make inferences; NeSy leverages a long list of traditional AI methods, framed in logical languages, while addressing the problems above. A minimal sketch of what such an integration can look like follows.
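
The sketch below is in the spirit of the MNIST-addition task popularized by DeepProbLog [9], though not its actual API: a neural classifier outputs a distribution over digits for each image, and a symbolic rule, plain addition, reasons over those distributions. The `digit_net` mentioned in the comment is a hypothetical softmax classifier.

```python
def prob_sum(probs_a, probs_b, target):
    """P(a + b == target), marginalizing over all symbolic digit pairs.

    probs_a, probs_b: length-10 probability vectors (e.g., tensors from
    a neural classifier's softmax). The rule a + b == target is the
    symbolic side; the probabilities are the sub-symbolic side.
    """
    total = 0.0
    for a in range(10):
        for b in range(10):
            if a + b == target:           # the symbolic constraint
                total = total + probs_a[a] * probs_b[b]
    return total

# Only the sum is labeled, never the individual digits:
# loss = -torch.log(prob_sum(digit_net(img1), digit_net(img2), label))
```

Because the probability of the sum is differentiable in the network's outputs, the symbolic rule can supervise the perception network end to end.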

NeSy brings huge challenges as well, which vary across roughly three dimensions.

  1. The first dimension concerns learning algorithms for end-to-end training of the mixed system. Given a new component with reasoning abilities, how do we adapt the popular gradient-based optimization methods for NNs to a hybrid system that contains both a symbolic module and sub-symbolic NNs? According to Dai et al., new challenges emerge because the sub-symbolic module lacks direct supervision and the symbolic module lacks accurate primitives to reason upon [3]. Other efforts along this line are [7][8][9]. (A minimal sketch of one such training signal follows this list.)
  2. The second dimension relates to inference algorithms and tools. NeSy is born to deliver real-world solutions, such as autonomous driving systems and cross-disciplinary scientific discovery. We need fast inference engines that handle a large class of languages/models, plus interfaces that let domain experts step in easily. Some of the efforts pushing this frontier are [4][5][6].
  3. The third dimension is general theoretical advancement bridging the gap between connectionist models and symbolic methods. This is an overarching set of novel ways to integrate the two fields or to address central AI questions such as the symbol grounding problem, or intersections between Philosophy, Cognitive Science, and AI. We are still a long way from reaching a consensus on where the field is headed. Some of the efforts are [10][11].
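
As a taste of the first dimension, here is a minimal sketch of one common trick: relaxing a logical rule into a differentiable penalty via a product-logic (Reichenbach) implication, so the rule can backpropagate into the sub-symbolic module. The rule bird(x) → flies(x) and the predicted probabilities are illustrative assumptions, not the method of any specific paper cited above.

```python
import torch

def soft_implies(p, q):
    """Fuzzy relaxation of p -> q (Reichenbach: 1 - p + p*q)."""
    return 1.0 - p + p * q

def rule_loss(p_bird, p_flies):
    """Penalize violations of the soft rule bird(x) -> flies(x).

    p_bird, p_flies: probabilities in [0, 1] predicted by a neural net.
    Maximizing the rule's truth value = minimizing its negative log.
    """
    truth = soft_implies(p_bird, p_flies)
    return -torch.log(truth + 1e-8)

# Typical use: add the rule as a soft constraint next to the usual loss.
# loss = supervised_loss + 0.1 * rule_loss(p_bird, p_flies)
```

The open problems above show up immediately even in a toy sketch like this: which relaxation to pick, how to weight the rule, and how to scale beyond toy constraints.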

Hence, anything related to learning/inference algorithms that improve interpretability and tractability, philosophy- or cognitive-science-inspired models, or software/tools/languages that facilitate pure logical reasoning for a specific application is ALL welcome!

Why become a contributor?

Reasons you should contribute to Towards NeSy are:

  1. Writing regularly helps build knowledge structures in your mind. For early-stage Ph.D. students like me, reading papers without writing down questions or summaries, or without reproducing experiments, is like traveling without taking pictures: eventually, you forget everything. Writing down your ideas and doubts and posting them here positions you in a larger community and can open up chances to communicate with authors directly.
  2. For researchers, you can promote your ideas/open-source tools to the general audience in a much friendlier format. Let’s lower the learning curve for everybody.
  3. For industry practitioners, post your questions, pain points, and solutions on Towards NeSy and build a close relationship with us. It turns your unknown unknowns into known unknowns, or known unknowns into known knowns, which could save you dozens of hours searching the literature. It also helps shape NeSy research around practical problems.

We feature our content on the publication’s pages and social media like LinkedIn and Twitter. Most importantly, you own your work. After publication, you may delete it or revise it without our permission.

Guidelines

Rules of thumb:

  1. Base your blogs on a peer-reviewed paper or self-tested GitHub repo;
  2. Position your focus among the three dimensions of Neural-Symbolic research mentioned above;
  3. Convey your key points by illustrating them with compelling visuals. Make your own charts, or cite references clearly if they come from other sources.

A stricter standard is the TDS writing guide [12].

References

[1] G. Marcus, The long shadow of GPT (2023), Gary Marcus's Substack

[2] N. Chomsky, The False Promise of ChatGPT (2023), NYTimes

[3] W. Dai, S. Muggleton, Abductive Knowledge Induction From Raw Data (2021), IJCAI

[4] D. Aditya, K. Mukherji, S. Balasubramanian, A. Chaudhary, P. Shakarian, PyReason: Software for Open World Temporal Logic (2023), AAAI-MAKE

[5] A. Vergari, Y. Choi, A. Liu, S. Teso, G. Van den Broeck, A Compositional Atlas of Tractable Circuit Operations for Probabilistic Inference (2021), NeurIPS

[6] E. van Krieken, T. Thanapalasingam, J. Tomczak, F. van Harmelen, A. ten Teije, A-NESI: A Scalable Approximate Method for Probabilistic Neurosymbolic Inference (2022), arXiv

[7] R. Evans, E. Grefenstette, Learning Explanatory Rules from Noisy Data (2018), JAIR

[8] H. Dong, J. Mao, T. Lin, C. Wang, L. Li, D. Zhou, Neural Logic Machines (2019), ICLR

[9] R. Manhaeve, S. Dumančić, A. Kimmig, T. Demeester, L. De Raedt, DeepProbLog: Neural Probabilistic Logic Programming (2018), arXiv

[10] R. Evans, The Apperception Engine (2022), Kant and AI

[11] D. Dold, J. Garrido, V. Chian, M. Hildebrandt, T. Runkler, Neuro-symbolic computing with spiking neural networks (2022), ACM ICONS

[12] Write for Towards Data Science (2023), TDS Editors


Jesse Jing
Towards NeSy

CS PhD student at ASU pushing the frontier of Neural-Symbolic AI. We host the publication Towards NeSy. Try submitting your technical blogs as well!