Part I | Part II | Explainable Artificial Intelligence — Part III

Black-Box and White-Box Models towards Explainable AI

Generating explanations from black-box models using model properties, local logical representations, and global logical representations

Orhan G. Yalçın
Published in TDS Archive
6 min read · Jun 23, 2021


Figure 1. Photo by Andrew “Donovan” Valdivia on Unsplash | Figure 2. Photo by Kelli McClintock on Unsplash

Quick Recap: XAI and NSC

Explainable AI (XAI) deals with developing AI models that are inherently easier for humans to understand, including users, developers, policymakers, and law enforcement. Neuro-Symbolic Computing (NSC) deals with combining sub-symbolic learning algorithms with symbolic reasoning methods. We can therefore regard NSC as a sub-field of Explainable AI. It is also one of the most readily applicable approaches, since it relies on combining existing methods and models; a minimal sketch of this pairing follows the figure below.

Figure 3. Symbolic AI vs Subsymbolic AI (Figure by Author)
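To make the recap concrete, here is a minimal sketch of the NSC pairing, assuming scikit-learn: a neural network acts as the sub-symbolic learner, and a shallow decision tree distilled from its predictions acts as the symbolic side, yielding what the subtitle calls a global logical representation. The dataset, models, and hyperparameters are illustrative choices of mine, not code from this article.

# A minimal NSC-style sketch: sub-symbolic learner + symbolic surrogate.
# Dataset, models, and hyperparameters are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# Sub-symbolic side: a black-box neural network.
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
black_box.fit(X, y)

# Symbolic side: a shallow tree fit to the network's predictions,
# giving a global, human-readable rule set for the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Print the distilled if-then rules.
print(export_text(surrogate, feature_names=data.feature_names))

Running this prints an if-then rule list that approximates the network's behavior over the whole input space, which is the kind of human-readable artifact NSC aims for.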

Explainability is the ability to meaningfully describe things in a human language. In other words, it is the possibility of mapping raw information (data) to a human-understandable representation.
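As a concrete illustration of that mapping, here is a hedged sketch that walks one sample through a small decision tree and renders its decision path as an English sentence, a local logical representation in the subtitle's terms. The dataset, the tree, and the sentence template are illustrative assumptions, not a method taken from this article.

# A standalone sketch of mapping raw inputs to a human-language explanation:
# walk one sample's path through a small tree and verbalize the decision.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

sample = data.data[0].reshape(1, -1)
path = tree.decision_path(sample).indices  # node ids visited by this sample
leaf = tree.apply(sample)[0]               # the terminal node

clauses = []
for node in path:
    if node == leaf:
        continue  # only internal nodes carry a test
    f = tree.tree_.feature[node]
    t = tree.tree_.threshold[node]
    op = "<=" if sample[0, f] <= t else ">"
    clauses.append(f"{data.feature_names[f]} {op} {t:.2f}")

label = data.target_names[tree.predict(sample)[0]]
print(f"The model predicts '{label}' because " + " and ".join(clauses) + ".")

The same idea extends to the surrogate from the previous sketch: replace the directly trained tree with one fit to a black-box model's predictions, and the printed sentence becomes a local explanation of the black box.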

