Explaining Complex Machine Learning Models

Open Machine Learning, ETc.
2 min read · Sep 21, 2018


We are in the era of the Machine Learning (ML) resurgence. There has been a lot of excitement around AI and ML, backed by state-of-the-art results in many domains such as computer vision and natural language processing. Access to large amounts of data, combined with computational power, has enabled us to build ultra-complex models with great results.

These successes are not limited to academia; many in industry have been putting these complex models into production, to be used in everyday life. In this era, the main focus seems to be the performance and accuracy of these systems. While it is remarkable that we are now able to train very accurate and complex models, it is important to spend more time understanding how these models work and why they make certain decisions, for the sake of transparency and trust. Otherwise, we are left with powerful tools that act like black boxes.

In our latest event, led by Soysal Degirmenci, we focused on the topic of explaining and understanding complex machine learning models. In the first part, Soysal went over the problem definition and why it matters, both from an AI/ML practitioner's and a user's perspective. The discussion then turned to how we can explain ML models and what some of the differences between model-based and model-agnostic methods are. We presented two specific methods that are popular today, namely LIME and SHAP. You can find the details of this discussion in the slides provided below.

Soysal explaining explainability.
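To give a small taste of the model-agnostic approach, here is a minimal sketch of how LIME can be asked to explain a single prediction of a black-box classifier. The dataset, model, and parameter choices are illustrative assumptions on our part, not examples taken from the talk.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train a simple "black box" model on an example dataset.
data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME is model-agnostic: it only needs access to the prediction function,
# not the model internals.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction by fitting a local, interpretable surrogate
# around the chosen instance.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # (feature condition, weight) pairs
```

The printed weights indicate how much each feature pushed this particular prediction toward or away from the positive class, which is the kind of local, per-decision insight discussed in the talk.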
