Glass-Box Models: A Gentle Introduction

alkali.app
4 min read · Jul 4, 2018


This post is the first in a series explaining glass-box models in AI. In this article, we introduce, without any technical details, what this neat family of algorithms is and how it is useful. Be on the lookout for the technical articles that will follow.

How and why does AI make its decisions?

This is a driving question not just for people curious about how this defining technology works, but also for those worried that everything could come crashing down one day.

AI is obviously transforming a wide range of industries and business models. It’s already partially responsible for everything from advertising to bail approval. And although it’s one thing if AI sells lingerie to young children, it is another thing altogether if AI approves a flight risk for bail (or, more viscerally, accelerates an occupied car into a graffiti-covered wall). Everyone from lawyers to newsroom editors is anxiously anticipating the day when something awful happens, perhaps in hopes of cashing out or writing the next top headline.

Opening Pandora’s Box

Buyer beware

AI is currently used in a black-box manner. In layman’s terms, this means that only its output is of value, not its decision-making process. The reason for this is simple: the decision making of most AI models boils down to mathematical optimization over a set of probabilities. “I optimized a mathematical function” isn’t a very helpful explanation.

Things have gotten so opaque that even leading experts in a field are unable to explain why an AI model works so well. You know the field has taken a turn for the worse when physicists are attempting to explain AI models with quantum mechanics [1].

An object-recognition AI model thinks the skier is a dog [2]

This isn’t just a hypothetical situation. Above, we see that an AI model will sometimes very confidently confuse two completely unrelated things.

There’s a proactive movement inside universities looking to prevent a doomsday scenario. This movement has coined the term eXplainable AI (XAI), which is only now gaining prominence in research and tech circles. XAI models must rationalize their predictions so that a non-expert can understand how and why they arrived at a decision.

A brief explanation of XAI, courtesy of DARPA [3]

Many recognize the practical implications of XAI; DARPA recently started a well-publicized grant program funding XAI research. Unfortunately, as of this article’s publication, XAI has not been implemented for the ever-popular neural network. More practically, how can any AI model “rationalize” its decisions when all it performs is mathematical optimization?

An Overview of Glass-Box Algorithms

One practical compromise between the needs of XAI and the realities of current AI models is the glass-box algorithm. The glass-box algorithm is a unique creature: it quantifies the uncertainty in its predictions, so that the user can tell when those predictions are unreliable.

Let us introduce you to the king of glass-box models, the Gaussian process (GP). GPs are great for two reasons:

  1. They are simple to train.
  2. They quantify uncertainty in any prediction.

Expanding on point (1), GPs don’t require a human in the loop to choose hyperparameters (the notoriously difficult part of training something like a neural network); a GP’s kernel hyperparameters are typically fit automatically by maximizing the marginal likelihood. Expanding on point (2), here’s a helpful diagram showing exactly how this quantification works:

Red points indicate observed data points, while the blue line is the prediction over all the unobserved points. Lighter blue areas indicate a confidence interval. Note that this interval shrinks to zero at the observations, since the model is fully confident in any data point it has already seen.
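To make the diagram concrete, here is a minimal sketch of the same picture using scikit-learn’s GaussianProcessRegressor. The toy sine-curve data is ours, purely for illustration; any real dataset would do:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# A few observed data points (the "red points" in the diagram).
# This noisy-free sine curve is a hypothetical toy dataset.
X_train = np.array([[1.0], [3.0], [5.0], [6.0], [8.0]])
y_train = np.sin(X_train).ravel()

# Fitting the GP also tunes the kernel hyperparameters automatically,
# by maximizing the marginal likelihood -- no manual tuning needed.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
gp.fit(X_train, y_train)

# Predict over unobserved points (the "blue line"), asking for the
# standard deviation as well (the "lighter blue" confidence band).
X_test = np.linspace(0.0, 10.0, 100).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)

# A 95% confidence interval around each prediction.
lower, upper = mean - 1.96 * std, mean + 1.96 * std
```

Notice that the hyperparameters are tuned inside fit() itself (point 1), and a single predict() call returns both the forecast and its uncertainty (point 2).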

Gaussian processes are widely used in scientific research but only by a handful of companies. GPs can forecast sales, predict fraud, and calculate financial risk without much difficulty. More importantly, a GP predicts not only the concrete numbers but also how certain it is about those predictions.

Want to use GPs in your application? GPs are predominantly available, in a narrow fashion, from statistical and scientific toolkits (for example, scikit-learn). However, they are also just starting to be deployed in industry. As an example, the Alkali framework (https://alkali.app; disclaimer: we are the authors of the tool) treats glass-box algorithms as first-class citizens. We have also heard of a few other companies starting to move in this direction. It’s an exciting area, and we hope to see these models deployed much more widely.

[1] https://www.technologyreview.com/s/602344/the-extraordinary-link-between-deep-neural-networks-and-the-nature-of-the-universe/

[2] https://www.wired.com/story/researcher-fooled-a-google-ai-into-thinking-a-rifle-was-a-helicopter/

[3] https://www.darpa.mil/program/explainable-artificial-intelligence
