AI Explainability: Why we need it, how it works, and who’s winning

Startups and incumbents are investing massive amounts of money in AI (approximately $35 billion in 2019, according to IDC). As these AI models become increasingly effective, some businesses are facing a new problem: they can’t use them. Why? Well, the majority of these models are considered “black boxes”, meaning that there is no way to determine how the algorithm came to its decision. That may be okay for companies in industries like gaming or retail, but it certainly is not acceptable for companies that operate in heavily regulated industries like financial services or healthcare. Fortunately, numerous explainability solutions are popping up to help businesses interpret their models and make the metaphorical black box a little more transparent. In this post, I’m going to dig into why we need AI explainability, how existing explainability techniques work, why investors are excited about it, and which companies are attacking the problem.

Also, if you like this post and want future articles sent to your inbox, please subscribe to my distribution list! Alright, let’s dive in.

1. Why do we need AI explainability?

AI in Financial Services

To build this technology, the bank would have to (1) choose which type of algorithm it wants to use, (2) modify that algorithm for its specific use case, and then (3) feed the algorithm a massive amount of training data. That data would likely include prior applicants’ gender, age, work history, salary, etc. It would also include information on the outcome of each of those lending decisions. With this information, the program could start predicting which applicants are “creditworthy”. At first, the algorithm would probably do a terrible job; however, it would continue to learn from each outcome and eventually reduce the number of bad loans that the bank makes. In doing so, it would save the bank a substantial amount of money, allowing it to write new loans at more attractive rates. This would enable the bank to attract more applicants and significantly scale up its customer base.

So what’s the problem with that? As I alluded to earlier, the problem is that the bank’s algorithm is most likely a “black box”, meaning that the bank can’t explain why it approved one applicant but declined another. This is a huge issue, particularly if that applicant believes they were declined on the basis of race, gender, or age. If the applicant accused the bank of algorithmic bias, it could seriously damage the bank’s reputation and may lead to a lawsuit. Unsurprisingly, we’ve already seen this play out in real life. In one high profile example, Apple’s co-founder, Steve Wozniak, tweeted that the Apple card gave him a credit limit that was 10x higher than his wife’s. Apple was not able to explain why its algorithm made that decision and was (understandably) raked over the coals.

Zooming Out

This hopefully makes it clear that AI’s influence has continued to expand into pretty much every industry. While that presents an incredible opportunity for businesses to increase efficiency and overall customer value, the lack of explainability represents a huge roadblock in the path to widespread implementation and adoption.

One last thing. It’s important to note that, while explainability is a necessity for businesses that operate in the regulated industries listed above, it actually has even broader applicability because it can be used to debug AI models and improve trust in them.

2. How does AI Explainability work?

**Just a heads-up, this section is a little dense, so feel free to skip ahead if you don’t care about the technical explanation!**

Integrated Gradients (“IG”)

For a multi-dimensional function like y = 2x + 13z + 2, the slope is not overly useful because it can only be calculated with respect to one variable at a time. The gradient, on the other hand, can be determined for the full equation: it is the vector of the coefficients of each variable, which in our example would be [2, 13]. Similar to the single-variable scenario, the gradient can be used to determine the impact that each variable has on the function’s output. Why is that relevant? Well, if our AI algorithm determines that there is a linear relationship between our inputs and outputs, we could simply run a multiple regression and use the gradient (i.e. the regression coefficients) to interpret the importance of each variable.
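To make that concrete, here is a minimal sketch (my own toy example, not from the article) that fits a regression to data generated from y = 2x + 13z + 2 and reads the gradient straight off the fitted coefficients:

```python
# Toy example: for a purely linear relationship, the gradient is just the
# coefficient vector, so a fitted regression's coefficients double as
# feature-importance scores.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))       # two features: x and z
y = 2 * X[:, 0] + 13 * X[:, 1] + 2   # y = 2x + 13z + 2

model = LinearRegression().fit(X, y)
print(model.coef_)        # ~[ 2. 13.] -> the gradient, i.e. each variable's impact
print(model.intercept_)   # ~2.0
```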

Unfortunately, AI models are a bit more complicated than that and rarely come up with nice linear relationships. For example, deep learning models often have numerous layers and each layer typically has its own logic. To make things even more complicated, many of those layers are “hidden layers”, intermediate layers whose learned representations are very difficult for humans to interpret.

Generic S Curve for visualization purposes

The worst part is that those hidden layers often account for the majority of the deep learning model’s predictive power. Visually, you can picture a deep learning model as an S-curve where the x-axis represents the layer number and the y-axis represents the amount of predictive value generated. What we really want to measure is the impact of each variable at the points where it is having the greatest influence on the model’s output. To find that, you can’t simply take the gradient at the input or the output; you need to evaluate the impact of each variable throughout the model’s entire decision-making process. Luckily, IG provides us with a sleek way to do that!

The IG methodology works as follows. First, you need to start with a “baseline” input. That baseline can be any input that is completely absent of signal (i.e. has zero feature importance). The baseline would be a black picture for image recognition models, a zero vector for text-based models, etc. Once you give IG a baseline and the actual input you want explained, the program constructs a linear path between the two and splits that path into a number of even intervals (usually between 50 and 300). From there, IG calculates the gradient at each interval and determines the impact that each feature has at that point in the model’s decision-making process. Last, it averages those gradients, scales them by the difference between the input and the baseline, and uses the result to determine the overall contribution of each feature to the model’s output and identify which features had the greatest impact.

A more technical explanation is that IG calculates a Riemann sum of the gradients to approximate the path integral. Hence…wait for it…integrated gradients. For a more detailed explanation and numerous interesting examples, I’d definitely recommend checking out this post by the creators of the integrated gradients methodology!
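If you’d rather see the recipe in code, here is a rough sketch of the idea in plain NumPy. The helper name, the toy sigmoid model, and its hand-written gradient are my own assumptions for illustration; in practice you would get gradients from your deep learning framework’s autodiff and use a library implementation of IG.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate IG attributions with a Riemann sum of gradients.

    grad_fn:  returns the gradient of the model output w.r.t. its input
    x:        the input being explained (1-D array)
    baseline: a signal-free reference input of the same shape (e.g. all zeros)
    """
    # 1. Build a straight-line path from the baseline to the input.
    alphas = np.linspace(0.0, 1.0, steps)
    path = baseline + alphas[:, None] * (x - baseline)

    # 2. Evaluate the gradient at every point along the path.
    grads = np.array([grad_fn(p) for p in path])

    # 3. Average the gradients (the Riemann sum) and scale by (input - baseline).
    return (x - baseline) * grads.mean(axis=0)

# Toy usage: f(x) = sigmoid(w . x), whose gradient we can write by hand.
w = np.array([2.0, 13.0])
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
grad_fn = lambda p: sigmoid(w @ p) * (1 - sigmoid(w @ p)) * w

x = np.array([1.0, 0.5])
baseline = np.zeros_like(x)
print(integrated_gradients(grad_fn, x, baseline))  # per-feature attributions
```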

SHAP (aka SHapley Additive exPlanations)

In order to make this calculation, the program first breaks out every possible permutation of the model’s variables. As a quick reminder, a permutation is basically a combination where the order matters. For example, for a credit underwriting model that uses “Gender”, “Eye Color”, and “City” as inputs, one permutation would consider “Gender” first, then “Eye Color”, then “City”. For each permutation, SHAP calculates the marginal contribution of each variable (how much the prediction changes when that variable is added to the ones before it) and then takes the average of those contributions across all permutations. Those average values are known as the SHAP values.
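To make that averaging step concrete, here is a brute-force toy Shapley calculation that enumerates every ordering of three features. The feature names, scores, and value function are made up for illustration; real SHAP implementations rely on much faster approximations than enumerating every permutation.

```python
from itertools import permutations

features = ["gender", "eye_color", "city"]

def model_value(present):
    """Hypothetical expected model output when only the 'present' features are known.
    (In a real implementation this would come from the model itself.)"""
    scores = {"gender": 0.10, "eye_color": 0.02, "city": 0.25}
    return 0.50 + sum(scores[f] for f in present)  # 0.50 = baseline prediction

shapley = {f: 0.0 for f in features}
orderings = list(permutations(features))

for order in orderings:
    present = set()
    for f in order:
        before = model_value(present)   # prediction without this feature
        present.add(f)
        after = model_value(present)    # prediction once the feature is added
        shapley[f] += (after - before) / len(orderings)

print(shapley)  # each feature's average marginal contribution across all orderings
```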

SHAP values are powerful because they (i) enable users to understand how much each variable contributed to an individual prediction and (ii) can be aggregated across all predictions (e.g. by averaging their absolute values) to determine how much impact each variable had on the model as a whole. Said another way, SHAP values provide both local and global interpretability.

If you want additional detail on the SHAP calculation methodology, I’d recommend checking out this article. Also, if you want to see how SHAP values can be used to explain a real life example, I’d check out this article by Dan Becker at Kaggle. I’ve included two of his graphics below as a teaser. The first shows how SHAP values can be used to interpret individual predictions (local interpretability). The model predicts whether a soccer team has the “man of the match” on their team. This graphic shows the model’s prediction for one game and visualizes how much impact each of the 12 variables had in predicting whether that soccer team had the “man of the match” relative to a baseline prediction of 0.5.

The second graphic shows how SHAP values can be used to interpret the model as a whole (global interpretability). Each dot corresponds to a prediction. The vertical location tells us which variable is being visualized, the color shows whether that variable was high or low for that individual prediction, and the horizontal location tells us whether the data point had a positive or negative impact on the model’s prediction. For instance, the top-left dot tells the reader that the team scored very few goals that game (because it’s blue) and that this significantly decreased its probability of having the man of the match.

Advanced Uses of SHAP values by Dan Becker
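For anyone who wants to produce plots like the two above, here is a minimal sketch using the open-source shap package. The synthetic match statistics, column names, and the choice of a random forest regressor are my own stand-ins; Dan Becker’s notebook walks through the real “man of the match” dataset with a tree-based classifier.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in data: a few match statistics and a 0/1 target for
# whether the team had the man of the match.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "Goal Scored": rng.integers(0, 5, 500),
    "Ball Possession %": rng.uniform(30, 70, 500),
    "Attempts": rng.integers(0, 25, 500),
})
y = (X["Goal Scored"] + rng.normal(0, 1, 500) > 2).astype(int)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Local interpretability: explain a single prediction relative to the base value
# (the kind of plot shown in the first graphic).
shap.force_plot(explainer.expected_value, shap_values[0], X.iloc[0], matplotlib=True)

# Global interpretability: one dot per prediction per feature, colored by the
# feature's value (the kind of plot shown in the second graphic).
shap.summary_plot(shap_values, X)
```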

3. Why invest in AI explainability?

  • Massive TAM: The explainability market is huge and continues to grow at a rapid pace. From a bottom-up perspective, there are over 50,000 U.S. businesses in the regulated sectors listed above. If each of these businesses implemented an explainability solution (pricing starts at around $50k per year and scales up with total compute power), that alone implies a multi-billion dollar market (50,000 businesses × $50k is $2.5 billion per year). Additionally, the more valuable these models become, the more companies will be willing to pay for explainability solutions.
  • Real stakeholder pain: As I discussed above, decision-makers are under increasing pressure to utilize AI solutions, both because of the potential efficiency gains and because of competition from innovative startups. However, they can’t reap the benefits of AI until they can adequately explain the model’s decisions to customers and regulators.
  • Government Regulations: There are several government regulations and guidelines that provide tailwinds to the market. Notably, GDPR in the EU, the proposed Algorithmic Accountability Act and the Federal Reserve’s SR 11-7 guidance in the US, and the Directive on Automated Decision-Making in Canada. Each is different, but they effectively require businesses to conduct AI impact assessments, disclose the methodology / data used in AI models, and understand the adverse impacts of any algorithmic decisions.
  • Cultural Tailwinds: Consumers are increasingly focused on how their data is being used and are demanding more transparency and protection from big tech. Additionally, consumers are aware of the impact that AI algorithms are having on their daily decisions and are now wary of any real or perceived bias.

4. What companies are attacking this problem?

Open Source Solutions

Core AI

Purpose-Built Solutions

5. Who is going to win?

  • Product: Fiddler currently has one of the most advanced products on the market: a purpose-built SaaS solution that can hook into any data warehouse via an API connection. It uses the latest explainability methods and can interpret any model type. It provides dashboards that help users identify and address algorithmic bias, interpret the impact of each input, and assess data drift. It also allows clients to create custom dashboards and reports to further visualize or audit data. Last, the company has set up “Fiddler Labs” to continue improving its explainability techniques and tools.
  • Team: Fiddler is led by a rockstar team. The founders have significant experience leading AI efforts at prominent technology companies (Facebook, Pinterest, Twitter, Microsoft, eBay, Samsung, Lyft, etc.). More importantly, they deeply understand the black box pain point. In fact, the firm’s CEO, Krishna Gade, previously led the development of explainability functionality for Facebook’s news feed ranking algorithm. The team has also previously founded, operated, and exited several businesses, demonstrating its ability to generate attractive outcomes for investors.
  • Investors: The firm has raised $13.2 million to date from some of the most well-respected venture capital firms in the country (Lightspeed, Lux Capital, and Haystack). I’d expect Fiddler to use this recent funding, along with the networks and expertise of its VC partners, to significantly scale up the team and enhance its product offering. I believe this will solidify Fiddler as a top player in the explainability segment and give them a reasonable chance to win the category. Of course, nobody really knows what will happen but it will be a fun company to track!

