Explainable AI: what is it and who cares?

Andrea Brennen · Nov 6

In this Q&A on Explainable AI, Andrea Brennen speaks with In-Q-Tel’s Peter Bronez about descriptive vs. prescriptive models, “white box” vs. “black box” explanation techniques, and why some models are easier to explain than others. Peter also discusses the reproducibility crisis in Psychology and why good experiment design is so important. Peter is a VP on the technical staff at IQT.

****

ANDREA: Let’s start with a general question. Could you tell me about your experience with machine learning and AI?

PETER: As an undergraduate, I studied econometrics and operations research, so my exposure to machine learning was in the context of designing models of the world that you could test mathematically — basically, doing hypothesis testing using statistics. Afterwards, I worked at the Department of Defense and used a lot of the same techniques. From there, I went to the private sector and [worked on] social media and data mining in marketing applications, trying to create mathematical models to categorize people, activities, and messages in order to understand them better.

ANDREA: How do you think about Explainable AI? How would you summarize what this term means?

PETER: I’ve been following this literature, but I saw the theme even before we had the term.

In my understanding, Interpretable AI is about enabling analysts to use modern probabilistic tools to find conclusions in massive amounts of data, and to understand the mechanisms that help form those conclusions.

For more about “explainable” vs. “interpretable” AI, see this recent post.

[Think of] an analyst whose job is to understand complex systems and form an opinion that a decision maker can use to make a better decision. How do you help analysts trust that [an answer] makes sense, that it’s not just a fluke, and that they can use it as a reliable piece of information? We need to [give] analysts tools to understand why a computer gives a certain answer.

ANDREA: In your experience, when has the interpretability of a model mattered most? And who did it matter to?

PETER: [It’s important to understand] two different applications of statistical models, descriptive and prescriptive.

In the descriptive sense — that’s what I was studying in school — you’re observing a system, taking measurements, and creating statistical models to describe how a system works. The system could be a physical process where you’re measuring something like temperature, or a social process where you’re measuring people’s attitudes.

For example, say you’re trying to understand something about the gender wage gap. You might look at where people work and how long they’ve been there. You might look at education and whether someone has kids. If you use a linear regression model, you might find that the coefficient on gender, controlling for all of the other factors, indicates that women are paid less than men. Knowing that is the point of the exercise and interpretability really matters.
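
To make the descriptive case concrete, here is a minimal sketch of that kind of regression in Python. The library choice (statsmodels) and the column names are illustrative assumptions, not the specific analysis Peter describes.

```python
# A minimal sketch of a descriptive wage regression, assuming a hypothetical
# CSV with the columns named below; the point is that the fitted coefficients
# themselves are the output of interest.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wages.csv")  # hypothetical columns: wage, gender, tenure_years, education_years, has_kids

model = smf.ols(
    "wage ~ C(gender) + tenure_years + education_years + C(has_kids)",
    data=df,
).fit()

# The coefficient on gender, controlling for the other factors, is the whole
# point of the exercise; we read the model summary, not just predictions.
print(model.summary())
```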

Modern machine learning is prescriptive. This is useful for figuring out ambiguous situations. For example, an email arrives on my server: is it a real email or is it spam? It turns out that a lot of the same [statistical] tools are useful if you provide them to your computer to make spam/not spam-type decisions. But in this case, you don’t really care why an email is classified as spam. No one’s asking that question; people just want less spam in their inboxes. The output matters more than the guts of the model.
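
As a contrast with the descriptive sketch above, here is an equally rough sketch of the prescriptive case: a toy spam filter whose predictions are the only thing anyone consumes. The dataset path and column names are hypothetical.

```python
# Sketch of a prescriptive model: a spam filter treated purely as a function.
# The CSV path and column names are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = pd.read_csv("emails.csv")  # columns: text, is_spam

spam_filter = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
spam_filter.fit(emails["text"], emails["is_spam"])

# Nobody inspects the coefficients here; the only question asked of the
# model is what it predicts for the next message.
print(spam_filter.predict(["Congratulations, you won a free cruise!"]))
```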

Traditional statistics is about creating models and understanding the structure of those models to give you some larger understanding of the universe. In modern software engineering, we are often creating a probabilistic algorithm that provides a function. The output of that function is what matters.

ANDREA: Are there certain types of models that lend themselves to being explained?

PETER: Definitively, yes.

The generic way to think about this is that simple models are easier to understand than complex models. Here, I mean complex in the mathematical sense: there are a lot of interrelated factors, and the relationships between them aren't necessarily linear.

Two of the most explainable algorithms are linear regression and decision trees.

Linear regression essentially assumes that you have a structured record and that every element of [the record] has a linear impact on the outcome.

For example, we don't see a linear relationship between age and income. When you're under fifteen years old, you probably have very little income, and going from age eight to nine is probably not going to increase your earning potential. In the middle of your earning years, your income might grow quite a lot if you move into an executive role. But then, the relationship between age and income might reverse later in life if you retire. So there is a non-linear relationship [between age and income], and this matters for understanding the problem. It also matters for making a prediction. If there were a linear relationship, I could just tell you, "if you're one year older, you're going to make five hundred dollars more."
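
A toy illustration of that point, with a made-up income curve: a straight line has to commit to one fixed dollars-per-year slope, while a more flexible model (here a shallow decision tree, purely as an example) can capture the rise and fall.

```python
# Toy illustration of the non-linear age/income relationship; the income
# curve below is synthetic and made up purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
age = rng.uniform(8, 80, size=2000).reshape(-1, 1)
# Near-zero income in childhood, a mid-career peak, and a decline after retirement.
income = np.where(age < 15, 0.0, 60_000 * np.exp(-((age - 50) ** 2) / 400)).ravel()
income += rng.normal(0, 2_000, size=income.shape)

linear = LinearRegression().fit(age, income)
tree = DecisionTreeRegressor(max_depth=4).fit(age, income)

# The linear model is forced to say "one more year = a fixed dollar change",
# which is exactly the assumption that breaks down here.
for probe_age in [9, 30, 50, 70]:
    x = np.array([[probe_age]])
    print(probe_age, round(linear.predict(x)[0]), round(tree.predict(x)[0]))
```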

Another type of modeling is categorization. Say we want to categorize a credit card transaction as fraudulent or not. We could use a decision tree to walk through a whole bunch of rules to determine if a transaction is likely to be fraudulent. Was the transaction more than a thousand dollars? If yes, maybe it is more likely to be fraudulent. Which credit card processor did it come from? Some might not have good fraud control.

[With a decision tree] you make a series of cumulative yes or no [decisions] and at the end you can say, "I think this is a fraudulent credit card transaction because it was more than a thousand dollars, and purchases that are more than a thousand dollars are twice as likely to be fraudulent as the ones that are less than a thousand dollars." That's about as simple as it gets [to explain].
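
To show how directly a tree's logic can be read back, here is a small sketch using scikit-learn. The data file and feature names are hypothetical.

```python
# Sketch of the fraud example: fit a shallow decision tree and print its
# learned yes/no rules. The CSV and feature names are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("transactions.csv")  # columns: amount, processor_risk_score, is_fraud
X = df[["amount", "processor_risk_score"]]
y = df["is_fraud"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# export_text renders the splits as nested if/then statements, which is the
# white-box explanation an analyst can follow step by step.
print(export_text(tree, feature_names=["amount", "processor_risk_score"]))
```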

With these two types of models — linear regression and [decision] trees — we can give an explanation using what’s called a white box method, where we look inside of [the model] to see how things actually work. However, more complex models like neural networks — which have a lot of parameters and an internal state that is not directly interpretable — are really hard to understand, so we have to move toward black box methods [of interpretation] where we ask “if I put this in, what happens? If I change this a little bit, what happens?”
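
The black-box approach Peter describes can be sketched as a simple perturbation probe: treat the model as an opaque predict function, nudge one input at a time, and watch how the output moves. (Tools such as LIME and SHAP build on this idea; the snippet below is a bare-bones illustration, not their API.)

```python
# Bare-bones black-box probe: nudge each feature slightly and measure how
# much an opaque model's score changes. Purely illustrative.
import numpy as np

def sensitivity(predict_fn, x, delta=0.01):
    """Finite-difference probe: approximate effect of each feature on the score."""
    x = np.asarray(x, dtype=float)
    base = predict_fn(x.reshape(1, -1))[0]
    effects = []
    for i in range(x.size):
        bumped = x.copy()
        bumped[i] += delta * (abs(x[i]) if x[i] != 0 else 1.0)
        effects.append(predict_fn(bumped.reshape(1, -1))[0] - base)
    return np.array(effects)

# Usage with any fitted classifier exposing predict_proba (e.g. a neural network):
# effects = sensitivity(lambda X: model.predict_proba(X)[:, 1], some_input_row)
```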

ANDREA: I want to go back to the example of the linear regression model. It seems to me that there are a lot of other things that you might need to know, in order to use a model like that with confidence. For example, where did the model come from? Who created it? What data did you use to generate it? What other things — besides how the model works — are important to explain?

PETER: Yeah, so the way I was describing explainability was focused on the model, but the model is just one part of the overall experimental design. If you have the wrong experimental design there’s no way you can get the right answer — it’s garbage in, garbage out. We can measure things, but are we measuring the right things? Are we measuring them accurately?

ANDREA: Are there standardized guidelines that can help people explain the important aspects of experimental design?

PETER: What we’re talking about here are the fundamentals of quantitative methods and this stuff is hard enough to get right that academia has systematically gotten it wrong. Look at the reproducibility crisis in Psychology.

Researchers have discovered that they cannot reproduce fundamental findings in their field. It turns out that their sample sizes weren't big enough, the statistics were not rigorous enough, and there was a positive publication bias. People want to publish really interesting results, but it turns out interesting results are rare results. A statistical fluke tends to be interesting, so it'll get published.

Now people are going back and spending a lot of money redoing tons of studies to figure out if the results hold up. There’s a whole discipline of reproducible research that’s trying to come up with guidelines. These include things like: be explicit about how you collected your data and how you processed that data; make the code open and available; preregister your hypothesis so you can’t be like, “well, the thing we were looking for wasn’t there but we found this other weird thing.” If you don’t have a theoretical reason why that weird thing is there, it might be a statistical fluke. Design another study to look at that more closely.

This is all to say that you can come up with guidelines but the process of creating, communicating, enforcing, and tracking them over time is hard enough that modern American scholarship is struggling with it.

This is actually an argument against complex models in some ways. As you come up with more complex models, they’re harder to interpret and even the academics who want to use these tools often make incorrect assumptions, or incorrectly understand the statistical properties of their tests and their models.

ANDREA: Do you think that the data science and AI research community can learn from the reproducibility crisis? Is this something that people in data science are talking about now?

PETER: Everything is worse in data science and machine learning.

In Psychology, this is coming from professional academics who care about methods and devote their lives to creating and maintaining a body of peer-reviewed research. Data scientists tend to be people who are blending statistics and computer science, or programming and some sort of domain interest. This means they have a broader set of skills and can do a lot of cool stuff, but they [may not be] as deep in any one [area]. They're solving a business problem in a practical context, and they're trying to get the statistics to make it work, which means they're going to be less careful about their methods.

ANDREA: Why do you think people care about explainability? Why does it matter if we can explain AI?

PETER: I think of three different uses: one is the data scientist or statistician who is building a model, and wants to understand exactly how it’s working and how it’s failing. Some people call this Machine Learning Operations or “ML Ops.”

Software that has a probabilistic component is different from traditional software in that you can't just "fire and forget"; ongoing maintenance is needed because the data that flows through your system might change. Going back to the credit card processing example, maybe there was a sketchy credit card company that had bad fraud prevention. Maybe they got a new V.P. who tightened their fraud prevention and suddenly that's [no longer] a strong signal about fraud. Now I need to retrain and update my model because the world has changed. This is called model drift. You need to test your model continuously to see how it is performing.
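
One way to picture that continuous testing, as a rough sketch: score a fresh batch of labeled transactions on a schedule and raise a flag when performance slips. The threshold, file path, and retraining hook below are placeholders.

```python
# Sketch of continuous model testing for drift: evaluate on the latest
# labeled batch and flag degradation. Threshold and paths are placeholders.
import pandas as pd
from sklearn.metrics import roc_auc_score

ALERT_THRESHOLD = 0.80  # hypothetical minimum acceptable AUC

def performance_has_drifted(model, recent_batch: pd.DataFrame) -> bool:
    """Return True if performance on recent labeled data fell below the threshold."""
    X = recent_batch.drop(columns=["is_fraud"])
    y = recent_batch["is_fraud"]
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"latest-batch AUC: {auc:.3f}")
    return auc < ALERT_THRESHOLD

# Example wiring (hypothetical): retrain when the check fires.
# if performance_has_drifted(model, pd.read_csv("last_week_labeled.csv")):
#     model = retrain_on_recent_data()
```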

Second, you have [explainability] tools that face the analyst. When I [as a data scientist] present the result of an analytic, how can I help the analyst trust that I came up with a good answer? How can I help the analyst know why the answer is that way, so he or she can put the result into greater context and carry it forward more confidently?

The third category — and I think this is primarily of interest to the security community — is understanding models you didn’t create. This is almost like reverse engineering.

ANDREA: Reverse engineering also seems relevant to auditing. If a company is using a model, in the case of regulation or litigation, wouldn’t a third party want to understand what is going on under the hood?

PETER: One place to look for an example of that is the autonomous vehicle space. How do you certify that a company has built a sufficiently “smart” car that you can actually let out on the roads and trust that it’s not going to start running over children? There is an idea that a regulator could use a synthetic data environment to put an AI system that drives a car into a video-game-style world and see how it performs. That would be an example of a black box technique applied to achieve a regulatory goal. I don’t think anyone’s put that into practice yet but it’s been discussed.

ANDREA: One last question: if you could explain one thing about machine learning to policymakers, what would it be?

PETER: I would tell them that their intuitions about people, incentives and biases are all applicable. If [they] have a political opinion that certain things shouldn’t be taken into consideration for certain analytics, this is a big deal.

For example, you might say that protected classes like race, religion and age can’t be used to decide if someone gets a mortgage or not. That seems straightforward, but [it’s important to keep in mind that] there are sneaky ways to get around this. Someone could drop the data [about race, religion or age] out of their dataset, but instead, combine or use other factors that are a proxy for that [data]; in this way, they could still create a biased algorithm that would hurt certain classes of people.
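
One simple audit for the proxy problem, sketched below: even after dropping the protected attribute from the training data, check how well the remaining features can reconstruct it. A high score is a warning sign that proxies are present. Column names are hypothetical, and real fairness audits go well beyond this.

```python
# Sketch of a proxy-variable check: can the "neutral" features predict the
# protected attribute that was dropped? Column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("mortgage_applications.csv")
protected = df["race"]
features = df.drop(columns=["race", "approved"])  # what the lending model actually uses

proxy_score = cross_val_score(
    LogisticRegression(max_iter=1000),
    pd.get_dummies(features),
    protected,
    cv=5,
).mean()
print(f"Protected attribute predicted from 'neutral' features with {proxy_score:.0%} accuracy")
```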

A model that meets the letter of the law can still produce outcomes that run against the spirit of the law.

Understanding and auditing this is hard from a statistical perspective, but as a decision-maker, you need your own experts and techniques in order to keep practitioners honest.

****

This post is part of a series of interviews that IQT Labs is conducting with technologists and thought leaders about Explainable AI. The original interview with Peter took place on April 9, 2019; this Q&A contains excerpts that were edited for clarity and approved by Peter.

Image credits: Illustrations by Andrea.


Thanks to Vishal S.
