What to do with Explainability?

Dany Majard
Nov 12, 2019 · 10 min read
Photo by You X Ventures on Unsplash

As much as the prowess of AI is lauded in the media and its impact in many industries creates value for some, this technology will not penetrate the foundations of our institutions unless the systems it is incorporated into respond to some basic precepts of human-centered design. This is why Explainable AI, or XAI - a term coined by DARPA (the Defense Advanced Research Projects Agency) - is a burgeoning field. In their own words:

“The effectiveness of AI systems will be limited by the machine’s inability to explain its thoughts and actions to human users. Explainable AI will be essential, if users are to understand, trust and effectively manage this emerging generation of human partners.”

I borrowed the above quote from, and took inspiration in, Jacob Turner’s Robot Rules, and I believe my thesis here runs along the lines of his points in the book, which I recommend. I would go beyond “essential” and consider explainability a necessary condition if AI is to be of any help with our most important concerns as societies. And considering that AI is an open door to higher realms of knowledge, to borrow a term from spirituality, this is no small matter. Although it would be tempting to dive into the technical details of how to engineer XAI, as this series of articles tastefully does, I would like to take the time to review our motivations, so that they can guide our engineering.

The motivations/justifications for explainability fall into one of two categories:

  • Intrinsic: giving power and agency to the user or the human interacting with the AI agent.

  • Instrumental: giving the engineers behind the AI system the tools to understand, improve and maintain it.

Intrinsic explainability — A human outside the loop

There seem to be four levels of power afforded to the human interacting with a decision maker: Subjection, Adhesion, Comprehension and Dispute.

Subjection, also called servitude, is a state of complete powerlessness in the face of decisions impacting one’s well-being. This is unfortunately often the state of things in our modern times. Complexity having entered our lives at unmanageable rates, we often enter spaces of subjection without acknowledging it. The slow abandonment of the political space by the youngest generations is a mere reflection of how pernicious this apathy toward subjection is becoming.

We trade meaningful decisions for convenience, and leave these to private for-profit companies. Unfortunately, single-metric-optimizing, for-the-bottom-line AI, as it is most often engineered, will only worsen the situation.

Adhesion is a highly desirable and sensible motivation for explainability. This is the state in which the subject has seen and heard enough to defer their judgement to the decision-making authority. It means that they have experienced its functioning sufficiently to trust it. Basically, the human has validated enough of their own internal models against those of the authority to relinquish their reserve. This is how we build most of our relationship to knowledge, whether religious, spiritual, political or scientific: by adhering to religious, spiritual, political or scientific authorities.

What it would mean in practice is that the user or agent interacting with the AI has the power to check a certain number of assumptions. That could be through some sort of conversational BI tool reconfigured as conversational XAI, through a publicly accessible report, or through other means of creating trust.

For example, let the AI be a medical diagnosis tool assisting doctors. Then a doctor could test the AI as they would test their students: by asking questions about different situations and checking whether the AI’s answers match their own knowledge.
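A minimal sketch of what such probing could look like, assuming the diagnostic AI exposes some callable interface (the `diagnose` function and the clinical vignettes below are hypothetical placeholders, not any real tool’s API):

```python
# Hypothetical probing of a diagnostic AI: the doctor supplies vignettes together
# with the diagnosis they would expect, and checks where the AI agrees.
test_cases = [
    ({"age": 67, "fever": True, "cough": True, "sat_o2": 91}, "pneumonia"),
    ({"age": 24, "fever": False, "cough": True, "sat_o2": 98}, "bronchitis"),
]

def probe(diagnose, cases):
    """Compare the AI's answers against the doctor's expected diagnoses."""
    for features, expected in cases:
        predicted = diagnose(features)          # whatever interface the tool exposes
        verdict = "agrees" if predicted == expected else "DISAGREES"
        print(f"{features} -> AI: {predicted!r}, doctor: {expected!r} ({verdict})")
```

The value here lies less in the code than in the habit: the expert keeps a battery of cases they comprehend and runs it against the system whenever it changes.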

The issue here is two-fold:

  • How does one avoid the “lying AI” shadow, i.e. an XAI tool that would be a facade mostly unconnected to the real functioning of the AI and merely built to manufacture adhesion?

  • How would a trustworthy company or institution make sure the public sees the XAI as being one with the AI?

Aside from potentially numerous education campaigns, adhesion will need endorsement from many public figures. Short of these, there is a risk that AIs in public institutions will produce more subjection than adhesion, even with appropriate tools in place.

In other words, many external factors are involved in building adhesion-based XAI, and the risk of still ending up with Subjection is closely related to the speed at which we roll out these technologies.

Comprehension is potentially the most difficult state to achieve, though it will be requested first. In its common use, it is taken as a synonym of understanding, with all the murky grey areas. Most of the time in our lives, we understand without comprehending, just enough to stop questioning. For that reason, we take it here in the etymological sense: the act of carrying with oneself. In that light, a person comprehending an idea or a decision-making process can reproduce that idea or process with limited to no effort, at will, whenever and wherever they are. By definition, it is the capacity for a human to access a phenomenon via an easily accessible, high-fidelity model. Newton’s law of gravity is mostly comprehensible; general relativity isn’t.

Of the phenomena that we attempt to model, whether with classical mathematical tools or with data-fed AIs, very few can be comprehended (a short code sketch after the list below makes these classes concrete):

  • piecewise constant functions or decision trees

These are models where the behavior is constant in chunks. The easiest examples are speed limits, drinking age, birth and death, and income tax bands. Before drinking age: no drinking; after: drinking. No breathing but a heartbeat: unborn; breathing and a heartbeat: born and alive; neither: dead. This is a class we find a lot in human-designed systems, since it is the easiest for us to comprehend. In fact, every decision tree is a piecewise constant function: every “if Case A, then Outcome C” system is of this form. Yet I can hear many raise their eyebrows at my last example: “No one can understand, let alone comprehend, taxes.”

That already shows the limitations of the comprehension motivation: at a high number of cases or steps, such a model stops being comprehensible, for simple memory reasons. The tax code is just too convoluted for a regular human to reproduce at will.

  • linear functions

These are models where the rate relating the quantities is constant, so there is a single number to memorize. If one knows that the amount of water lost through sweating while walking at a regular pace on flat ground is 1 L per 50 km, one knows how much to pack for any trip.

  • periodic and trigonometric functions

This is another class of models that is quite comprehensible. Temperatures, tides, seasons and day/night cycles are part of our lives. Yet actually being able to compute sines or cosines on the fly (e.g. evaluating the portion of a force applied to an object that actually goes into moving it) is the feat of few. Trigonometry is only taught in higher-level classes, and little of it is remembered afterwards.

  • polynomials, logs and exponentials

Here we reach the very limits of comprehensibility. Many know that the “law of compound interest” is what makes fortunes, but hardly anyone can produce the exponential of a number on the spot.

Sums of these functions may be manageable as well, but as soon as interactions, i.e. multiplications, come into play, we are hopelessly lost.
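To make the list above concrete, here is a minimal sketch of the four classes in code (the tax brackets are invented for illustration; the 1 L per 50 km figure is the one from the walking example):

```python
import math

# Piecewise constant: the marginal tax rate depends only on which band the income falls in.
# (Brackets invented for illustration, not any real tax code.)
def marginal_tax_rate(income):
    if income < 10_000:
        return 0.00
    elif income < 40_000:
        return 0.20
    else:
        return 0.40

# Linear: a single rate to memorize (1 L of water per 50 km walked).
def water_needed(distance_km):
    return distance_km / 50.0  # litres

# Trigonometric: the portion of a force that goes along the direction of motion.
def useful_force(force, angle_rad):
    return force * math.cos(angle_rad)

# Exponential: compound interest, where mental arithmetic gives up.
def compound(principal, annual_rate, years):
    return principal * (1 + annual_rate) ** years

print(marginal_tax_rate(25_000))            # 0.2, constant across the whole band
print(water_needed(20))                     # 0.4 litres for a 20 km walk
print(round(compound(1_000, 0.05, 30), 2))  # ~4321.94, hard to produce "at will"
```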

What this shows is that explainability motivated by comprehension, in the etymological sense, is an oxymoron. The very goal of modeling with complex models is to make otherwise incomprehensible phenomena accessible to the human intellect.

Photo by Roland Samuel on Unsplash

Dispute is the most desirable of all intrinsic motivations. It does not require adhesion, but rather the existence of a process of communication in case adhesion is lacking. Adopting this as a motivation for explainability would mean putting in place, for instance:

  • A procedure to submit points of contention to an AI system, potentially along with corrective data, and to see the changes impact the decision making within a time frame meaningful to the human interacting with it. That could mean correcting personal data on a given case, or asking for a modification of a whole set of decisions (say, for a local area of the training manifold).
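As a rough sketch of what the record-keeping behind such a procedure might look like (the `Dispute` record and `queue_for_review` helper below are hypothetical, not a reference to any existing system):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Dispute:
    """One contestation filed against one automated decision."""
    decision_id: str                   # which decision is being contested
    reason: str                        # the point of contention, in the user's words
    corrected_data: dict = field(default_factory=dict)  # corrective data, if any
    filed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: str = "open"               # open -> under_review -> resolved

disputes: list[Dispute] = []

def queue_for_review(dispute: Dispute) -> None:
    """Record the dispute so it can feed the next model review or retraining cycle."""
    disputes.append(dispute)

queue_for_review(Dispute(
    decision_id="loan-2019-0042",
    reason="The income figure used by the model is outdated",
    corrected_data={"annual_income": 58_000},
))
```

The hard part, of course, is not the record but the commitment to act on it within a time frame that means something to the person affected.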

I believe that striving for dispute-enabled XAI is the best setting for implementation in institutions such as health or justice. In this case though, there is an unavoidable topic that must be reckoned with, a dark, overarching cloud that has plagued most of our systems, AI or not, for ages: Goodhart’s law.

Instrumental explainability — making the loop

Considering the difficulties of intrinsic explainability, many will make a case for instrumental explainability. With the state-of-the-art tools being built by the same crowd that builds the complex models, this is no surprise.

Photo by NESA by Makers on Unsplash

When instrumentally motivated, XAI attempts to give the engineers of the system the best tools to address the problems the system might have. It allows a company or institution to be as proactive as possible in reducing the risks associated with rolling out the AI agent. It gives the engineering team access to the adhesion and dispute states, so that they can act upon them. As I heard at a conference (paraphrasing):

The worst thing to do when facing an algorithm with a bias is to can it. Though the PR pressure will mount, that would only revert the decision making to the human process that produced the biased data in the first place. It is much preferable to learn from it and continuously improve the AI decision agent. Canning it throws away the best tools we have to implement our ideals of fairness, as human networks have an inertia that AI systems don’t have.

Any company or institution currently making efforts to systematically implement XAI in their teams should be lauded and valued. We should make an effort to see past the tech glitter and PR tricks of AI prowess and ask more often how these shiny new algorithms will be implemented. But though this is a necessary step, I believe it is not sufficient. There is a long road ahead.
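As a small, concrete example of the kind of instrumental tooling meant here, the sketch below uses scikit-learn’s permutation importance to surface which features actually drive a trained model’s decisions; the dataset and model are stand-ins, chosen only to keep the example self-contained:

```python
# A minimal instrumental-XAI check: which features drive the model's decisions?
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the validation score drops.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)

ranked = sorted(zip(X_val.columns, result.importances_mean), key=lambda p: -p[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Such a report is exactly the kind of artifact an engineering team can act upon; it is also where the intrinsic motivations above come back in, since the same ranking means very little to the person on the receiving end of the decision.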

Dissipating the Black Box myth

I did not want to finish this article without mentioning the “black box” myth that every conversation on explainability drags along. There is indeed an unhealthy trend of painting any algorithm that does not provide instant comprehensibility as an insurmountable mountain of un-explainability.

But centuries of science should have prepared us better: the most accurate physical theory of the world we have, the Standard Model, was born out of hundreds of years of poking at the biggest black box we know: the universe.

As a former theoretical physicist, I recall my wonder at how we got to QED and QCD and how, from the explicit decision NOT TO TRY to know what really happens during a particle collision, we could infer so much about the world from asymptotic behaviors. It means that even though we gave up on probing the inner workings of the collision itself, by observing the state of the system long before and long after the collision we could produce a model that explains the world so well it is unrivaled in precision.

All this is to say: any AI that is vulnerable to the scientific method is therefore not safe from explanation. Which brings me to the point:

The scientific method itself was born out of the need to explain black boxes, and the only truly black-box algorithms out there are those to which we have limited access. This can be due to commercial or security reasons, depending on the interests of the entity deploying the AI.

“Black box” is therefore not a quality of an algorithm but of our access to it. It is a commercial barrier, not a mathematical or computational one. There should be no such description of an AI system within a company or institution.

Image taken from Matthew Francis’ blog

Conclusion

From these reflections, it is clear that the tools we may build for XAI differ greatly depending on the motivation. Instrumentally minded approaches will favor accurate, technical descriptions of the relationship between current knowledge (the data) and the decisions taken by the AI. That is how engineers can best improve and maintain the AI system.

But a high-impact company parading its instrumental XAI may be using it as a masquerade substituting for a real transfer of agency to the user. We will need more discussions on the design of intrinsic XAI systems, discussions involving more than AI engineers and data scientists. It may take another decade, but one day XAI will not be seen as an advance, merely a prerequisite to any deployment in the public sphere. And the longer we keep pretending it is a purely technical topic, the slower we will be.

If you enjoyed this article, please consider following my current series Musings on ethical data science.
