Keys to Keeping AI Accountable: Transparency, Explainability, and Interpretability

Tobias Schaefer
StratyfyAI
7 min read · Dec 22, 2023


The dynamic realm of Artificial Intelligence (AI) and Machine Learning (ML) is shaping our daily interactions, and crucial decisions abound. Here, transparency, explainability, and interpretability play a pivotal role. These concepts are not mere technicalities; they are the lifelines ensuring reliability and responsibility, especially in domains where decisions have a considerable impact on human lives, such as finance and medicine. Explaining AI decisions becomes paramount, mirroring the accountability we expect of humans.

In this article, we explore the concepts of transparency, explainability, and interpretability that serve as guiding lights on the journey to AI and algorithmic fairness. We then examine how interpretable models stand tall in high-stakes decision-making, offering strong performance, troubleshooting capabilities, and a harmonious fusion of human expertise with machine learning. It is a journey into the heart of responsible AI, toward a sustainable and accountable technological landscape.

Transparency: Building Bridges to Responsibility in AI

Transparency acts as the vital link between technology and human responsibility. The concept is complex, but in the context of ML and AI it can generally be defined as the ability to trace the underlying reasoning behind a decision made by a model and to obtain human-interpretable explanations. In other words, both experts and non-experts should be able to understand how a machine arrived at a certain decision based on the relevant information it provides. Some researchers add that transparency involves several aspects: openness (providing relevant information to the right audience), comprehensibility (ensuring the audience can understand and use the information), and explicitness (the communicator consciously and effectively expressing information to achieve the first two) [1].

Yet transparency is not just a technical feature; it is the bridge connecting technology with human responsibility. On one side is transparent communication: explanations are an essential part of human interaction, trust, and responsibility. When things go wrong, or not as expected, we need someone competent to provide answers and relevant explanations. On the other side is complex technology that, if misguided, could lead to detrimental outcomes. Transparent technologies, with explainable and interpretable features, offer reliability because we know how they operate.

Imagine a financial algorithm determining lending risk or a medical AI system diagnosing ailments, both wielding immense influence on lives. Clarity about the decision-making process becomes essential. For instance, if a loan applicant seeks to understand a decision, a bank officer or manager can clarify it, elaborating on the values and other relevant details behind the outcome. This is helpful not only to users (like loan applicants), but also to deployers (such as bank officers and managers) and developers (the creators of a model), who can troubleshoot and correct the model when inconsistencies arise.

At a deeper level, the type of explanation needed to sufficiently understand a decision might vary depending on the context, stakes, and people involved. Humans do not generally provide complete causal chains but draw on what they believe are the explainee's beliefs [2], meaning they adapt explanations to the person they are talking to. We may also expect different types of explanations from a machine than from humans, likely demanding a higher standard of transparency from an ML model. This aligns with the evolution of ML models, where people can combine human and artificial intelligence to enhance efficiency, discover knowledge, and uphold responsibility and fairness.

In the realm of high-stakes decision-making, where lives are directly impacted, understanding the inner workings of a model becomes pivotal.

Critical Elements of the Fairness Equation: Explainability and Interpretability

At the heart of fair and sustainable AI technologies, explainability and interpretability take the spotlight, and for good reason. Though often used interchangeably, these terms hold distinct meanings.

Shift the focus to the end-users, the individuals affected by these decisions. Here, explainability goes beyond the complexity of algorithms, providing clear and intuitive justifications for why a particular decision was made. Picture a scenario where a patient receives a medical diagnosis: it is not enough for the AI system to reach a conclusion; it needs to explain the "why" in a way that the person whose health is at stake can comprehend. This is where explainers find their purpose, helping to construct user-friendly explanations.
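
To make the idea of an explainer concrete, here is a minimal sketch of how a decision from a simple model could be translated into plain-language reasons for an end-user. The feature names, synthetic data, and wording are hypothetical; production explainers such as SHAP or LIME are more sophisticated, but the basic principle of attributing a decision to individual inputs is the same.

```python
# A minimal, hypothetical sketch of a user-facing "explainer" for a linear model.
# Feature names and data are invented; a real system would use its own model,
# features, and phrasing.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["debt_to_income", "missed_payments", "years_of_history"]

# Synthetic lending data: higher debt and missed payments lower approval odds.
X = rng.normal(size=(500, 3))
logits = -1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.6 * X[:, 2]
y = (logits + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(applicant, baseline):
    """Rank features by how much they pushed this decision away from average."""
    contributions = model.coef_[0] * (applicant - baseline)
    for i in np.argsort(-np.abs(contributions)):
        direction = "raised" if contributions[i] > 0 else "lowered"
        print(f"{feature_names[i]} {direction} the approval score by {abs(contributions[i]):.2f}")

explain(X[0], baseline=X.mean(axis=0))
```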

However, this is not just about the end-users. Developers and deployers grappling with the intricacies of building and maintaining these systems need a different perspective. Interpretability ensures that those responsible for crafting and deploying such models can comprehend, contribute to, and, when necessary, fine-tune the decision-making framework. In addition, interpretable models allow the inclusion of expert knowledge, making them both more useful and more accurate in high-stakes situations.
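
As one hypothetical illustration of how expert knowledge can live alongside an interpretable model, consider a transparent scorecard whose weights a developer can read and adjust, combined with a rule a domain expert insists on. The rule, feature names, and weights below are invented for illustration, not a description of any particular product.

```python
# A minimal sketch of blending expert knowledge with an interpretable scorecard.
# All rules, features, and weights here are hypothetical.

# Transparent scorecard learned from data: weights are readable and editable.
weights = {"debt_to_income": -1.2, "missed_payments": -0.8, "years_of_history": 0.6}

def expert_override(applicant):
    """Rule contributed by a domain specialist, applied before the learned score."""
    if applicant.get("bankruptcy_last_12m"):
        return "decline: recent bankruptcy (expert rule)"
    return None

def decide(applicant, threshold=0.0):
    override = expert_override(applicant)
    if override:
        return override
    score = sum(weights[k] * applicant[k] for k in weights)
    return "approve" if score >= threshold else "decline"

applicant = {"debt_to_income": 0.4, "missed_payments": 1,
             "years_of_history": 7, "bankruptcy_last_12m": False}
print(decide(applicant))  # every step of this decision can be read and audited
```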

The intricate relation between explainability, interpretability, and transparency forms a harmonious equation in the realm of responsible AI. Transparency is the overarching idea: the clear and direct provision of necessary information. Interpretability and explainability contribute to transparency by, respectively, revealing how systems operate or analyzing and summarizing the processes that may have occurred. Explainers can also be used together with interpretable models (even though these are already transparent) to obtain a specific form of explanation or make it more comprehensible to users.

To better understand the relation between these terms, imagine a healthcare AI system diagnosing a patient with a life-altering condition. Transparency ensures the patient and healthcare providers understand the decision-making process, explainability articulates why a specific diagnosis was reached in a manner comprehensible to the patient, and interpretability allows developers to delve into the model’s intricacies, ensuring continuous refinement for better healthcare outcomes.

While interpretable models aim to provide intrinsic clarity, incorporating an explainer can further enhance understanding for all stakeholders, offering additional insights into the decision-making process. This optional feature ensures a deeper comprehension of the model's operations, contributing to a more informed and collaborative AI environment. Some technologies make it possible to take advantage of both.

In the realm of fair, ethical, and sustainable AI models, interpretability stands tall, offering nuanced insights into a model’s intrinsic operations. Unlike explainability, which caters to user understanding, interpretability focuses on the model itself. Together, these terms shape the landscape of responsible AI technologies, providing a panoramic view for developers, deployers, and end-users to navigate with clarity.

The Excellence of Interpretable Models: Why Have One?

Interpretable models are at the forefront of high-stakes decision-making, as their strength lies in the capacity to combine human expert knowledge with the power of machine learning. This fusion fosters results that transcend conventional boundaries.

Moreover, interpretable models bring a unique troubleshooting capability to the table. In the realm of medical imaging, for instance, they can help identify and rectify inconsistencies in X-ray interpretations. A convincing illustration of why interpretability is critical comes from a medical study [3] that highlighted a pitfall in a proprietary black-box neural network: it relied on the word "portable" in an x-ray image, keying on the equipment type (a portable x-ray machine) rather than the medical context. The issue was not uncovered until devices from other manufacturers were involved, underscoring the need for interpretable models in medical datasets prone to confounding. With this in mind, the troubleshooting ability becomes paramount in scenarios where the model's attention is redirected, producing inconsistent or false outcomes.
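
To see how that kind of troubleshooting might look in practice, here is a small hypothetical sketch on synthetic tabular data, loosely inspired by the finding in [3] but not a reproduction of it: a metadata flag that merely correlates with the label ends up carrying a large weight, and an interpretable model makes that visible immediately.

```python
# A toy sketch of how an interpretable model can expose a confounder.
# Data is synthetic; the "came_from_portable_scanner" flag stands in for any
# metadata that happens to correlate with the label in the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
disease = rng.binomial(1, 0.3, size=n)

# In this dataset, sicker patients are imaged more often on portable machines,
# so the equipment flag correlates with the diagnosis without causing it.
portable = rng.binomial(1, 0.7 * disease + 0.1 * (1 - disease))
clinical_signal = disease + rng.normal(scale=1.5, size=n)  # weak true signal

X = np.column_stack([clinical_signal, portable])
model = LogisticRegression().fit(X, disease)

for name, coef in zip(["clinical_signal", "came_from_portable_scanner"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
# A large weight on the scanner flag is a red flag: the model leans on where
# the image came from, not on the patient's condition.
```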

In the ever-evolving landscape of AI, interpretable models also facilitate a successful audit process, allowing for the continuous tracing of a model's dynamics. This ensures alignment with new data and changing environments, a crucial aspect of effective operations as well as compliance with regulatory frameworks. As regulations in the AI field continue to evolve, interpretable models offer a proactive approach to legal compliance, fostering a responsible and adaptable AI ecosystem. This also promotes user trust, a better image for the company, and more effective operations overall.

Dispelling the Myth: Unveiling the Harmony of Accuracy and Interpretability

In the world of machine learning, there is a common belief that complex black-box models are essential for top-tier predictive performance. This idea does not always hold true. Despite the prevailing notion of a trade-off between accuracy and interpretability, the difference in performance can disappear, especially when data is well structured and an appropriate modeling approach is chosen [4, 5]. This challenges the belief that the trade-off is inevitable, suggesting that simplicity can be the ultimate sophistication. As models evolve to be interpretable, the field can also address the historical lack of emphasis on interpretable ML research and training.
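
One practical way to test this claim on a given problem is simply to benchmark an interpretable model against a black-box one under the same evaluation protocol. The sketch below uses a stock scikit-learn dataset and two off-the-shelf models purely as placeholders; whether the gap is negligible depends on the data at hand, which is the point of running the comparison yourself.

```python
# A minimal sketch for checking the accuracy-interpretability "trade-off" on
# structured data: compare an interpretable model against a black-box with
# cross-validation. Dataset and model choices here are just stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

interpretable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
black_box = GradientBoostingClassifier(random_state=0)

for name, model in [("logistic regression", interpretable),
                    ("gradient boosting", black_box)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")
# On many well-structured tabular problems the simpler, interpretable model
# is competitive [4]; the numbers here will vary with the dataset.
```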

In areas where precision is non-negotiable, the necessity to understand the decision-making process is unparalleled. Interpretable models not only maintain accuracy but elevate it by providing a transparent and comprehensible framework. This dispels the notion that interpretability compromises accuracy, showcasing that these two elements coexist to ensure decisions are not only precise but also understandable and justifiable.

In the dynamic world of AI, interpretable models shine in high-stakes domains, fostering excellence by combining human expertise with machine learning. They facilitate audits, support regulatory compliance, and enhance user trust, a proactive stance in an evolving landscape. Dispelling the myth of an accuracy-transparency tradeoff, interpretable models prove that fairness and accuracy can coexist. In conclusion, these pillars of transparency, explainability, and interpretability craft a resilient foundation for responsible and impactful AI advancements.

References

[1] Nyrup, Rune. “The limits of value transparency in machine learning.” Philosophy of Science, vol. 89, no. 5, 2022, pp. 1054–1064, https://doi.org/10.1017/psa.2022.61.

[2] Coeckelbergh, Mark. AI Ethics. The MIT Press, 2020.

[3] Zech, John R., et al. “Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study.” PLOS Medicine, vol. 15, no. 11, 2018, https://doi.org/10.1371/journal.pmed.1002683.

[4] Rudin, Cynthia. “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead.” Nature Machine Intelligence, vol. 1, no. 5, 2019, pp. 206–215, https://doi.org/10.1038/s42256-019-0048-x.

[5] Schaefer, Tobias. “Navigating an Ever-Changing World with AI (& Pre).” Stratyfy, 13 Dec. 2021, stratyfy.com/navigating-an-ever-changing-world-with-ai-pre/.
