Operationalising Transparent, Explainable and Interpretable AI Solutions: A Strategic Roadmap

Fernando Mourao
Published in SEEK blog · Mar 14, 2024

In this blog post, Fernando Mourao, SEEK's Head of Responsible AI, alongside the seasoned Data Science team of Pauline Luo, Tao Zhang, Sue Ann Chen, and Saumya Pandey, navigates the complexities organisations face in delivering transparent, interpretable and explainable AI systems. Offering hands-on insights, this post aims to help decision-makers gauge the investment needed and strategically pivot towards a new AI development approach. The text below is an example of AI augmentation, where ChatGPT-4 was employed to enhance the authors' original draft, resulting in a more engaging narrative.

Image created by OpenAI's DALL-E.

The discourse on transparency, explainability and interpretability has taken centre stage in forums focused on AI ethics, AI regulation and Responsible AI governance worldwide. There is a growing consensus on the benefits of these concepts for users, society and even businesses. Yet, the gap between organisations' declared intentions and their solid progress towards more transparent, interpretable and explainable systems is enormous.

How can we bridge the gap between the widespread endorsement of transparent, explainable and interpretable AI systems and the practical actions companies take to achieve these goals?

Rather than discussing the importance and definitions of AI Transparency, Explainability and Interpretability, we intend to shed light on the practical barriers to their operationalisation in AI-based services. In practice, challenges related to financial health, strategy, value perception and brand reputation often limit actions towards shifting the AI development paradigm. We aim to help decision-makers and AI developers quantify the required effort and plan a transition to more transparent, explainable and interpretable AI systems effectively. The following discussion is grounded in the consolidated view of a diverse and qualified team of AI specialists in the recruitment domain, and aims to inspire similar reflections across other teams and domains. While this is not an exhaustive list of factors, it provides a starting point for discussion and action.

To work through this complex topic objectively, we will publish a series of posts focused on six main hurdles to the effective operationalisation of AI transparency, explainability and interpretability:

  1. The Business Trade-Off Illusion: The first hurdle is an apparent trade-off between business interests and AI transparency. It is important to debunk this myth and demonstrate how transparent AI can enhance business value.
  2. The Hidden Cost of Paradigm Shift: Shifting towards transparent AI involves rethinking the entire design and execution process. We must explore how companies can budget for this shift and strategically plan for long-term gains despite short-term costs.
  3. Legacy System Costs: Many organisations operate on inherently opaque legacy systems. Operationalising AI transparency requires delving into the financial and logistical challenges of updating or replacing these systems with AI solutions that are more transparent, explainable and interpretable.
  4. Bridging Conceptualisation and Objective Specification: A significant gap exists between the theoretical understanding of AI transparency and its practical application. Hence, discussing methods for setting specific, measurable objectives for transparent AI is essential.
  5. The Challenge of Verifiability: Ensuring that AI systems do what they claim requires rigorous verification processes. Organisations should consider the complexities and methodologies for verifying AI transparency, explainability and interpretability.
  6. A Roadmap for Operationalisation: Finally, we will outline a strategic roadmap for companies to successfully transition to transparent, explainable and interpretable AI systems.

This initial blog post in our series will focus on overcoming the first major obstacle: The Business Trade-off Illusion. We will illuminate the misconception that business interests are at odds with AI transparency, demonstrating instead how transparent AI can amplify business value. The second post will tackle strategic and management challenges, explicitly addressing The Hidden Cost of Paradigm Shift and Legacy System Costs, highlighting the investments and strategic planning necessary for embracing transparent AI despite the hurdles of updating or replacing opaque legacy systems. The third post will examine execution-level barriers, offering insights into Bridging Conceptualisation and Objective Specification and The Challenge of Verifiability. We will discuss effective methods for setting clear, measurable goals for transparent AI and delve into the complexities of verifying AI systems' claims to transparency and explainability.

Concluding the series, we will present a comprehensive Roadmap for Operationalisation designed to guide organisations in navigating the transition towards meeting the standards of transparency, explainability, and interpretability. This roadmap aims to help organisations move from their current state to fully embrace and implement AI solutions that are not only powerful but also transparent and understandable.

Finally, while regulation is a game changer for effectively enforcing more responsible practices by AI providers, we do not need to sit idly by waiting for eventual regulation to define what we should do.

Acting in favour of the values and visions that we believe in is always an option.

As we bring you along this journey of taking small steps towards a future where AI aligns seamlessly with our values, ethics and societal expectations, let's begin by asking: how else can we accelerate the creation of more transparent, interpretable and explainable AI? We're looking forward to hearing from you!

Hurdle #1: The Business Trade-Off Illusion

The belief that business interests are inherently at odds with AI transparency and explainability is a pervasive misconception. Many executives and business leaders have historically viewed the inner workings of AI systems as proprietary assets crucial for maintaining competitive advantage, fearing that transparency could erode this value. However, this perspective overlooks transparency's substantial long-term benefits, including enhanced trust, brand reputation, stakeholder understanding and regulatory compliance. Transparent AI fosters innovation and collaboration, ultimately contributing to a more robust and sustainable business model.

Exploring the multifaceted benefits of AI transparency is beyond the scope of this series, but several key resources offer valuable insights for those seeking to deepen their knowledge. For instance, the OECD has crafted a nuanced definition of transparency and explainability, while the FEAT Principles provide a robust framework for assessing transparency. Deloitte's report sheds light on the intersection of transparency and responsibility within AI, and a recent survey provides an in-depth discussion of prevailing attitudes towards these critical issues. These resources are instrumental for any reader looking to grasp the complexities and applications of transparency in AI.

Balancing confidentiality with transparency is achievable by disclosing the principles and methodologies underpinning AI systems without revealing sensitive algorithms or data sets, mitigating risks of scepticism, mistrust and potential legal complexities. A practical example of an organisation achieving this balance could be a fintech company that uses AI for credit scoring. This company could publicly disclose the broad principles guiding its AI system, such as fairness, non-discrimination and accountability. It might also share the types of data it considers (e.g., payment history, income level, employment status) and the methodologies behind the AI decision-making process, like machine learning techniques used to predict creditworthiness. However, the company would stop short of revealing the specific algorithms or the detailed data points of individuals, which are proprietary and sensitive. This approach ensures transparency and builds trust with users and regulators while protecting the company's competitive advantage and complying with privacy laws.
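To make this concrete, the sketch below shows what such a public disclosure might look like as a simple "model card" style structure. It is a minimal illustration only: the fields, values and the hypothetical credit-scoring lender are our assumptions, not a real product or a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Public-facing transparency disclosure for an AI system.

    Discloses guiding principles, broad input categories and the
    methodology family, without exposing proprietary algorithms
    or individual-level data.
    """
    system_name: str
    purpose: str
    guiding_principles: list[str]
    input_categories: list[str]  # broad data types, not raw features
    methodology: str             # family of techniques, not the model itself
    human_oversight: str

# Hypothetical disclosure for the fintech credit-scoring example above.
credit_scoring_card = ModelCard(
    system_name="Credit Risk Scorer",
    purpose="Estimate creditworthiness to support lending decisions.",
    guiding_principles=["fairness", "non-discrimination", "accountability"],
    input_categories=["payment history", "income level", "employment status"],
    methodology="Supervised machine learning on historical repayment data",
    human_oversight="Adverse decisions are reviewed by a credit officer.",
)
```

Note what is deliberately absent: model weights, feature engineering details and individual records stay private, while users and regulators can still see what the system is for, what it consumes and who is accountable.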

Manifestation of the Misbelief

This misbelief manifests in various scenarios, particularly in sectors where technological differentiation is critical. Concerns over commercial sensitivity and the risk of competitors exploiting disclosed information are common, alongside fears that detailed explanations might enable users to manipulate system outcomes. Additionally, in heavily regulated industries like finance and healthcare, there's a tendency to err on the side of caution, potentially under-disclosing in a misguided effort to avoid non-compliance. Such practices underscore a fundamental misunderstanding of what AI transparency entails and its strategic importance.

A concrete example illustrating this misbelief can be seen in the healthcare sector, particularly with companies developing AI-driven diagnostic tools. Let's consider a company that has developed a groundbreaking AI system capable of accurately diagnosing diseases from medical imaging. The company may fear that disclosing too much about the AI's functioning and the data it was trained on could lead to two main issues: firstly, competitors might use this disclosed information to develop a similar tool, reducing the original product's competitive edge. Secondly, there's a concern that if the system's decision-making process is too transparent, it might enable users (in this case, possibly healthcare professionals or patients) to game the system, for instance, by manipulating input data to achieve a desired diagnostic outcome.

This company might, therefore, choose to be very secretive about its AI, sharing minimal information on how the diagnoses are derived. This approach is taken in the belief that it protects business interests. However, this secrecy can lead to mistrust among healthcare providers and patients, who may be reluctant to rely on a tool whose workings are a black box. Moreover, it might hinder the company's ability to pass regulatory scrutiny, as regulators increasingly demand transparency in AI tools used in healthcare to ensure they are safe and non-discriminatory.

Root Causes

Several factors contribute to this misbelief. Drawing on practical experience and discussions with AI practitioners and researchers, we would like to emphasise the six most common causes:

  1. Misunderstanding AI Transparency: There's a general lack of awareness that transparency aims to appropriately shape users' mental models, not to unveil trade secrets.
  2. Undervaluing Transparency: The business benefits of transparency, such as enhanced customer engagement and trust, are often overlooked.
  3. Complexity and Confusion: The distinctions among AI transparency, explainability and interpretability are muddled, frequently presented in overly technical terms without clear business relevance.
  4. Mismatched Expectations: Companies often misjudge what users actually need or want in terms of transparency, leading to a disconnect in communication.
  5. Fear of Exposure: There's apprehension about uncovering potentially problematic aspects of business practices that were not intended to be public.
  6. The Transparency Paradox: Paradoxically, increasing transparency can reduce trust in AI systems, because too much information can overwhelm users, making the systems appear more complex and less understandable.

Potential Impacts

The consequences of neglecting AI transparency extend across the organisation, its customers and society. While there are many potential impacts, in a nutshell we highlight five for organisations:

  1. Eroding Trust: A lack of transparency undermines customer trust, leading to disengagement, churn and low user retention.
  2. Reputational Damage: Taking a reactive rather than a proactive approach can result in large-scale reputational risk in the media.
  3. Inhibiting Control and Verifiability: Without transparency and explainability, verifying AI actions or giving users control becomes challenging, impacting user satisfaction and compliance.
  4. Impairing Decision Making: Non-transparent AI systems hinder informed and adaptive decision-making, potentially associating the brand with negative perceptions and affecting long-term market position.
  5. Legal and Regulatory Consequences: A lack of transparency in AI systems can lead to violations of data protection and privacy law.

Several high-profile organisations have faced significant challenges due to a lack of AI transparency. One notable example is a global social media company that encountered widespread mistrust and legal issues after mishandling user data, leading to heavy fines and a tarnished reputation. Similarly, a leading international recruitment firm suffered reputational damage when its opaque AI-driven applicant filtering system was found to be biased, resulting in legal scrutiny and a loss of user confidence. These incidents underscore the critical importance of transparent AI practices in maintaining trust, compliance and a positive brand image.

Strategies for Debunking the Misbelief

Finally, to debunk the misbelief that AI transparency is not aligned with business ambitions, we believe organisations should start by considering the following approaches:

  1. Enhancing AI Literacy: Educate senior management and business leaders on the value of AI transparency, explaining its strategic importance beyond mere compliance. Educational platforms offer resources to begin this journey.
  2. Developing an AI Transparency Framework: Establish a framework that connects transparency with business objectives, clarifying how transparent AI practices can serve as a competitive advantage. The framework developed by researchers at UCL for transparency in AI within the educational sector can serve as a template for other industries.
  3. Implementing Continuous Monitoring and Audit Trails: Set up systems for ongoing monitoring and create detailed audit trails for AI decisions. This ensures accountability and facilitates retrospective analyses to improve system transparency. Guidelines and best practices for AI audits can be invaluable for organisations in this regard; a minimal sketch of such an audit trail follows this list.
  4. Adopting a Goal-Oriented and User-Centric Approach: Articulate transparency goals in relation to user needs, ensuring that efforts to explain AI systems are aligned with the end goals of various stakeholders. Discussions and analyses on user-centric AI transparency, exemplified by various industry posts, can offer foundational insights for decision-makers.
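As promised in point 3, here is a minimal sketch of what an audit trail for AI decisions could look like, assuming a simple append-only JSON Lines log. The record schema, field names and the candidate-screening example are hypothetical illustrations rather than an audit standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_path, model_version, inputs, output, explanation):
    """Append one AI decision to a tamper-evident audit log (JSON Lines)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,            # consider hashing or redacting any PII
        "output": output,
        "explanation": explanation,  # e.g. top contributing factors
    }
    # Store a hash of the serialised record so later tampering is detectable.
    payload = json.dumps(record, sort_keys=True)
    record["record_hash"] = hashlib.sha256(payload.encode()).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage for a candidate-screening model:
log_decision(
    "audit_trail.jsonl",
    model_version="screener-1.4.2",
    inputs={"years_experience": 7, "skills_matched": 12},
    output={"shortlisted": True, "score": 0.83},
    explanation={"top_factors": ["skills_matched", "years_experience"]},
)
```

Even this small amount of structure supports the retrospective analyses mentioned above: each record ties a decision to a model version, its inputs and a human-readable explanation.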

Debunking the trade-off illusion between business interests and AI transparency is crucial for fostering an ecosystem where ethical, understandable and accountable AI systems contribute to sustainable business growth. By embracing transparency, organisations can build trust, encourage innovation and navigate the complexities of modern regulatory landscapes more effectively.
