Societal Implications — Explainable AI Visualization (Part 8)

Parvez Kose
Published in DeepViz
Feb 17, 2023

This article continues the background research for the study ‘Explainable Deep Learning and Visual Interpretability.’

AI has been democratized and now permeates every aspect of our social lives, from social media platforms, streaming services, and smart speakers to the healthcare system. It is deeply embedded in business decisions, both routine and high-stakes, because these systems promise superior performance and more consistent, higher-quality decision-making.

These techniques have also been embraced in regulated domains such as employment, credit, and insurance. Critical decisions, such as who gains access to healthcare, who is approved for a loan, who is offered critical opportunities, and how criminal risk is assessed, increasingly depend on the output of machine learning models. Yet there is an emerging concern that algorithms that learn from historical data can end up encoding the very human biases and prejudices they promised to alleviate.

Over the past few years, questions of algorithmic bias and fairness have become a concern for consumer advocates, regulators, policymakers, civil rights groups, and even businesses. Society is built upon a fabric of expected behavior and mutual trust. For these models to be accepted by society, people need to know the rationale behind their decisions and understand why the models decide as they do. We should also design AI systems that respect social norms and ensure their decision-making is consistent with ethical judgment and human rights.

With the growing influence of AI on the public domain, there is a critical need to better understand the decisions these models make and how they operate. In a societal context, the reasons for a decision matter a great deal. For example, a death caused intentionally (murder) and a death caused unintentionally (manslaughter) are distinct crimes in a court of law. Similarly, whether a hiring decision is based, directly or indirectly, on protected characteristics such as race or socioeconomic class has a bearing on its legality. Predictive models, however, are incapable of reasoning about or explaining their decisions.

In this section, I examine the social implications of black-box models, including how their relative opacity potentially endangers social equality, and their wide-ranging impact on various social domains. I focus my discussion on the societal implications through six critical themes:

  • Trust
  • Transparency
  • Fairness and Inclusion
  • Ethics
  • Privacy
  • Safety

I selected these themes because they are overarching concerns in the public domain. I discuss the harmful effects in each of these areas and identify emerging challenges, both present and future. In discussing these themes, I also ask underlying questions such as: what social and economic challenges arise from the rapid integration of AI, and how can a deeper understanding of AI today help us create a fair and equitable future?

The wide-ranging impact of these themes makes it essential to look at how automated decision-making systems are being applied in regulated industries today, whom they benefit, whom they undermine, and how they shape the socio-economic circumstances of individuals and society.

Trust

Humans find it easier to trust a system that explains its decisions than a black box. For AI and deep learning to be confidently rolled out by industries and governments, users demand greater transparency through explanations and justifications of decisions. This is essential not only for risk management but also for establishing greater trust among the general public, regulators, and supervisors, for example in financial services.

Transparency

Transparency is considered here at three levels: the entire model, individual components such as parameters, and the training algorithm (algorithmic transparency). Transparency is also important for ensuring that small changes in the input do not lead to large changes in the prediction.
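
To make that stability point concrete, here is a minimal sketch (my own illustration on synthetic data with scikit-learn, not part of the study) that perturbs a single input slightly and checks how much the predicted probability moves:

```python
# Minimal stability check: a small input perturbation should not
# produce a large change in the model's predicted probability.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

rng = np.random.default_rng(0)
x = X[:1]                                       # one input record
noise = rng.normal(scale=0.01, size=x.shape)    # small perturbation
p_before = model.predict_proba(x)[0, 1]
p_after = model.predict_proba(x + noise)[0, 1]
print(f"probability shift under small noise: {abs(p_after - p_before):.4f}")
```

A large shift under such a tiny perturbation would be a warning sign that the model's behavior is unstable and its explanations unreliable.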

Ethics

The concerns over the accountability and fairness of modern AI systems extend well into the ethical and legal domains. The ethical questions surrounding AI systems are wide-ranging, extending from their creation to their uses and outcomes.

Advocates of ethical AI pose critical questions such as: how do we delegate power and decision-making to AI systems, and how do we integrate specific ethical concerns into these systems? It is important to consider which values and interests are reflected in AI and how machines can recognize the values and ethical paradigms we humans care about. AI ethics spans broader social concerns about the effects of AI systems and the different choices made during their development.

Fairness and Inclusion

It is important to ensure that predictions made by models are unbiased and do not implicitly or explicitly replicate human bias or discriminate against protected groups. An interpretable model can explain why it denied a loan to a particular person, making it transparent and easier for a human to judge whether the decision rests on a learned demographic bias.
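
As an illustration of that kind of inspection, the following sketch (hypothetical feature names, synthetic data, and a plain scikit-learn logistic regression, none of which come from the study) breaks a single loan decision into per-feature contributions so a reviewer can see which features drove the outcome:

```python
# Per-feature contribution to a single decision in a linear model:
# contribution = coefficient * feature value for that applicant.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "credit_history_len", "zip_code_risk"]
rng = np.random.default_rng(1)
X = rng.normal(size=(200, len(features)))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 3] + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>20}: {c:+.3f}")
# A large contribution from a feature like zip_code_risk, which can act as
# a demographic proxy, flags a decision that warrants human review.
```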

Privacy

AI challenges current understandings of privacy and strains the laws and regulations we have in place to protect personal information. Established approaches to privacy had become less and less effective until the General Data Protection Regulation (GDPR) came into effect across the European Union on May 25, 2018.

Earlier approaches to privacy were ineffective because they relied on outdated metaphors of computing, in which adversaries were primarily human, and they failed to evolve with advances in computing. The intelligence of AI systems, by contrast, depends on ingesting as much training data as possible, an objective that runs counter to the goals of privacy. AI thus poses significant challenges to traditional efforts to govern data collection and to laws intended to reform government and industry surveillance practices. Ensuring that sensitive information in user data is protected and not compromised is therefore important for both privacy and security.

Safety

Interpretability is especially critical when safety must be assessed before a system is deployed, for example when specific errors are unacceptable even during training, as in edge-case testing for self-driving cars. This concern extends beyond autonomous vehicles to AI systems in general, where we must weigh what kind of understanding is most helpful for safety.

Causality

The use of proxy features during feature selection poses a particular risk. A system can be gamed or compromised when its inputs are merely proxies for a causal feature rather than causes of the outcome. Hence, models should rely on causal relationships where possible, and proxy features should be avoided because they make models vulnerable.

Causal inference methods focus on extracting causal relationships from data, i.e. statements that altering one variable will cause a change in another. In contrast, interpretable machine learning and most other statistical techniques generally describe correlational rather than causal relationships. Feature selection should therefore favor causally meaningful features.
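
The fragility of proxy features can be shown with a small sketch (again my own illustration on synthetic data, not part of the study): a model trained only on a proxy looks accurate while the proxy tracks the true cause, and fails once the proxy decouples at deployment time.

```python
# A model that sees only a proxy feature performs well in training,
# then collapses when the proxy stops tracking the true cause.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, proxy_strength):
    cause = rng.normal(size=n)                                       # truly causal feature
    proxy = proxy_strength * cause + rng.normal(scale=0.1, size=n)   # correlated proxy
    y = (cause + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return proxy.reshape(-1, 1), y                                   # model sees only the proxy

X_train, y_train = make_data(2000, proxy_strength=1.0)   # proxy tracks the cause
X_test, y_test = make_data(2000, proxy_strength=0.0)     # proxy decouples

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy after the proxy decouples:", model.score(X_test, y_test))
```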

The next article in this series covers interpretable and explainable approaches that address the problems of black-box models.
