AI Explainability, Interpretability & Transparency

Ethics in AI

Swapna M
Sep 11 · 5 min read

Technology needs to be ethical, or in clearer terms, it needs to be used ethically. And ethics can mean many different things: being transparent about how technology is used to solve a customer challenge, being able to explain and interpret the results and decisions made by that technology, being fair and unbiased in those decisions, and being accountable for the use of that technology.

Why is regulatory compliance an elephant in the room for most organizations?

You should still follow good governance practices around transparency and data consent in all scenarios; however, the level of regulatory compliance is tiered according to the impact of the challenge you’re trying to solve. This is why the financial and healthcare industries tend to be tightly regulated: the stakes of their customer challenges are inherently high.

Regulatory bodies want to understand your —

data infrastructure & systems architecture — how robust are the internal systems at handling data hygiene and integration? Where will the data be stored (on premise or in the cloud)? Where will the AI models run (in-house or through third-party vendor API integrations)? How do legacy systems interact with each other?

data gathering & distribution practices — what are the sources of data, how valid is each dataset, which data points are being extracted or asked for, and how and where will they be used?

user privacy and consent — are consent checks in place, are users explicitly notified about the usage of their data, and are those consents and terms and conditions digestible? In short, are you following privacy by design?

machine learning models & algorithms — how do we verify the validity and accuracy of the decisions made by these AI models? Are we able to accurately explain and interpret those decisions? Is a human cross-checking these results? Are domain experts involved during the curation and labeling of data and during the design of the AI models? Are the algorithm and the data fed to it fair and unbiased? Is the dataset expansive and inclusive? Which modelling techniques were used? (A minimal code sketch of one such explainability check follows this list.)

risk management — how will risk be mitigated in the event of inaccurate or bad results? What are the ramifications of inaccurate data or decisions, and what is the corresponding plan of action?
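
To make the explainability question above concrete, here is a minimal sketch of one such check. It assumes scikit-learn and uses permutation importance to surface which inputs a model leans on; the dataset and feature names are hypothetical placeholders, not from any real system.

```python
# Minimal sketch: which features drive a model's decisions?
# Dataset and feature names are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = ["income", "age", "tenure", "balance", "num_products"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in accuracy: a large drop
# means the model relies heavily on that feature, a first answer to
# "which data points led to this decision?"
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

This gives global, model-level insight; explaining a single decision to a single customer requires local methods, like the surrogate sketch later in this piece.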

Organizations have to clear intense regulatory hurdles (especially in the financial and healthcare industries) to create safe and reliable AI-based systems, so more often than not these checks and balances can seem like an impediment to innovation.

However, rather than looking at regulation as a barrier to business success, we need to look at it as a way to evolve our business practices to be progressively ethical and humane.

Barriers (internal, external) preventing firms from delivering capabilities around AI explainability

From my experience, the main barrier around explainability is lack of awareness within organizations. The need to explain an AI model’s decisions to regulators and customers is rarely considered at project inception, so minimal effort goes into creating explainability practices for each capability or product.

Secondly, the beauty of an AI algorithm is its ability to learn by itself, to find inferences and hidden patterns in datasets that a human mind cannot easily extract. Explainability is therefore a valid challenge, and it will remain one for quite some time, until we reach a more advanced stage of the AI ecosystem.

That being said, there are companies around the world creating algorithms that can reverse engineer the result of an AI model and get to the details of how and why a certain decision was made, what assumptions were involved, and which combinations of data led to the result.
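
One common form of this reverse engineering is a global surrogate: fit a simple, interpretable model to the black box’s own predictions and read human-auditable rules off it. The sketch below, assuming scikit-learn, is illustrative rather than any specific vendor’s technique.

```python
# Sketch of a global surrogate: approximate a black-box model with a
# shallow decision tree trained on the black box's *predictions*, then
# read human-auditable rules off the tree. Models and data are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=4, random_state=1)

black_box = GradientBoostingClassifier(random_state=1).fit(X, y)

# Train the surrogate to mimic the black box, not the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how faithfully the surrogate tracks the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")

# A human-readable approximation of "how and why" decisions are made.
print(export_text(surrogate, feature_names=["f0", "f1", "f2", "f3"]))
```

The fidelity score tells you how far to trust the surrogate’s rules as a description of the underlying model.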

How can firms combat bias in their algorithms?

  • Have an inclusive & diverse employee base — is there a bias in my own judgement as a data scientist when I design these models? Make sure your data science team is diverse and inclusive, because bias in models stems from the people who design them. The person who designs the model and the data they feed into it are two of the most important factors determining an AI model’s output.
  • Expand edge cases — make sure the model is constantly learning from diverse use cases and challenges, even the most minute edge cases.
  • Governance template — have AI and data governance as a practice/cadence in the organization, irrespective of the project or business.
  • Involve regulators during innovation — instead of regulation being a frantic emergency item, bring regulators along your innovation design process from the start, such that they understand the challenge you’re trying to solve and your end goal.
  • Privacy/Fairness/Explainability by design — create an infrastructure for designing ‘right’ right from the start of the project, built into the blueprint of the product.
  • Macro level transparency into the model building process — similar to the above, embed transparency into your AI models through documentation and dialogue.
  • Measure — try to identify the metric(s) by which you measure fairness or bias in your models and outputs (see the sketch after this list). This is a hard one to crack; note too that not every skew is a defect, since some products are deliberately partial to a certain segment because that is the very nature of their business model.
  • Collaboration with domain stakeholders & academic institutions — a great way to harness the collective intelligence of experts into your product/service and to identify bias in your data or AI models.
  • Model management and governance — normalize this in your organization so that AI governance becomes a must-have item in your ‘definition of ready’ checklist.
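
To make the ‘Measure’ item concrete, here is a minimal sketch, with made-up numbers, of two common group-fairness checks: the demographic parity difference and the disparate impact ratio between a protected group and everyone else.

```python
# Minimal sketch of two group-fairness metrics on a model's decisions.
# `group` flags a protected attribute; all numbers here are made up.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])   # model approvals
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # 1 = protected group

rate_unprot = y_pred[group == 0].mean()   # approval rate, unprotected
rate_prot   = y_pred[group == 1].mean()   # approval rate, protected

# Demographic parity difference: 0 means equal approval rates.
print(f"parity difference: {rate_prot - rate_unprot:+.2f}")

# Disparate impact ratio: values below ~0.8 often trigger review
# (the informal "four-fifths rule" used in US hiring contexts).
print(f"disparate impact ratio: {rate_prot / rate_unprot:.2f}")
```

Which metric fits depends on the product; as noted above, some skew can be legitimate, so these thresholds are conversation starters rather than verdicts.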

Trust is directly proportional to Transparency

If organizations can be transparent, fair, ethical and accountable in their business and technology practices, they will earn a level of trust with their end customers, which in turn paves the way to value-added opportunities for both sides of the equation.

Written by Swapna M

Product Lead @RBC | #Fintech #Innovation #Payments #AI | Previously Head of Product @ Klood, @Scholastic, @Accenture | https://www.swapnamalekar.com/
