AI Explainability, Interpretability & Transparency
Ethics in AI
Technology needs to be ethical, or in clearer terms, it needs to be used ethically. And ethics can mean many different things: being transparent about how the technology is used to solve a customer challenge, being able to explain and interpret the results and decisions it produces, being fair and unbiased in those decisions, and being accountable for the use of that technology.
Why is regulatory compliance an elephant in the room for most organizations?
By and large it depends on the context — the problem you’re trying to solve, and how you’re solving it.
Where the AI is going to be used (context/industry/problem) and how it is going to be used (solution/use case) are important in determining the kind of regulatory compliance and scrutiny your product or service needs.
You should still follow good governance practices around transparency and data consent in all scenarios. However, the level of regulatory compliance is tiered according to the impact of the challenge you’re trying to solve — hence the financial and healthcare industries tend to be tightly regulated, given the very nature of their customer challenges.
Regulatory bodies want to understand your:
data infrastructure & systems architecture — how robust are the internal systems when it comes to data hygiene and integration? Where will the data be stored — on premises or in the cloud? Where will the AI models run — in house or via third-party vendor API integrations? How do legacy systems interact with each other?
data gathering & distribution practices — what are the sources of the data, how valid are those datasets, which data points are being extracted or requested, and how and where will they be used?
user privacy and consent — are consent checks in place? Are users notified explicitly about the usage of their data? Are those consents/terms and conditions digestible? In short, are you following privacy by design?
machine learning models & algorithms — how do we verify the validity and accuracy of the decisions made by these AI models? Are we able to accurately explain and interpret those decisions? Is a human cross-checking the results? Are domain experts involved in curating and labeling the data and in designing the AI models? Are the algorithm and the data fed to the AI fair and unbiased? Is the dataset expansive and inclusive? What modelling techniques were used?
risk management — what is the mitigation plan in the event of inaccurate or bad results? What are the ramifications of inaccurate data and results, and what is the corresponding plan of action?
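The dataset-inclusiveness question regulators ask can be made concrete with a simple representation audit over the training set. Here is a minimal sketch in Python; the attribute name, the example records, and the 5% threshold are illustrative assumptions, not anything prescribed by a regulator:

```python
from collections import Counter

def representation_audit(records, field, min_share=0.05):
    """Report each group's share of the dataset for a given attribute,
    flagging groups below a minimum representation threshold.
    The 5% default is an illustrative assumption, not a standard."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "count": n,
            "share": round(share, 3),
            "underrepresented": share < min_share,
        }
    return report

# Hypothetical example: age bands in a 100-record training set
records = (
    [{"age_band": "18-34"}] * 70
    + [{"age_band": "35-64"}] * 28
    + [{"age_band": "65+"}] * 2
)
print(representation_audit(records, "age_band"))
```

In this toy run the "65+" band holds only 2% of the records and gets flagged, which is exactly the kind of gap an auditor would want surfaced before the model ships.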
Organizations have to clear intense regulatory hurdles (especially in the financial and healthcare industries) to create safe and reliable AI-based systems, so more often than not these checks and balances can seem like an impediment to innovation.
However, rather than looking at regulation as a barrier to business success, we need to look at it as a way to evolve our business practices to be progressively ethical and humane.
Barriers (internal, external) preventing firms from delivering capabilities around AI explainability
AI is still a nascent technology; these algorithms are only as good as the data we train them on and the design of the models themselves. And these models are still a work in progress, learning from each failure and inconsistent result.
Hence AI is a black box today, but my belief is that it will slowly start to unravel as we make progress in perfecting and refining these models.
From my experience, the main barrier around explainability is a lack of awareness in the organizational setting. The need to explain the decisions made by an AI model to regulators and customers is rarely thought through at project inception, so minimal effort is put into creating explainability practices for each capability or product.
Secondly, the beauty of an AI algorithm is its ability to learn by itself — to find inferences and hidden patterns in datasets that a human mind could not easily extract. Hence explainability is a valid challenge and will remain one for quite some time, until we reach a more advanced stage of the AI ecosystem.
That being said, there are companies around the world creating algorithms that can reverse engineer the result of an AI model and get to the details of how and why a certain decision was made, what assumptions were made, and which combinations of data led to that result.
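One common family of such techniques is perturbation-based attribution: nudge one input at a time toward a neutral baseline and measure how much the model's score moves. The sketch below is model-agnostic; the toy credit-scoring function, feature names, and baseline values are invented purely for illustration and do not come from any specific vendor's product:

```python
def perturbation_importance(model, instance, baseline):
    """For each feature, replace its value with a 'neutral' baseline value
    and record how much the model's score changes. A larger absolute
    change suggests the feature mattered more for this one decision."""
    original = model(instance)
    importance = {}
    for feature in instance:
        perturbed = dict(instance)
        perturbed[feature] = baseline[feature]
        importance[feature] = original - model(perturbed)
    return importance

# Hypothetical scoring model: a hand-written linear score, illustration only
def toy_credit_model(x):
    return 0.5 * x["income"] - 0.8 * x["debt"] + 0.1 * x["years_employed"]

applicant = {"income": 4.0, "debt": 3.0, "years_employed": 5.0}
baseline = {"income": 0.0, "debt": 0.0, "years_employed": 0.0}
print(perturbation_importance(toy_credit_model, applicant, baseline))
```

For this applicant the method recovers that income pushed the score up and debt pushed it down, which is the per-decision narrative a regulator or customer would ask for. Production-grade tools apply the same idea with far more care around feature correlations and baseline choice.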
How can firms combat bias in their algorithms?
- Start by asking questions — does my model have an all-inclusive dataset that I can train it on? Does it represent different age groups, genders, ethnicities, backgrounds, circumstances, challenges?
- Have an inclusive & diverse team — is there bias in my own judgement as a data scientist when I design these models? Make sure your data science team is diverse and inclusive, because bias in models stems from the people who design them. The person who designs the model and the data fed into it are two of the most important factors determining an AI model's output.
- Expand edge cases — Make sure the model is constantly learning from diverse use cases and challenges, down to even the most minute edge cases.
- Governance template — have AI and data governance as a practice/cadence in the organization, irrespective of the project or business.
- Involve regulators during innovation — instead of regulation being a frantic emergency item, bring regulators along your innovation design process from the start, such that they understand the challenge you’re trying to solve and your end goal.
- Privacy/Fairness/Explainability by design — create an infrastructure of designing ‘right’ right from the start of the project, into the blueprint of the product.
- Macro level transparency into the model building process — similar to the above, embed transparency into your AI models through documentation and dialogue.
- Measure — try to identify the metric(s) by which you measure fairness or bias in your models and outputs. This is a hard one to crack, and not every skew is bias: some products are deliberately partial to a certain segment, since that's the very nature of their business model.
- Collaboration with domain stakeholders & academic institutions — a great way to harness the collective intelligence of experts into your product/service and identify bias in the data or AI models.
- Model management and governance — normalize this in your organization so that AI governance becomes a must-have item in your ‘definition of ready’ checklist.
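For the "Measure" point above, one widely used starting metric is the disparate-impact ratio: compare the favourable-outcome rate of the worst-off group against the best-off group. A minimal sketch follows; the group names and decision lists are hypothetical, and the 0.8 threshold is borrowed from the US "four-fifths rule" convention only as an illustrative default:

```python
def disparate_impact_ratio(outcomes, threshold=0.8):
    """outcomes: mapping of group -> list of 0/1 decisions (1 = favourable).
    Returns each group's favourable rate, the ratio of the lowest rate to
    the highest, and a flag against the given threshold. The 0.8 default
    echoes the four-fifths convention; it is not a universal standard."""
    rates = {group: sum(v) / len(v) for group, v in outcomes.items()}
    ratio = min(rates.values()) / max(rates.values())
    return {"rates": rates, "ratio": round(ratio, 3), "flagged": ratio < threshold}

# Hypothetical loan approvals by group, for illustration only
outcomes = {
    "group_a": [1, 1, 1, 0, 1],  # 80% approved
    "group_b": [1, 0, 0, 1, 0],  # 40% approved
}
print(disparate_impact_ratio(outcomes))
```

Here group_b is approved at half the rate of group_a, so the run is flagged. A single ratio will never settle whether a product's skew is unfair or simply its business model, but computing it routinely makes the conversation concrete.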
Trust is directly proportional to Transparency
If organizations can be transparent, fair, ethical and accountable in their business and technology practices, it leads to a level of trust with end customers, which in turn marshals the way towards value-added opportunities for both sides of the equation.