Explainable AI Explained

Beyond the Black Box

Ananthakrishnan G
featurepreneur
3 min read · Dec 7, 2023


If someone asked who is smarter than Elon Musk and I said "me", would you believe it? No way; the immediate response would be "prove it". But have you ever questioned an AI's answer the way you just questioned mine? AI tools and machine learning models make predictions from the data they are given, but they rarely give us any explanation for those predictions. What if they're wrong?

Well, I might lie, but an AI will not. Still, there is always a chance that a model makes a wrong prediction, and that is where Explainable AI (XAI) comes in.

Definition:

Explainable AI refers to the concept of designing and developing artificial intelligence (AI) systems in a way that their decision-making processes and outcomes can be easily understood and interpreted by humans.

No one completely knows what happens inside a model or a neural network; this is why it is called a black box, and Explainable AI is the answer to that problem. It helps us understand how an AI model arrives at its predictions, which makes the AI more trustworthy and ethically accountable.

XAI Setup

XAI consists of three main methods:

  • Prediction accuracy
  • Traceability
  • Decision understanding

The first two methods address technological requirements, while decision understanding addresses human needs.

Prediction accuracy is assessed by running simulations and comparing the XAI output with the training data. A commonly used technique here is Local Interpretable Model-Agnostic Explanations (LIME), which approximates any black-box machine learning model with a local, interpretable model in order to explain each individual prediction.
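To make LIME's core idea concrete, here is a minimal sketch in plain Python. The "black box" function, the one-feature setup, and all parameter values are illustrative assumptions, not the real LIME library: we sample points around the instance we want to explain, weight them by proximity, and fit a weighted linear surrogate whose slope acts as the local feature importance.

```python
import math
import random

# Toy "black box": a nonlinear model whose internals we pretend not to see.
def black_box(x):
    return math.sin(3 * x) + 0.5 * x

def lime_1d(model, x0, n_samples=500, kernel_width=0.2, seed=0):
    """Fit a weighted linear surrogate around x0 (LIME's core idea, 1 feature).

    Returns (intercept, slope); the slope is the local feature importance.
    """
    rng = random.Random(seed)
    # 1. Perturb the instance: sample points in the neighbourhood of x0.
    xs = [x0 + rng.gauss(0, 0.5) for _ in range(n_samples)]
    # 2. Query the black box at each perturbed point.
    ys = [model(x) for x in xs]
    # 3. Proximity kernel: samples close to x0 get more weight.
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]
    # 4. Closed-form weighted least squares for y ~ a + b*x.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    b = cov / var
    a = my - b * mx
    return a, b

a, b = lime_1d(black_box, x0=0.0)
# Near x0 = 0, sin(3x) + 0.5x behaves roughly like 3.5x,
# so the fitted slope b should come out close to 3.5.
```

The surrogate is only valid locally: fit it around a different `x0` and you get a different slope, which is exactly why LIME explains one prediction at a time.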

Traceability can constrain decision-making by restricting the features utilised to develop a model. DeepLIFT (Deep Learning Important FeaTures) is one traceability technique; it compares the activation of every neuron with that of a reference neuron in order to highlight traceability relationships and dependencies.
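The comparison against a reference can be sketched on a single linear neuron. This is an assumed toy example, not the full DeepLIFT algorithm: each input's contribution is its weight times its difference from the reference input, and those contributions sum exactly to the change in the neuron's output (DeepLIFT's "summation-to-delta" property).

```python
# Toy sketch of DeepLIFT's linear rule on one neuron (illustrative values).
weights = [0.8, -1.2, 0.5]
x = [2.0, 1.0, 3.0]      # actual input
x_ref = [0.0, 0.0, 1.0]  # reference ("baseline") input

def neuron(inputs):
    # A single linear neuron: weighted sum of its inputs.
    return sum(w * v for w, v in zip(weights, inputs))

# Contribution of each input: weight * (actual - reference).
contributions = [w * (xi - ri) for w, xi, ri in zip(weights, x, x_ref)]

# Summation-to-delta: contributions add up to the output change.
delta_out = neuron(x) - neuron(x_ref)
# contributions == [1.6, -1.2, 1.0], and their sum equals delta_out (1.4)
```

For real networks with nonlinearities, DeepLIFT propagates these differences-from-reference layer by layer with additional rules, but the bookkeeping idea is the same.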

Decision understanding is, in essence, a dashboard presented to the user so they can see how a decision was made. The dashboard can also show what would have to change for the AI to produce a different outcome. An example dashboard is described below.

This is a graphical representation of a machine learning model that predicts the possibility of stroke in patients. The red and blue bars in the graph represent a patient's medical data, and the grey line over the bars is the reference data that explains the prediction. In this case, the age, glucose level, and heart disease bars have reached or crossed the grey reference line, indicating that there is a possibility of stroke for this patient.
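The logic behind such a dashboard can be sketched in a few lines. The feature names, patient values, and reference thresholds below are made-up illustrative numbers, not real medical data: we simply flag every feature whose value has reached or crossed its reference line.

```python
# Hypothetical stroke-risk dashboard logic (illustrative values only).
patient = {"age": 72, "glucose": 210, "heart_disease": 1, "bmi": 24}
reference = {"age": 65, "glucose": 140, "heart_disease": 1, "bmi": 30}

# Flag each feature that has reached or crossed its reference line.
risk_factors = [f for f, value in patient.items() if value >= reference[f]]
# risk_factors -> ['age', 'glucose', 'heart_disease']
```

Presenting the flagged features alongside their thresholds is what turns a bare prediction ("stroke risk: high") into an explanation the patient and clinician can actually interrogate.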

Application of XAI

  • Healthcare — helps clinicians make accurate diagnoses and treatment decisions.
  • Finance — explains credit scoring models, fraud detection algorithms, and investment strategies, building trust with clients.
  • Autonomous Vehicles — used in self-driving cars to ensure passenger safety.
  • Human Resources — gives the reason a candidate's resume was rejected.
  • Social Media Suggestions — explains content moderation decisions on social media platforms.

Conclusion

Explainable AI (XAI) is a significant leap in the implementation of artificial intelligence. Transparency and interpretability are becoming increasingly important as AI systems become more integrated into many sectors. XAI not only answers doubts about "black box" models but also promotes trust and accountability in AI applications.

By adopting and improving XAI, we create a future where AI decisions are not only powerful but also transparent, fair, and understandable. This allows artificial intelligence to integrate smoothly into our daily lives.



I'm a Bachelor of Technology student at Crescent Institute of Science and Technology, a programming enthusiast, graphic designer, and budding DevOps engineer.