Introducing the XaiPient blog

Prasad Chalasani

AI models have advanced far beyond the ability of humans to comprehend them: model developers, users, and even domain experts cannot easily tell whether a model is safe, fair, and working as intended. These concerns are paramount in high-stakes domains where models assist in decisions that affect people (e.g., lending, insurance, policing, medical diagnosis, recruiting, admissions), and skepticism and mistrust of these models have been a significant barrier to their widespread adoption.

At XaiPient our aim is to help remove this “explainability barrier” to AI adoption. We are developing a suite of novel, human-friendly explanation modules that shed light on the behavior of Machine Learning (ML) models. For example, using our technology:

- ML engineers can debug and monitor models;
- domain experts can vet and guide model development, since insights are presented in high-level, domain-specific terms with narratives and visuals;
- business users can act on explanations to improve key metrics;
- consumers can better understand what caused an adverse model decision, and what factors they might change to get a better one;
- regulators can be reassured that models are safe and unbiased.

On this blog we will occasionally write about explorations from our journey that may interest AI researchers, ML engineers, designers, or anyone curious about ways to improve human trust in AI models. In the first article, Youchun Zhang, a UI/UX Designer at XaiPient, walks through an example of how we leveraged the popular tool Streamlit to easily develop a novel way to display a heatmap of feature attributions for a tabular dataset.
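To give a flavor of that approach, here is a minimal, illustrative sketch (not the actual code from that article): it uses a pandas Styler inside a Streamlit app to shade each cell of a table of attribution scores, so the table itself reads as a heatmap. The feature names and attribution values below are invented purely for the example.

```python
# Minimal sketch of a feature-attribution heatmap in Streamlit.
# Feature names and scores are made up for illustration.
import numpy as np
import pandas as pd
import streamlit as st

# Hypothetical features and fake per-row attribution scores in [-1, 1]
features = ["income", "age", "debt_ratio", "num_accounts"]
rng = np.random.default_rng(0)
attributions = pd.DataFrame(
    rng.uniform(-1, 1, size=(10, len(features))), columns=features
)

st.title("Feature-attribution heatmap")
# Shade each cell by its attribution value; pandas' background_gradient
# uses matplotlib colormaps (red = negative pull, green = positive).
st.dataframe(
    attributions.style.background_gradient(cmap="RdYlGn", vmin=-1.0, vmax=1.0)
)
```

Saving this as `app.py` and running `streamlit run app.py` serves the heatmap as an interactive web page.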

--

Prasad Chalasani

CEO & Co-Founder, XaiPient. Ex MediaMath, Yahoo, Goldman Sachs, WorldQuant, HBK, ASU, Los Alamos Nat’l Labs. PhD CS/ML CMU. BTech CS IIT Kharagpur.