LIMEcraft: handcrafted superpixel selection and inspection for Visual eXplanations
The lack of a possibility to interact with explanations makes it difficult to verify and understand exactly how an ML model works. The LIMEcraft algorithm addresses this problem.
We recently published the article “LIMEcraft: handcrafted superpixel selection and inspection for Visual eXplanations” in the Machine Learning journal.
Our experience shows that automatically selected superpixels are often misplaced and hard for humans to interpret. LIMEcraft is a new explanation process that allows humans to interact with the explanation.
Explanations are especially important when using Artificial Intelligence in medicine. This is one of the main focuses of xLungs (Responsible Artificial Intelligence for Lung Diseases), a project carried out by MI2DataLab together with radiologists at Warsaw University of Technology. In that project, the goal is to explain models for lung image classification using visual XAI methods, e.g. LIMEcraft, in combination with ontologies that encode domain knowledge.
LIMEcraft concept
Our method, LIMEcraft, is based on Local Interpretable Model-Agnostic Explanations (LIME), but it adds the possibility of inspecting image features, such as color, shape, position, and rotation, when creating Visual eXplanations. It also allows handcrafted superpixel selection, which removes the lack of interactivity in existing explanation methods and improves explanation quality for complex image instances.
Where LIMEcraft can be useful
Without careful verification, we cannot be sure that an object of any color will be properly recognized by the model and assigned to the correct class. Inspecting image features may help us investigate possible bias based on, e.g., skin color.
LIMEcraft may be particularly useful for models trained on medical data (images), where the semantic boundaries of the examined objects (e.g. skin or lung lesions) are crucial in the doctor’s diagnosis; therefore, the boundaries should also be crucial for the neural network while making a diagnosis. Similarly, for partially covered or noisy images, the automatic segmentation algorithms used by methods such as LIME divide the images into superpixels in an unacceptable way. The lack of semantic understanding of the image makes automatic algorithms unsafe for many medical images, and for images taken in the natural environment where some objects are partially obscured by others.
LIMEcraft (like LIME) is a model-agnostic explanation algorithm: the architecture of the model does not affect whether it can be explained, so LIMEcraft works with models of any architecture.
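In practice, model-agnosticism means the explainer only needs a function that maps an image to a prediction, never the model internals. A minimal sketch of such a wrapper (the names below are illustrative, not the actual LIMEcraft API):

```python
import numpy as np

def as_predict_fn(model):
    """Wrap any object with a .predict method into an image -> score
    function. Works the same for Keras, PyTorch, or sklearn wrappers,
    which is exactly what "model-agnostic" means here."""
    return lambda image: float(model.predict(image[np.newaxis, ...])[0])

class DummyModel:
    # Stands in for a trained model of any architecture.
    def predict(self, batch):
        # Toy score: mean pixel intensity of each image in the batch.
        return batch.mean(axis=(1, 2, 3))
```

Any explanation routine can then be given `as_predict_fn(model)` and remain oblivious to what sits behind it.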
How LIMEcraft works
The image is divided into segments called “superpixels”. The user selects superpixels with a drawing tool that supports irregular paths and shapes; alternatively, LIMEcraft accepts an uploaded, prepared mask of superpixels. After manual or predefined superpixel selection, we can choose how many subsegments the areas “inside” and “outside” the selected region will be divided into. These automatic subsegments are generated by image segmentation based on the K-means clustering algorithm.
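The subsegmentation step can be sketched as follows: given a user-drawn binary mask, K-means is run separately on the pixels inside and outside it. This is a simplified illustration under the assumption that pixels are clustered on position plus intensity; the actual LIMEcraft implementation may use different features.

```python
import numpy as np
from sklearn.cluster import KMeans

def subsegment(image, mask, n_inside=3, n_outside=3, seed=0):
    """Split the user-selected region (mask == True) and the rest of
    the image into K-means subsegments. Returns an integer label map:
    labels 0..n_inside-1 inside the mask, the rest outside."""
    h, w = image.shape[:2]
    rows, cols = np.mgrid[0:h, 0:w]
    # Feature vector per pixel: (row, col, intensity/color channels)
    feats = np.column_stack([rows.ravel(), cols.ravel(),
                             image.reshape(h * w, -1)])
    labels = np.zeros(h * w, dtype=int)
    inside = mask.ravel()
    km_in = KMeans(n_clusters=n_inside, n_init=10, random_state=seed)
    labels[inside] = km_in.fit_predict(feats[inside])
    km_out = KMeans(n_clusters=n_outside, n_init=10, random_state=seed)
    labels[~inside] = n_inside + km_out.fit_predict(feats[~inside])
    return labels.reshape(h, w)
```

The label map plays the role of the superpixel segmentation that the perturbation step operates on.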
Then, a dataset of perturbed images with some superpixels occluded is generated, and each perturbed instance is scored with the model’s probability of belonging to a class. A linear model is trained on this locally weighted dataset. The highest positive and negative weights for a given class are shown on the original image by adding, respectively, a green or a red semitransparent mask over the most important superpixels.
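The perturbation-and-surrogate step above can be sketched as a LIME-style loop: occlude random subsets of superpixels, query the model, and fit a weighted linear model whose coefficients score each superpixel. This is a simplified sketch (occlusion color, kernel width, and the regression variant are assumptions, not LIMEcraft's exact choices):

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain(image, segments, predict_fn, n_samples=200, seed=0):
    """Return {superpixel_id: weight} from a local linear surrogate."""
    rng = np.random.default_rng(seed)
    seg_ids = np.unique(segments)
    # Binary design matrix: 1 = superpixel kept, 0 = occluded
    Z = rng.integers(0, 2, size=(n_samples, len(seg_ids)))
    Z[0] = 1  # include the unperturbed image in the sample
    preds = []
    for z in Z:
        perturbed = image.copy()
        for s, keep in zip(seg_ids, z):
            if not keep:
                perturbed[segments == s] = 0  # occlude with black
        preds.append(predict_fn(perturbed))
    preds = np.asarray(preds)
    # Weight samples by proximity to the original (all-ones) instance
    dist = 1 - Z.mean(axis=1)
    weights = np.exp(-(dist ** 2) / 0.25)
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return dict(zip(seg_ids, surrogate.coef_))
```

A positive weight for a superpixel means occluding it lowers the class score (green overlay); a negative weight means the opposite (red overlay).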
Additionally, the interface we have created makes it possible to analyze how image perturbations affect the model’s prediction. The user can edit the color, shape, and position of the selected area, and then run the edited image through the LIMEcraft algorithm.
It is crucial to see how the model responds to a change in individual image elements. For example, for microscope images, the shapes and colors of cells may be relevant to a classification task. However, position and rotation should not significantly affect the prediction of the trained model. By running such experiments using LIMEcraft, we can observe whether the model has learned to recognize objects by the correct features.
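The position/rotation experiments described above can be sketched as a small invariance check: transform the image, re-query the model, and flag predictions that move more than a tolerance. This is a hypothetical helper illustrating the idea, not part of the LIMEcraft interface:

```python
import numpy as np

def invariance_check(image, predict_fn, tol=0.05):
    """Probe whether the prediction is stable under position and
    rotation changes, as it should be for tasks like cell
    classification. Returns {transform_name: is_stable}."""
    baseline = predict_fn(image)
    variants = {
        "shift": np.roll(image, shift=3, axis=1),  # move content right
        "rot90": np.rot90(image),                  # rotate 90 degrees
        "flip": np.fliplr(image),                  # mirror horizontally
    }
    return {name: abs(predict_fn(v) - baseline) <= tol
            for name, v in variants.items()}
```

If a supposedly position-invariant model fails these checks, it has likely learned location shortcuts rather than the objects themselves.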
LIMEcraft not only for computer scientists?
The undeniable advantage of the dashboard is that it can be used by people unfamiliar with programming, because the interface is intuitive and user-friendly. It enables explaining the model’s predictions without knowing the model architecture or other technical details. This makes the algorithm and the dashboard all the more worth getting acquainted with.
The content of this article is based on the work (Hryniewska, Grudzień, & Biecek, 2022). To read more about LIMEcraft please visit: https://link.springer.com/article/10.1007/s10994-022-06204-w and to test how it works: https://limecraft.mi2.ai. If you use any part of this article, please cite:
Hryniewska, W., Grudzień, A. & Biecek, P. LIMEcraft: handcrafted superpixel selection and inspection for Visual eXplanations. Mach Learn (2022). https://doi.org/10.1007/s10994-022-06204-w
If you are interested in other posts about explainable, fair, and responsible ML, follow #ResponsibleML on Medium.