
VisuaLIME


VisuaLIME is an implementation of LIME (Local Interpretable Model-Agnostic Explanations) [1] focused on producing visual local explanations for image classifiers.

In contrast to the reference implementation, VisuaLIME exclusively supports image classification and gives its users full control over the properties of the generated explanations. It was written to produce stable, reliable, and expressive explanations at scale.

VisuaLIME was created as part of the XAI Demonstrator project.

Full documentation is available on visualime.readthedocs.io.

Getting Started

💡 If you're new to LIME, you might want to check out the Grokking LIME talk/tutorial for a general introduction prior to diving into VisuaLIME.

To install VisuaLIME, run:

pip install visualime

VisuaLIME provides two functions that package its building blocks into a reference explanation pipeline:

import numpy as np
from visualime.explain import explain_classification, render_explanation

image = ...  # a numpy array of shape (width, height, 3) representing an RGB image

def predict_fn(images: np.ndarray) -> np.ndarray:
    # a function that takes a numpy array of shape (num_of_samples, width, height, 3)
    # representing num_of_samples RGB images and returns a numpy array of
    # shape (num_of_samples, num_of_classes) where each entry corresponds to the
    # classifiers output for the respective image
    predictions = ...
    return predictions

segment_mask, segment_weights = explain_classification(image, predict_fn)

explanation = render_explanation(
    image,
    segment_mask,
    segment_weights,
    positive="green",
    negative="red",
    coverage=0.2,
)

For a full example, see the example notebook on GitHub.
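If you want to try the pipeline without a trained model at hand, the following minimal sketch runs the two functions end-to-end on a random image with a dummy two-class "classifier". The random image, the dummy `predict_fn`, and the final `save()` call (which assumes the rendered explanation behaves like a PIL image) are illustrative assumptions, not part of VisuaLIME's documented API:

import numpy as np
from visualime.explain import explain_classification, render_explanation

# Hypothetical stand-ins for a real image and model:
# a random 224x224 RGB image ...
rng = np.random.default_rng(seed=0)
image = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)

# ... and a dummy classifier that scores each image by its mean
# green-channel intensity and returns two-class "probabilities"
# of shape (num_of_samples, 2).
def predict_fn(images: np.ndarray) -> np.ndarray:
    green_score = images[:, :, :, 1].mean(axis=(1, 2)) / 255.0
    return np.stack([1.0 - green_score, green_score], axis=1)

segment_mask, segment_weights = explain_classification(image, predict_fn)

explanation = render_explanation(
    image,
    segment_mask,
    segment_weights,
    positive="green",
    negative="red",
    coverage=0.2,
)

# Assumption: the returned explanation can be saved like a PIL image.
explanation.save("explanation.png")

With a real model, only `image` and `predict_fn` change: wrap your classifier so that it accepts a batch of images as a numpy array and returns class scores as described above.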

Roadmap

  • Verify that the algorithm matches the original LIME and document differences
  • Add performance benchmarks and optimize implementation of the algorithm
  • Include utilities to assess and tune explanations for stability and faithfulness
  • Create a user guide that walks through a best practice example of implementing a fully configurable LIME explainer

References

[1] Ribeiro et al.: "Why Should I Trust You?": Explaining the Predictions of Any Classifier (arXiv:1602.04938, 2016)