This repository is dedicated to providing practical examples and educational resources for understanding Explainable AI (XAI). It contains notebooks and scripts demonstrating the use of various XAI frameworks, such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), with more to be added.
The goal is to offer a hands-on approach to learning how these tools can be used to interpret machine learning model predictions and provide insights into the decision-making process of complex algorithms.
LIME_examples/
- Contains Jupyter notebooks and Python scripts demonstrating the usage of LIME for different types of data (tabular, text, and images).

SHAP_examples/
- Contains Jupyter notebooks and Python scripts showing how to use SHAP to explain model predictions.

others/
- This directory will be updated with more XAI frameworks and examples in the future.
To get started with the examples in this repository, follow these steps:
- Clone the repository:

  ```bash
  git clone https://github.com/Naviden/XAI-Examples.git
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Navigate to the desired example directory and open the Jupyter notebooks:

  ```bash
  cd LIME_Examples/
  jupyter notebook
  ```

## Contributing

Contributions to this repository are welcome! If you would like to add more examples, improve existing ones, or suggest new XAI frameworks to include, please feel free to submit a pull request or open an issue.
## License

This project is licensed under the MIT License - see the LICENSE file for details.