Blackbox ML classifiers visually explained
ML Interpreter demonstrates automated interpretability of machine learning models in a code-free environment.
Currently it focuses on high-performance blackbox tree ensemble models (random forest, XGBoost, and LightGBM) for binary/multi-class classification on tabular data, though the framework can be extended to other models, other prediction types (e.g. regression), and other data types such as text and images.
It provides interpretation at both global and local levels:
- At the global level, it shows feature importance
- At the local level, one can view how features affect individual predictions
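As an illustration of what a global feature-importance view conveys, here is a minimal scikit-learn sketch using impurity-based importances on a toy dataset. The dataset and model here are stand-ins for illustration, not the app's internals:

```python
# Illustrative sketch: global feature importance from a tree ensemble.
# The dataset and model are stand-ins, not what the app uses internally.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Impurity-based global importance: one score per feature, normalized to sum to 1.
importances = dict(zip(X.columns, model.feature_importances_))
for name, score in sorted(importances.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

A bar chart of these scores is the typical way such importances are surfaced in an interpretability dashboard.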
The app lets you:
- Use demo data or upload a small CSV (a demo CSV is included in the GitHub repo)
- Choose among algorithms
- Preview the data and view a classification report
- Explore global/local interpretations
- Inspect misclassified data
To view how an individual classification decision is made, one can toggle which data point to inspect.
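A toy sketch of how a single prediction can be attributed to features, using a simple mean-replacement perturbation. This only illustrates the idea of a local explanation; it is not necessarily the method the app uses, and the dataset and model are stand-ins:

```python
# Illustrative local explanation: how much each feature moves the predicted
# probability for one data point when replaced by its dataset mean.
# A simple perturbation sketch, not necessarily the app's actual method.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

i = 0  # index of the data point to inspect
base = model.predict_proba(X[i : i + 1])[0, 1]

contrib = {}
for j in range(X.shape[1]):
    x = X[i].copy()
    x[j] = X[:, j].mean()                      # neutralize one feature
    p = model.predict_proba(x.reshape(1, -1))[0, 1]
    contrib[j] = base - p                      # positive: feature pushed toward class 1

# The three features with the largest absolute effect on this prediction
top = sorted(contrib, key=lambda j: abs(contrib[j]), reverse=True)[:3]
```

Dedicated attribution methods such as SHAP values follow the same intuition but distribute the prediction over features in a principled way.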
Note: If preprocessing is needed, it is recommended to preprocess the data prior to upload, since the app does not perform automatic data cleaning.
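For example, a minimal pandas preprocessing pass before upload might look like this (the column names and values are illustrative):

```python
# Minimal preprocessing sketch before uploading a CSV to the app:
# drop rows with missing values and one-hot encode categorical columns,
# since the app does not clean data automatically.
import pandas as pd

# Tiny in-memory stand-in for a raw CSV (columns are illustrative).
df = pd.DataFrame({
    "age": [25, 31, None, 47],
    "city": ["NY", "SF", "NY", None],
    "label": [0, 1, 0, 1],
})

clean = df.dropna()                                # remove incomplete rows
clean = pd.get_dummies(clean, columns=["city"], drop_first=True)
# clean.to_csv("clean.csv", index=False)           # then upload clean.csv
```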
Run from repo

```shell
git clone [email protected]:yanhann10/ml_interpret.git
cd ml_interpret
make install
streamlit run app.py
```
Pull from Docker

```shell
docker pull yanhann10/ml-explained
docker run -p 8501:8501 yanhann10/ml-explained
```

(The `docker run` port mapping assumes the image serves the app on Streamlit's default port 8501.)
Tutorials
- ML Explainability by Kaggle
- Interpretable ML book
Feedback welcome