
[0.1.6] Multi-Model Support, Enhanced Visualizations, and Improved Testing

Released by @ombhojane on 26 Sep 05:43

ExplainableAI v0.1.6

This release of ExplainableAI introduces significant enhancements, including support for multiple model types, improved visualizations, and a more robust testing framework.

New Features

Multi-Model Support

  • Now supports multiple model types including Random Forest, Logistic Regression, XGBoost, and Neural Networks.
  • Automatically compares and selects the best-performing model based on cross-validation scores.
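The selection logic lives inside XAIWrapper; as a rough sketch of the idea (the helper name, scoring metric, and fold count here are illustrative assumptions, not the package's internals):

from sklearn.model_selection import cross_val_score

def select_best_model(models, X, y, cv=5, scoring="roc_auc"):
    # Score every candidate with k-fold cross-validation and keep the highest mean score.
    scores = {name: cross_val_score(est, X, y, cv=cv, scoring=scoring).mean()
              for name, est in models.items()}
    best = max(scores, key=scores.get)
    return best, scores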

Enhanced Model Comparison

  • Generates comparative analysis of all provided models.
  • Includes ROC curves and performance metrics for each model type.
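The comparison plots are generated by the package itself; to reproduce a similar ROC overlay by hand with scikit-learn (X_test and y_test are assumed to be a held-out split, and the models must already be fitted):

import matplotlib.pyplot as plt
from sklearn.metrics import RocCurveDisplay

ax = plt.gca()
for name, model in models.items():
    # Draw each fitted model's ROC curve on the same axes for side-by-side comparison.
    RocCurveDisplay.from_estimator(model, X_test, y_test, name=name, ax=ax)
ax.set_title("ROC curves across candidate models")
plt.show()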

Improved Visualizations

  • Enhanced feature importance plots with better readability.
  • Interactive visualizations using Plotly for more engaging data exploration.
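The interactive charts are produced internally; a standalone sketch of the same kind of Plotly feature-importance bar chart (the feature names and scores below are made up for illustration):

import pandas as pd
import plotly.express as px

importances = pd.DataFrame({
    "feature": ["age", "income", "tenure", "region"],
    "importance": [0.38, 0.31, 0.19, 0.12],
}).sort_values("importance")

# Horizontal bars with hover tooltips make long feature lists easier to scan.
fig = px.bar(importances, x="importance", y="feature", orientation="h",
             title="Feature importance")
fig.show()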

Robust Testing Framework

  • Comprehensive test suite covering all major functionalities.
  • Parameterized tests to ensure compatibility with various model types.
  • Edge case handling and input validation tests.
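A parameterized pytest case in the same spirit as the suite (the test body and dataset here are illustrative, not copied from the actual tests):

import pytest
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from explainableai import XAIWrapper

@pytest.mark.parametrize("model", [
    RandomForestClassifier(n_estimators=10, random_state=0),
    LogisticRegression(max_iter=200),
])
def test_fit_accepts_each_model_type(model):
    # fit should succeed for every supported estimator type.
    X, y = make_classification(n_samples=100, n_features=5, random_state=0)
    XAIWrapper().fit({"candidate": model}, X, y)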

Improvements

Core Functionality

  • Refactored XAIWrapper class for better handling of multiple models.
  • Improved data preprocessing pipeline with enhanced categorical variable handling.
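The preprocessing pipeline is internal to the package; for reference, categorical handling of this kind is typically built from scikit-learn primitives along these lines (the column names are placeholders):

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric_cols = ["age", "income"]          # placeholder numeric columns
categorical_cols = ["gender", "region"]   # placeholder categorical columns

preprocessor = ColumnTransformer([
    ("num", StandardScaler(), numeric_cols),
    # handle_unknown="ignore" keeps unseen categories at prediction time from raising errors.
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])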

Report Generation

  • Added model comparison section to the PDF report.
  • Improved layout and formatting for better readability.

LLM Integration

  • Updated prompts for the Gemini model to provide more insightful explanations of multi-model results.
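Prompt construction happens inside the package; for orientation, a minimal call to Gemini with a hand-written comparison prompt might look like this (the prompt text and API key handling are placeholders, not the package's actual prompt):

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
llm = genai.GenerativeModel("gemini-pro")

# Placeholder prompt summarizing multi-model results for the LLM to interpret.
prompt = ("Compare these classifiers and explain which to prefer and why:\n"
          "Random Forest ROC-AUC 0.91, Logistic Regression ROC-AUC 0.85, XGBoost ROC-AUC 0.93")
print(llm.generate_content(prompt).text)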

Performance Optimization

  • Improved efficiency in handling large datasets.
  • Optimized SHAP value calculations for faster processing.
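The exact optimizations are internal, but a common way to keep SHAP tractable on large datasets is a tree-specific explainer applied to a row subsample (rf_model and X_sample are assumed to be a fitted tree model and a sampled subset of rows):

import shap

# TreeExplainer is much faster than the model-agnostic KernelExplainer for tree models.
explainer = shap.TreeExplainer(rf_model)
shap_values = explainer.shap_values(X_sample)   # explain a subsample rather than every row
shap.summary_plot(shap_values, X_sample)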

Bug Fixes

  • Fixed issues with feature importance calculation for certain model types.
  • Resolved compatibility issues with the latest scikit-learn version.
  • Corrected error handling in prediction explanation function.

Introducing Notebooks

  • Added Colab notebooks demonstrating explainableai package usage.
  • Feel free to use them with your own datasets and preferred models.
  • Head to explainableai/notebooks for access.

Installation

pip install explainableai==0.1.6

Usage

from explainableai import XAIWrapper
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier
from sklearn.neural_network import MLPClassifier

# Initialize your models
models = {
    'Random Forest': RandomForestClassifier(n_estimators=100, random_state=42),
    'Logistic Regression': LogisticRegression(max_iter=1000),
    'XGBoost': XGBClassifier(n_estimators=100, random_state=42),
    'Neural Network': MLPClassifier(hidden_layer_sizes=(100, 50), max_iter=1000, random_state=42)
}

# Create XAIWrapper instance
xai = XAIWrapper()

# Fit the models and run analysis
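# X_train and y_train are assumed to be your prepared feature matrix and target labels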
xai.fit(models, X_train, y_train)
results = xai.analyze()

# Generate report
xai.generate_report()

# Make and explain predictions
prediction, probabilities, explanation = xai.explain_prediction(input_data)

Breaking Changes

  • The fit method now requires a dictionary of models instead of a single model.
  • Some visualization function signatures have been updated to accommodate multiple models.
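In practice the migration is a one-line change; the "before" call below shows how a single-model fit would have looked prior to this release (an assumption based on the change described above):

# Before (single model):
# xai.fit(RandomForestClassifier(n_estimators=100), X_train, y_train)

# After (0.1.6, dictionary of named models):
xai.fit({"Random Forest": RandomForestClassifier(n_estimators=100)}, X_train, y_train)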

We encourage users to update to this version for access to these new features and improvements. As always, please report any issues or suggestions through our GitHub issue tracker.