XWhy: eXplain Why with SMILE -- Statistical Model-agnostic Interpretability with Local Explanations
pip install xwhy
import xwhy
import xgboost
# train an XGBoost model
X, y = xwhy.datasets.boston()
model = xgboost.XGBRegressor().fit(X, y)
# explain the model's predictions using xwhy
# (same syntax works for LightGBM, CatBoost, scikit-learn, transformers, Spark, etc.)
explainer = xwhy.Explainer(model)
xwhy_values = explainer(X)
# visualize the first prediction's explanation
xwhy.plots.waterfall(xwhy_values[0])
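Under the hood, SMILE-style explanations are built from local surrogate models: the input is perturbed around the point of interest, samples are weighted by their proximity (SMILE's contribution is using statistical distance measures for this weighting), and a simple linear model is fitted to the black box's local behavior. The sketch below illustrates that core perturb-and-fit loop using only the Python standard library; it is a conceptual toy, not the xwhy API, and every name in it (`black_box`, `local_surrogate`) is illustrative.

```python
import random
import math

def black_box(x):
    # Hypothetical opaque model: a nonlinear function of two features.
    return x[0] ** 2 + 3.0 * x[1]

def local_surrogate(model, x0, n_samples=500, scale=0.5, seed=0):
    """Fit a weighted linear surrogate around x0.

    Perturbs x0 with Gaussian noise, weights each sample by an RBF
    proximity kernel, and solves the weighted least-squares normal
    equations for [intercept, w_1, ..., w_d].
    """
    rng = random.Random(seed)
    d = len(x0)
    rows, ys, ws = [], [], []
    for _ in range(n_samples):
        z = [xi + rng.gauss(0.0, scale) for xi in x0]
        dist2 = sum((zi - xi) ** 2 for zi, xi in zip(z, x0))
        rows.append([1.0] + z)           # design row [1, z1, ..., zd]
        ys.append(model(z))              # black-box prediction
        ws.append(math.exp(-dist2 / (2 * scale ** 2)))  # proximity weight
    # Normal equations A @ beta = b, with A = X^T W X and b = X^T W y.
    k = d + 1
    A = [[sum(w * r[i] * r[j] for r, w in zip(rows, ws)) for j in range(k)]
         for i in range(k)]
    b = [sum(w * r[i] * y for r, y, w in zip(rows, ys, ws)) for i in range(k)]
    # Solve by Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta  # [intercept, weight for feature 1, weight for feature 2]

beta = local_surrogate(black_box, [1.0, 2.0])
# Near x0 = (1, 2), the analytic local slopes are d/dx1 (x1^2) = 2
# and d/dx2 (3 x2) = 3, so beta[1] and beta[2] should land nearby.
```

The fitted coefficients act as the local feature attributions: a waterfall plot like the one above is essentially a visualization of these per-feature contributions for a single prediction.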
If you use XWhy in your research, we would appreciate a citation to our paper:
@article{Aslansefat2021Xwhy,
  author  = {{Aslansefat}, Koorosh and {Hashemian}, Mojgan and {Walker}, Martin and {Papadopoulos}, Yiannis},
  title   = "{SMILE: Statistical Model-agnostic Interpretability with Local Explanations}",
  journal = {arXiv e-prints},
  year    = {2021},
  url     = {https://arxiv.org/abs/...},
  eprint  = {},
}
This project is supported by the Secure and Safe Multi-Robot Systems (SESAME) H2020 Project under Grant Agreement 101017258.
If you are interested in contributing to this project, please check the contribution guidelines.
