<section class="section">
  <div class="container">
    <h2 class="title is-3">Abstract</h2>
    <p>This study explores the implementation of SMILE for point cloud data, offering enhanced robustness and
      interpretability, particularly when the Anderson-Darling distance is used. The approach demonstrates superior
      performance in terms of fidelity loss, R<sup>2</sup> scores, and robustness across various kernel widths,
      perturbation numbers, and clustering configurations.</p>
    <p>Additionally, a stability analysis using the Jaccard index establishes a benchmark for model stability in point
      cloud classification, identifying dataset biases crucial for safety-critical applications such as autonomous
      driving.</p>
  </div>
</section>
<section class="section" id="BibTeX">
  <div class="container">
    <h2 class="title">BibTeX</h2>
    <pre><code>@article{aslansefat2024pointcloud,
  title={Explainability of Point Cloud Neural Networks Using SMILE: Statistical Model-Agnostic Interpretability with Local Explanations},
  author={Aslansefat, Koorosh and others},
  publisher={IEEE}
}
</code></pre>
  </div>
</section>

<footer class="footer">
  <div class="content has-text-centered">
    <p>Licensed under <a href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA 4.0</a>. <a href="https://github.com/koo-ec/xwhy">Source Code</a>.</p>
  </div>
</footer>
```
# (same syntax works for LightGBM, CatBoost, scikit-learn, transformers, Spark, etc.)
explainer = xwhy.Explainer(model)
xwhy_values = explainer(X)

# visualize the first prediction's explanation
xwhy.plots.waterfall(xwhy_values[0])
```
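The snippet above assumes a fitted `model` and a feature matrix `X` already exist. As a minimal sketch of what those could be (the toy dataset and the random-forest estimator here are illustrative stand-ins, not part of the X-Why API), using scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# toy tabular data: 200 samples, 5 features, binary target
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# any fitted estimator exposing predict/predict_proba should be usable here
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```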
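Conceptually, SMILE follows the LIME recipe (perturb the instance, weight the perturbations by their closeness to it, fit a weighted linear surrogate) but derives the weights from a statistical distance rather than a pointwise one. Below is a rough, self-contained sketch of that idea; `smile_explain`, the Gaussian perturbation scheme, and the Wasserstein-1 stand-in for the Anderson-Darling distance are all my simplifications, not the library's code:

```python
import numpy as np

def wasserstein1(a, b):
    # 1-D empirical Wasserstein distance: mean gap between sorted samples
    return np.abs(np.sort(a) - np.sort(b)).mean()

def smile_explain(predict_fn, x, n_perturb=500, kernel_width=0.75, seed=0):
    """Local linear surrogate with statistical-distance sample weights (sketch)."""
    rng = np.random.default_rng(seed)
    # perturb the instance with Gaussian noise
    Z = x + rng.normal(scale=0.3, size=(n_perturb, x.size))
    y = predict_fn(Z)
    # kernel weights from a statistical distance instead of a pointwise one
    dist = np.array([wasserstein1(x, z) for z in Z])
    w = np.sqrt(np.exp(-(dist / kernel_width) ** 2))
    # weighted least squares with an intercept; slopes form the local explanation
    A = np.hstack([np.ones((n_perturb, 1)), Z])
    coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    return coef[1:]
```

On a noiseless linear model the surrogate recovers the model's own coefficients, which makes for a quick sanity check of the fitting step.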
## Citations

If you use X-Why in your research, a citation to our paper would be appreciated:

```
@article{Aslansefat2021Xwhy,
  author  = {{Aslansefat}, Koorosh and {Hashemian}, Mojgan and {Walker}, Martin and {Papadopoulos}, Yiannis},
  title   = "{SMILE: Statistical Model-agnostic Interpretability with Local Explanations}",
  journal = {arXiv e-prints},
  year    = {2021},
  url     = {https://arxiv.org/abs/...},
  eprint  = {},
}
```

## Acknowledgment

This project is supported by the [Secure and Safe Multi-Robot Systems (SESAME)](https://www.sesame-project.org) H2020 Project under Grant Agreement 101017258.

## Contribution

If you are interested in contributing to this project, please check the [contribution guidelines](https://github.com/koo-ec/xwhy/blob/main/docs/contribute/contributing.md).