Feat: DeepSHAP-based model explainer with configurable visualisation outputs#7
Open
samrat-rm wants to merge 3 commits into Orion-AI-Lab:main from
…ensure precise attribution values, especially for layers that are not natively supported or recognized by the DeepExplainer.
This PR implements a DeepSHAP-based model explainer for analysing feature (band) contributions in testing models. It helps us better understand feature dominance and make more informed decisions about model improvements.
This explainer is also intended to serve as a diagnostic tool for the Unimodal problem, helping us understand and mitigate IR band dominance.
Support for training-time explainability will be added after review and feedback.
DeepExplainer
An implementation of Deep SHAP, a faster (but approximate) method to estimate SHAP values for deep learning models. It leverages connections between SHAP and DeepLIFT to efficiently compute feature attributions.
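For intuition about what Deep SHAP approximates, the underlying Shapley values can be computed exactly by brute force on a tiny model. This is an illustrative sketch only, not the PR's implementation: the model `f`, its weights, and the baseline are made up, and features outside a coalition are simply replaced by their baseline values.

```python
from itertools import combinations
from math import factorial

def f(x):
    # Hypothetical 3-band model: a weighted sum plus one interaction term.
    w = [0.5, 1.0, 2.0]
    return sum(wi * xi for wi, xi in zip(w, x)) + 0.3 * x[0] * x[2]

def exact_shap(f, x, baseline):
    """Brute-force Shapley values over all feature coalitions."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley coalition weight |S|!(n-|S|-1)!/n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

x, baseline = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
phi = exact_shap(f, x, baseline)
# Additivity: attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (f(x) - f(baseline))) < 1e-9
```

This exact computation is exponential in the number of features; Deep SHAP's value is that it approximates these attributions efficiently for deep networks.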
Limitations:
DeepExplainer does not fully support certain layers:
As a result:
GradientExplainer
An implementation of expected gradients, combining integrated gradients with sampling over background data to approximate SHAP values. It estimates feature attributions by averaging gradients across inputs.
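The averaging described above can be sketched in a few lines of pure Python. This is a toy illustration under stated assumptions, not the PR's code: the model `f` is hypothetical, gradients are taken by central differences, and the path integral uses a midpoint rule.

```python
import random

def f(x):
    # Hypothetical model: one squared band plus a linear band.
    return x[0] ** 2 + 2.0 * x[1]

def grad(f, x, eps=1e-5):
    # Central-difference gradient of f at x.
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

def expected_gradients(f, x, backgrounds, steps=50):
    """Integrated gradients averaged over sampled background baselines."""
    n = len(x)
    phi = [0.0] * n
    for b in backgrounds:
        for s in range(steps):
            alpha = (s + 0.5) / steps  # midpoint rule on [0, 1]
            point = [b[i] + alpha * (x[i] - b[i]) for i in range(n)]
            g = grad(f, point)
            for i in range(n):
                phi[i] += g[i] * (x[i] - b[i])
    return [p / (len(backgrounds) * steps) for p in phi]

random.seed(0)
backgrounds = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(20)]
x = [2.0, 1.0]
phi = expected_gradients(f, x, backgrounds)
# Completeness holds in expectation: sum(phi) ~ f(x) - mean of f over backgrounds.
```

The library implementation differs (it samples alpha and backgrounds jointly and uses the model's own backward pass), but the attribution being estimated is the same quantity.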
As a result:
Features
Current Limitations
1. Background Sampling Sensitivity
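This sensitivity is easy to demonstrate on a model with an interaction term: the same input yields different per-feature attributions depending on the chosen baseline. A toy sketch (hypothetical model, finite-difference gradients, midpoint-rule integrated gradients; not the PR's code):

```python
def f(x):
    # Hypothetical interaction between two bands.
    return x[0] * x[1]

def integrated_gradients(f, x, b, steps=100):
    # Midpoint-rule integrated gradients with a central-difference gradient.
    n, eps = len(x), 1e-5
    phi = [0.0] * n
    for s in range(steps):
        alpha = (s + 0.5) / steps
        p = [b[i] + alpha * (x[i] - b[i]) for i in range(n)]
        for i in range(n):
            pp, pm = list(p), list(p)
            pp[i] += eps
            pm[i] -= eps
            g = (f(pp) - f(pm)) / (2 * eps)
            phi[i] += g * (x[i] - b[i]) / steps
    return phi

x = [1.0, 1.0]
low = integrated_gradients(f, x, [0.0, 0.0])   # zero baseline -> [0.5, 0.5]
high = integrated_gradients(f, x, [1.0, 0.0])  # shifted baseline -> [0.0, 1.0]
# Same input, same model: the background choice alone moves the attribution
# of band 0 from 0.5 to 0.0.
```

Both results satisfy completeness against their own baseline, which is why a representative background sample matters more than any single "correct" baseline.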
Possible Solutions:
Possible Improvements / Next Steps
Usage:

```
python model_explainer.py -e <dataset> -f <format> -m <explainer method>
```

Example:
AI Disclosure:
Used AI assistance to refine code structure, improve readability, and format documentation.
All logic, implementation decisions, and validations were reviewed and verified manually.