# interpretability
Here are 266 public repositories matching this topic...
A game theoretic approach to explain the output of any machine learning model.
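This is the description of the SHAP library. Below is a minimal sketch of its usual tree-explainer workflow; the dataset and random-forest model are placeholders, assumed only for illustration.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Placeholder data and model; any tree ensemble with a standard sklearn/XGBoost API works.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)      # Shapley-value estimator specialised for tree ensembles
shap_values = explainer.shap_values(X)     # per-feature attribution for every prediction

shap.summary_plot(shap_values, X, show=False)  # global view of which features drive the output
```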
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
machine-learning
data-mining
awesome
deep-learning
awesome-list
interpretability
privacy-preserving
production-machine-learning
mlops
privacy-preserving-machine-learning
explainability
responsible-ai
machine-learning-operations
ml-ops
ml-operations
privacy-preserving-ml
large-scale-ml
production-ml
large-scale-machine-learning
Updated May 1, 2021
A collection of infrastructure and tools for research in neural network interpretability.
Updated Mar 19, 2021 - Jupyter Notebook
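This description matches the Lucid feature-visualization tooling. The sketch below is adapted from memory of its tutorial notebooks (TensorFlow 1.x era), so treat the model class and the layer/channel objective string as assumptions rather than guaranteed current API.

```python
# Feature visualization: optimise an input image that maximally activates one channel.
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

model = models.InceptionV1()      # pretrained GoogLeNet from the Lucid model zoo
model.load_graphdef()

# "mixed4a_pre_relu:476" is the example objective used in Lucid's tutorials.
_ = render.render_vis(model, "mixed4a_pre_relu:476")
```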
Many Class Activation Map methods implemented in Pytorch for CNNs and Vision Transformers. Including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM and XGrad-CAM
deep-learning
grad-cam
pytorch
visualizations
interpretability
class-activation-maps
interpretable-deep-learning
interpretable-ai
score-cam
vision-transformers
Updated May 5, 2021 - Python
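A minimal sketch of running Grad-CAM with this package. The constructor and call arguments (target_layer vs. target_layers, target_category vs. targets) have shifted across releases, so the exact keywords here are an assumption tied to roughly this release window; check the version you install.

```python
import torch
from torchvision.models import resnet50
from pytorch_grad_cam import GradCAM

model = resnet50(pretrained=True).eval()
input_tensor = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed image batch

# Argument names may differ in newer releases (target_layers=[...], targets=[...]).
cam = GradCAM(model=model, target_layer=model.layer4[-1])
grayscale_cam = cam(input_tensor=input_tensor, target_category=281)  # 281 = "tabby cat" in ImageNet

print(grayscale_cam.shape)   # class-activation heatmap(s) at input resolution
```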
Model interpretability and understanding for PyTorch
Updated May 4, 2021 - Python
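This is Captum. A short sketch of attributing a prediction with its Integrated Gradients implementation; the tiny feed-forward model and random inputs are stand-ins for illustration only.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Toy 3-class classifier, purely for demonstration.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3)).eval()
inputs = torch.rand(2, 4)

ig = IntegratedGradients(model)

# Attribute class 1's score to each input feature, with a convergence check.
attributions, delta = ig.attribute(inputs, target=1, return_convergence_delta=True)
print(attributions)
print(delta)
```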
A curated list of awesome machine learning interpretability resources.
python
data-science
machine-learning
data-mining
awesome
r
awesome-list
transparency
fairness
accountability
interpretability
interpretable-deep-learning
interpretable-ai
interpretable-ml
explainable-ml
xai
fatml
interpretable-machine-learning
iml
machine-learning-interpretability
Updated Apr 24, 2021
python
machine-learning
transparency
lime
interpretability
ethical-artificial-intelligence
explainable-ml
shap
explainability
Updated May 5, 2021 - HTML
Algorithms for monitoring and explaining machine learning models
Updated May 5, 2021 - Python
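This entry matches Seldon's alibi library. A sketch of a tabular Anchors explanation follows; the dataset and classifier are placeholders, and the exact attributes on the returned explanation object can vary slightly between releases.

```python
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Anchors: find a minimal rule that "anchors" the prediction with high precision.
explainer = AnchorTabular(clf.predict_proba, feature_names=data.feature_names)
explainer.fit(data.data)

explanation = explainer.explain(data.data[0])
print(explanation.anchor)      # human-readable rule, e.g. feature thresholds
print(explanation.precision)   # how reliably the rule implies the same prediction
```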
[ICCV 2017] Torch code for Grad-CAM
deep-learning
heatmap
grad-cam
convolutional-neural-networks
interpretability
iccv17
visual-explanation
Updated Mar 3, 2017 - Lua
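The repository above is the original Lua Torch implementation. As a generic illustration of the Grad-CAM idea (not code from that repo), the PyTorch sketch below weights the last convolutional block's activations by the gradient of the class score and applies a ReLU; the model and input are placeholders.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(pretrained=True).eval()
feats, grads = {}, {}

def fwd_hook(_, __, output): feats["a"] = output.detach()
def bwd_hook(_, grad_in, grad_out): grads["a"] = grad_out[0].detach()

layer = model.layer4[-1]                  # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)           # stand-in for a normalised image
score = model(x)[0].max()                 # top-class logit
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
cam = F.relu((weights * feats["a"]).sum(dim=1))       # channel-weighted activation map
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear", align_corners=False)
print(cam.shape)                          # (1, 1, 224, 224) heatmap over the input
```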
moDel Agnostic Language for Exploration and eXplanation
black-box
data-science
machine-learning
predictive-modeling
fairness
interpretability
explainable-artificial-intelligence
explanations
explainable-ai
explainable-ml
xai
model-visualization
interpretable-machine-learning
iml
dalex
responsible-ai
responsible-ml
explanatory-model-analysis
Updated May 5, 2021 - Python
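A sketch using the Python port of DALEX (the dalex package), assuming a fitted scikit-learn model; model_parts and predict_parts are the two entry points its documentation leads with, and the dataset here is a placeholder.

```python
import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = dx.Explainer(model, X, y, label="rf")

explainer.model_parts().plot(show=False)               # permutation-based variable importance
explainer.predict_parts(X.iloc[[0]]).plot(show=False)  # break-down explanation for one observation
```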
Interpretability Methods for tf.keras models with Tensorflow 2.x
Updated Apr 19, 2021 - Python
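A sketch of a Grad-CAM explanation with tf-explain on a stock Keras model; treat the exact explain() signature as an assumption for this release window, and note that layer_name must name a convolutional layer in your own network ("Conv_1" is MobileNetV2's last convolutional layer).

```python
import numpy as np
import tensorflow as tf
from tf_explain.core.grad_cam import GradCAM

model = tf.keras.applications.MobileNetV2(weights="imagenet")
img = np.random.rand(1, 224, 224, 3).astype("float32")   # stand-in for a preprocessed image

explainer = GradCAM()
grid = explainer.explain((img, None), model, class_index=281, layer_name="Conv_1")
explainer.save(grid, ".", "grad_cam.png")                 # writes the heatmap overlay to disk
```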
Federated Learning Library: https://fedml.ai
machine-learning
privacy
computer-vision
semi-supervised-learning
transfer-learning
wireless-communication
distributed-optimization
interpretability
neural-architecture-search
federated-learning
continual-learning
vertical-federated-learning
non-iid
decentralized-federated-learning
hierarchical-federated-learning
adversarial-attack-and-defense
communication-efficiency
straggler-problem
computation-efficiency
incentive-mechanism
Updated Apr 28, 2021
A collection of anomaly detection methods (iid/point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule-mining, and descriptions for diversity/explanation/interpretability. Analysis of incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Networks.
streaming
timeseries
time-series
lstm
generative-adversarial-network
gan
rnn
autoencoder
ensemble-learning
trees
active-learning
concept-drift
graph-convolutional-networks
interpretability
anomaly-detection
adversarial-attacks
explanation
anogan
unsupervised
nettack
Updated Sep 25, 2020 - Python
XAI - An eXplainability toolbox for machine learning
machine-learning
ai
evaluation
ml
artificial-intelligence
upsampling
bias
interpretability
feature-importance
explainable-ai
explainable-ml
xai
imbalance
downsampling
explainability
bias-evaluation
machine-learning-explainability
xai-library
Updated Apr 23, 2021 - Python
Visualization toolkit for neural networks in PyTorch!
visualization
machine-learning
deep-learning
cnn
pytorch
neural-networks
interpretability
explainability
Updated Apr 27, 2021 - HTML
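Toolkits like the one above wrap techniques such as gradient-based saliency. As a generic illustration (not that library's own API), a vanilla saliency map is just the gradient of the predicted class score with respect to the input; the model and image below are placeholders.

```python
import torch
from torchvision.models import resnet18

model = resnet18(pretrained=True).eval()
x = torch.randn(1, 3, 224, 224, requires_grad=True)   # stand-in for a normalised image

score = model(x)[0].max()       # score of the top predicted class
score.backward()

saliency = x.grad.abs().max(dim=1)[0]   # per-pixel importance: max |gradient| over RGB channels
print(saliency.shape)                    # (1, 224, 224)
```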
Interesting resources related to XAI (Explainable Artificial Intelligence)
Updated Apr 23, 2021 - R
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
python
data-science
machine-learning
data-mining
h2o
gradient-boosting-machine
transparency
decision-tree
fairness
lime
accountability
interpretability
interpretable-ai
interpretable-ml
xai
fatml
interpretable
interpretable-machine-learning
iml
machine-learning-interpretability
Updated Feb 10, 2021 - Jupyter Notebook
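One technique repositories like this cover is constraining a gradient boosting model to be monotonic in chosen features, which makes its behaviour easier to reason about and audit. The sketch below uses XGBoost's monotone_constraints as one widely available implementation (the repository itself leans on H2O and related tooling); the synthetic data is a placeholder.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = xgb.XGBRegressor(
    n_estimators=200,
    monotone_constraints="(1,-1)",   # force increasing in feature 0, decreasing in feature 1
)
model.fit(X, y)
print(model.predict(X[:5]))
```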
Public facing deeplift repo
sensitivity-analysis
saliency-map
interpretability
guided-backpropagation
interpretable-deep-learning
deeplift
integrated-gradients
Updated Nov 11, 2020 - Python
Code for the TCAV ML interpretability project
Updated Apr 30, 2021 - Jupyter Notebook
H2O.ai Machine Learning Interpretability Resources
python
data-science
machine-learning
data-mining
h2o
xgboost
transparency
jupyter-notebooks
fairness
accountability
interpretability
interpretable-ai
interpretable-ml
explainable-ml
mli
xai
fatml
interpretable-machine-learning
iml
machine-learning-interpretability
Updated Dec 12, 2020 - Jupyter Notebook
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
pytorch
neural-networks
imagenet
image-classification
pretrained-models
decision-trees
cifar10
interpretability
pretrained-weights
cifar100
tiny-imagenet
explainability
neural-backed-decision-trees
Updated Mar 20, 2021 - Python
Highly cited and top-conference papers (with code) from recent years on the interpretability of deep neural network models
nlp
awesome
computer-vision
deep-learning
neural-network
chainer
tensorflow
matlab
keras
torch
pytorch
awesome-list
papers
cvpr
iccv
iclr
interpretability
icml
eccv
neurips
Updated Sep 29, 2020
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM)
python
deep-learning
grad-cam
cnn
pytorch
saliency-map
interpretability
smoothgrad
interpretable-deep-learning
gradcam
activation-maps
class-activation-map
gradcam-plus-plus
score-cam
Updated Apr 23, 2021 - Python
Human-explainable AI.
python
data-science
machine-learning
statistics
simulation
model-selection
data-analytics
hyperparameter-tuning
interpretability
explainable-ai
shap-vector-decomposition
Updated Apr 29, 2021 - Jupyter Notebook
machine-learning
deep-learning
sentiment-analysis
tensorflow
transformers
interpretability
aspect-based-sentiment-analysis
explainable-ai
explainable-ml
distill
bert-embeddings
transformer-models
Updated Apr 21, 2021 - Python
A collection of research materials on explainable AI/ML
xml
interpretability
explanation-system
interpretable-ai
explainable-ai
xai
counterfactual-explanations
recourse
Updated Apr 28, 2021
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
python
data-science
demo
machine-learning
tutorial
statistics
ai
scikit-learn
ml
artificial-intelligence
uncertainty
supervised-learning
interpretability
rule-learning
bayesian-rule-lists
optimal-classification-tree
rulefit
imodels
Updated May 4, 2021 - Jupyter Notebook
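This is the imodels package. A sketch of its scikit-learn-style interface follows, assuming RuleFitRegressor (one of the rule-based estimators it lists) is available in the installed version; the dataset is a placeholder.

```python
from imodels import RuleFitRegressor
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RuleFitRegressor()       # learns a sparse, human-readable set of rules
model.fit(X_train, y_train)
print(model.predict(X_test)[:5])
```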
Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code.
machine-learning
scikit-learn
transparency
blackbox
bias
interpretability
explainable-artificial-intelligence
interpretable-ai
explainable-ai
explainable-ml
xai
interpretable-machine-learning
machine-learning-interpretability
explainability
aws-sagemaker
explainx
Updated Feb 7, 2021 - Jupyter Notebook