# model-interpretability

Here are 25 public repositories matching this topic...
A PyTorch implementation of Grad-CAM and Grad-CAM++ that can visualize the Class Activation Map (CAM) of any classification network, including custom networks; also implements CAM visualization for two object detectors, Faster R-CNN and RetinaNet. Feel free to try it out, follow the project, and report issues.

Updated Jan 13, 2021 - Python
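For readers new to the technique, here is a minimal Grad-CAM sketch in PyTorch. This is not the repository's code; the ResNet-18 backbone and the choice of `layer4` are illustrative assumptions:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal Grad-CAM sketch: hook the last conv stage, then weight its
# activations by the spatially averaged gradients of the target score.
model = models.resnet18(pretrained=True).eval()
acts, grads = {}, {}

def fwd_hook(module, inp, out):
    acts["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    grads["value"] = grad_out[0].detach()

layer = model.layer4  # last conv stage; an illustrative choice
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in image
scores = model(x)
scores[0, scores.argmax()].backward()

weights = grads["value"].mean(dim=(2, 3), keepdim=True)  # GAP of grads
cam = F.relu((weights * acts["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear")  # to input size
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalize
```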
PyTorch implementation of recent visual attribution methods for model interpretability.

pytorch, explanation, excitation, interpretability, saliency, interpretable-deep-learning, xai, visual-explanations, model-interpretability, excitation-backpropagation, patternnet

Updated Feb 27, 2020 - Jupyter Notebook
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.

machine-learning, human-in-the-loop, interpretability, explainable-artificial-intelligence, researchers, interactive-machine-learning, deep-learning-visualization, human-in-the-loop-machine-learning, explainable-ml, xai, interpretable, interpretable-machine-learning, iml, model-interpretability, explainable, interpretable-models, explainable-models, interpretable-learning, explaining-ai, explanation-methods

Updated Jun 18, 2021 - R
Class Activation Map (CAM) visualizations in PyTorch.

Updated May 20, 2020 - Python
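As a point of reference, the original CAM (as opposed to Grad-CAM above) needs no gradients: for networks ending in global average pooling plus a linear layer, the map is just the classifier weights applied to the final feature maps. A minimal sketch, again assuming a torchvision ResNet-18 as an illustrative choice:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Original CAM: project the final conv feature maps through the
# fully-connected classifier weights of the target class.
model = models.resnet18(pretrained=True).eval()
features = torch.nn.Sequential(*list(model.children())[:-2])  # drop pool + fc

x = torch.randn(1, 3, 224, 224)           # stand-in image
with torch.no_grad():
    fmap = features(x)                     # (1, 512, 7, 7)
    cls = model(x).argmax(dim=1).item()
    w = model.fc.weight[cls]               # (512,) weights for that class
    cam = F.relu(torch.einsum("c,chw->hw", w, fmap[0]))
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```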
A set of tools for leveraging pre-trained embeddings, active learning, and model explainability for efficient document classification.

python, sklearn, word-embeddings, document-classification, flair, active-learning, model-interpretation, eli5, document-classifier, model-interpretability

Updated Sep 9, 2019 - HTML
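The eli5 tag above refers to the ELI5 library. A minimal sketch of how it can expose the weights behind a linear text classifier; the toy spam corpus here is an illustrative assumption, not this repository's pipeline:

```python
import eli5
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus standing in for a real document-classification dataset.
docs = ["cheap pills buy now", "meeting agenda attached",
        "win money fast", "quarterly report draft"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(docs), labels)

# Global view: which features carry the most weight per class.
print(eli5.format_as_text(eli5.explain_weights(clf, vec=vec, top=5)))

# Local view: why one document got its prediction.
print(eli5.format_as_text(
    eli5.explain_prediction(clf, "buy cheap pills", vec=vec)))
```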
Overview of different model interpretability libraries.

Updated May 11, 2021 - Jupyter Notebook
Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs

Updated Jun 8, 2020
Code for "Investigating and Simplifying Masking-based Saliency Methods for Model Interpretability" (https://arxiv.org/abs/2010.09750)
-
Updated
Nov 10, 2020 - Python
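The simplest member of this family is occlusion-style masking: slide a mask over the input and record how much the target class score drops. A minimal sketch; the model and patch size are illustrative assumptions, and note the paper itself studies *learned* masks rather than this brute-force scan:

```python
import torch
from torchvision import models

# Occlusion saliency: score drop when each patch is masked out.
model = models.resnet18(pretrained=True).eval()
x = torch.randn(1, 3, 224, 224)  # stand-in image
patch = 32                       # illustrative patch size

with torch.no_grad():
    base = model(x)
    cls = base.argmax(dim=1)
    base_score = base[0, cls].item()

    heat = torch.zeros(224 // patch, 224 // patch)
    for i in range(0, 224, patch):
        for j in range(0, 224, patch):
            masked = x.clone()
            masked[:, :, i:i + patch, j:j + patch] = 0.0  # zero out patch
            score = model(masked)[0, cls].item()
            heat[i // patch, j // patch] = base_score - score  # importance
```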
AI to Predict Yield in Aeroponics.

Updated Sep 10, 2021 - Jupyter Notebook
Instance-wise Causal Feature Selection for Model Interpretation (CVPRW 2021).

Updated May 29, 2021 - Python
Used the Functional API to build custom layers and non-sequential model types in TensorFlow; performed object detection, image segmentation, and interpretation of convolutions. Used generative deep learning, including autoencoders, VAEs, and GANs, to create new content.

object-detection, image-segmentation, generative-adversarial-networks, auto-encoders, saliency-map, mask-rcnn, variational-autoencoders, coursera-specialization, retina-net, model-interpretability, tensorflow2, functional-api, custom-model-development, custom-loss-functions, gradcam-visualization, custom-layers, gradient-tape-optimization, custom-training-loops

Updated Jun 9, 2021 - Jupyter Notebook
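For context, here is a minimal sketch of the Keras Functional API the entry above refers to, building a small non-sequential (two-branch) model; the architecture itself is an illustrative assumption:

```python
import tensorflow as tf

# Functional API: graphs of layers rather than a linear stack,
# which is what enables multi-branch, multi-output models.
inputs = tf.keras.Input(shape=(64,))
x = tf.keras.layers.Dense(32, activation="relu")(inputs)

# Two branches sharing the same trunk, merged before the head.
a = tf.keras.layers.Dense(16, activation="relu")(x)
b = tf.keras.layers.Dense(16, activation="tanh")(x)
merged = tf.keras.layers.Concatenate()([a, b])

outputs = tf.keras.layers.Dense(1, activation="sigmoid")(merged)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```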
Implementation of the Grad-CAM algorithm in an easy-to-use class, optimized for transfer-learning projects and written with Keras and TensorFlow 2.x.

python, deep-learning, tensorflow, transfer-learning, interpretability, gradcam, model-interpretability, tensorflow2

Updated May 19, 2021 - Python
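Not this repository's class, but a minimal sketch of the same Grad-CAM computation in TensorFlow 2.x; the MobileNetV2 backbone and the `Conv_1` layer name are illustrative assumptions:

```python
import tensorflow as tf

# Grad-CAM in TF2: tape the conv activations, then weight them by the
# spatially averaged gradients of the target class score.
base = tf.keras.applications.MobileNetV2(weights="imagenet")
grad_model = tf.keras.Model(
    base.input,
    [base.get_layer("Conv_1").output, base.output],  # last conv + preds
)

img = tf.random.normal([1, 224, 224, 3])  # stand-in image
with tf.GradientTape() as tape:
    conv_out, preds = grad_model(img)
    cls = int(tf.argmax(preds[0]))
    score = preds[:, cls]

grads = tape.gradient(score, conv_out)            # d(score)/d(conv)
weights = tf.reduce_mean(grads, axis=(1, 2))      # GAP over H, W
cam = tf.nn.relu(tf.einsum("bc,bhwc->bhw", weights, conv_out))
cam = cam / (tf.reduce_max(cam) + 1e-8)           # normalize to [0, 1]
```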
Model interpretability for Explainable Artificial Intelligence.

sentiment-analysis, torch, spacy, imdb-movie, databricks-notebooks, imdb-sentiment-analysis, model-interpretability, ms-azure, captum, download-dataset

Updated Dec 14, 2020 - Jupyter Notebook
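The captum tag refers to PyTorch's Captum library. A minimal sketch of Integrated Gradients attribution on a toy classifier; the model and data are illustrative assumptions, not this project's IMDB pipeline:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Toy classifier standing in for a real sentiment model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

x = torch.randn(4, 8)            # stand-in feature batch
baseline = torch.zeros_like(x)   # reference point for the path integral

ig = IntegratedGradients(model)
# Attribution of each input feature toward the class-1 output.
attr, delta = ig.attribute(
    x, baselines=baseline, target=1, return_convergence_delta=True
)
print(attr.shape)   # (4, 8): one attribution per input feature
print(delta)        # completeness check; should be near zero
```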
A machine learning project developing classification models to predict COVID-19 diagnosis in paediatric patients.

python, data-science, machine-learning, random-forest, pipelines, hyperparameter-optimization, data-preprocessing, data-cleaning, decision-tree, hyperparameter-tuning, shapely, smote, oversampling, classification-models, model-interpretability, classification-algorithms, missing-data-imputation, model-agnostic-explanations, model-optimization, global-surrogate-method

Updated Jul 21, 2020 - Jupyter Notebook
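Among the tags above, global-surrogate-method names a simple idea: train an interpretable model to mimic the black box's predictions, then read the surrogate. A minimal sketch with scikit-learn; the random-forest black box and synthetic data are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# Opaque model we want to explain.
blackbox = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global surrogate: a shallow tree fit on the black box's *predictions*,
# not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, blackbox.predict(X))

# Fidelity: how faithfully the surrogate reproduces the black box.
fidelity = accuracy_score(blackbox.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate))  # human-readable rules
```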
Visualizing an XGBoost model in R with a sunburst plot (built on inTrees).

Updated Jun 25, 2019 - R
Course project for 6.869: automatic summarization for neural net interpretability.

Updated Dec 12, 2018 - Jupyter Notebook
Using the Captum library to interpret models and understand their decisions better.

Updated Jun 20, 2020 - Jupyter Notebook
Using LIME and SHAP for model interpretability of machine-learning black-box models.

python, random-forest, regression, classification, lime, model-interpretation, black-box-model, shap, model-interpretability

Updated Sep 13, 2019 - Jupyter Notebook
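A minimal sketch of the two libraries side by side on a tabular classifier; the random-forest model and synthetic data are illustrative assumptions:

```python
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP: exact tree-based Shapley values for every sample at once.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# LIME: a local linear fit around one instance of interest.
lime_exp = LimeTabularExplainer(
    X, mode="classification",
    feature_names=[f"f{i}" for i in range(6)],
).explain_instance(X[0], model.predict_proba, num_features=4)
print(lime_exp.as_list())  # top local feature contributions
```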
Interpretability and Fairness in Machine Learning.

interpretable-deep-learning, interpretable-ai, interpretable-ml, interpretable-machine-learning, fairness-ai, model-interpretability, fairness-ml

Updated Aug 5, 2020 - Jupyter Notebook
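On the fairness side, one of the most common checks is demographic parity: compare positive-prediction rates across groups. A minimal sketch in plain NumPy; the random data and the 0.8 rule of thumb are illustrative assumptions:

```python
import numpy as np

# Demographic parity: P(prediction = 1) should be similar across groups.
rng = np.random.default_rng(0)
preds = rng.integers(0, 2, size=1000)   # stand-in binary predictions
group = rng.integers(0, 2, size=1000)   # stand-in protected attribute

rate_a = preds[group == 0].mean()
rate_b = preds[group == 1].mean()
print(f"selection rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")

# A common (illustrative) rule of thumb flags ratios below 0.8.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"disparate impact ratio: {ratio:.2f}")
```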
The project explains what SHAP is and how it can be used to interpret a model. It also contains a notebook detailing the SHAP interpretability method, with a code implementation on a heart disease dataset.

Updated Nov 24, 2020 - Jupyter Notebook
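Complementing the tree-specific explainer shown earlier, SHAP's model-agnostic KernelExplainer works on any prediction function at the cost of sampling. A minimal sketch; the logistic-regression model and synthetic data are illustrative assumptions:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

# Explain the class-1 probability as a black-box function; the small
# background sample keeps the number of model evaluations manageable.
f = lambda data: model.predict_proba(data)[:, 1]
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(f, background)

shap_values = explainer.shap_values(X[:5])
print(shap_values.shape)  # (5, 5): per-row, per-feature attributions
```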
Will They Pay? A machine learning solution to understand mobile app user payment behavior.

data-science, machine-learning, data-visualization, data-analysis, mobile-apps, shap, model-interpretability

Updated Feb 10, 2020 - Jupyter Notebook
Code for "High-Precision Model-Agnostic Explanations" paper. A follow up to LIME model.
-
Updated
Dec 5, 2018 - Jupyter Notebook
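That paper introduced Anchors: if-then rules that locally pin down a prediction with high precision. A minimal sketch, assuming the API of the authors' anchor-exp package (`AnchorTabularExplainer`); the model and data are illustrative:

```python
from anchor import anchor_tabular
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = anchor_tabular.AnchorTabularExplainer(
    class_names=["no", "yes"],
    feature_names=[f"f{i}" for i in range(4)],
    train_data=X,
)
# An anchor is an if-then rule that "anchors" the prediction: while it
# holds, the prediction stays the same with probability >= threshold.
exp = explainer.explain_instance(X[0], model.predict, threshold=0.95)
print("IF", " AND ".join(exp.names()))
print(f"precision: {exp.precision():.2f}, coverage: {exp.coverage():.2f}")
```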
Using machine learning models to predict whether patients have chronic kidney disease based on a few features. The models' results are also interpreted to make them more understandable to health practitioners.

data-science, machine-learning, machine-learning-algorithms, data-transformation, data-visualization, feature-selection, dimensionality-reduction, diagnostics, feature-engineering, health-data-analysis, machine-learning-algorithm, model-interpretability, data-cleaning-pipeline, health-data-science, preventative-medicine

Updated Aug 24, 2021 - HTML
The "keras-translator" helps you to understand a keras trained model.
-
Updated
Jun 26, 2021 - Jupyter Notebook


/kind feature

Describe the solution you'd like
In pkg/apis/serving/v1beta1/inference_service_defaults.go, the default InferenceService resource requests and limits are hard-coded to 1 CPU and 2Gi of memory. These are reasonable defaults. However, it should be possible to disable these defaults entirely. Moreover, administrators should be able to quickly adjust the defaults globally via t