Hi there 👋! I'm Chandan, a PhD candidate at UC Berkeley working on interpretable machine learning.
🤖 I like to explain and simplify machine learning
csinva.github.io
Slides, paper notes, class notes, blog posts, and research on ML, statistics, and AI.
imodels
Interpretable ML package for concise and accurate predictive modeling (sklearn-compatible); see the usage sketch after this list.
stable-pipelines
Making it easier to build stable, trustworthy data-science pipelines.
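
A minimal usage sketch for imodels, assuming it exposes a sklearn-style `RuleFitClassifier` (estimator names vary by version; this is an illustration, not the package's official quickstart):

```python
# Minimal sketch of a sklearn-compatible imodels workflow (illustration only;
# RuleFitClassifier and the exact estimator names are assumptions --
# check the imodels docs for your version).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from imodels import RuleFitClassifier  # assumed import path

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RuleFitClassifier()         # interpretable rule-based model
model.fit(X_train, y_train)         # standard sklearn fit
print(model)                        # intended to display the learned rules
print(model.score(X_test, y_test))  # standard sklearn accuracy
```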
🧠 Some of my research focuses on interpreting neural networks
hierarchical-dnn-interpretations
"Hierarchical interpretations for neural network predictions" (ICLR 2019)
deep-explanation-penalization
"Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" (ICML 2020)
transformation-importance
"Transformation Importance with Applications to Cosmology" (ICLR Workshop 2020)
adaptive-wavelet-distillation
Adaptive, interpretable wavelets across domains.
📊 I care about working on serious applied data-science problems
covid19-severity-prediction
Extensive and accessible COVID-19 data + forecasting for counties and hospitals (HDSR 2021)
iai-clinical-decision-rule
Interpretable clinical decision rules for predicting intra-abdominal injury.
molecular-partner-prediction
Predicting successful clathrin-mediated endocytosis (CME) events using only clathrin markers.
I also explore various other aspects of deep learning and machine learning
gan-vae-pretrained-pytorch
Pretrained GANs + VAEs + classifiers for MNIST/CIFAR in PyTorch.
gpt2-paper-title-generator
Generating paper titles with GPT-2 (see the sketch after this list).
matching-with-gans
Matching in GAN latent space for better bias benchmarking.
mdl-complexity
"Revisiting complexity and the bias-variance tradeoff".
disentangled-attribution-curves
"Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees"

