Repositories
- ML_course: EPFL Machine Learning Course, Fall 2019
- OptML_course: EPFL Course - Optimization for Machine Learning - CS-439
- ChocoSGD: Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356 (see the compressed-gossip sketch after this list)
- DeAI: Decentralized, privacy-preserving ML training prototype on a p2p networking stack
- sent2vec: General-purpose unsupervised sentence representations
- collaborative-attention: Code for "Multi-Head Attention: Collaborate Instead of Concatenate"
- powergossip: Code for "Practical Low-Rank Communication Compression in Decentralized Deep Learning"
- Bi-Sent2Vec: Robust Cross-lingual Embeddings from Parallel Sentences
- attention-cnn: Source code for "On the Relationship between Self-Attention and Convolutional Layers"
- kubernetes-setup: MLO group setup for a Kubernetes cluster
- ContinuityPlan: Forked from indy-lab/ContinuityPlan. Continuity plan in accordance with https://www.epfl.ch/campus/security-safety/en/health/coronavirus-covid19/
- autoTrain: Open Challenge - Automatic Training for Deep Learning
- powersgd: Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 (see the low-rank compression sketch after this list)
- error-feedback-SGD: SGD with compressed gradients and error feedback: https://arxiv.org/abs/1901.09847 (see the error-feedback sketch after this list)
- opt-summerschool: Short Course on Optimization for Machine Learning - Slides and Practical Labs - DS3 Data Science Summer School, June 24 to 28, 2019, Paris, France
- correlating-tweets: Correlating Twitter Language with Community-Level Health Outcomes: https://arxiv.org/abs/1906.06465
- cola: CoLa - Decentralized Linear Learning: https://arxiv.org/abs/1808.04883
- sparsifiedSGD: Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599
- SemEval2016: Code for the SemEval-2016 winning classifier "SwissCheese at SemEval-2016 Task 4: Sentiment Classification Using an Ensemble of Convolutional Neural Networks with Distant Supervision"
- opt-shortcourse: Short Course on Optimization for Machine Learning - Slides and Practical Lab - Pre-doc Summer School on Learning Systems, July 3 to 7, 2017, Zürich, Switzerland
- tensorflow: Forked from tensorflow/tensorflow. Extension of distributed training to sparse linear models (L1 regularizers), using primal CoCoA instead of the dual CoCoA used for the L2 case in default TensorFlow; by @LiamHe
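For ChocoSGD, here is a minimal single-process NumPy simulation of gossip averaging with compressed communication, in the spirit of the Choco-Gossip consensus step: each node only ever "transmits" a compressed correction to a shared public copy of its iterate. The ring topology, top-k compressor, step size `gamma`, and problem sizes are illustrative assumptions for this sketch, not the repository's actual implementation.

```python
import numpy as np

def topk_compress(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def choco_gossip(x0, W, gamma=0.05, k=3, steps=2000):
    """Simulate compressed gossip averaging on one machine.

    x0 : (n, d) initial vectors, one row per node
    W  : (n, n) symmetric, doubly stochastic mixing matrix
    Each node keeps a private iterate x[i] and a public estimate
    x_hat[i] shared with its neighbours; only the compressed
    difference x[i] - x_hat[i] would ever be communicated.
    gamma must be small enough for the chosen compression level
    (see the paper for the exact condition).
    """
    x = x0.copy()
    x_hat = np.zeros_like(x0)           # public copies start at zero
    n, _ = x.shape
    for _ in range(steps):
        # communication: each node sends a compressed correction
        q = np.stack([topk_compress(x[i] - x_hat[i], k) for i in range(n)])
        x_hat += q                      # everyone updates the public copies
        # consensus step on the public copies
        x += gamma * (W - np.eye(n)) @ x_hat
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 8, 10
    x0 = rng.normal(size=(n, d))
    # ring topology with simple uniform mixing weights
    W = np.eye(n) / 2
    for i in range(n):
        W[i, (i - 1) % n] += 0.25
        W[i, (i + 1) % n] += 0.25
    x = choco_gossip(x0, W)
    print("max deviation from the true mean:",
          np.abs(x - x0.mean(axis=0)).max())
```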
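For powersgd, a sketch of the core low-rank compression step: a gradient reshaped into a matrix is approximated by rank-r factors from a single power iteration that warm-starts from the previous right factor, with error feedback accumulating whatever the approximation loses. To stay self-contained the sketch compresses one fixed matrix repeatedly; in the distributed method the gradient changes every step and the factors `p` and `q` are what gets all-reduced across workers. Matrix sizes and the rank are illustrative assumptions.

```python
import numpy as np

def orthogonalize(p):
    """Orthonormalise the columns of p (thin QR)."""
    q, _ = np.linalg.qr(p)
    return q

def powersgd_compress(m, q_prev):
    """One power-iteration step: return rank-r factors (p, q) with m ~= p @ q.T.

    m      : (rows, cols) gradient reshaped into a matrix
    q_prev : (cols, r) right factor reused from the previous step
             (the warm start that makes a single iteration enough)
    """
    p = m @ q_prev          # (rows, r); all-reduced across workers in the real method
    p = orthogonalize(p)
    q = m.T @ p             # (cols, r); all-reduced across workers in the real method
    return p, q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rows, cols, rank = 64, 32, 4
    grad = rng.normal(size=(rows, cols))
    q = rng.normal(size=(cols, rank))   # initial right factor
    error = np.zeros_like(grad)

    for step in range(5):
        # error feedback: fold the part lost in the previous round back in
        m = grad + error
        p, q = powersgd_compress(m, q)
        approx = p @ q.T
        error = m - approx              # remember what compression threw away
        sent = p.size + q.size
        print(f"step {step}: sent {sent} floats instead of {grad.size}, "
              f"relative error {np.linalg.norm(error) / np.linalg.norm(m):.3f}")
```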
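For error-feedback-SGD (and the closely related sparsifiedSGD), a sketch of SGD in which only a compressed update is applied and the discarded part is fed back into the next step. The scaled-sign and top-k compressors and the toy least-squares problem are illustrative assumptions; the repositories' own experiments and hyperparameters differ.

```python
import numpy as np

def scaled_sign(v):
    """Sign compressor rescaled to preserve the l1 mass (1 bit/entry plus one scale)."""
    return np.abs(v).mean() * np.sign(v)

def topk(v, k):
    """Keep the k largest-magnitude entries, drop the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef_sgd(grad_fn, w0, lr=0.1, steps=500, compressor=scaled_sign):
    """SGD where only a compressed step is applied, with error feedback:
    whatever the compressor discards is remembered and re-added next step."""
    w = w0.copy()
    error = np.zeros_like(w0)
    for _ in range(steps):
        g = grad_fn(w)
        p = lr * g + error          # correct the step with the accumulated error
        delta = compressor(p)       # this is all that would be communicated
        w -= delta
        error = p - delta           # keep the part that was not transmitted
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy least-squares problem: minimise ||A w - b||^2 / len(b)
    A = rng.normal(size=(50, 10))
    b = rng.normal(size=50)

    def grad(w):
        return 2 * A.T @ (A @ w - b) / len(b)

    w_sign = ef_sgd(grad, np.zeros(10), compressor=scaled_sign)
    w_topk = ef_sgd(grad, np.zeros(10), compressor=lambda v: topk(v, 2))
    w_star = np.linalg.lstsq(A, b, rcond=None)[0]
    print("scaled-sign distance to optimum:", np.linalg.norm(w_sign - w_star))
    print("top-2 distance to optimum:", np.linalg.norm(w_topk - w_star))
```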

