# knowledge-distillation

Here are 236 public repositories matching this topic...

A coding-free framework built on PyTorch for reproducible deep learning studies. 🏆 20 knowledge distillation methods presented at CVPR, ICLR, ECCV, NeurIPS, ICCV, etc. have been implemented so far. 🎁 Trained models, training logs, and configurations are available to ensure reproducibility and benchmarking.

  • Updated Apr 22, 2022
  • Python
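
For orientation, the sketch below shows the classic soft-target distillation loss (Hinton et al., 2015) in plain PyTorch. It is an illustrative example only, not this framework's actual API; the temperature and alpha values, the toy logits, and the function name are assumptions.

```python
# Minimal sketch of soft-target knowledge distillation in plain PyTorch.
# Temperature, alpha, and the toy data are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with a softened KL term."""
    # Soft targets: compare temperature-scaled distributions.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, targets)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Toy usage with random logits for a 10-class problem.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, targets)
loss.backward()
```
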
avishreekh commented May 7, 2021

We also need to benchmark the Lottery Tickets pruning algorithm and the quantization algorithms. The models used for this would be the student networks discussed in #105 (ResNet18, MobileNet v2, Quantization v2).

Pruning (benchmark up to 40%, 50%, and 60% pruned weights)

  • Lottery Tickets

Quantization

  • Static
  • QAT
Labels: help wanted, good first issue, Priority: High
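
The two quantization modes requested above (static post-training quantization and QAT) map onto PyTorch's eager-mode quantization workflow. The sketch below is a minimal illustration under assumed settings (a tiny linear model, random calibration batches, the "fbgemm" backend), not the benchmark code for this issue.

```python
# Minimal sketch of static post-training quantization and QAT using
# PyTorch's eager-mode API. The model and data are toy assumptions.
import torch
import torch.nn as nn
import torch.quantization as tq

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()      # float -> int8 boundary
        self.fc = nn.Linear(16, 4)
        self.dequant = tq.DeQuantStub()  # int8 -> float boundary

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

calibration_data = [torch.randn(8, 16) for _ in range(4)]

# --- Static post-training quantization ---
static_model = TinyNet().eval()
static_model.qconfig = tq.get_default_qconfig("fbgemm")
tq.prepare(static_model, inplace=True)
for batch in calibration_data:          # observe activation ranges
    static_model(batch)
tq.convert(static_model, inplace=True)  # swap in int8 modules

# --- Quantization-aware training (QAT) ---
qat_model = TinyNet().train()
qat_model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(qat_model, inplace=True)
optimizer = torch.optim.SGD(qat_model.parameters(), lr=0.01)
for batch in calibration_data:          # stand-in for a real training loop
    loss = qat_model(batch).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
qat_model.eval()
tq.convert(qat_model, inplace=True)
```
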

Intel® Neural Compressor (formerly known as Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression techniques, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks in pursuit of optimal inference performance.

  • Updated Apr 21, 2022
  • Python
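
As a companion to the pruning and sparsity techniques mentioned above, the sketch below shows unstructured magnitude pruning with PyTorch's built-in torch.nn.utils.prune utility. It is not Intel Neural Compressor's API; the toy model and the 40% sparsity target (mirroring the benchmark levels in the issue above) are assumptions.

```python
# Minimal sketch of unstructured L1-magnitude pruning in plain PyTorch.
# The model and the 40% sparsity target are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Prune 40% of the smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.4)

# Make the pruning permanent by folding the masks into the weights.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")

zeros = sum((m.weight == 0).sum().item() for m in model.modules()
            if isinstance(m, nn.Linear))
total = sum(m.weight.numel() for m in model.modules()
            if isinstance(m, nn.Linear))
print(f"overall weight sparsity: {zeros / total:.2%}")
```
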
