Open deep learning compiler stack for CPU, GPU and specialized accelerators
Updated Dec 8, 2022 · Python
Accelerate AI model inference by leveraging best-of-breed optimization techniques
TVM documentation in Simplified Chinese / TVM 中文文档
AutoKernel is an easy-to-use, low-barrier automatic operator optimization tool that improves the deployment efficiency of deep learning algorithms.
yolort is a runtime stack for YOLOv5 on specialized accelerators such as TensorRT, LibTorch, ONNX Runtime, TVM, and ncnn.
Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM.
Open, Modular, Deep Learning Accelerator
Optimizing Mobile Deep Learning on ARM GPU with TVM
A home for the final text of all TVM RFCs.
Benchmark scripts for TVM
TVM stack: exploring the rapid proliferation of deep learning frameworks and how to bring them together
A hands-on tutorial on the core principles of TVM
TVM Relay IR Visualization Tool (TVM visualization tool)
Large-input-size real-time face detector in C++. Also supports face verification using MobileFaceNet + ArcFace with real-time inference. Over 30 FPS at 480p on CPU.