The Wayback Machine - https://web.archive.org/web/20220602174404/https://github.com/topics/fp16
Here are 12 public repositories matching this topic...
InsightFace REST API for easy deployment of face recognition services with TensorRT in Docker. (Python, updated May 22, 2022)
Conversion to/from half-precision floating-point formats.
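As a hedged illustration of what such a conversion involves (this is not code from any repository listed here), Python's standard `struct` module supports the IEEE 754 binary16 format via the `"e"` format code, which makes the precision loss of a round trip easy to observe:

```python
import struct

def to_fp16_bytes(x: float) -> bytes:
    """Encode a Python float (binary64) as little-endian IEEE 754 binary16."""
    return struct.pack("<e", x)

def from_fp16_bytes(b: bytes) -> float:
    """Decode little-endian IEEE 754 binary16 bytes back to a Python float."""
    return struct.unpack("<e", b)[0]

# 1.0 is exactly representable in half precision...
assert from_fp16_bytes(to_fp16_bytes(1.0)) == 1.0

# ...but 0.1 is not: the round trip returns the nearest binary16 value.
roundtripped = from_fp16_bytes(to_fp16_bytes(0.1))
print(roundtripped)  # 0.0999755859375
```

With only 10 fraction bits, binary16 resolves roughly 3 decimal digits, which is why 0.1 comes back as 0.0999755859375.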
Stage 1 IEEE 754 half-precision floating point for JavaScript. (JavaScript, updated Jun 1, 2022)
Round matrix elements to lower precision in MATLAB. (MATLAB, updated May 12, 2022)
Optimised Caffe with OpenCL support for less powerful devices such as mobile phones.
C++20 implementation of a 16-bit floating-point type mimicking most of the IEEE 754 behavior. Single-file and header-only.
apextrainer is an open-source toolbox for fp16 training based on Detectron2 and Apex. (Python, updated Feb 26, 2020)
Let's train CIFAR-10 in PyTorch with half precision! (Python, updated Oct 25, 2019)
IEEE 754-style floating-point converter. (TypeScript, updated Nov 22, 2021)
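The bit layout such a converter works with is fixed by the standard: binary16 has 1 sign bit, 5 exponent bits (bias 15), and 10 fraction bits. A minimal illustrative decoder in Python (a sketch of the format, not the converter's own code) looks like this:

```python
import math

def decode_binary16(bits: int) -> float:
    """Decode a 16-bit integer as an IEEE 754 binary16 value."""
    sign = -1.0 if (bits >> 15) & 1 else 1.0
    exponent = (bits >> 10) & 0x1F   # 5-bit biased exponent
    fraction = bits & 0x3FF          # 10-bit fraction
    if exponent == 0x1F:             # all-ones exponent: infinity or NaN
        return sign * math.inf if fraction == 0 else math.nan
    if exponent == 0:                # zero exponent: subnormal, no implicit leading 1
        return sign * (fraction / 1024) * 2.0 ** -14
    return sign * (1 + fraction / 1024) * 2.0 ** (exponent - 15)

print(decode_binary16(0x3C00))  # 1.0
print(decode_binary16(0xC000))  # -2.0
print(decode_binary16(0x0001))  # 2**-24, the smallest positive subnormal
```

The subnormal branch matters in practice: fp16 values below 2**-14 lose the implicit leading 1 and degrade gradually toward zero, which is one reason fp16 training often needs loss scaling.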
PyTorch implementation of DreamerV2: Mastering Atari with Discrete World Models, based on the original implementation. (Python, updated Oct 8, 2021)
Simple example of PyTorch -> TensorRT conversion and inference. (Jupyter Notebook, updated Mar 20, 2021)
Converts a floating-point number or a hexadecimal representation of a floating-point number into various formats and displays them in binary/hexadecimal.
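A minimal sketch of that kind of display for the binary16 format, assuming Python's standard `struct` module rather than anything from the repository itself: pack the value as binary16, then format the resulting bit pattern as grouped binary and hex.

```python
import struct

def show_fp16(x: float) -> str:
    """Return the binary16 encoding of x as 'sign exponent fraction (hex)'."""
    bits = int.from_bytes(struct.pack("<e", x), "little")
    b = f"{bits:016b}"
    return f"{b[0]} {b[1:6]} {b[6:]}  (0x{bits:04X})"

print(show_fp16(1.0))   # 0 01111 0000000000  (0x3C00)
print(show_fp16(-2.0))  # 1 10000 0000000000  (0xC000)
```

Grouping the bits as sign/exponent/fraction makes the bias visible at a glance: 1.0 carries the biased exponent 01111 (15), i.e. an unbiased exponent of 0.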