935 contributions in the last year
Activity overview
Contributed to pytorch/pytorch, pytorch/tutorials, facebookresearch/detectron2, and 5 other repositories
Contribution activity
August 2021
Created 10 commits in 1 repository
Created a pull request in pytorch/pytorch that received 22 comments
[quant][graphmode][fx] Add reference option support for binary ops
Stack from ghstack: #63501 #62861 -> #62698 Summary: We also removed the special handling in match_utils for binary ops Test Plan: python test/te…
+98 −137 • 22 comments
Opened 9 other pull requests in 2 repositories
pytorch/pytorch (3 open, 5 closed)
- [quant][graphmode][fx] Make maxpool and flatten produce the reference pattern
- [fx2trt] Add dequantize support
- [fx2trt] Add quantize_per_tensor support
- [fx2trt] Add a test for quantized resnet18
- [quant][graphmode][fx][fix] Fix quantization for tuple arguments
- [qunat][graphmode][fx] Add a separate lower_to_native_backend function for relu
- [quant][graphmode][fx][bc-breaking] Support for reference pattern for fixqparam ops in eval mode
- [quant][graphmode] Reference pattern support for elu
pytorch/tutorials (1 open)
Reviewed 14 pull requests in 1 repository
pytorch/pytorch (14 pull requests)
- [quant][graphmode][fx] Make maxpool and flatten produce the reference pattern
- [qunat][graphmode][fx] Add a separate lower_to_native_backend function for relu
- [quant][graphmode][fx] Add reference option support for binary ops
- Updating the names of these functions
- Bugfix for fuse qconfig comparison
- [quant][fx] Ensure qconfig works for QAT with multiple modules
- [docs][ao] add missing torch.choose_qparams_optimized documentation
- [docs][ao] Add overload information for fake_quantize_per_tensor_affine
- [docs][ao] Add missing docstrings for quantized_max_pool1d and quantized_max_pool2d
- [quant][graphmode][fx][bc-breaking] Support for reference pattern for fixqparam ops in eval mode
- [docs][ao] update quantize_per_tensor to mention overloads
- [quant][graphmode][fx] Attach a weight qparam dict to linear and conv in reference quantized model
- [quant] Input-Weight Equalization - edge cases
- Adding collective quantization API

