Popular repositories
Forked from pytorch/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
932 contributions in the last year
Contribution activity
September 2020
Created a pull request in pytorch/pytorch that received 5 comments
Combine criterion and new criterion tests in test_jit.
Stack from ghstack: #44471 Fix L1Loss when target.requires_grad is True. #44437 Fix MSELoss when target.requires_grad is True. #44398 Merge criter…
+15 −7
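Several entries in the stack above fix losses (L1Loss, MSELoss, SmoothL1Loss) when target.requires_grad is True, i.e. when the target itself needs a gradient. As a rough, torch-free sketch of why that case is distinct (this is an illustration, not PyTorch's actual autograd formula): for mean-squared error the analytic gradient with respect to the target is the negative of the gradient with respect to the input, which a finite-difference check confirms.

```python
# Illustration only: why MSELoss needs a gradient w.r.t. the target.
# For L = mean((input - target)^2), dL/dtarget_i = -2*(input_i - target_i)/n,
# the sign-flipped mirror of dL/dinput_i. Pure Python; torch not required.
def mse(inp, tgt):
    return sum((x - t) ** 2 for x, t in zip(inp, tgt)) / len(inp)

def grad_wrt_target(inp, tgt):
    # analytic gradient: d/dt_i mean((x_i - t_i)^2) = -2 * (x_i - t_i) / n
    n = len(inp)
    return [-2.0 * (x - t) / n for x, t in zip(inp, tgt)]

inp, tgt = [1.0, 2.0, 3.0], [0.5, 2.5, 2.0]
g = grad_wrt_target(inp, tgt)

# finite-difference check of the first component
eps = 1e-6
tgt_bumped = [tgt[0] + eps] + tgt[1:]
fd = (mse(inp, tgt_bumped) - mse(inp, tgt)) / eps
assert abs(fd - g[0]) < 1e-4
```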
- Support (single) backwards for binary_cross_entropy target.
- Turn on gradgrad check for BCELoss Criterion Tests.
- Stop using check_criterion_jacobian.
- Stop ignoring errors in cuda nn module tests.
- Always use NewModuleTest instead of ModuleTest.
- Simplify target handling in nn gradcheck.
- Fix SmoothL1Loss when target.requires_grad is True.
- Fix L1Loss when target.requires_grad is True.
- Fix MSELoss when target.requires_grad is True.
- Merge criterion_tests and new_criterion_tests.
- Stop ignoring NotImplementedErrors in cuda CriterionTests.
- [TESTING] Don't skip NotImplementedError in check_cuda for CriterionTests.
- For CriterionTests, have check_gradgrad actually only affect gradgrad checks.
- Rename NewCriterionTest to CriterionTest.
- Merge CriterionTest into NewCriterionTest.
- Allow criterion backwards test on modules requiring extra args (i.e. CTCLoss).
- Actually run backward criterion tests.
- Kill dead code in common_nn as part of merging Criterion and NewCriterionTests.
- Use NewCriterionTest in test_cpp_api_parity.py.
- Adds a beta parameter to the smooth_l1 loss fn
- Updates div to perform true division
- Adds multiply and divide aliases
- Update interpolate to use new upsample overloads (#37177)
- Makes torch.floor_divide consistent with Python and NumPy
- Deprecates calling linspace and logspace without setting steps explicitly
- Adds inequality testing aliases for better NumPy compatibility
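The "Makes torch.floor_divide consistent with Python and NumPy" entry above refers to a real semantic distinction: Python and NumPy round the quotient toward negative infinity, while C-style integer division truncates toward zero. A minimal pure-Python sketch of the two behaviors (torch not required; these helper names are illustrative, not PyTorch APIs):

```python
# Floor division vs. truncating division. Python's // and NumPy floor-divide
# round toward negative infinity; C-style integer division truncates toward
# zero. The two agree for same-sign operands and differ otherwise.
import math

def trunc_divide(a, b):
    # C-style: round the true quotient toward zero
    return math.trunc(a / b)

def floor_divide(a, b):
    # Python/NumPy-style: round toward negative infinity
    return math.floor(a / b)

# Same sign: identical results.
assert trunc_divide(7, 2) == floor_divide(7, 2) == 3
# Mixed sign: the results diverge.
assert trunc_divide(-7, 2) == -3  # truncation toward zero
assert floor_divide(-7, 2) == -4  # floor, matching Python's -7 // 2
```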
Created an issue in pytorch/pytorch that received 2 comments
TestJitGeneratedModule.test_nn_CTCLoss_lengths_intlists fails if not skipped
In fa158c4, if I remove the check_jits and run:
python test_jit.py TestJitGeneratedModule.test_nn_CTCLoss_lengths_intlists
I get:
=================…

