mxnet
Here are 568 public repositories matching this topic...
Bug Report
These tests were run on s390x, which is a big-endian architecture.
Failure log for helper_test.py
________________________________________________ TestHelperTensorFunctions.test_make_tensor ________________________________________________
self = <helper_test.TestHelperTensorFunctions testMethod=test_make_tensor>
def test_make_tensor(self): # type: () -> None
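For context, ONNX stores raw tensor bytes in little-endian order regardless of the host, so on a big-endian machine such as s390x the bytes have to be swapped before being packed. The snippet below is only my illustration of that point, not the actual test or fix:

import numpy as np
from onnx import TensorProto, helper

# Illustration only: ONNX raw_data is defined as little-endian, so a
# big-endian host (e.g. s390x) must byte-swap before packing raw bytes.
vals = np.array([1.0, 2.0, 3.0], dtype=np.float32)
raw = vals.astype('<f4').tobytes()  # force little-endian byte order
tensor = helper.make_tensor(name='const', data_type=TensorProto.FLOAT,
                            dims=vals.shape, vals=raw, raw=True)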
The current PyTorch implementation ignores the argument split_f in the function train_batch_ch13, as shown below.
def train_batch_ch13(net, X, y, loss, trainer, devices):
    if isinstance(X, list):
        # Required for BERT Fine-tuning (to be covered later)
        X = [x.to(devices[0]) for x in X]
    else:
        X = X.to(devices[0])
...Todo: Define the argument `
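A rough sketch of how the snippet could honor such an argument is given below; the default splitter here is hypothetical and only illustrates the intent, it is not the book's implementation:

def split_to_first_device(X, y, devices):
    # Hypothetical default splitter: move the whole batch to devices[0].
    if isinstance(X, list):
        X = [x.to(devices[0]) for x in X]
    else:
        X = X.to(devices[0])
    return X, y.to(devices[0])

def train_batch_ch13(net, X, y, loss, trainer, devices,
                     split_f=split_to_first_device):
    # Delegate device placement to split_f instead of ignoring it.
    X, y = split_f(X, y, devices)
    ...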
Can I ask what the size of the Kinetics 400 dataset used to reproduce the results in this repo is?
Many of the Kinetics download links have expired, so not everyone may be using the same Kinetics dataset. As a reference, the statistics of the Kinetics dataset used in PySlowFast can be found at https://github.com/facebookresearch/video-nonlocal-net/blob/master/DATASET.md. However, I cannot find similar information for gluoncv. Will you be sharing the statistics and
Resuming training
How do I resume training for text classification?
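One generic MXNet Gluon pattern (not specific to this repo; the file names and toy network below are only placeholders) is to checkpoint both the model parameters and the optimizer state, then restore them before continuing the loop:

import mxnet as mx
from mxnet import gluon

ctx = mx.cpu()
# Toy stand-in for the actual text classifier.
net = gluon.nn.Sequential()
net.add(gluon.nn.Dense(128, activation='relu', in_units=300),
        gluon.nn.Dense(2, in_units=128))
net.initialize(ctx=ctx)
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': 1e-3})

# ... train for some epochs, then checkpoint ...
net.save_parameters('textclf.params')   # model weights
trainer.save_states('textclf.states')   # optimizer state (momentum, step count, ...)

# To resume: rebuild the same net and trainer, restore both files,
# and continue the training loop from the next epoch.
net.load_parameters('textclf.params', ctx=ctx)
trainer.load_states('textclf.states')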
I have the same hardware environment and the same network, but I could not get the same result as you; mine is almost half of yours. Any best practices or experience? Thanks very much! For BytePS with 1 instance and 8 GPUs, I get a similar test result.
[Error Message] Improve error message in SentencepieceTokenizer when arguments are not as expected.
Description
While using tokenizers.create with the model and vocab files for a custom corpus, the code throws an error and fails to generate the BERT vocab file.
Error Message
ValueError: Mismatch vocabulary! All special tokens specified must be control tokens in the sentencepiece vocabulary.
To Reproduce
from gluonnlp.data import tokenizers
tokenizers.create('spm', model_p
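A fuller sketch of the failing call, with placeholder paths and with the keyword names assumed from the truncated snippet above (so they may not match the real signature exactly):

from gluonnlp.data import tokenizers

# Hypothetical reproduction: paths are placeholders; keyword names are
# assumed from the truncated call above.
tokenizer = tokenizers.create('spm',
                              model_path='custom_corpus.model',
                              vocab='custom_corpus.vocab')
# Raises: ValueError: Mismatch vocabulary! All special tokens specified
# must be control tokens in the sentencepiece vocabulary.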
Description
We currently only check that the underlying prediction nets have the same parameters. This can create issues when constructing a Predictor with the correct prediction net but different other inputs, such as freq or transformation.
Maybe we can add proper checks for the other inputs as well.
References
[this note](https://github.com/awslabs/gluon-ts/blob/726f52e720f6afc72c86e
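A rough sketch of the kind of extended check being suggested; the attribute names (freq, input_transform, prediction_net) and the assumption that the nets are Gluon blocks are for illustration only, this is not the actual gluon-ts implementation:

import numpy as np

def prediction_nets_equal(net_a, net_b):
    # Existing style of check: compare the parameter arrays of the two nets.
    params_a, params_b = net_a.collect_params(), net_b.collect_params()
    if set(params_a.keys()) != set(params_b.keys()):
        return False
    return all(np.array_equal(params_a[name].data().asnumpy(),
                              params_b[name].data().asnumpy())
               for name in params_a.keys())

def predictors_match(a, b):
    # Suggested extension: also compare the other constructor inputs.
    if a.freq != b.freq:
        return False
    if type(a.input_transform) is not type(b.input_transform):
        return False
    return prediction_nets_equal(a.prediction_net, b.prediction_net)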
YOLO Model
Description
Implement a YOLO model and add it to the DJL model zoo
References
Description
This is a documentation bug. The parameter list of the API mxnet.test_utils.check_numeric_gradient is not consistent between the signature and the Parameters section: there is a parameter check_eps in the Parameters section, but it is not in the signature. Link to the document: https://mxnet.apache.org/versions/1.6/api/python/docs/api/mxnet/test_utils/index.html#mxnet.test_utils.check_numeric_gra
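For reference, here is a call that uses a keyword actually present in the signature; treating the documented check_eps as a stale name for numeric_eps is an assumption on my part:

import numpy as np
import mxnet as mx
from mxnet.test_utils import check_numeric_gradient

# Numerically check the gradient of a simple symbol; the step size is
# passed as numeric_eps, since check_eps does not exist in the signature.
data = mx.sym.Variable('data')
sym = mx.sym.square(data)
check_numeric_gradient(sym, location=[np.array([1.0, 2.0, 3.0])],
                       numeric_eps=1e-3, rtol=1e-2)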