transformers
Here are 1,167 public repositories matching this topic...
Suggestion: document each model's tokenizer type, with examples
Thanks for reporting a PaddleNLP issue, and for contributing to PaddleNLP!
When filing your issue, please also provide the following information:
- Version and environment info
1) PaddleNLP and PaddlePaddle versions: please give your PaddleNLP and PaddlePaddle version numbers, e.g. PaddleNLP 2.0.4, PaddlePaddle 2.1.1
2) System environment: please describe the OS type (Linux/Windows/macOS) and Python version - Reproduction info: for an error, please give the environment and steps to reproduce it
paddle version 2.0.8, paddlenlp version 2.1.0
Suggestion: could the PaddleNLP docs list, for each model, which scheme its tokenizer is based on (e.g. the BERT tokenizer is WordPiece-based, the XLNet tokenizer is SentencePiece-based), together with example inputs and outputs?
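To make the suggestion concrete, here is a toy sketch of what "WordPiece-based" means: greedy longest-match against a subword vocabulary, with non-initial pieces prefixed by `##`. The vocabulary below is a made-up example for illustration, not a real BERT vocabulary, and this is not PaddleNLP's actual implementation.

```python
# Toy WordPiece tokenizer: greedy longest-match against a subword vocab.
# TOY_VOCAB is an invented example vocabulary, not a real BERT vocab.
TOY_VOCAB = {"un", "##aff", "##able", "play", "##ing", "the", "[UNK]"}

def wordpiece(word, vocab=TOY_VOCAB, unk="[UNK]"):
    pieces, start = [], 0
    while start < len(word):
        end, cur = len(word), None
        while start < end:
            sub = word[start:end]
            if start > 0:                 # continuation pieces get a "##" prefix
                sub = "##" + sub
            if sub in vocab:
                cur = sub
                break
            end -= 1                      # shrink the candidate until it matches
        if cur is None:                   # nothing matched: whole word is unknown
            return [unk]
        pieces.append(cur)
        start = end
    return pieces

print(wordpiece("unaffable"))  # ['un', '##aff', '##able']
print(wordpiece("playing"))    # ['play', '##ing']
```

Pairing a short note like "BERT: WordPiece" with one such input/output example per model family would answer the question the docs currently leave open.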
Some specific suggestions
Problem
Some of our transformers & estimators are not thoroughly tested, or are not tested at all.
Solution
Use the OpTransformerSpec and OpEstimatorSpec base test specs to provide tests for all existing transformers & estimators.
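The project's actual OpTransformerSpec/OpEstimatorSpec are Scala test bases; as an illustration of the shared base-spec pattern they embody, here is a minimal Python analogue (names and structure are invented for the sketch): each concrete test class only supplies the transformer under test and its cases, and inherits the same baseline checks.

```python
import unittest

# Sketch of the shared base-spec pattern (illustrative only): subclasses set
# `transformer` and `cases`, and inherit the common checks, so every
# transformer gets at least the same baseline coverage.
class TransformerSpecBase:
    transformer = None   # callable under test, set by subclasses
    cases = []           # list of (input, expected_output) pairs

    def test_defined(self):
        self.assertTrue(callable(self.transformer))

    def test_cases(self):
        for given, expected in self.cases:
            self.assertEqual(self.transformer(given), expected)

# A concrete spec: all it declares is the subject and its expected behavior.
class UpperCaseSpec(TransformerSpecBase, unittest.TestCase):
    transformer = staticmethod(str.upper)
    cases = [("ok", "OK"), ("Mix", "MIX")]
```

Adding one such subclass per untested transformer is cheap, which is the point of the base-spec approach.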
Describe the bug
Setting "text-gen-type": "interactive" results in an IndexError: shape mismatch: indexing tensors could not be broadcast together with shapes [4], [3]. Other generation types work.
To Reproduce
Steps to reproduce the behavior:
- Install, adapt 20B to local environment, add "text-gen-type": "interactive" config
- Run inference
- Enter arbitrary prompt when
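For context on the error class: it is raised by advanced indexing when the index tensors' shapes cannot be broadcast together. GPT-NeoX itself uses PyTorch, but NumPy (used here only so the sketch is self-contained) raises the analogous error with essentially the same message:

```python
import numpy as np

# Advanced indexing with two index arrays requires their shapes to broadcast.
# Shapes (4,) and (3,) cannot, which triggers the reported IndexError.
a = np.zeros((5, 5))
rows = np.array([0, 1, 2, 3])   # shape (4,)
cols = np.array([0, 1, 2])      # shape (3,)
try:
    a[rows, cols]
except IndexError as e:
    print(e)  # "shape mismatch: indexing arrays could not be broadcast ..."
```

The shapes in the traceback ([4], [3]) suggest two prompt-derived index tensors of different lengths are being combined somewhere in the interactive path.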
Hey! Thanks for the work on this.
Wondering how we can use this with mocha? tsconfig-paths has its own tsconfig-paths/register to make this work:
https://github.com/dividab/tsconfig-paths#with-mocha-and-ts-node
Basically with mocha we have to run mocha -r ts-node/register, but that wouldn't have the compiler flag.
It would be worthwhile to have the ability to do this, which looks like
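For reference, the setup described in the linked tsconfig-paths docs registers both hooks on the mocha command line; whether this project can expose an equivalent register hook is the open question here. A sketch of the tsconfig-paths variant (the test glob is an example path):

```shell
# Register ts-node first, then the path resolver, so imports using "paths"
# from tsconfig.json resolve when mocha compiles the spec files.
mocha -r ts-node/register -r tsconfig-paths/register "test/**/*.spec.ts"
```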
Problem
Currently FARMReader will ask users to raise max_seq_length every time some samples are longer than the value set for it. However, this can be confusing if max_seq_length is already set to the maximum value allowed by the model, because raising it further will cause hard-to-read CUDA errors. See #2177.
Solution
We should find a way to query the model for the maximum value
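A minimal sketch of the proposed behavior, with hypothetical names (`clamp_max_seq_length`, `model_max`): in the real fix the limit would come from the loaded model's configuration (e.g. its max_position_embeddings), and the reader would warn and cap rather than suggest raising the value past it.

```python
import warnings

# Hypothetical sketch: instead of always telling users to raise
# max_seq_length, compare against the model's own limit and cap it,
# warning once rather than letting an oversized value reach the GPU.
def clamp_max_seq_length(requested, model_max):
    """Return a usable max_seq_length, never exceeding model_max."""
    if requested > model_max:
        warnings.warn(
            f"max_seq_length={requested} exceeds the model limit "
            f"({model_max}); using {model_max} instead."
        )
        return model_max
    return requested

print(clamp_max_seq_length(512, 512))   # 512
print(clamp_max_seq_length(1024, 512))  # 512, with a warning
```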