ONNXConfig: Add a configuration for all available models #16308
Let me try with
Love the initiative here, thanks for opening an issue! Added the label.
Thanks for the label! I don't know if it's easy for beginners, but it's cool if more people see this and can contribute!
I would like to try with Luke. However, Luke doesn't support any features apart from the default AutoModel. Its main feature is LukeForEntityPairClassification for relation extraction. Should I convert luke-base to ONNX, or LukeForEntityPairClassification, which has a classifier head?
When you implement the ONNX config for a model, it works for all kinds of tasks, because the base model and the ones pre-packaged for fine-tuning have the same inputs. So you can base your implementation on the base model, and the other tasks will work too.
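For reference, here is a minimal sketch of what such a config can look like (the class name and the exact input set are assumptions for illustration; a real config must declare the inputs of the actual model):

```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


# Hypothetical config, for illustration only. An OnnxConfig mainly declares
# the model's input names and which axes are dynamic (batch and sequence).
class MyModelOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
            ]
        )
```

Because the task-specific heads share these base inputs, the same config covers the fine-tuning variants too.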
Still learning.
Issue description: Hello, thank you for supporting GPTJ with ONNX. But when I exported an ONNX checkpoint using transformers-4.18.0, I got the issue below. GPTJ with ONNX seems to be supported when I check your documentation for transformers-4.18.0 [https://huggingface.co/docs/transformers/serialization#exporting-a-model-for-an-unsupported-architecture] and the code [src/transformers/onnx/features.py etc.], but I still get this issue. I then checked the config.model_type parameter in "/data/venv/lib/python3.8/site-packages/transformers/onnx/__main__.py", which depends on two mappings [from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING_NAMES, from ..models.auto.tokenization_auto import TOKENIZER_MAPPING_NAMES]. I did not find GPTJ's config in these mappings, which does not seem right.
Environment info
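The reporter's check can be reproduced directly; a small sketch (assuming "gptj" is the model_type string and that these mapping names match the installed version):

```python
# Reproduce the lookup the ONNX export CLI performs: if the model_type is
# missing from both mappings, the exporter cannot build a preprocessor.
from transformers.models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING_NAMES
from transformers.models.auto.tokenization_auto import TOKENIZER_MAPPING_NAMES

print("gptj" in FEATURE_EXTRACTOR_MAPPING_NAMES)  # False per this report
print("gptj" in TOKENIZER_MAPPING_NAMES)          # False per this report
```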
Hello @pikaqqqqqq, thanks for reporting the problem. I opened a PR with a quick fix to avoid this problem, check #16780
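In the meantime, the serialization guide linked above also shows a manual Python export path; a rough sketch for GPT-J (the checkpoint name and output path are assumptions, and GPTJOnnxConfig must exist in the installed version):

```python
from pathlib import Path

from transformers import AutoConfig, AutoModel, AutoTokenizer
from transformers.models.gptj import GPTJOnnxConfig
from transformers.onnx import export

checkpoint = "EleutherAI/gpt-j-6B"  # assumed checkpoint, for illustration
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)
onnx_config = GPTJOnnxConfig(AutoConfig.from_pretrained(checkpoint))

# Trace the model and write the ONNX graph to disk.
onnx_path = Path("gpt-j.onnx")
onnx_inputs, onnx_outputs = export(
    tokenizer, model, onnx_config, onnx_config.default_onnx_opset, onnx_path
)
```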
Hello 👋🏽, I added the RoFormer ONNX config here: #16861. I'm not 100% sure who to ask for review, so I'm posting this here. Thanks 🙏🏽
Hi! I would like to try building the ONNX config for
Hi @Tanmay06, that would be awesome. Don't hesitate to open a PR with your work when you feel it's ready. You can ping me anytime if you need help!
Hello! I would like to work on the ONNX config for
Nice, don't hesitate to ping me if help is needed!
Hi! I would like to work on the ONNX config for
Hi, nice! If you need help, you can tag me.
#17027 Here is one for XLNet! |
#17029 PR for MobileBert.
#17030 Here is the PR for XLM.
#17078 PR for


This issue is about the working group specially created for this task. If you are interested in helping out, take a look at this organization, or add me on Discord:
ChainYo#3610
We are looking for contributions to HuggingFace's ONNX implementation, for all available models on the HF Hub. There are already a lot of architectures implemented for converting PyTorch models to ONNX, but we need more! We need them all!
Feel free to join us in this adventure! Join the org by clicking here
Here is a non-exhaustive list of all available models:
If there is a ✅ next to a model, it means that the model is already supported by ONNX. A 🛠️ next to a model means that a PR is in progress; finally, if there is nothing next to a model, it means that the model is not yet supported by ONNX, and thus we need to add support for it.
If you need help implementing an unsupported model, here is a guide from HuggingFace's documentation.
If you want an example of an implementation, I did one for CamemBERT a few months ago.
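For contributors, the end-to-end flow from that guide boils down to: write the OnnxConfig, export, then validate against the PyTorch reference. A hedged sketch using a small checkpoint (the checkpoint, paths, and config class follow the documentation's example, not this issue):

```python
from pathlib import Path

from transformers import AutoConfig, AutoModel, AutoTokenizer
from transformers.models.distilbert import DistilBertOnnxConfig
from transformers.onnx import export, validate_model_outputs

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)
onnx_config = DistilBertOnnxConfig(AutoConfig.from_pretrained(checkpoint))

# Export the PyTorch model to ONNX.
onnx_path = Path("model.onnx")
onnx_inputs, onnx_outputs = export(
    tokenizer, model, onnx_config, onnx_config.default_onnx_opset, onnx_path
)

# Check that ONNX Runtime reproduces the PyTorch outputs within the
# config's tolerance; this raises if the outputs diverge.
validate_model_outputs(
    onnx_config, tokenizer, model, onnx_path, onnx_outputs, onnx_config.atol_for_validation
)
```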