
Add missing tokenizer test files [still need 2 first time contributors] #16627

Open
9 tasks
SaulLu opened this issue Apr 6, 2022 · 25 comments


@SaulLu SaulLu commented Apr 6, 2022

🚀 Add missing tokenizer test files

Several tokenizers currently have no associated tests. I think that adding the test file for one of these tokenizers could be a very good way to make a first contribution to transformers.

Tokenizers concerned

not yet claimed

  • LED

  • RetriBert

claimed

with an ongoing PR

with an accepted PR

none

How to contribute?

  1. Claim a tokenizer

    a. Choose a tokenizer from the list of "not yet claimed" tokenizers

    b. Check that no one has already indicated in the comments on this issue that they are taking care of this tokenizer

    c. Post a message in this issue saying that you are handling this tokenizer

  2. Create a local development setup (if you have not already done it)

    I refer you to the section "start-contributing-pull-requests" of the Contributing guidelines, where everything is explained. Don't be afraid of step 5: for this contribution, you will only need to run locally the tests you add.

  3. Follow the instructions in the README inside the templates/adding_a_missing_tokenization_test folder to generate the template with cookiecutter for the new test file you will be adding. Don't forget to move the new test file, at the end of the template generation, to the sub-folder named after the model in the tests folder. Some details about the questionnaire - assuming that the lowercase name of the model is brand_new_bert:

    • "has_slow_class": Set true there is a tokenization_brand_new_bert.py file in the folder src/transformers/models/brand_new_bert
    • "has_fast_class": Set true there is a tokenization_brand_new_bert_fast.py file the folder src/transformers/models/brand_new_bert.
    • "slow_tokenizer_use_sentencepiece": Set true if the tokenizer defined in the tokenization_brand_new_bert.py file uses sentencepiece. If this tokenizer don't have a ``tokenization_brand_new_bert.py` file set False.
  4. Complete the setUp method in the generated test file; you can take inspiration from how it is done for the other tokenizers (a minimal sketch is shown after this list).

  5. Try to run all the added tests. It is possible that some tests will not pass, so it is important to understand why: sometimes the common test is not suited to a tokenizer, and sometimes a tokenizer can have a bug. You can also look at what is done in similar tokenizer tests; if there are big problems or you don't know what to do, we can discuss this in the PR (step 7).

  6. (Bonus) Try to get a good understanding of the tokenizer in order to add custom tests for it.

  7. Open a PR with the new test file added, remember to fill in the PR title and message body (referencing this issue), and request a review from @LysandreJik and @SaulLu.
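
To give you an idea for step 4, here is a minimal sketch of what the setUp method typically looks like for a WordPiece-based (BERT-like) tokenizer test. The class name, toy vocabulary, relative import and tokenizer class below are placeholders to adapt to the tokenizer you claimed, not the actual test of any particular model:

import os
import unittest

from transformers import BertTokenizer  # replace with your model's slow tokenizer class
from transformers.models.bert.tokenization_bert import VOCAB_FILES_NAMES

from ..test_tokenization_common import TokenizerTesterMixin


class BrandNewBertTokenizationTest(TokenizerTesterMixin, unittest.TestCase):

    tokenizer_class = BertTokenizer  # the slow tokenizer class under test
    test_rust_tokenizer = False      # set to True and add rust_tokenizer_class if a fast tokenizer exists

    def setUp(self):
        super().setUp()
        # Write a tiny toy vocabulary into the temporary directory created by
        # TokenizerTesterMixin, so the common tests can instantiate the
        # tokenizer from these files with from_pretrained.
        vocab_tokens = ["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]", "want", "##want", "##ed", "wa", "un", "runn", "##ing", "low", "lowest", ","]
        self.vocab_file = os.path.join(self.tmpdirname, VOCAB_FILES_NAMES["vocab_file"])
        with open(self.vocab_file, "w", encoding="utf-8") as vocab_writer:
            vocab_writer.write("".join(token + "\n" for token in vocab_tokens))

Once the file is in place, you can run just these tests locally, for example with python -m pytest tests/brand_new_bert/test_tokenization_brand_new_bert.py -sv.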

Tips

Do not hesitate to read the questions / answers in this issue.


@tgadeliya tgadeliya commented Apr 6, 2022

Hi, I would like to add tests for Longformer tokenizer


@anmolsjoshi anmolsjoshi commented Apr 6, 2022

@SaulLu I would like to add tests for Flaubert


@Rajathbharadwaj Rajathbharadwaj commented Apr 6, 2022

Hey, I would like to contribute for Electra. Pointers please!


@SaulLu SaulLu commented Apr 7, 2022

Thank you all for offering your help!

@Rajathbharadwaj, sure! What do you need help with? Do you need more details on any of the steps listed in the main post?


@farahdian farahdian commented Apr 10, 2022

Hi, first-time contributor here. Could I add tests for Splinter?


@farahdian farahdian commented Apr 11, 2022

Is anyone else encountering this error with the cookiecutter command? My dev environment setup seemed to have gone fine...
Also, I had run the command inside the tests/splinter directory.

[Screenshot 2022-04-11 172638]


@SaulLu SaulLu commented Apr 11, 2022

@farahdian, thank you so much for working on this! Regarding your issue: if you're in the tests/splinter folder, can you try to run cookiecutter ../../templates/adding_a_missing_tokenization_test/ ?

You should have a newly created folder cookiecutter-template-BrandNewBERT inside tests/splinter. 🙂

If that's the case, you'll need after to do something like:

mv cookiecutter-template-BrandNewBERT/test_tokenization_brand_new_bert.py .
rm -r cookiecutter-template-BrandNewBERT/

Keep me posted 😄


@farahdian farahdian commented Apr 12, 2022

Thanks so much @SaulLu, turns out it was due to my environment not recognizing my installed cookiecutter, so I sorted it out there. 👍


@SaulLu SaulLu commented Apr 19, 2022

Hi @anmolsjoshi, @tgadeliya, @Rajathbharadwaj and @farahdian,

Just a quick message to see how things are going for you and if you have any problems. If you do, please share them!


@farahdian farahdian commented Apr 19, 2022

Thanks @SaulLu! I've been exploring the tokenization test files in the repo, just trying to figure out which ones would be a good basis for writing a tokenization test for Splinter... if you have any guidance on this, it would be super helpful!


@Rajathbharadwaj Rajathbharadwaj commented Apr 19, 2022

Hey @SaulLu, my apologies, I've been a bit busy. I'll get started ASAP; however, I still didn't understand where exactly I should run the cookie cutter.

Help on this would be appreciated 😄


@SaulLu SaulLu commented Apr 19, 2022

Hi @farahdian ,

Thank you very much for the update! To know where you stand, have you done step 3)? Is it for step 4) that you are looking for a similar tokenizer? 🙂


@SaulLu SaulLu commented Apr 19, 2022

Hi @Rajathbharadwaj ,

Thank you for the update too!

I still didn't understand where exactly I should run the cookie cutter

You can run the cookie cutter command anywhere as long as the command is followed by the path to the folder adding_a_missing_tokenization_test in the transformers repo that you have cloned locally.

When you run the command, it will create a new folder at your current location. In this folder you will find a base for the Python test file, which you need to move into the tests/electra folder of your local transformers clone. Once this file is moved, you can delete the folder that was created by the cookie cutter command.

Below is an example of the sequence of bash commands I would personally use:

(base) username@hostname:~$ cd ~/repos
(base) username@hostname:~/repos$ git clone git@github.com:huggingface/transformers.git
[Install my development setup]
(transformers-dev) username@hostname:~/repos$ cookiecutter transformers/templates/adding_a_missing_tokenization_test/
[Answer the questionnaire]
(transformers-dev) username@hostname:~/repos$ mv cookiecutter-template-Electra/test_tokenization_electra.py transformers/tests/electra
(transformers-dev) username@hostname:~/repos$ rm -r cookiecutter-template-Electra/

Hope that'll help you 😄


@farahdian farahdian commented Apr 19, 2022

Appreciate your patience @SaulLu! Yup, I've done step 3 and generated a test tokenization file with cookiecutter. Now onto working on the setUp method 😄


@SaulLu SaulLu commented Apr 19, 2022

@farahdian, this is indeed a very good question: finding the closest tokenizer to draw inspiration from and identifying the important differences from that tokenizer is the most interesting part.

For that there are several ways to start:

  1. Identify the high-level features of the tokenizer by looking at the contents of the model's "reference" checkpoint files (listed inside the PRETRAINED_VOCAB_FILES_MAP global variables in the tokenizer's files) on the hub. A similar model would most likely store the tokenizer vocabulary in the same way (with only a vocab file, with both a vocab and a merges file, with a sentencepiece binary file, or with only a tokenizer.json file).
  2. Read the high level explanation of the model in the transformers documentation (e.g. for Splinter)
  3. Read the paper corresponding to the model
  4. Look at the implementation in transformers lib
  5. Look at the original implementation of the model (often mentioned in the paper)
  6. Look at the discussions on the PR in which the model was added

For the model you're in charge of, @farahdian:

  • Transformers' documentation mentions that:

    Use SplinterTokenizer (rather than BertTokenizer), as it already contains this special token. Also, its default behavior is to use this token when two sequences are given (for example, in the run_qa.py script).

  • Splinter's paper mentions that:

    Splinter-base shares the same architecture (transformer encoder (Vaswani et al., 2017)), vocabulary (cased wordpieces), and number of parameters (110M) with SpanBERT-base (Joshi et al., 2020).

  • And SpanBERT's paper mentions that:

    We reimplemented BERT's model and pre-training method in fairseq (Ott et al., 2019). We used the model configuration of BERT large as in Devlin et al. (2019) and also pre-trained all our models on the same corpus: BooksCorpus and English Wikipedia using cased Wordpiece tokens.

  • and the vocabulary files of bert-base-cased (vocab file) and of splinter-base (vocab file) look very similar

Given these mentions, it seems that Splinter's tokenizer is very similar to BERT's. It would be interesting to confirm this impression and to understand all the differences between SplinterTokenizer and BertTokenizer so that they are well reflected in the test 🙂
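
If it helps, here is a small sketch you could use to check that impression; I'm assuming the vocab files live at the hub paths below (bert-base-cased and tau/splinter-base), based on the checkpoints mentioned above:

from urllib.request import urlopen

def load_vocab(url):
    # Download a vocab.txt file and return the list of tokens (one per line).
    with urlopen(url) as f:
        return f.read().decode("utf-8").splitlines()

bert_vocab = load_vocab("https://huggingface.co/bert-base-cased/resolve/main/vocab.txt")
splinter_vocab = load_vocab("https://huggingface.co/tau/splinter-base/resolve/main/vocab.txt")

print(len(bert_vocab), len(splinter_vocab))
# Tokens present in one vocabulary but not in the other; if the tokenizers
# really are near-identical, this should mostly surface Splinter's extra
# special token(s).
print(set(splinter_vocab) - set(bert_vocab))
print(set(bert_vocab) - set(splinter_vocab))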


@Rajathbharadwaj Rajathbharadwaj commented Apr 19, 2022


Thank you so much @SaulLu
I understood now; however, I am unsure about the slow_tokenizer_use_sentencepiece question: I set it to True as there is a tokenization_electra.py file, but I didn't understand

"Set to true if the tokenizer defined in the tokenization_brand_new_bert.py file uses sentencepiece"

So did I select correctly? Or should I set it to False? Apologies for asking so many questions 😄

However, I've now started adding tests for Electra and will keep you posted if I run into something I don't understand.

Thanks for helping once again!


@tgadeliya tgadeliya commented Apr 19, 2022

Hi @SaulLu,
I think my case is the easiest one, because the Longformer model actually uses the same tokenizer as RoBERTa, with no differences. So I adapted the tests (small refactor and changes) from the RoBERTa tokenizer and prepared a branch with them. Nevertheless, I really want to dive deeper and study the code of TokenizerTesterMixin, and if after that I find some untested behaviour, I will add new tests.
But I think I have one doubt that you can resolve: are you expecting the Longformer tests to use a different toy tokenizer example than the RoBERTa tests? Or maybe I should write my own tests from scratch?


@SaulLu SaulLu commented Apr 19, 2022

@Rajathbharadwaj, I'm happy to help! Especially as your questions will surely be useful for other people.

however, I am unsure about the slow_tokenizer_use_sentencepiece question: I set it to True as there is a tokenization_electra.py file, but I didn't understand
"Set to true if the tokenizer defined in the tokenization_brand_new_bert.py file uses sentencepiece"
So did I select correctly? Or should I set it to False? Apologies for asking so many questions 😄

Some XxxTokenizer classes (without the Fast at the end, implemented in the tokenization_xxx.py file) use a backend based on the sentencepiece library. For example, T5Tokenizer uses a sentencepiece backend: you can see this import at the beginning of the tokenization_t5.py file:

import sentencepiece as spm
and you can see that the backend is instantiated here:
self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
self.sp_model.Load(vocab_file)

On the contrary, BertTokenizer for example does not use a sentencepiece backend.
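
A quick way to check this yourself, as a rough sketch (the checkpoint names are just examples): sentencepiece-backed slow tokenizers expose an sp_model attribute, while WordPiece/BPE ones don't.

from transformers import BertTokenizer, T5Tokenizer

t5_tok = T5Tokenizer.from_pretrained("t5-small")             # sentencepiece-backed slow tokenizer
bert_tok = BertTokenizer.from_pretrained("bert-base-cased")  # WordPiece, no sentencepiece

print(hasattr(t5_tok, "sp_model"))    # True  -> answer "slow_tokenizer_use_sentencepiece" with true
print(hasattr(bert_tok, "sp_model"))  # False -> answer it with false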

I hope this helped you!

@SaulLu SaulLu changed the title from "Add missing tokenizer test files" to "Add missing tokenizer test files [still need 5 first time contributors]" on Apr 19, 2022

@SaulLu SaulLu commented Apr 19, 2022

Hi @tgadeliya ,

Thanks for the update!

But I think I have one doubt, that you can resolve. Are you anticipating from Longformer tests to have different toy tokenizer example than in RoBERTa tests? Or maybe I should write my own tests from scratch?

In your case, I wouldn't be surprised if Longformer uses the same tokenizer as RoBERTa. In this case, it seems legitimate to use the same toy tokenizer. Maybe the only check you can do to confirm this hypothesis is to compare the vocabularies of the "main" checkpoints of both models:

!wget https://huggingface.co/allenai/longformer-base-4096/raw/main/merges.txt
!wget https://huggingface.co/allenai/longformer-base-4096/raw/main/vocab.json
!wget https://huggingface.co/roberta-base/raw/main/merges.txt
!wget https://huggingface.co/roberta-base/raw/main/vocab.json

!diff merges.txt merges.txt.1
!diff vocab.json vocab.json.1

Turns out the result confirms it!


@leondz leondz commented Apr 20, 2022

Hi, I'm happy to take MobileBert

@SaulLu SaulLu changed the title from "Add missing tokenizer test files [still need 5 first time contributors]" to "Add missing tokenizer test files [still need 4 first time contributors]" on Apr 20, 2022

@elusenji elusenji commented Apr 20, 2022

I'd like to work on ConvBert.

@SaulLu SaulLu changed the title from "Add missing tokenizer test files [still need 4 first time contributors]" to "Add missing tokenizer test files [still need 3 first time contributors]" on Apr 21, 2022

@elusenji elusenji commented Apr 22, 2022

Identify the high level features of the tokenizer by looking at the contents of the model's "reference" checkpoint files (listed inside the PRETRAINED_VOCAB_FILES_MAP global variables in the tokenizer's files) on the hub. A similar model would most likely store the tokenizer vocabulary in the same way (with only a vocab file, with both a vocab and a merges files, with a sentencepiece binary file or with only a tokenizer.json file).

@SaulLu I'm having trouble identifying ConvBert's 'reference' checkpoint files on the hub. Would you kindly provide more guidance on this?


@SaulLu SaulLu commented Apr 22, 2022

Hi @elusenji ,

In the src/transformers/models/convbert/tokenization_convbert.py file you can find the global variable PRETRAINED_VOCAB_FILES_MAP:

PRETRAINED_VOCAB_FILES_MAP = {
    "vocab_file": {
        "YituTech/conv-bert-base": "https://huggingface.co/YituTech/conv-bert-base/resolve/main/vocab.txt",
        "YituTech/conv-bert-medium-small": "https://huggingface.co/YituTech/conv-bert-medium-small/resolve/main/vocab.txt",
        "YituTech/conv-bert-small": "https://huggingface.co/YituTech/conv-bert-small/resolve/main/vocab.txt",
    }
}

In particular YituTech/conv-bert-base is a reference checkpoint for ConvBert.
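
If it helps, here is a small illustrative sketch of how you could peek at that reference checkpoint's vocabulary from Python (assuming you have transformers installed and an internet connection):

from transformers import ConvBertTokenizer

tok = ConvBertTokenizer.from_pretrained("YituTech/conv-bert-base")
print(tok.vocab_size)                       # size of the WordPiece vocabulary
print(list(tok.get_vocab().items())[:10])   # a few (token, id) pairs to get a feel for the vocab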

Is this what you were having trouble with? ☺️


@elusenji elusenji commented Apr 22, 2022

Yes, this helps!


@danhphan danhphan commented Apr 27, 2022

Hi @SaulLu, I am happy to write tests for RemBert. Thanks.

@SaulLu SaulLu changed the title from "Add missing tokenizer test files [still need 3 first time contributors]" to "Add missing tokenizer test files [still need 2 first time contributors]" on Apr 27, 2022