The Wayback Machine - https://web.archive.org/web/20201018094450/https://github.com/pytorch/text/issues/461

How exactly FastText and CharNGram embedding work in Torchtext #461

Open
MFajcik opened this issue Oct 24, 2018 · 1 comment
@MFajcik commented Oct 24, 2018

Hi, I have two questions.

Firstly, I know that in the past I found documentation on how the CharNGram embedding works (either somewhere in the code or in the docs), but I cannot find it now.
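For context, the CharNGram scheme composes a word's vector by averaging pretrained vectors for the word's character n-grams. The sketch below is a hypothetical illustration of that composition, not the torchtext implementation; the n-gram range (2–4), the boundary markers, the toy dimensionality, and the `ngram_table` dict are all assumptions made for the example.

```python
# Hypothetical sketch of a CharNGram-style lookup (an illustration of
# the scheme, NOT the torchtext source): a word vector is the average
# of the vectors of its character n-grams (n = 2..4), with assumed
# "#BEGIN#"/"#END#" boundary markers around the word.
import numpy as np

DIM = 8  # toy dimensionality for the sketch


def char_ngrams(word, n_min=2, n_max=4):
    """Enumerate character n-grams of the word, including boundary markers."""
    chars = ["#BEGIN#"] + list(word) + ["#END#"]
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(chars) - n + 1):
            grams.append("".join(chars[i:i + n]))
    return grams


def word_vector(word, ngram_table):
    """Average the vectors of the word's n-grams found in the table;
    n-grams missing from the table are simply skipped."""
    vecs = [ngram_table[g] for g in char_ngrams(word) if g in ngram_table]
    if not vecs:
        return np.zeros(DIM)  # no known n-gram at all: fall back to zeros
    return np.mean(vecs, axis=0)
```

Because the lookup is built from sub-word pieces, even a word that never appeared in the pretraining corpus gets a non-trivial vector as long as some of its n-grams are known.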

Secondly, I would like to know whether FastText uses precomputed word vectors for the pretrained vocabulary, or whether calling it within build_vocab constructs embeddings from pretrained n-grams for the training-data vocabulary. In other words, does build_vocab handle out-of-pretrained-vocabulary words?
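To make the second question concrete, here is a hedged sketch of the alignment step a build_vocab-style call performs when given precomputed word vectors: each word in the corpus vocabulary either copies its pretrained vector or falls back to a default. The function name `align_vocab`, the zero-vector fallback, and the dimensionality are assumptions for illustration, not torchtext's actual code.

```python
# Hypothetical sketch of aligning a corpus vocabulary with a table of
# precomputed word vectors (an illustration, NOT the torchtext source):
# in-table words copy their pretrained vector, out-of-table words keep
# an assumed zero-vector fallback.
import numpy as np

DIM = 4  # toy dimensionality for the sketch


def align_vocab(vocab, pretrained):
    """Build an embedding matrix with one row per vocabulary word."""
    matrix = np.zeros((len(vocab), DIM))
    for idx, word in enumerate(vocab):
        if word in pretrained:
            matrix[idx] = pretrained[word]  # in-pretrained-vocabulary word
        # else: the row stays all-zero      # out-of-pretrained-vocabulary word
    return matrix
```

If the lookup works like this, an out-of-pretrained-vocabulary word gets only the fallback vector; composing a vector from pretrained n-grams (as fastText itself can do for unseen words) would require a sub-word lookup instead.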

Thank you for the information.

@MFajcik MFajcik changed the title How exactly FastText and CharNGram work in Torchtext How exactly FastText and CharNGram embedding work in Torchtext Oct 24, 2018
@zhangguanheng66 (Collaborator) commented May 30, 2019

A unit test could be added as a good example.
