[Feature Request] Allow using custom languages/models for spaCy NLP #648
We are thinking about changing the way you define the tokenizer to be more flexible, and that would allow you to do what you are looking for. In the meantime, if you are using the API, you can do the following:
After you do this, you can refer to


Is your feature request related to a problem? Please describe.
Other related issues: #408 #251
I trained a Chinese model for spaCy, linked it to [spacy's package folder]/data/zh (using spacy link), and want to use it with Ludwig. However, when I tried to set up the Ludwig config, I received an error telling me that there is no way to load the Chinese model.
Describe the use case
By allowing custom languages in spaCy, users working with other languages would be able to process their texts more quickly and easily.
Describe the solution you'd like
Here's the current solution...
...which I think could be changed to this...
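The original snippets aren't reproduced here, but the requested change essentially amounts to letting an unrecognized language code fall through to `spacy.load`. A minimal sketch of that dispatch logic (the function and model table below are hypothetical illustrations, not Ludwig's actual code):

```python
# Hypothetical sketch of the requested behavior: map known language codes to
# bundled spaCy models, and treat anything else as a custom `spacy link` name.
BUILTIN_MODELS = {
    "en": "en_core_web_sm",  # illustrative entries, not Ludwig's real table
    "it": "it_core_news_sm",
}

def resolve_spacy_model(language: str) -> str:
    """Return the spaCy model name to load for a config language code."""
    # Unknown codes pass through unchanged, so a model linked with
    # `python -m spacy link <package> zh` can later be loaded as "zh".
    return BUILTIN_MODELS.get(language, language)
```

With this fallback, `resolve_spacy_model("zh")` returns `"zh"`, which `spacy.load("zh")` can then resolve to the linked custom model.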
Describe alternatives you've considered
I've considered not using spaCy and instead using a custom script that simply splits sentences into words with a segmenter like jieba. However, with this method I would lose nearly all of the benefits of NLP.
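For reference, the jieba-based alternative is just flat word segmentation. A minimal sketch (assumes jieba is installed; falls back to per-character tokens otherwise):

```python
def segment_chinese(text: str) -> list:
    """Split Chinese text into word tokens using jieba if available."""
    try:
        import jieba  # third-party Chinese word segmenter
        return list(jieba.cut(text))
    except ImportError:
        # Naive fallback: one token per character (loses word boundaries)
        return list(text)
```

Either way the result is plain tokens with none of spaCy's linguistic annotations, which is the loss described above.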
Additional context
I think that's all :)
I don't know whether this suggestion will be accepted, but if it gets implemented I would be very thankful.
BTW, since I'm not a native English speaker, there may be some mistakes. Please don't mind them :p