natural-language-understanding
Here are 353 public repositories matching this topic...
Google has started using BERT in its search engine. I imagine it creates embeddings for the query on the search engine, then computes some similarity measure against the potential candidate websites/pages, and finally ranks them in the search results.
I am curious how they create embeddings for the documents (the potential candidate websites/pages), if at all. Or am I interpreting it wrong?
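As a toy illustration of that ranking step (all numbers made up; real systems use high-dimensional BERT vectors and far more machinery than this), cosine similarity between a query embedding and per-document embeddings already yields an ordering:

```python
# Hypothetical sketch: rank candidate documents by cosine similarity between
# a query embedding and precomputed document embeddings. The 3-d vectors
# below are toy values, not real BERT outputs.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

query = [0.9, 0.1, 0.0]
docs = {
    "page_a": [0.8, 0.2, 0.1],   # close to the query direction
    "page_b": [0.0, 0.1, 0.9],   # mostly unrelated
}

# Sort candidate pages by similarity to the query, best first.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)  # page_a ranks above page_b
```

How the per-document embeddings are produced in the first place (the question above) is exactly the part this sketch assumes away.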
Prerequisites
Please fill in by replacing `[ ]` with `[x]`.
- [ ] Are you running the latest `bert-as-service`?
- [ ] Did you follow the installation and the usage instructions in `README.md`?
- [ ] Did you check the [FAQ list in README.md](https://github.com/hanxiao/bert-as-se
The diagram in the documentation suggests yes, but num_fc_layers and fc_layers are not listed as available parameters, as they are for e.g. the parallel CNN or stacked CNN encoders.
Based on a few experiments it does not seem to be supported; however, I am using the RNN encoder inside a sequence combiner, so possibly this is what is causing problems.
For example, this does not seem to add any fc_layers:
co
Description
Add a README file in the GitHub folder.
Explain the usage of the templates.
Other Comments
Principles of NLP Documentation
Each landing page at the folder level should have a README which explains:
- a summary of what this folder offers;
- why and how it benefits users;
- as applicable, documentation of how to use it, a brief description, etc.
Scenarios folder:
-
What should the format of the files be for training the BERT WordPiece tokenizer? Specifically, for training the tokenizer on WikiText-103 or similar Wikipedia dumps?
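For what it's worth, a common setup (an assumption based on how the Hugging Face `tokenizers` trainer is typically used, not on this repo's docs) is plain UTF-8 text with one sentence or paragraph per line; a WikiText-103 dump then needs only its blank lines and `= heading =` markers stripped:

```python
# Sketch: turn a WikiText-style dump into the plain-text, line-per-sample
# format usually fed to a WordPiece trainer. File name and sample text are
# illustrative only.
from pathlib import Path

raw = [
    "= Valkyria Chronicles III =",
    "",
    "Senjou no Valkyria 3 is a tactical role-playing game.",
    "The game was released in January 2011.",
]

# Keep only content lines: drop blanks and "= heading =" markers.
lines = [l.strip() for l in raw if l.strip() and not l.strip().startswith("=")]
Path("corpus.txt").write_text("\n".join(lines) + "\n", encoding="utf-8")

# With the `tokenizers` package installed, training would then look roughly
# like this (hedged -- check the library docs for current parameter names):
#   from tokenizers import BertWordPieceTokenizer
#   tok = BertWordPieceTokenizer(lowercase=True)
#   tok.train(files=["corpus.txt"], vocab_size=30000, min_frequency=2)
```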
Per discussion in #3032
The openpsi README.md describes a version of openpsi that no longer exists. It's inaccurate in 120 different ways... I recognize it as something I wrote long ago. As far as I know, there is no adequate documentation for openpsi that
-- explains what it is
-- explains how it works; this includes a review of all of the major components, including control, de
Parsing raw wiki text into a structured format is hard, especially handling wiki documents that are not well-formed. This issue is for tracking problematic cases where the extraction is wrong or missing.
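A hypothetical illustration of why malformed markup is hard (function and sample text are made up, not from this project): a naive regex for `[[target|label]]` links works on well-formed text but silently produces garbage on nested constructs.

```python
import re

def strip_links(text):
    # Naive [[target|label]] -> label rewrite; assumes links never nest.
    return re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", text)

# Well-formed input extracts cleanly:
print(strip_links("See [[France|French]] wine."))      # See French wine.

# A nested file link breaks the assumption: brackets leak into the output,
# which is exactly the kind of wrong/missing extraction worth tracking.
print(strip_links("[[File:x.jpg|thumb|[[nested]]]]"))
```

Robust extractors end up needing a real parser (or at least a bracket-matching pass) rather than regexes alone.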
On the home page of the website https://nlp.johnsnowlabs.com/ I read "Full Python, Scala, and Java support".
Unfortunately, I have been trying for three days now to use Spark NLP from Java without any success.
- I cannot find a Java API reference (Javadoc) for the framework.
- Not even a single Java example is available.
- I do not know Scala, so I do not know how to convert things like:
val testData = spark.createDataFrame(
After launching the first versions of this library and listening to a lot of the feature requests, we concluded that working with transformers really needed better access to machine learning models than the API was making easy. The implementation was also awkward in several ways, especially the subclassing of components, different factory names, and the need to subclass the Language class.
To
HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
Xingxing Zhang, Furu Wei, Ming Zhou
to appear in ACL 2019
https://arxiv.org/abs/1905.06566
Hello @Gugic,
Please let me know how I can connect API.AI to the database in my Rails app. For example, we need to fetch a project's status; how is it possible to do that through the API? I have used https://github.com/api-ai/apiai-ruby-client,
created some sample intents and responses, and I can get the response with this gem's API, but I need to fetch the record from our own database (the Rails application database).
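The usual pattern, whatever the server language, is a fulfillment webhook: the agent POSTs the matched intent's parameters as JSON, your app looks the record up in its own database, and it replies with the text to return. A minimal sketch follows (in Python for brevity; the field and parameter names are assumptions, so check your agent's actual request format):

```python
# Hypothetical fulfillment-webhook handler: parse the agent's JSON request,
# query the app's database, and answer with a "fulfillmentText"-style reply.
import json
import sqlite3

def handle_webhook(body: str, conn: sqlite3.Connection) -> str:
    req = json.loads(body)
    # "project_name" is an assumed intent parameter name.
    project = req["queryResult"]["parameters"]["project_name"]
    row = conn.execute(
        "SELECT status FROM projects WHERE name = ?", (project,)
    ).fetchone()
    status = row[0] if row else "unknown"
    return json.dumps({"fulfillmentText": f"Project {project} is {status}."})

# Usage with a throwaway in-memory database standing in for the Rails DB:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE projects (name TEXT, status TEXT)")
conn.execute("INSERT INTO projects VALUES ('apollo', 'in progress')")
body = json.dumps({"queryResult": {"parameters": {"project_name": "apollo"}}})
print(handle_webhook(body, conn))
```

In a Rails app the same shape would live in a controller action that reads the POST body and renders the JSON reply.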


It would be very useful to have documentation on how to train different models, not necessarily with this library, but with external libs (like the original BERT, fairseq, etc.).
Maybe another repository with READMEs or docs containing recipes from those who have already pretrained their models, so the procedure can be reproduced for other languages or domains.
There are many exter