Natural language processing
Natural language processing (NLP) is a field of computer science that studies how computers interact with human language. In 1950, Alan Turing published an article that proposed a measure of machine intelligence, now called the Turing test. More modern techniques, such as deep learning, have produced notable results in language modeling, parsing, and many other natural-language tasks.
Here are 14,407 public repositories matching this topic...
In doc2vec.py, the infer_vector function appears to use epochs for the number of iterations, and steps is not used.
However, the similarity_unseen_docs function still passes steps when calling infer_vector.
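A minimal sketch of the inference call, assuming the gensim API in which infer_vector accepts an epochs argument (the older steps name is what similarity_unseen_docs still passes); the toy corpus and hyperparameters are illustrative only:

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Tiny illustrative corpus.
docs = [
    TaggedDocument(words=["natural", "language", "processing"], tags=[0]),
    TaggedDocument(words=["deep", "learning", "models"], tags=[1]),
]
model = Doc2Vec(docs, vector_size=16, min_count=1, epochs=5)

# The number of inference iterations is controlled by epochs, not steps.
vector = model.infer_vector(["language", "model"], epochs=20)
print(vector.shape)  # (16,)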
Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor, which fails when it tries to load data from my compressed files.
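A minimal sketch of the gzip-aware file handling a dataset reader or the predict path could use; the helper name open_maybe_compressed is hypothetical and relies only on the standard library:

import gzip

def open_maybe_compressed(file_path, mode="rt", encoding="utf-8"):
    # Return a text-mode file handle, decompressing .gz files on the fly.
    if file_path.endswith(".gz"):
        return gzip.open(file_path, mode, encoding=encoding)
    return open(file_path, mode, encoding=encoding)

# Lines can then be read the same way for plain or compressed datasets:
# with open_maybe_compressed("my_dataset.jsonl.gz") as f:
#     for line in f:
#         ...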
Hi, I would like to propose a better implementation for test_indices. We can remove the unneeded np.array cast.

New (cleaner):
test_indices = list(set(range(len(texts))) - set(train_indices))

Old:
test_indices = np.array(list(set(range(len(texts))) - set(train_indices)))
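A small check of the claim, assuming texts is a list and train_indices a list of ints; NumPy fancy indexing accepts the plain list just as well as the array:

import numpy as np

texts = ["a", "b", "c", "d", "e"]
train_indices = [0, 2, 4]

new = list(set(range(len(texts))) - set(train_indices))
old = np.array(list(set(range(len(texts))) - set(train_indices)))

assert sorted(new) == sorted(old.tolist())
print(np.asarray(texts)[new])  # fancy indexing works with the plain list too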
At the moment we cannot return a list of attention weight outputs in Flax as we can in PyTorch.
In PyTorch, there is an output_attentions boolean in the forward call of every model which, when set to True, collects the attention weights from each layer.
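For reference, a minimal sketch of the PyTorch behaviour being compared against, using Hugging Face transformers; the checkpoint name is illustrative:

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Natural language processing", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions is a tuple with one tensor per layer,
# each shaped (batch, num_heads, seq_len, seq_len).
print(len(outputs.attentions), outputs.attentions[0].shape)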