spacy
Here are 514 public repositories matching this topic...
I propose this topic as a feature request, but it's also a documentation issue, given the lack of detail in the user guide section: https://rasa.com/docs/rasa/core/actions/#custom-actions.
What is specified in the paragraph Execute Actions in Other Code is obscure to me, and the details at the API documentation link [Action Server](https://rasa.com/docs/rasa/api/acti
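The excerpt is cut off above. For context, custom actions are normally executed by POSTing to the custom action server's webhook; the sketch below is only my reading of that flow, and the port, endpoint, and payload fields (next_action, tracker, domain) are assumptions based on the rasa-sdk defaults, not the official documentation.

import requests

# Assumed default rasa-sdk action server endpoint (port 5055, /webhook)
ACTION_SERVER_URL = "http://localhost:5055/webhook"

payload = {
    "next_action": "action_my_custom_action",  # hypothetical action name
    "tracker": {"sender_id": "user1", "slots": {}, "latest_message": {}},
    "domain": {},
}

response = requests.post(ACTION_SERVER_URL, json=payload)
print(response.json())  # expected to contain the events and responses produced by the action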
Code:
import spacy
import neuralcoref

nlp = spacy.load('en_core_web_sm')
coref = neuralcoref.NeuralCoref(nlp.vocab)
nlp.add_pipe(coref, name='neuralcoref')

doc = nlp("She loves him.")
print(len(doc._.coref_clusters), 'clusters')
doc = nlp("My sister has a dog. She loves him.")
print(len(doc._.coref_clusters), 'clusters')
# take the second sentence out as a standalone Doc
doc = doc[7].sent.as_doc()
print(len(doc._.coref_clusters), 'clusters')

Output:
Breaks Displacy
Duplicate of issue #192 (solved in 2018), but I cannot apply the solution to textacy 0.10.
Hi, I'm using Textacy in a Google Colab environment, running a default hosted runtime, installed via !pip3 install textacy
I've run into the same problem as the OP in #192. I'm following the official documentation examples; the version installed is the latest (textacy-0.10.0).
When trying dir(textacy), this is the o
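The report is cut off above. For what it's worth, the entry points moved around in recent textacy releases; below is a minimal sketch of how I'd build a doc on 0.10, assuming the make_spacy_doc helper, with the text and model name as placeholders.

import textacy

# build a spaCy doc through textacy's top-level helper
text = "Textacy builds on spaCy for higher-level NLP tasks."
doc = textacy.make_spacy_doc(text, lang="en_core_web_sm")
print(len(doc), "tokens")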
I wanted to use pytextrank together with spacy_udpipe to get keywords from texts in other languages (see https://stackoverflow.com/questions/59824405/spacy-udpipe-with-pytextrank-to-extract-keywords-from-non-english-text), but I realized that udpipe-spacy somehow "overrides" the original spaCy pipeline, so the noun_chunks are not generated (btw: the noun_chunks are created in lang/en/syntax_itera
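For reference, this is roughly how pytextrank (2.x-era API) is wired into a stock English pipeline, where noun_chunks are available; the model name and sample text are placeholders, and the exact component API may differ between versions.

import spacy
import pytextrank

nlp = spacy.load("en_core_web_sm")

# pytextrank relies on doc.noun_chunks, which spacy_udpipe pipelines may not provide
tr = pytextrank.TextRank()
nlp.add_pipe(tr.PipelineComponent, name="textrank", last=True)

doc = nlp("Keyword extraction works on English text with this setup.")
for phrase in doc._.phrases[:5]:
    print(phrase.rank, phrase.text)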
Add the ability to answer weather forecast questions. DO NOT USE any API key because Dragonfire is an application that runs on the client's machine.
This is also on page 356.
from nltk.corpus import sentiwordnet as swn
good = swn.senti_synsets('good', 'n')[0]

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'filter' object is not subscriptable
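In Python 3, senti_synsets returns a lazy filter object rather than a list, so indexing it fails; wrapping it in list() (or using next(iter(...))) is the usual workaround, sketched below.

from nltk.corpus import sentiwordnet as swn

# senti_synsets is lazy in Python 3; materialize it before indexing
good = list(swn.senti_synsets('good', 'n'))[0]
print(good.pos_score(), good.neg_score(), good.obj_score())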
Hi,
When we try to tokenize the following sentence:
If we use spaCy:
import spacy

a = spacy.load('en_core_web_lg')
doc = a("I like the link http://www.idph.iowa.gov/ohds/oral-health-center/coordinator")
list(doc)
We got:
[I, like, the, link, http://www.idph.iowa.gov, /, ohds, /, oral, -, health, -, center, /, coordinator]
But if we use the spaCy transformer tokenizer:
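The excerpt is cut off before the transformer output. For reference, a minimal sketch of the comparison I'd run, assuming a 2020-era spacy-transformers model such as en_trf_bertbaseuncased_lg (the model name is my assumption, not the reporter's):

import spacy

# assumes a spacy-transformers model is installed
nlp = spacy.load("en_trf_bertbaseuncased_lg")
doc = nlp("I like the link http://www.idph.iowa.gov/ohds/oral-health-center/coordinator")
print(list(doc))  # spaCy-level tokens; the underlying wordpiece split may differ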
Saving and Loading
First of all, thanks for creating scispacy, I think it's an amazing tool and very useful!!
I was running into an issue when saving the output of scispacy. I tried to pickle a Doc object, as explained on the website of spaCy, as follows:
import spacy
import pickle
nlp = spacy.load("en_core_sci_sm")
# add the Abbreviation Detector
abbrev
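The snippet is cut off above. For context, the usual scispacy (spaCy v2-era) abbreviation setup looks roughly like the sketch below; treat the exact calls and the sample sentence as my assumption rather than the reporter's code. Pickling the resulting Doc, as described in the report, is where the issue shows up once custom extension data is attached.

import spacy
from scispacy.abbreviation import AbbreviationDetector

nlp = spacy.load("en_core_sci_sm")

# add the Abbreviation Detector to the pipeline (spaCy v2-style add_pipe)
abbreviation_pipe = AbbreviationDetector(nlp)
nlp.add_pipe(abbreviation_pipe)

doc = nlp("Spinal and bulbar muscular atrophy (SBMA) is an inherited disease.")
for abrv in doc._.abbreviations:
    print(abrv, "->", abrv._.long_form)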
Would it be possible to make this compatible with spacy v2.1 ? Trying to use this for geoparsing but I am using spacy v2.1 and the newer "en_core_web_lg" model for other downstream tasks. Since Mordecai is not compatible with spacy 2.1 yet, it tries to downgrade to spacy 2.0 and requires the older models at the moment.
spaCy version: 2.1.9
spaCy-stanza version: 0.2.1
import stanza
from spacy_stanza import StanzaLanguage

stanza.download('ru')
snlp = stanza.Pipeline(lang="ru")
nlp = StanzaLanguage(snlp)
text = "Мама мыла раму"  # "Mom washed the frame" (a classic Russian primer sentence)

Using stanza, I get this:
for sentence in snlp(text).sentences:
    for word in sentence.words:
        print(word.feats)
# Animacy=Anim|Case=Nom|Gender=-
Hello,
Using the code from https://github.com/explosion/spacy-notebooks/blob/master/notebooks/conference_notebooks/modern_nlp_in_python.ipynb (I think; it is not loading right now),
I was attempting to run the tutorial on my own collection of documents with the following three functions:
def punct_space(token):
    """
    helper function to eliminate tokens
    that are pure punct
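The excerpt cuts off mid-docstring; based on the notebook it references, the helper presumably boils down to something like this (the token.is_space check is my guess at the elided part):

def punct_space(token):
    """Helper to flag tokens that are pure punctuation or whitespace."""
    return token.is_punct or token.is_space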
I have been trying to use a custom NER model for spacy-services with the name 'endec'. When I try to use it directly by loading it in displacy/app.py, I get the following error:
Traceback (most recent call last):
  File "app.py", line 4, in <module>
    get_model('endec')
  File "/home/prgs/spacy-services/displacy/displacy_service/server.py", line 36, in get_model
    _models[model_name] =
I was going through the existing enhancement issues again and thought it'd be nice to collect ideas for spaCy plugins and related projects. There are always people in the community who are looking for new things to build, so here's some inspiration ✨ For existing plugins and projects, check out the spaCy universe.
If you have questions about the projects I suggested,