transfer-learning
Here are 2,338 public repositories matching this topic...
`_handle_duplicate_documents` and `_drop_duplicate_documents` in the Elasticsearch document store will always report `self.index` as the index with the conflict, which is obviously incorrect.
Edit: Upon further investigation, this is actually a lot worse. Using multiple indices with the Elasticsearch DocumentStore is completely broken, because `self.index` is used in `_handle_duplicate_do
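The failure mode described above can be illustrated with a minimal, hypothetical sketch. The class and method names here are illustrative only, not Haystack's actual code: a method accepts an `index` argument but reports the instance default instead.

```python
class DocumentStore:
    """Hypothetical sketch of the bug pattern, not Haystack's implementation."""

    def __init__(self, index="default"):
        self.index = index

    def duplicate_message_buggy(self, index=None):
        # Bug pattern from the report: the method accepts an `index`
        # argument but reports the instance default `self.index` instead.
        return f"Duplicate documents found in index '{self.index}'"

    def duplicate_message_fixed(self, index=None):
        # Fix: fall back to self.index only when no index was passed.
        index = index if index is not None else self.index
        return f"Duplicate documents found in index '{index}'"


store = DocumentStore()
print(store.duplicate_message_buggy(index="other"))  # wrongly names 'default'
print(store.duplicate_message_fixed(index="other"))  # correctly names 'other'
```

With multiple indices in play, the buggy variant makes every conflict look like it occurred in the default index, which matches the behaviour reported.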
I'm playing around with this wonderful code, but I'm running into a curious issue when I try to train the model with my own data.
I replicated the personachat_self_original.json file structure and added my own data. I deleted the dataset_cache_OpenAIGPTTokenizer file, but when I try to train, I get this error:
INFO:train.py:Pad inputs and convert to Tensor
Traceback (most recent call last):
We have a lot of antiquated docstrings that don't render well in ReadTheDocs. A grungy (but incredibly useful) task would be to refactor these docstrings into proper ReadTheDocs format. This would allow us to render them effectively...
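As a sketch of the kind of conversion meant, here is a free-form docstring rewritten into Sphinx/reST field-list style, which ReadTheDocs renders into structured parameter tables. The function itself is a made-up example, not from the project in question:

```python
def scale_before(values, factor):
    """Scales values by factor. values is a list of floats, factor a float.
    Returns the scaled list."""
    return [v * factor for v in values]


def scale_after(values, factor):
    """Scale each value by a constant factor.

    :param values: Sequence of numbers to scale.
    :type values: list[float]
    :param factor: Multiplier applied to every element.
    :type factor: float
    :returns: New list with each element multiplied by ``factor``.
    :rtype: list[float]
    """
    return [v * factor for v in values]
```

Both versions behave identically; only the second gives Sphinx enough structure to render parameter and return-type sections.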
Per this comment in #12
Could FeatureTools be implemented as an automated preprocessor for AutoGluon, adding the ability to handle multi-entity problems (i.e., data split across multiple normalised database tables)? If you supplied AutoGluon with a list of DataFrames instead of a single DataFrame, it would first invoke FeatureTools:
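To make the idea concrete, here is a minimal sketch of the flattening step such a preprocessor would perform, written in plain pandas for illustration. FeatureTools' deep feature synthesis (`ft.dfs`) automates exactly this kind of aggregate-and-join across an EntitySet; the table and column names below are hypothetical:

```python
import pandas as pd

# Hypothetical normalised tables: customers (parent) and orders (child).
customers = pd.DataFrame({
    "customer_id": [1, 2],
    "region": ["EU", "US"],
    "churned": [0, 1],
})
orders = pd.DataFrame({
    "customer_id": [1, 1, 2],
    "amount": [10.0, 20.0, 5.0],
})

# The step deep feature synthesis automates: aggregate the child table
# up to the parent's grain, then join the results back onto the parent.
agg = (
    orders.groupby("customer_id")["amount"]
    .agg(["sum", "mean", "count"])
    .add_prefix("orders_amount_")
    .reset_index()
)
flat = customers.merge(agg, on="customer_id", how="left")

# `flat` is now a single DataFrame that a tabular AutoML tool such as
# AutoGluon's TabularPredictor could consume directly.
print(flat)
```

Given a list of DataFrames plus their join keys, the proposed preprocessor would run this flattening first and hand the resulting single table to AutoGluon's existing tabular pipeline.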