reinforcement-learning
Here are 6,681 public repositories matching this topic...
Bidirectional RNN
Is there a way to train a bidirectional RNN (such as an LSTM or GRU) in trax these days?
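Whether trax ships a ready-made bidirectional combinator may depend on the version, but the underlying idea is framework-agnostic: run one recurrence forward over the sequence, run a second recurrence over the reversed sequence, re-align the backward outputs, and concatenate per time step. A minimal NumPy sketch (a plain tanh RNN standing in for an LSTM/GRU cell, all parameter names are illustrative):

```python
import numpy as np

def rnn_pass(xs, W_x, W_h, b):
    """Run a simple tanh RNN over a sequence; return all hidden states."""
    h = np.zeros(W_h.shape[0])
    hs = []
    for x in xs:
        h = np.tanh(x @ W_x + h @ W_h + b)
        hs.append(h)
    return np.stack(hs)

def bidirectional_rnn(xs, params_fwd, params_bwd):
    """Forward pass over xs, backward pass over reversed xs, concatenated per step."""
    h_fwd = rnn_pass(xs, *params_fwd)
    h_bwd = rnn_pass(xs[::-1], *params_bwd)[::-1]  # re-reverse to align time steps
    return np.concatenate([h_fwd, h_bwd], axis=-1)

rng = np.random.default_rng(0)
d_in, d_hid, T = 4, 8, 5
make = lambda: (rng.normal(size=(d_in, d_hid)),
                rng.normal(size=(d_hid, d_hid)),
                np.zeros(d_hid))
xs = rng.normal(size=(T, d_in))
out = bidirectional_rnn(xs, make(), make())
print(out.shape)  # (5, 16): hidden size doubles after concatenation
```

In trax the same structure could in principle be assembled from its combinators (branch into a forward RNN and a reverse/RNN/reverse path, then concatenate), but the exact layer names available to you should be checked against the trax version installed.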
The following applies to DDPG and TD3, and possibly other models. The following libraries were installed in a virtual environment:
numpy==1.16.4
stable-baselines==2.10.0
gym==0.14.0
tensorflow==1.14.0
Episode rewards do not appear to be updated in model.learn() before callback.on_step() is invoked. Depending on which callback.locals variable is used, this means that:
- episode rewards may n
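The ordering problem the report describes can be illustrated with a simplified stand-in loop (this is not stable-baselines source code; it only models a training loop where the callback fires before the step's reward bookkeeping, so the values visible through the callback's locals lag by one step):

```python
# Simplified sketch of the ordering described above: the callback fires
# *before* the episode-reward update for the current step, so the value it
# observes is stale.
class Callback:
    def __init__(self):
        self.seen = []

    def on_step(self, local_vars):
        # Record what the callback observes at each step.
        self.seen.append(local_vars["episode_reward"])

def learn(rewards, callback):
    episode_reward = 0.0
    for r in rewards:
        callback.on_step({"episode_reward": episode_reward})  # fired first
        episode_reward += r  # reward updated only *after* the callback
    return episode_reward

cb = Callback()
total = learn([1.0, 2.0, 3.0], cb)
print(cb.seen)   # [0.0, 1.0, 3.0] -- each entry lags by one step
print(total)     # 6.0
```

If the real loop has this shape, any locals-based logging inside on_step() will consistently under-report the current episode's reward by one step's worth.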
This discussion thread shows how to run Tune with different resource requirements for each trial. This is useful when some models in the search require a GPU and others do not.
We can achieve this by taking advantage o
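Whatever mechanism the thread settles on, the core idea is a mapping from a trial's config to a resource request. A minimal sketch (the helper name and config keys are hypothetical, not part of Ray Tune's API; in Tune, such a mapping would typically be supplied through the trainable's resource-request hook):

```python
# Hypothetical helper (not Ray Tune API) showing the idea: pick a per-trial
# resource request based on the trial's config, so GPU-hungry models request
# a GPU while small models run CPU-only.
def resources_for_trial(config):
    if config.get("model") == "large_cnn":
        return {"cpu": 2, "gpu": 1}
    return {"cpu": 1, "gpu": 0}

trials = [{"model": "large_cnn"}, {"model": "linear"}]
requests = [resources_for_trial(c) for c in trials]
print(requests)  # [{'cpu': 2, 'gpu': 1}, {'cpu': 1, 'gpu': 0}]
```

With this in place, the scheduler can pack CPU-only trials alongside GPU trials instead of reserving a GPU for every trial in the search.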