reinforcement-learning
Here are 7,238 public repositories matching this topic...
Bidirectional RNN
Is there a way to train a bidirectional RNN (such as an LSTM or GRU) with trax nowadays?
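trax (as of these 2021 releases) does not appear to ship a dedicated bidirectional combinator, but the usual workaround is to wrap two recurrent layers: one runs over the sequence as-is, the other over the time-reversed sequence, and their outputs are concatenated feature-wise. A minimal numpy sketch of that wiring, where `rnn_pass` is a hypothetical stand-in for an LSTM/GRU cell rather than trax API:

```python
import numpy as np

def rnn_pass(xs, w, h0):
    """Hypothetical recurrent cell standing in for an LSTM/GRU layer.

    xs: (time, features) input sequence; returns (time, hidden) outputs.
    """
    hs, h = [], h0
    for x in xs:
        h = np.tanh(x @ w + h)   # simplified recurrence
        hs.append(h)
    return np.stack(hs)

def bidirectional(xs, w_fwd, w_bwd, h0):
    """Run one pass forward and one over the time-reversed sequence,
    then concatenate the two hidden sequences along the feature axis."""
    fwd = rnn_pass(xs, w_fwd, h0)
    bwd = rnn_pass(xs[::-1], w_bwd, h0)[::-1]  # re-reverse to align time steps
    return np.concatenate([fwd, bwd], axis=-1)

rng = np.random.default_rng(0)
xs = rng.normal(size=(5, 4))        # 5 time steps, 4 features
w_fwd = rng.normal(size=(4, 4))
w_bwd = rng.normal(size=(4, 4))
out = bidirectional(xs, w_fwd, w_bwd, np.zeros(4))
print(out.shape)  # (5, 8): hidden size doubles after concatenation
```

The same structure should translate to trax by replacing `rnn_pass` with recurrent layers and the reversal/concatenation with the corresponding combinators.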
This applies to DDPG and TD3, and possibly other models. The following libraries were installed in a virtual environment:
numpy==1.16.4
stable-baselines==2.10.0
gym==0.14.0
tensorflow==1.14.0
Episode rewards do not seem to be updated in model.learn() before callback.on_step(). Depending on which callback.locals variable is used, this means that:
- episode rewards may n
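The reported ordering can be reproduced abstractly: if the training loop invokes the callback before the step's reward has been added to the episode total, the callback reads a stale value that lags by one step. A pure-Python sketch of the two orderings (the names here are illustrative, not the stable-baselines API):

```python
class RewardLogger:
    """Records the episode-reward value visible at each callback call."""
    def __init__(self):
        self.seen = []

    def on_step(self, locals_):
        # Mirrors reading a callback.locals variable inside on_step()
        self.seen.append(locals_["episode_reward"])

def learn_stale(rewards, callback):
    """Callback fires *before* the reward update (the reported behaviour)."""
    episode_reward = 0.0
    for r in rewards:
        callback.on_step({"episode_reward": episode_reward})  # stale value
        episode_reward += r
    return episode_reward

def learn_fresh(rewards, callback):
    """Callback fires *after* the reward update."""
    episode_reward = 0.0
    for r in rewards:
        episode_reward += r
        callback.on_step({"episode_reward": episode_reward})  # up to date
    return episode_reward

stale, fresh = RewardLogger(), RewardLogger()
learn_stale([1.0, 1.0, 1.0], stale)
learn_fresh([1.0, 1.0, 1.0], fresh)
print(stale.seen)  # [0.0, 1.0, 2.0] -- lags by one step
print(fresh.seen)  # [1.0, 2.0, 3.0]
```

Whether the lag matters in practice depends on which `callback.locals` variable a given callback reads, which is exactly the ambiguity the issue describes.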


When a user starts a Serve cluster with one set of options (HTTP options, checkpoint path) and then connects to it with a different set of options, we should either update the cluster to the new options or error out.
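The proposed reconciliation can be sketched as: compare the options the client connects with against the options the running cluster was started with, then either apply the update or raise. All names below are hypothetical, not the Ray Serve API:

```python
from dataclasses import dataclass

@dataclass
class ServeOptions:
    """Hypothetical bundle of cluster start options."""
    http_host: str = "127.0.0.1"
    http_port: int = 8000
    checkpoint_path: str = ""

class ServeCluster:
    def __init__(self, options: ServeOptions):
        self.options = options

    def connect(self, options: ServeOptions, allow_update: bool = False):
        """Reconcile the caller's options with the running cluster's."""
        if options == self.options:
            return self                  # identical options: nothing to do
        if allow_update:
            self.options = options       # update the running cluster
            return self
        raise ValueError(
            f"cluster started with {self.options}, "
            f"but connect() was called with {options}"
        )

cluster = ServeCluster(ServeOptions(http_port=8000))
cluster.connect(ServeOptions(http_port=8000))       # same options: fine
try:
    cluster.connect(ServeOptions(http_port=9000))   # mismatch: error out
except ValueError as e:
    print("rejected:", e)
cluster.connect(ServeOptions(http_port=9000), allow_update=True)
print(cluster.options.http_port)  # 9000
```

Erroring out by default with an explicit opt-in to update keeps silent divergence between the client's view and the cluster's actual configuration from going unnoticed.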