reinforcement-learning
Here are 8,166 public repositories matching this topic...
Bidirectional RNN
Is there a way to train a bidirectional RNN (such as an LSTM or GRU) in trax nowadays? A sketch of one possible workaround follows.
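A minimal sketch, assuming no built-in bidirectional wrapper and composing one from standard trax combinators instead. The helper names `Reverse` and `BidirectionalLSTM` are illustrative, not trax APIs:

```python
import jax.numpy as jnp
from trax import layers as tl

def Reverse():
    # Flip the time axis of a (batch, time, features) tensor.
    return tl.Fn('Reverse', lambda x: jnp.flip(x, axis=1))

def BidirectionalLSTM(n_units):
    # Run one LSTM over the sequence as-is and another over its reversal,
    # then concatenate the two feature streams along the last axis.
    return tl.Serial(
        tl.Branch(
            tl.LSTM(n_units),                                   # forward pass
            tl.Serial(Reverse(), tl.LSTM(n_units), Reverse()),  # backward pass
        ),
        tl.Concatenate(axis=-1),
    )
```

The backward branch reverses the input, runs its own LSTM, and reverses again so both streams are time-aligned before concatenation, which is the usual bidirectional construction.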
This applies to DDPG and TD3, and possibly to other models. The following libraries were installed in a virtual environment:
numpy==1.16.4
stable-baselines==2.10.0
gym==0.14.0
tensorflow==1.14.0
Episode rewards do not seem to be updated in model.learn() before callback.on_step() is called. Depending on which callback.locals variable is used, this means that:
- episode rewards may n
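A minimal sketch of the setup described above (not the reporter's script): a stable-baselines 2.10 BaseCallback that reads model.learn()'s locals at each step. The 'episode_rewards' key matches the local variable used by TD3-style learn() loops and is an assumption here; exact names vary by algorithm.

```python
from stable_baselines import TD3
from stable_baselines.common.callbacks import BaseCallback

class RewardProbe(BaseCallback):
    def _on_step(self) -> bool:
        # self.locals is a snapshot of learn()'s local variables; per the
        # report, the reward bookkeeping read here can lag the current step.
        episode_rewards = self.locals.get('episode_rewards')
        if episode_rewards and self.n_calls % 500 == 0:
            print(f'step {self.num_timesteps}: running episode reward '
                  f'{episode_rewards[-1]:.2f}')
        return True  # returning False would stop training

model = TD3('MlpPolicy', 'Pendulum-v0')
model.learn(total_timesteps=5000, callback=RewardProbe())
```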
🐛 Bug
The documentation of the DQN agent (https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html) specifies that the log_interval parameter is "The number of timesteps before logging". However, when it is set to 1 (or any other value), logging does not happen at that pace; instead it happens every log_interval episodes, not timesteps. In the example below, logging happens every 200 timesteps.
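A minimal sketch reproducing the reported behaviour (the reporter's own script is not reproduced here; the environment and hyperparameters are illustrative):

```python
import gym
from stable_baselines3 import DQN

model = DQN('MlpPolicy', gym.make('CartPole-v1'), verbose=1)
# The docstring describes log_interval in timesteps, but the rollout logger
# counts episodes, so with log_interval=1 a log record appears once per
# episode rather than once per timestep.
model.learn(total_timesteps=10_000, log_interval=1)
```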


Contributing to RLlib
Ray RLlib is not only a framework for distributed applications but also an active community of developers, researchers, and folks who love machine learning. This project adheres to the Ray guidelines for contributing.