The Wayback Machine - https://web.archive.org/web/20220523111901/https://github.com/topics/reinforcement-learning

reinforcement-learning

Here are 8,166 public repositories matching this topic...

annotated_deep_learning_paper_implementations

🧑‍🏫 50! Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠

  • Updated May 23, 2022
  • Jupyter Notebook
stable-baselines

calerc commented Nov 23, 2020

This applies to DDPG and TD3, and possibly other models. The following libraries were installed in a virtual environment:

numpy==1.16.4
stable-baselines==2.10.0
gym==0.14.0
tensorflow==1.14.0

Episode rewards do not appear to be updated in model.learn() before callback.on_step() is called. Depending on which callback.locals variable is used, this means that:

  • episode rewards may n…
Labels: good first issue, question
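The ordering problem described above can be illustrated with a minimal, library-free sketch. This is a hypothetical stand-in for stable-baselines' model.learn() loop and BaseCallback, not the real API: the loop invokes the callback before it adds the latest step's reward to the running episode total, so the callback reads a stale value.

```python
# Hypothetical stand-in for stable-baselines' model.learn() / BaseCallback
# (names and structure are assumptions, not the real library code).
# It reproduces the reported ordering: the callback fires BEFORE the
# episode reward is updated, so callback.locals sees a stale total.

class RecordingCallback:
    def __init__(self):
        self.seen_rewards = []

    def on_step(self, local_vars):
        # Reads whatever the training loop has stored so far -- may be stale.
        self.seen_rewards.append(local_vars["episode_reward"])

def learn(step_rewards, callback):
    episode_reward = 0.0
    for r in step_rewards:
        # Callback is invoked before the reward update (the reported behaviour).
        callback.on_step({"episode_reward": episode_reward})
        episode_reward += r
    return episode_reward

cb = RecordingCallback()
total = learn([1.0, 2.0, 3.0], cb)
# The callback lags one step behind the true running total.
print(cb.seen_rewards)  # [0.0, 1.0, 3.0]
print(total)            # 6.0
```

Swapping the two statements in the loop body (update the reward first, then call the callback) would make the callback observe the up-to-date total, which is the behaviour the reporter expected.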
stable-baselines3

YannBerthelot commented Jan 18, 2022

🐛 Bug

The documentation of the DQN agent (https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html) specifies that the log_interval parameter is "The number of timesteps before logging". However, when it is set to 1 (or any other value), logging does not happen at that pace; instead it happens every log_interval episodes (not timesteps). In the example below, this means every 200 timesteps.

Labels: bug, documentation, good first issue
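The discrepancy reported above can be sketched with a pure-Python stand-in for the training loop (the run function and its parameters are assumptions for illustration, not the real stable-baselines3 code): with log_interval=1, a log record is emitted once per episode, so with 200-step episodes logging happens every 200 timesteps rather than every timestep.

```python
# Pure-Python stand-in for the SB3 training loop (hypothetical, for
# illustration only): logging is gated on EPISODE count, which is the
# observed behaviour, not on timestep count as the docs suggested.

def run(total_timesteps, episode_length, log_interval):
    logged_at = []       # timesteps at which a log record was emitted
    episodes_done = 0
    for t in range(1, total_timesteps + 1):
        if t % episode_length == 0:            # episode boundary reached
            episodes_done += 1
            if episodes_done % log_interval == 0:
                logged_at.append(t)            # one log per N episodes
    return logged_at

# With 200-step episodes (as in the report), log_interval=1 produces a
# log every 200 timesteps, not every timestep.
print(run(total_timesteps=600, episode_length=200, log_interval=1))
# [200, 400, 600]
```

Under this reading, the documented description ("the number of timesteps before logging") and the actual episode-based gating disagree, which is exactly the bug being reported.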
