The Wayback Machine - https://web.archive.org/web/20220329033051/https://github.com/topics/drl-trading-agents

drl-trading-agents

Here are 2 public repositories matching this topic...

scaro86 commented Mar 21, 2022

Hi,
I am currently using your FinRL_PortfolioAllocation_NeurIPS_2020 code and I see some strange behavior at the beginning of training. Sometimes the first episode's mean reward is very high and then drops during training, as shown in the TensorBoard plot. This high value is never reached again. Any idea why this happens?

Edit: I'm training a PPO agent from stable-baselines3 wit

Labels: bug, good first issue
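One thing worth checking: stable-baselines3 logs `rollout/ep_rew_mean` as a rolling mean over recent episodes (a window of 100 by default, to my understanding), so a single lucky early episode can dominate the logged mean at the start of training and then fade from the window. A minimal sketch of that effect, with an entirely hypothetical reward trace:

```python
import numpy as np

def rolling_episode_mean(episode_rewards, window=100):
    """Rolling mean over the last `window` episodes, roughly how
    ep_rew_mean is reported (assumption: window of 100)."""
    out = []
    for i in range(len(episode_rewards)):
        lo = max(0, i + 1 - window)
        out.append(float(np.mean(episode_rewards[lo:i + 1])))
    return out

# Invented trace: one lucky first episode, then modest rewards.
rewards = [50.0] + [1.0] * 199
curve = rolling_episode_mean(rewards)

print(curve[0])    # first logged mean is the outlier itself: 50.0
print(curve[99])   # outlier still inside the 100-episode window
print(curve[199])  # outlier has left the window: 1.0
```

If your plot matches this shape, the drop may be a logging artifact rather than the policy forgetting a good behavior; comparing several seeded runs would help distinguish the two.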
cryptocoinserver commented Jan 17, 2022

A great repo and paper: https://github.com/golsun/deep-RL-trading

This could be useful for FinRL, perhaps as a helper / environment function. Training first on simple, idealized synthetic prices before feeding in real data might help the agent learn the "basics". It's also great for testing. Candidate series:

  • Sine wave
  • Trend curves
  • Random walk
Labels: suggestion, good first issue
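The three series above can be sketched in a few lines of NumPy; the function names and parameters below are illustrative, not taken from the linked deep-RL-trading repo:

```python
import numpy as np

def sine_prices(n, base=100.0, amp=5.0, period=50):
    """Idealized sine-wave price series around a base level."""
    t = np.arange(n)
    return base + amp * np.sin(2 * np.pi * t / period)

def trend_prices(n, base=100.0, slope=0.1):
    """Simple linear trend curve."""
    return base + slope * np.arange(n)

def random_walk_prices(n, base=100.0, sigma=1.0, seed=0):
    """Gaussian random walk starting from a base price."""
    rng = np.random.default_rng(seed)
    return base + np.cumsum(rng.normal(0.0, sigma, size=n))

sine = sine_prices(200)          # starts at the base level: sine[0] == 100.0
trend = trend_prices(200)        # deterministic upward drift
walk = random_walk_prices(200)   # reproducible via the seed
```

Feeding these into the existing environment in place of real OHLCV data would let you verify that the agent can at least exploit a perfectly periodic or trending market before moving to noisy real prices.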
