# drl-trading-agents
**cryptocoinserver** commented Jan 17, 2022
A great repo and paper: https://github.com/golsun/deep-RL-trading
This could be useful for FinRL, perhaps as a helper / environment function. Training first on simple, idealized synthetic prices before feeding in real data might help the agent learn the "basics". It is also great for testing:
- Sine wave
- Trend curves
- Random walk


Hi,
I am currently using your FinRL_PortfolioAllocation_NeurIPS_2020 code and I see some strange behavior at the beginning of training. Sometimes the first episode reward mean is very high and then drops during training, as shown in the TensorBoard plot. This high value is never reached again. Any idea why this is happening?
Edit: I'm training PPO agent from stablebaselines3 wit
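One possible (though not the only) explanation: in stable-baselines3, `ep_rew_mean` is a rolling average over the most recent completed episodes (up to 100), so the first logged values may average only one or two episodes and can be dominated by a single lucky outlier. A small sketch of that effect, with hypothetical per-episode rewards chosen for illustration:

```python
from collections import deque

import numpy as np

rng = np.random.default_rng(42)
# Hypothetical rewards: one lucky first episode, then typical noisy episodes.
episode_rewards = np.concatenate([[500.0], rng.normal(50.0, 10.0, 199)])

# Mirrors a rolling mean over the last 100 completed episodes,
# similar to how stable-baselines3 computes ep_rew_mean.
buffer = deque(maxlen=100)
ep_rew_mean = []
for r in episode_rewards:
    buffer.append(r)
    ep_rew_mean.append(float(np.mean(buffer)))

# The first logged value equals the single lucky episode's reward;
# later values settle near the typical level and never return to it.
print(ep_rew_mean[0], round(ep_rew_mean[-1], 1))
```

If the plot matches this pattern, comparing the raw per-episode rewards (rather than the rolling mean) for the first few episodes would help distinguish logging noise from a genuine policy regression.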