The reward gets stuck on a single value during training or randomly fluctuates (Reinforcement Learning)

13 views (last 30 days)
I am training a reinforcement learning system, and the reward plot shows long stretches during which the reward does not change at all. This doesn't look normal, especially compared with the documentation examples (the Biped Robot walking example, etc.). I suspected that some rlDDPGAgentOptions settings were responsible, but I have tried changing every setting I could find, and even after several thousand episodes the system still does not learn. What could cause the reward plot to behave this way during training?

Accepted Answer

Ari Biswas 2020-5-5
It could mean that the training is stuck in a local minimum. You can try a few things:
1. Change the OU noise options to favor more exploration so that the robot can explore more states and get new rewards.
2. Design a different reward function that does not depend so heavily on sparse rewards. From the graph (the flat lines), it looks like you have a sparse reward for a state that the agent keeps visiting.
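The exploration change in point 1 can be sketched roughly as follows. This is only a sketch: the property names follow the R2020a-era Ornstein-Uhlenbeck noise interface of Reinforcement Learning Toolbox, and the numeric values here are illustrative, not recommendations; check the rlDDPGAgentOptions documentation for your release.

```matlab
% Sketch only: property names assume the R2020a-era DDPG noise interface
% (OrnsteinUhlenbeckActionNoise); they may differ in other releases.
agentOpts = rlDDPGAgentOptions;

% A larger noise variance means larger random perturbations of the
% action, i.e. more exploration early in training.
agentOpts.NoiseOptions.Variance = 0.6;

% Decay the variance slowly so exploration persists over many episodes
% instead of collapsing to near-deterministic actions too soon.
agentOpts.NoiseOptions.VarianceDecayRate = 1e-5;

% A smaller mean attraction constant lets the noise wander further from
% its mean before being pulled back, again favoring exploration.
agentOpts.NoiseOptions.MeanAttractionConstant = 0.15;
```

The agent is then constructed with these options as usual; the key trade-off is that higher variance explores more states but makes individual episodes noisier.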
In most cases, designing a better reward function will improve training. That said, 350 episodes may be too early to expect good results. I would let it run for at least a few thousand episodes before concluding that something needs to change.
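To illustrate point 2, a shaped (dense) reward for a locomotion task can pay out a small signal every step instead of a large reward only at a rare goal state. The function below is a made-up sketch, not the toolbox example's actual reward: the signal names (vx, torques, fallen) and weights are assumptions chosen for illustration.

```matlab
function r = stepReward(vx, torques, fallen)
% Dense per-step reward sketch (hypothetical signals and weights):
% the agent gets a gradient every step, even if it never reaches a
% sparse goal state.
%   vx      - forward velocity of the torso (m/s)
%   torques - vector of joint torques applied this step
%   fallen  - true if the robot has fallen over
    r = 1.0 * vx ...                   % reward forward progress each step
        - 0.005 * sum(torques.^2) ...  % small penalty on control effort
        - 10 * fallen;                 % falling penalty, instead of a
                                       % terminal-only sparse signal
end
```

With a reward like this, a flat reward plot is much less likely, because almost every state the agent visits changes the per-step reward.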
  4 comments
Abd Al-Rahman Al-Remal
Hi,
When you say to change the noise options to favour more exploration: how would this be implemented? That is, which parameters should be changed, and in what way?
My case is slightly different from the OP's, though: my agent stays at exactly the same reward value the whole time (although I have never run it for more than 100 episodes or so).
Many thanks!


More Answers (0)
