The reward gets stuck on a single value during training or randomly fluctuates (Reinforcement Learning)
13 views (last 30 days)
I am training a reinforcement learning system, and the reward plot shows flat segments during which the reward does not change at all. This does not look normal, especially compared with the shipped examples (Biped Robot, etc.). I suspected that some rlDDPGAgentOptions settings were responsible, but I have tried changing every setting I could find, and even after several thousand episodes the system does not learn. What could cause the training plot to behave this way?
0 comments
Accepted Answer
Ari Biswas
2020-5-5
It could mean that the training is stuck in a local minimum. You can try a few things:
1. Change the OU noise options to favor more exploration so that the robot can explore more states and get new rewards.
2. Design a different reward function that does not depend so heavily on sparse rewards. From the graph, the flat lines suggest you have a sparse reward for a state that the agent is visiting continuously.
In most cases, designing a better reward function will improve training. That said, 350 episodes may be too early to expect good results. I would let it run for at least a few thousand episodes before concluding that something needs to change.
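For suggestion 1, a minimal sketch of increasing OU-noise exploration. The property names assume R2021a or later (earlier releases use Variance and VarianceDecayRate), and the values, as well as the actor/critic objects, are illustrative stand-ins for your own setup:

```matlab
% Sketch: encourage more exploration in a DDPG agent's OU action noise.
% Values are illustrative, not recommendations.
opts = rlDDPGAgentOptions;
opts.NoiseOptions.StandardDeviation = 0.3;           % larger => noisier actions, more exploration
opts.NoiseOptions.StandardDeviationDecayRate = 1e-5; % smaller => exploration persists longer
opts.NoiseOptions.StandardDeviationMin = 0.05;       % floor so exploration never vanishes entirely

% actor and critic are assumed to be built elsewhere for your environment
agent = rlDDPGAgent(actor, critic, opts);
```

The standard deviation is decayed at each sample step and floored at StandardDeviationMin, so a small decay rate keeps the agent exploring for many more episodes.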
4 comments
Abd Al-Rahman Al-Remal
2021-6-12
Hi,
When you say to change the noise options to favour more exploration, how would this be implemented? That is, which parameters should be changed, and in what way?
My case is slightly different from the OP's, however: my agent just stays at the same reward value consistently (although I have never trained it for more than about 100 episodes).
Many thanks!
Ari Biswas
2021-6-13
Edited: Ari Biswas
2021-6-13
For a DDPG agent you can tune the StandardDeviation and StandardDeviationDecayRate parameters. Please see the documentation for instructions.
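For example, that tuning might look like the sketch below. The values are illustrative only, and on releases before R2021a these properties are named Variance and VarianceDecayRate instead:

```matlab
% Sketch: tune the two parameters mentioned above (illustrative values).
opts = rlDDPGAgentOptions;
opts.NoiseOptions.StandardDeviation = 0.4;           % raise for more exploration
opts.NoiseOptions.StandardDeviationDecayRate = 1e-6; % lower so the noise decays more slowly
```

A rule of thumb from the documentation is to pick a standard deviation that is a meaningful fraction of your action range, then choose the decay rate based on how many steps you want exploration to last.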
More Answers (0)