DDPG Agent: Training not stabilizing, resulting in an unstable model

Dear MATLAB,
I am training a DDPG agent on randomly set straight lines (levels) and later testing it on a benchmark waveform. Shouldn't the training stabilize over time and produce a stable model? The agent saved at 960 episodes seems to perform better than the one saved at 2180 episodes. Both agents were saved on the criterion of the average reward over 50 episodes exceeding 25K. The difference between the models saved at 940 and 960 episodes also seems drastic.
The picture below shows the Episode Manager, with the average reward (over 50 episodes) going up and down several times. One would expect it to look like the dark green line, stabilizing over time. What changes can I make to obtain a stable model?
Action space: 1.0 to 10.0, continuous
Test waveform: 2000 seconds long
Training sample time and simulation length: Ts = 1 and Tf = 250
Hyperparameters: learning rates: critic = 1e-03, actor = 1e-04 | gamma (discount) = 0.95 | batch size = 64
Neurons: observation path: FC1 = 64, FC2 = 24; actor path: FC1 = 24
DDPG noise variance = 0.1, VarianceDecayRate = 1e-5 (I have also tried a noise variance of 0.45, and decay rates of 1e-3, 1e-4, etc.)
(For a higher-resolution image, see the attachment: V.9.94.4_MATLAB_16-Dec-2019.jpg)
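For reference, a minimal sketch of how the settings listed above might be configured with the Reinforcement Learning Toolbox (R2019a-era syntax); the networks, environment, and variable names are assumed to come from your own setup:

criticOpts = rlRepresentationOptions('LearnRate',1e-3);   % critic learning rate
actorOpts  = rlRepresentationOptions('LearnRate',1e-4);   % actor learning rate
agentOpts  = rlDDPGAgentOptions( ...
    'SampleTime',1, ...          % Ts = 1
    'DiscountFactor',0.95, ...   % gamma
    'MiniBatchSize',64);         % batch size
agentOpts.NoiseOptions.Variance          = 0.1;   % exploration noise
agentOpts.NoiseOptions.VarianceDecayRate = 1e-5;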

Answers (1)

Rajesh Siraskar, 2019-12-20
Based on several rounds of training, my personal observation is that RL converges fairly early to an optimal expected value.
Training beyond that point simply does not seem to help. I think it is important to stop once we realize the agent has reached that optimum.
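One concrete way to act on this advice is to let training stop automatically once the 50-episode average reward reaches the target, and to save every agent that meets the criterion along the way. A sketch using rlTrainingOptions, with values mirroring the question (env and agent are placeholders for your own environment and agent):

trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',2500, ...
    'MaxStepsPerEpisode',250, ...                % Tf/Ts = 250
    'ScoreAveragingWindowLength',50, ...         % 50-episode average
    'StopTrainingCriteria','AverageReward', ...
    'StopTrainingValue',25000, ...               % stop once converged
    'SaveAgentCriteria','AverageReward', ...
    'SaveAgentValue',25000);                     % keep candidates along the way
trainingStats = train(agent,env,trainOpts);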
1 Comment
Emmanouil Tzorakoleftherakis
+1 on that. It could, for example, be the case that you reach a point in training where you have a decent policy, but exploration by the agent then leads the search somewhere else (the pros and cons of sample-based gradients).
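A practical way to compare saved agents, as the question does at 960 vs. 2180 episodes, is to replay each one on the benchmark waveform. A minimal sketch, assuming a MATLAB environment env and the saved-agent files that train produces (which store the variable saved_agent; the file name here is hypothetical):

load('savedAgents/Agent960.mat','saved_agent');   % agent saved at episode 960
simOpts     = rlSimulationOptions('MaxSteps',2000);  % 2000 s at Ts = 1
experience  = sim(env,saved_agent,simOpts);
totalReward = sum(experience.Reward.Data);           % score for this agent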

