Training a DDPG agent and the observation values are zero. How do I give the actions nonzero initial values in the first episode?

Hello,
I am training a DDPG agent with four actions. My observations have been zero for more than 1000 episodes. I suspect that because the action values have been zero, the observations are affected as well. How do I set the action values to some nonzero values at the start of the first episode?
The actions are torque inputs with minimum and maximum limits of 200, which are later multiplied by a gain of 100. Is there something I need to do so that the observations do not stay at zero?
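As a point of reference, the pieces of a DDPG setup that determine how far the first actions move away from zero are the action-spec limits and the agent's exploration-noise options. A minimal sketch, assuming the agent is created directly from the observation and action specs (obsInfo, the sample time, and the noise values are illustrative placeholders, not settings from the question):

% Declare the action range on the spec so actions are scaled to +/-200.
actInfo = rlNumericSpec([4 1],'LowerLimit',-200,'UpperLimit',200);

agentOpts = rlDDPGAgentOptions('SampleTime',0.1);    % sample time assumed

% Widen the Ornstein-Uhlenbeck exploration noise so early actions are not
% stuck near zero, and slow its decay so exploration survives well past the
% first few hundred episodes. The values here are illustrative only.
agentOpts.NoiseOptions.StandardDeviation          = 0.3;
agentOpts.NoiseOptions.StandardDeviationDecayRate = 1e-5;
agentOpts.NoiseOptions.Mean                       = 0;   % a nonzero mean would bias exploration to one side

agent = rlDDPGAgent(obsInfo,actInfo,agentOpts);      % obsInfo comes from the environment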
4 Comments
Bay Jay 2023-7-2
I have a follow-up question.
This is what I know: during training, the episode ends at the end of the simulation time, tf.
Suppose you have an RL problem with no isdone condition, because you just want the agent to learn the "optimal" solution that maximizes the reward, but you want the agent to know that the only termination condition is a specific, fixed time tf (tf = 5, which does not change). How do you set the isdone condition? Do you connect a clock to isdone, or do you just leave it unconnected? If it is left unconnected, how does the agent know that this time is the terminating condition? Any recommendation to make sure I am training the agent properly would be appreciated.
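As a sketch of that time-only setup: in a custom MATLAB environment (rlFunctionEnv) step function, the termination would look roughly like the code below, where computeObservation and computeReward are hypothetical placeholders. In Simulink, the equivalent is a clock compared against tf wired to the isdone port, or simply relying on the simulation stop time while isdone is held at 0.

% Time-only episode termination in a custom step function (sketch).
function [NextObs,Reward,IsDone,LoggedSignals] = stepFcn(Action,LoggedSignals)
    Ts = 0.1;                                             % sample time (assumed)
    Tf = 5;                                               % fixed episode length from the question
    LoggedSignals.Time = LoggedSignals.Time + Ts;

    NextObs = computeObservation(Action,LoggedSignals);   % placeholder for the plant dynamics
    Reward  = computeReward(NextObs,Action);              % placeholder for the reward

    % The only terminal condition is the elapsed time reaching Tf.
    IsDone = LoggedSignals.Time >= Tf;
end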
Emmanouil Tzorakoleftherakis
It is not very clear why you would want the agent to learn the termination time of the episode. After training, you can always choose to 'unplug' the agent as you see fit.


Answers (0)
