reinforcement learning and DDPG agent problem
I used the Reinforcement Learning Toolbox for path planning of a robot with the DDPG algorithm. My scenario is that the robot starts from a random position and must reach a random goal location. After training, the result is a fixed path, and when the goal position changes, the path does not change. It is as if the network has learned only one path. A dropout layer is used in the network structure.
Does anyone have any idea what went wrong?
Accepted Answer
Emmanouil Tzorakoleftherakis
2020-9-18
Looks like training was not successful. There could be many things at fault here - some suggestions:
1) Make sure you are randomizing the target location at the beginning of each episode. It would also help to add visualization so you can verify that the targets actually move and debug the agent's behavior during training (see the reset-function sketch after this list).
2) The agent may not have enough information available to make decisions. Make sure the observations provide enough info to the agent; for example, the goal position (or the vector from the robot to the goal) needs to be part of the observation, otherwise the agent cannot adapt its path to a new target.
3) What does the Episode Manager plot look like when training stops? You may need to train the agent for longer.
4) Why are you using a dropout layer? Unless your observations are images, this layer is likely not required (at least I don't think I have seen it in any shipping examples in Reinforcement Learning Toolbox). So your neural network architecture may also have something to do with this behavior; a plain fully connected network is usually enough (see the second sketch after this list).
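For points 1 and 2, if you are using a custom MATLAB environment, a minimal sketch of a reset function that randomizes the goal every episode and exposes it through the observation could look like the following. The workspace size, the observation layout, and the name myStepFunction (standing in for your existing step function) are assumptions, not from your post:

% Observation: [robotX; robotY; goalX; goalY] (assumed layout)
obsInfo = rlNumericSpec([4 1]);
% Action: two normalized velocity commands (assumed)
actInfo = rlNumericSpec([2 1],'LowerLimit',-1,'UpperLimit',1);

% myStepFunction is your existing step function (assumed name)
env = rlFunctionEnv(obsInfo,actInfo,@myStepFunction,@myResetFunction);

function [initialObs,loggedSignals] = myResetFunction()
    % Draw a new random start and goal inside an assumed 10 m x 10 m workspace
    robotPos = 10*rand(2,1);
    goalPos  = 10*rand(2,1);
    loggedSignals.RobotPos = robotPos;
    loggedSignals.GoalPos  = goalPos;
    % The goal must be part of the observation, otherwise the agent
    % cannot react to a target it never "sees"
    initialObs = [robotPos; goalPos];
end

If you are using a Simulink environment instead, the same idea applies through the environment's ResetFcn.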
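On point 4, a typical DDPG actor for low-dimensional observations is just a few fully connected layers with no dropout. A minimal sketch, assuming the 4-element observation and 2-element action specs from the previous snippet (layer sizes are placeholders):

actorNet = [
    imageInputLayer([4 1 1],'Normalization','none','Name','obs')
    fullyConnectedLayer(64,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(64,'Name','fc2')
    reluLayer('Name','relu2')
    fullyConnectedLayer(2,'Name','fcOut')
    tanhLayer('Name','tanhOut')];   % keeps actions in [-1, 1]
actor = rlDeterministicActorRepresentation(actorNet,obsInfo,actInfo, ...
    'Observation',{'obs'},'Action',{'tanhOut'});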