I keep getting an error when I train an agent with DDPG: Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 667)
Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 667)
Invalid input argument type or size such as observation, reward, isdone or loggedSignals.
Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 667)
Unable to compute gradient from representation.
Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 667)
dLdX size of 'backward' in layer 'rl.layer.scalinglayer' is not correct. It should be 1x250, but it is actually 2x250.
Answers (1)
Yash
20 Feb 2024
Hi,
I am assuming you are using R2020a or an earlier version of MATLAB. A similar error to the one you are facing is documented in external bug report 2217614, which can be accessed here: https://in.mathworks.com/support/bugreports/details/2217614
Training a DDPG or TD3 agent errors out when the actor or critic representation is configured to compute on the GPU, which happens when its "UseDevice" option is set to "gpu". As a workaround, use actor and critic representations configured to run on the CPU when training DDPG or TD3 agents. The permanent fix is to update your MATLAB version to R2020a Update 2 or a later release.
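A minimal sketch of that workaround is shown below; criticNetwork, actorNetwork, obsInfo, actInfo, and the 'state'/'action' layer names are placeholders for your own networks and environment specifications, not taken from your model:
% Force both representations to compute on the CPU
% (workaround for bug report 2217614). criticNetwork,
% actorNetwork, obsInfo, actInfo and the layer names
% below are placeholders for your own setup.
repOpts = rlRepresentationOptions('UseDevice','cpu');
critic = rlQValueRepresentation(criticNetwork,obsInfo,actInfo, ...
    'Observation',{'state'},'Action',{'action'},repOpts);
actor = rlDeterministicActorRepresentation(actorNetwork,obsInfo,actInfo, ...
    'Observation',{'state'},'Action',{'action'},repOpts);
agentOpts = rlDDPGAgentOptions;                 % default agent options
agent = rlDDPGAgent(actor,critic,agentOpts);    % agent now trains on the CPU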
Further, the last line of your error indicates a mismatch in the size of the gradient data (dLdX) during the backward pass of network training. The error originates from the 'rl.layer.scalinglayer' in your network architecture: the expected gradient size is 1x250, but the actual size being passed is 2x250. This typically means the layer feeding the scaling layer outputs two values where the action specification expects one. Verify that the output of the 'rl.layer.scalinglayer' is meant to be 1x250; if not, adjust the network architecture (for example, the size of the final fully connected layer) so its output matches the action dimension.
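As a hedged illustration of that check (the observation size, hidden-layer width, and layer names below are assumptions, not taken from your model), the fully connected layer feeding the scaling layer must output exactly prod(actInfo.Dimension) values, otherwise the backward pass produces exactly this kind of size mismatch:
% Illustrative actor network; the observation size (4), hidden
% width (64) and layer names are assumptions. If 'fc_out' had 2
% outputs while actInfo specifies 1 action, the scaling layer
% would receive 2xN gradients where 1xN is expected.
actInfo = rlNumericSpec([1 1],'LowerLimit',-1,'UpperLimit',1);
numActions = prod(actInfo.Dimension);   % expected size before scaling (1 here)
actorNetwork = [
    imageInputLayer([4 1 1],'Normalization','none','Name','state')
    fullyConnectedLayer(64,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(numActions,'Name','fc_out')   % must match action size
    tanhLayer('Name','tanh')
    scalingLayer('Name','scale','Scale',actInfo.UpperLimit)];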
Hope this helps.