I keep getting an error when I train an agent with DDPG: Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 667)

Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 667)
Invalid input argument type or size such as observation, reward, isdone or loggedSignals.
Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 667)
Unable to compute gradient from representation.
Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 667)
The size of 'dLdX' returned by 'backward' in layer 'rl.layer.scalinglayer' is incorrect. It should be 1x250, but it is actually 2x250.

Answers (1)

Yash 2024-2-20
Hi,
I am assuming you are using R2020a or an earlier version of MATLAB. A similar error to the one you are facing is documented in external bug report 2217614, which can be accessed here: https://in.mathworks.com/support/bugreports/details/2217614
Training a DDPG or TD3 agent fails with this error when the actor or critic representation is configured to compute on the GPU, that is, when its "UseDevice" option is set to "gpu". To avoid the problem, use actor and critic representations configured for the CPU, or update MATLAB to R2020a Update 2 or a later release, where the bug is fixed.
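As a rough sketch of the CPU-only setup (here actorNetwork, criticNetwork, obsInfo, actInfo and the layer names 'observation'/'action' are placeholders for your own networks and environment specifications; adjust them to your model):

actorOpts  = rlRepresentationOptions('UseDevice','cpu','LearnRate',1e-4);
criticOpts = rlRepresentationOptions('UseDevice','cpu','LearnRate',1e-3);
% Build the actor and critic from your own networks and specs,
% forcing all gradient computations onto the CPU
actor  = rlDeterministicActorRepresentation(actorNetwork,obsInfo,actInfo, ...
    'Observation',{'observation'},'Action',{'action'},actorOpts);
critic = rlQValueRepresentation(criticNetwork,obsInfo,actInfo, ...
    'Observation',{'observation'},'Action',{'action'},criticOpts);
agent  = rlDDPGAgent(actor,critic);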
Further, the last line of your error indicates a mismatch in the expected size of the gradient data (dLdX) during the backward pass of network training. The error originates from the 'rl.layer.scalinglayer' in your network: the expected gradient size is 1x250, but the actual size being passed is 2x250. Verify that the output feeding the scaling layer is meant to be 1x250, i.e. that the actor's final fully connected layer produces one value per action dimension; if not, adjust the network architecture accordingly.
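One way to check this (a sketch; 'env' and 'actorNetwork' stand in for your own environment object and actor network):

actInfo = getActionInfo(env);       % action specification of your environment
disp(actInfo.Dimension)             % e.g. [1 1] for a scalar action
% The fullyConnectedLayer feeding the scaling layer should output
% prod(actInfo.Dimension) values (here 1, not 2)
analyzeNetwork(actorNetwork)        % inspect layer output sizes interactively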
Hope this helps.
