"Error using horzcat" when using the Reinforcement Learning Toolbox to create an environment object for a Simulink model
Something goes wrong when I try to use the Reinforcement Learning Toolbox to create an environment object for a Simulink model.
First I used the following syntax to load the observation and action info objects into the workspace:
open_system('rlSimplePendulumModel')
env = rlPredefinedEnv('SimplePendulumModel-Discrete');
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
Then I created a DQN agent following the syntax in the Reinforcement Learning Toolbox help documents, deleted the env object from the workspace, and saved rlSimplePendulumModel.slx to the current folder.
What I really want to know is how to create an environment object for the Simulink model, because when I executed this syntax:
env = rlSimulinkEnv(rlSimplePendulumModel,[rlSimplePendulumModel '/RL Agent'],obsInfo,actInfo);
it returned an error saying: Error using horzcat. Dimensions of arrays being concatenated are not consistent.
I couldn't figure out what mistake I had made. Should I consider this a bug in the Reinforcement Learning Toolbox?
I also tried other predefined environments provided by the toolbox and got the same error.
Answers (1)
Harsha Priya Daggubati
2020-3-18
Hi,
Can you try saving your model name in a variable and using that instead of the bare model name? I guess the error comes from the concatenation being done here:
[rlSimplePendulumModel '/RL Agent']
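When rlSimplePendulumModel appears without quotes, MATLAB evaluates it as a variable rather than treating it as the model name, so the square brackets are presumably not joining two char row vectors. A minimal sketch of how that can produce this exact message, using a hypothetical stand-in value:
notAName = [1; 2];            % a 2-by-1 double, not a model name
bad = [notAName '/RL Agent']; % Error using horzcat. Dimensions of
                              % arrays being concatenated are not consistent.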
Try it this way:
mdl = 'rlSimplePendulumModel';   % store the model name as a char vector
open_system(mdl)
env = rlSimulinkEnv(mdl,[mdl '/RL Agent'],obsInfo,actInfo);
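Alternatively, here is a sketch that avoids the concatenation entirely by passing the model name and the full agent block path as quoted char vectors, which matches the documented rlSimulinkEnv calling pattern:
env = rlSimulinkEnv('rlSimplePendulumModel', ...
    'rlSimplePendulumModel/RL Agent',obsInfo,actInfo);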
Hope this helps!