How to run the Simulink model when implementing custom RL training?
Yihao Wan
2023-5-25
Commented: Emmanouil Tzorakoleftherakis
2023-5-25
Hello, I am developing custom training for an RL DQN agent based on the linked example. How should I adapt it to a Simulink environment?
In particular, in the code below the action is applied to the environment with step, which is not applicable to a Simulink model. How should I solve this issue? Thanks in advance.
% Apply the action to the environment
% and obtain the resulting observation and reward.
[nextObs,reward,isdone] = step(env,action{1});
0 comments
Accepted Answer
Emmanouil Tzorakoleftherakis
2023-5-25
The way to do this would be to use runEpisode.
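A minimal sketch of how the step-based loop could be swapped for runEpisode with a Simulink environment, not an official snippet: the model name "rlwatertank", the agent block path, obsInfo/actInfo, policy, maxEpisodes, and maxStepsPerEpisode are all placeholders, and the exact name-value options and output fields of runEpisode should be checked against the documentation for your release.
% Placeholder Simulink environment; replace model name, block path, and specs.
env = rlSimulinkEnv("rlwatertank","rlwatertank/RL Agent",obsInfo,actInfo);
% Compile the model once so repeated episodes do not recompile it.
setup(env);
for episode = 1:maxEpisodes
    % Run one full episode of the Simulink model with the current policy
    % (for DQN, e.g. an epsilon-greedy policy built from the critic),
    % instead of calling step/reset manually.
    out = runEpisode(env,policy, ...
        MaxSteps=maxStepsPerEpisode, ...
        CleanupPostSim=false);   % assumption: keep the model compiled between episodes
    % Logged experiences contain Observation, Action, Reward,
    % NextObservation, and IsDone; append them to the replay buffer and
    % run the custom DQN update as in the MATLAB-environment example.
    experiences = out.AgentData.Experiences;   % field names may differ by release
    % ... custom learning / replay-buffer update goes here ...
end
% Release the model when training is finished.
cleanup(env);
The same pattern replaces the [nextObs,reward,isdone] = step(env,action{1}) call: rather than stepping the Simulink model one action at a time, you simulate a whole episode and process the batch of logged experiences afterwards.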
2 comments
Emmanouil Tzorakoleftherakis
2023-5-25
The example you are showing is model-based RL; it's different from what you mentioned at the beginning.
More Answers (0)