Create and Train DQN Agent with just a State Path and Not Action Path
1 view (last 30 days)
Every example I have seen of a DQN in MATLAB uses two inputs: the state and the action. However, DQN reinforcement learning can also be done with just one input, the state, yet there are no examples for that case. How can that be done in MATLAB? My input would basically be a binary vector, and my output would be a choice between two actions.
Basically I am trying to recreate this: http://cwnlab.eecs.ucf.edu/wp-content/uploads/2019/12/2019_MLSP_ANCS_NAZMUL.pdf
0 comments
Accepted Answer
Emmanouil Tzorakoleftherakis
2020-7-6
Hello,
This page shows how this can be done in 20a. We will have examples that show this workflow in the next release.
Hope that helps.
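A minimal sketch of the state-only workflow, using the R2020a Reinforcement Learning Toolbox API. The observation length (16), layer sizes, and layer names here are illustrative assumptions; the key point is that the critic network takes only the state and outputs one Q-value per discrete action, so no separate action path is needed:

```matlab
% State-only DQN critic sketch (assumes R2020a RL Toolbox; sizes are illustrative).
obsInfo = rlNumericSpec([16 1]);      % binary state vector of assumed length 16
actInfo = rlFiniteSetSpec([1 2]);     % two discrete actions

% Single-input network: state in, one Q-value per action out.
net = [
    imageInputLayer([16 1 1],'Normalization','none','Name','state')
    fullyConnectedLayer(24,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(2,'Name','output')];   % 2 outputs = Q(s,a1), Q(s,a2)

% With a single observation input and numActions outputs, this creates a
% multi-output ("vector Q-value") critic rather than an obs+action critic.
critic = rlQValueRepresentation(net,obsInfo,actInfo,'Observation',{'state'});
agent  = rlDQNAgent(critic);
```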
9 comments
Emmanouil Tzorakoleftherakis
2020-7-6
This sounds doable. You may even be able to do this without custom training loops by using built-in agents (something like centralized multi-agent RL). You can use a single agent and, at each step, extract the appropriate action and apply it to the appropriate part of the environment. The tricky part, typical of multi-agent RL, is picking the right set of observations to make sure your process is Markov; this will likely require observations from each 'subagent'.
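One way to realize the "single centralized agent" idea above is to let the agent's discrete action set enumerate all joint actions, and have the environment's step function decode the joint index into per-subagent actions. The names and sizes below are illustrative assumptions, not part of the original answer:

```matlab
% Hypothetical decoding of a centralized joint action (sizes are assumptions).
numActionsPerSubagent = [2 2];                       % two subagents, two choices each
actInfo = rlFiniteSetSpec(1:prod(numActionsPerSubagent));  % 4 joint actions

% Inside the environment step function: split the joint index into the
% action for each subagent before applying them to the environment.
jointAction = 3;                                     % example joint action index
[a1,a2] = ind2sub(numActionsPerSubagent,jointAction);  % a1 = 1, a2 = 2
```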
More Answers (0)