Why is my RL agent's action still exceeding the upper and lower limits?
14 views (last 30 days)
I am using a Policy Gradient agent and I want my action to stay in the range 0-100. I have already set UpperLimit to 100 and LowerLimit to 0, but as you can see (scope display 3), the action still exceeds the limits. How can I fix that?
2 Comments
Emmanouil Tzorakoleftherakis
2021-6-9
Which one is the action here? What does your actor network look like?
denny
2021-12-7
I have solved a similar problem.
actInfo = rlNumericSpec([1],'UpperLimit',0.0771,'LowerLimit',-0.0405)
In my case the minimum value was -0.0405 and the maximum value was -0.0405 + 0.0771*2.
But your output ranges from -1000 to 1000; I don't know why that happens either.
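For agents that do enforce limits (DDPG, TD3, SAC), the usual pattern in the Reinforcement Learning Toolbox examples is to bound the actor's output with a tanhLayer followed by a scalingLayer. A minimal sketch of that idea, assuming a scalar action in [0, 100] and an illustrative 4-element observation (layer sizes and names are assumptions, not from the original post):

```matlab
% Sketch: bound a continuous actor's output with tanh + scaling.
actInfo = rlNumericSpec([1 1],'LowerLimit',0,'UpperLimit',100);
obsInfo = rlNumericSpec([4 1]);   % assumed observation size

% Map tanh's [-1, 1] output onto [LowerLimit, UpperLimit]
scale = (actInfo.UpperLimit - actInfo.LowerLimit)/2;   % 50
bias  = (actInfo.UpperLimit + actInfo.LowerLimit)/2;   % 50

actorNet = [
    featureInputLayer(obsInfo.Dimension(1))
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(1)
    tanhLayer                                  % output in [-1, 1]
    scalingLayer('Scale',scale,'Bias',bias)];  % output in [0, 100]
```

With this structure the network itself can never emit a value outside the limits, regardless of what the agent's training algorithm does.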
Answers (2)
Azmi Yagli
2023-9-5
Edited: Azmi Yagli
2023-9-5
If you look at rlNumericSpec, you can see this in the LowerLimit/UpperLimit section:
DDPG, TD3 and SAC agents use this property to enforce lower limits on the action. When using other agents, if you need to enforce constraints on the action, you must do so within the environment.
So if you use other algorithms you can apply saturation within the environment, although that didn't work for me.
You can try discretizing your agent's actions so that they have fixed boundaries.
Or you can give a negative reward whenever your agent exceeds the action limits.
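The saturation option above can be sketched as a clip inside a custom environment's step function. This is only an illustrative fragment assuming a [0, 100] action range; the function signature follows the custom-environment template, and the plant update is elided:

```matlab
% Sketch: enforce action limits inside the environment, since PG agents
% do not enforce rlNumericSpec limits themselves.
function [nextObs,reward,isDone,info] = step(this,action)
    % Saturate the agent's action before applying it to the plant
    actionSat = min(max(action, 0), 100);   % clip to [0, 100]
    % ... apply actionSat to the system dynamics, compute nextObs,
    % reward, and isDone here ...
end
```

If you also want the agent to learn to stay inside the limits, you can combine this clip with the negative-reward idea: penalize the raw action whenever it differs from the saturated one.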
0 Comments