Action Clipping and Scaling in TD3 in Reinforcement Learning

Hello,
I am trying to tune my TD3 agent to solve my custom environment. The environment has two actions with the following ranges: the first in [0, 10] and the second in [0, 2*pi) (rlNumericSpec).
I am following the architecture from this example:
https://in.mathworks.com/help/reinforcement-learning/ug/train-td3-agent-for-pmsm-control.html
Now I have the following questions.
  1. Since tanh outputs values in [-1, 1], should I use a scaling layer at the end of the actor network? Perhaps with the following values:
scalingLayer('Name','ActorScaling1','Scale',[5;pi],'Bias',[5;pi])
2. How do I set up the exploration noise and the target policy noise? That is, what should their variance values be? Not precisely tuned, just a reasonable range, given that I have more than one action and the action ranges are not in [-1, 1].
3. How do I clip those values so they stay inside the action bounds? I don't see any such option in rlTD3AgentOptions.
In all the TD3 examples (and most RL examples in general) that I have seen, the action range is between [-1, 1]. I am confused about how to modify the parameters when the action space is not within [-1, 1], as in my case.
Thanks.

Accepted Answer

Emmanouil Tzorakoleftherakis
Hello,
In general, for DDPG and TD3, it is good practice to include a scalingLayer as the last layer of the actor to scale/shift the actor's actions into the desired range.
To your questions:
1) Yes, you should use the scalingLayer. To specify different scale/bias values for your two outputs, have a look at this example.
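For this thread's action ranges, the tail of the actor network might look like the sketch below. The layer names and the assumption that the preceding layer has two outputs (one per action) are illustrative, not from the original example.

```
% Final actor layers (sketch): tanh squashes each output into [-1, 1],
% then scalingLayer maps them to [0, 10] and [0, 2*pi] respectively.
actorTail = [
    tanhLayer('Name','ActorTanh')
    scalingLayer('Name','ActorScaling', ...
        'Scale',[5; pi], ...   % [-1,1] -> [-5,5] and [-pi,pi]
        'Bias', [5; pi])];     % shift to [0,10] and [0,2*pi]
```

The per-output formula is output = Scale.*input + Bias, so a tanh output of -1 maps to the lower bound and +1 to the upper bound of each action.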
2) This section provides some tips on how to set the exploration variance, e.g., "It is common to have Variance*sqrt(Ts) be between 1% and 10% of your action range".
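Applying that rule of thumb to this thread's two action ranges could look like the sketch below. The sample time, the 5%/1% choices, and the decay rate are assumptions to tune, and the noise property names may differ slightly between toolbox releases.

```
% Illustrative noise setup using the 1%-10% of action range rule of thumb.
Ts = 0.05;                          % agent sample time (assumed)
actionRange = [10; 2*pi];           % widths of [0,10] and [0,2*pi)

agentOpts = rlTD3AgentOptions('SampleTime',Ts);

% Exploration noise: ~5% of each action range (so Variance*sqrt(Ts) ~ 5%)
agentOpts.ExplorationModel.Variance = 0.05*actionRange/sqrt(Ts);
agentOpts.ExplorationModel.VarianceDecayRate = 1e-5;    % assumed decay

% Target policy smoothing noise is typically smaller than exploration noise
agentOpts.TargetPolicySmoothModel.Variance = 0.01*actionRange/sqrt(Ts);
```

Note that both noise models take per-action values, so each action can get a variance proportional to its own range.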
3) The upper and lower limit options in rlNumericSpec, together with the scalingLayer, ensure your actions are within the desired range before exploration noise is added. After the noise is added, however, it is possible for the actions to go out of range, which is why it is often necessary to account for that on the environment side. If you are using Simulink, add, for example, a saturation block. In MATLAB, add an if statement and clip the actions if they are out of range.
Hope that helps
  3 Comments
Emmanouil Tzorakoleftherakis
In the step function, yes. You can just add an if statement, or use "max" or "min".
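Inside a custom environment's step function, the min/max clipping could be a one-liner, as in this sketch. The limits below are this thread's action bounds; the rest of the step function is omitted.

```
% Clip the (possibly noisy) action element-wise into the valid range
% before using it in the environment dynamics.
lowerLimit = [0; 0];
upperLimit = [10; 2*pi];
action = min(max(action, lowerLimit), upperLimit);
```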

