Deep Reinforcement Learning Reward Function for Reference Tracking
Hi All,
I would like some advice on designing a reward function for training an RL agent to achieve good reference tracking.
My environment is a PMLSM, which I have now simplified to a simple second-order system so that I can debug the RL code and tune the parameters in the reward function.
I have been using the PMSM TD3 agent example 'mcb_pmsm_foc_sim_RL' with the script 'TrainTD3AgentForPMSMControlExample'.
My second-order system uses just one action (Vq) and three observations (iq, iq_Error, and the integral of iq_Error), with a simple reward function. Training the agent for 2000 episodes did get iq to follow iq_ref; however, the output behaviour was underdamped and oscillatory.
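For context, by "simple reward function" I mean a quadratic penalty on the tracking error and the control effort, roughly of this form (the weights q and r below are placeholder values that I hand-tune):

    % Simple quadratic tracking reward (q and r are placeholder weights)
    q = 1.0;     % penalty on tracking error
    r = 0.01;    % penalty on control effort
    iq_error = iq_ref - iq;
    reward = -(q*iq_error^2 + r*Vq^2);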
[Plots omitted: iq_ref vs. actual iq, the action Vq, and the training reward]
Can this underdamped step response be improved via the reward function, or by tuning any of the TD3 agent hyperparameters?
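One idea I have been considering is adding damping terms to the reward, penalizing the error derivative and the step-to-step change in the action so that oscillatory control is discouraged. A rough sketch (the sample time Ts, the stored previous-step values, and all weights are assumptions I would have to tune):

    % Sketch: reward with damping terms (weights and Ts are placeholders)
    q1 = 1.0;    % tracking-error weight
    q2 = 0.1;    % error-derivative weight (damps oscillation in iq)
    r2 = 0.05;   % action-rate weight (discourages chattering in Vq)
    dErr = (iq_error - iq_error_prev)/Ts;  % discrete derivative of the error
    dVq  = Vq - Vq_prev;                   % change in action between steps
    reward = -(q1*iq_error^2 + q2*dErr^2 + r2*dVq^2);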
Also, could anyone point me to literature on alternative reward functions for good reference tracking, both for iq current control and for speed/position control of a PMSM?
My research so far has only highlighted the following Gaussian-shaped reward, r = a*exp(-e^2/(2σ^2)), where a is a constant, e is the tracking error, and σ is a standard deviation. Any suggestions or comments?
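Implemented in MATLAB, that reward would look something like this (a and sigma below are placeholder values; sigma effectively sets how tightly iq must track iq_ref before the reward falls off):

    % Gaussian-shaped tracking reward (a and sigma are placeholder values)
    a     = 1.0;    % peak reward when the tracking error is zero
    sigma = 0.05;   % width: tolerance on the iq tracking error
    e = iq_ref - iq;
    reward = a*exp(-e^2/(2*sigma^2));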
Please comment or point me to any literature regarding the use of a TD3 RL agent for PMSM reference tracking and reference-tracking reward functions.
Many thanks
Patrick