Can the agent learn the policy through the external action port of the RL Agent block, so that the agent mimics the output of the reference signal?

I created a DDPG agent that I want to learn from the output of an existing controller before training it further later. I feed the reference signal into the external action port and set "use external action" to 1 during training; while training, the output of the agent matches the reference signal. However, after training, when I set "use external action" to 0 for verification, the output of the agent is not the same as the reference signal, and the difference is fairly large. Does the external action port work the way I expect? What should I do to make this idea work?
The figure below shows that, with the external action set to 0, the output of the trained agent is the red curve and the reference signal is the green curve.
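For reference, here is a minimal MATLAB sketch of the two-phase workflow described above (first imitate via the external action port, then verify with the agent's own policy). The model name 'myModel', the block paths, the 'Use External Action' constant block, and the observation/action dimensions are assumptions; adapt them to your own Simulink model.

% Minimal sketch of imitation training followed by verification.
mdl      = 'myModel';                    % assumed model name
agentBlk = [mdl '/RL Agent'];            % assumed RL Agent block path
load_system(mdl);

% Observation and action specs -- placeholders, match them to your plant
obsInfo = rlNumericSpec([3 1]);
actInfo = rlNumericSpec([1 1], 'LowerLimit', -1, 'UpperLimit', 1);

env   = rlSimulinkEnv(mdl, agentBlk, obsInfo, actInfo);
agent = rlDDPGAgent(obsInfo, actInfo);   % default actor/critic networks

% Phase 1: imitation -- the PID output is wired into the "external action"
% port, and the "use external action" port is driven with 1 so the agent
% applies and learns from the external (PID) actions.
set_param([mdl '/Use External Action'], 'Value', '1');   % assumed constant block
trainOpts = rlTrainingOptions('MaxEpisodes', 500, 'MaxStepsPerEpisode', 1000);
train(agent, env, trainOpts);

% Phase 2: verification -- switch the flag to 0 so the agent's own policy
% drives the plant, then compare its output against the PID reference.
set_param([mdl '/Use External Action'], 'Value', '0');
simOpts    = rlSimulationOptions('MaxSteps', 1000);
experience = sim(env, agent, simOpts);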

Answers (1)

Emmanouil Tzorakoleftherakis
It seems the agent started learning how to imitate the existing controller but needs more time. What does the Episode Manager look like? What is your reward signal?
2 Comments
凡
2024-2-26
This is the Episode Manager. My reward signal is -4*u^2 - du/dt, where u is an observed measurement. My control goal is to drive u to 0. My project is to replace the PID controller with an agent; in the PID control, u is the input quantity, so I want the agent to mimic the output of the PID at the beginning.
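A minimal sketch of that reward as a MATLAB Function block in the Simulink model, assuming u and its time derivative du/dt are routed into the block (for example from a Derivative block):

function r = computeReward(u, du_dt)
% Reward as described above: r = -4*u^2 - du/dt
% u      : observed measurement to be driven to 0
% du_dt  : time derivative of u, assumed computed upstream
r = -4*u^2 - du_dt;
end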


Products

Release

R2023a
