How to further TRAIN a previously trained agent?

54 views (last 30 days)
Hi,
My agent was programmed to stop after reaching an average reward of X. How do I load the agent and extend its training further?
I did enable saving of the experiences, and it has created the agent file.
Rajesh

Accepted Answer

Rajesh Siraskar 2019-12-11
Hi Sourav, I figured it out after reading the documentation more carefully!
I also need to set the ResetExperienceBufferBeforeTraining flag if I want to use previously saved experiences.
This is my working code snippet. I must say this is a great feature and I really missed knowing about it!
USE_PRE_TRAINED_MODEL = true; % Set to true to use the pre-trained agent
% Keep the saved experience buffer when continuing training:
agentOpts.ResetExperienceBufferBeforeTraining = not(USE_PRE_TRAINED_MODEL);
if USE_PRE_TRAINED_MODEL
    % Load the pre-trained agent (with its saved experiences)
    fprintf('- Continue training pre-trained model: %s\n', PRE_TRAINED_MODEL_FILE);
    load(PRE_TRAINED_MODEL_FILE, 'saved_agent');
    agent = saved_agent;
else
    % Create a fresh new agent
    agent = rlDDPGAgent(actor, critic, agentOpts);
end
% Train the agent
trainingStats = train(agent, env, trainOpts);
  4 Comments
Anh Tran 2020-2-21
Rajesh is correct. Currently the noise model resets when you train again. We are looking into how you can truly 'resume' training. As a workaround, you can set the noise variance option to a lower value than that of your previous train session.
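For example, the exploration noise for a DDPG agent could be reduced before the next train() call. A sketch (the value 0.1 is only illustrative, and whether AgentOptions is writable on a loaded agent depends on your release):
% Lower the noise variance for the resumed session; keep it > 0 so the agent still explores
agent.AgentOptions.NoiseOptions.Variance = 0.1;
agent.AgentOptions.NoiseOptions.VarianceDecayRate = 1e-5;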
轩 2024-6-14
Useful discussion, and thank you all very much!


More Answers (3)

Anh Tran 2020-2-21
I will answer again and hopefully clear up your confusion.
% Train the agent
trainingStats = train(agent, env, trainOpts);
After this line, even though the 'agent' is not returned as an output, its learnable parameters are updated. The learnable parameters, e.g. the weights and biases of the actor/critic neural networks, determine the logic behind the agent (and how it chooses an action given an observation).
Now if you execute sim() or train() after this line, the 'agent' will simulate or continue training with the latest parameters.
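A minimal sketch of this behavior (assuming 'agent', 'env' and 'trainOpts' from the setup above):
% 'agent' is a handle object, so train() updates its parameters in place
trainingStats = train(agent, env, trainOpts);
% This simulation uses the latest learned parameters, no reassignment needed
experience = sim(env, agent);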
Rajesh's workflow is very close to resuming training (reusing the experiences gathered in the past and starting from the latest parameters). I revised the code with additional comments. Currently the noise model resets when you train again. You can consider setting the noise variance option to a lower value than that of your previous training session (it still needs to be > 0, because we want the agent to keep exploring).
% Set to true to resume training from a saved agent
resumeTraining = true;
% Set ResetExperienceBufferBeforeTraining to false to keep the experiences
% from the previous session
agentOpts.ResetExperienceBufferBeforeTraining = ~resumeTraining;
if resumeTraining
    % Load the agent from the previous session
    fprintf('- Resume training of: %s\n', PRE_TRAINED_MODEL_FILE);
    load(PRE_TRAINED_MODEL_FILE, 'saved_agent');
    agent = saved_agent;
else
    % Create a fresh new agent
    agent = rlDDPGAgent(actor, critic, agentOpts);
end
% Train the agent
trainingStats = train(agent, env, trainOpts);
  2 Comments
Stav Bar-Sheshet 2020-6-4
Hi, this is an excellent thread!
What I'm curious about: if you continue training, is the state of the optimizer saved so that it continues from the same point?
Sayak Mukherjee 2021-2-23
For restarting the run with a saved agent, the saved agent should have the 'SaveExperienceBufferWithAgent' parameter set to true, right?



Jonas Woeste 2022-6-11
Got it to work in MATLAB R2022a, where it's a touch different:
The clue is to save the trainOpts variable after training, which will then technically be a training result object. After restoring it, increase MaxEpisodes for further training...
% Do the agent, env stuff...
num_epochs = 100; % number of additional episodes to train for (pick your own value)
% Load the previous training result (and the trained agent, so its weights survive across sessions)
if isfile('trained_agent.mat')
    load("trained_agent.mat", "trainOpts", "agent")
    % Increase the maximum number of episodes to go on training
    cur_episodes = trainOpts.TrainingOptions.MaxEpisodes;
    trainOpts.TrainingOptions.MaxEpisodes = cur_episodes + num_epochs;
end
% Train (passing a training result object resumes the previous session)
trainOpts = train(agent, env, trainOpts);
% Save the training result together with the agent
save("trained_agent.mat", "trainOpts", "agent")
Please, someone update the documentation about this. There it still suggests saving the agent object...

Sourav Bairagya 2019-12-10
In this case, you can resume your training with the previous experience buffer as a starting point.
You have to set the 'SaveExperienceBufferWithAgent' agent option to 'true'.
For some agents, such as those with large experience buffers and image-based observations, the memory required for saving their experience buffer is large. In these cases, you must ensure that there is enough memory available for the saved agents.
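For example, for a DDPG agent (a sketch; the option is set on the agent-options object before the agent is created):
agentOpts = rlDDPGAgentOptions;
% Store the experience buffer inside the file whenever the agent is saved
agentOpts.SaveExperienceBufferWithAgent = true;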
For more information, you can refer to this link:
  5 Comments
Jonas Woeste 2022-6-10
It's not being saved: the saved file is ~25 kB regardless of how many episodes were trained. A hint about a working practice for saving and continuing trained agents would be nice.
轩 2024-6-14
It seems that the option is under the structure agent.AgentOptions.InfoToSave.
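A sketch for such newer releases (assuming a release where the option has moved into the InfoToSave structure, with an ExperienceBuffer field):
% Include the experience buffer when the agent is saved
agent.AgentOptions.InfoToSave.ExperienceBuffer = true;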

