Plot training information from a previous training session
By default, the train function shows the training progress and results in the Episode Manager during training. If you configure training not to show the Episode Manager, or if you close the Episode Manager after training, you can view the training results using the inspectTrainingResult function, which opens the Episode Manager. You can also use inspectTrainingResult to view the training results for agents saved during training.
For this example, assume that you have trained the agent in the Train Reinforcement Learning Agent in MDP Environment example and subsequently closed the Episode Manager.
Load the training information returned by the train function.
load mdpTrainingStats trainingStats
Reopen the Episode Manager for this training session.
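For example, assuming the training statistics were loaded into a variable named trainingStats (as in the preceding load command), pass that variable to inspectTrainingResult:

```matlab
% Reopen the Episode Manager for the loaded training session
inspectTrainingResult(trainingStats)
```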
For this example, load the environment and agent for the Train Reinforcement Learning Agent in MDP Environment example.
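One way to do this, assuming the environment and agent from that example were saved to a MAT-file named mdpAgentAndEnvironment (the file name is an assumption; use whatever name you saved them under):

```matlab
% Load the environment (env) and agent (qAgent) saved from the MDP example
% (the MAT-file name mdpAgentAndEnvironment is an assumption)
load mdpAgentAndEnvironment
```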
Specify options for training the agent. Configure the SaveAgentCriteria and SaveAgentValue options to save all agents with a reward greater than or equal to 13.
trainOpts = rlTrainingOptions;
trainOpts.MaxStepsPerEpisode = 50;
trainOpts.MaxEpisodes = 50;
trainOpts.Plots = "none";
trainOpts.SaveAgentCriteria = "EpisodeReward";
trainOpts.SaveAgentValue = 13;
Train the agent. During training, when an episode has a reward greater than or equal to 13, a copy of the agent is saved in the savedAgents folder.
rng('default') % for reproducibility
trainingStats = train(qAgent,env,trainOpts);
Load the training results for one of the saved agents. This command loads both the agent and a structure that contains the corresponding training results.
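For example, assuming an agent was saved at episode 30 (the actual file names depend on which episodes met the save criteria, so check the savedAgents folder for the files produced by your run):

```matlab
% Load the saved agent and its training results structure
% (Agent30 is an assumed file name; use one that exists in your savedAgents folder)
load savedAgents/Agent30
```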
View the training results from the saved agent result structure.
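Assuming the loaded results structure is named savedAgentResultStruct, pass it to inspectTrainingResult:

```matlab
% Open the Episode Manager for the saved agent's training results
inspectTrainingResult(savedAgentResultStruct)
```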
The Episode Manager shows the training progress up to the episode in which the agent was saved.
trainResults — Training episode data
Training episode data, specified as a structure or structure array returned by the train function.
agentResults — Saved agent results
Saved agent results, specified as a structure previously saved by the train function during training. The train function saves agents when you specify the SaveAgentCriteria and SaveAgentValue options in the rlTrainingOptions object used during training.
When you load a saved agent, the agent and a structure containing its training results are added to the MATLAB® workspace. The results structure is named savedAgentResultStruct. To plot the training data for this agent, pass this structure to the inspectTrainingResult function.
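For example, assuming the results structure was loaded under the name savedAgentResultStruct:

```matlab
% Plot the saved agent's training data in the Episode Manager
inspectTrainingResult(savedAgentResultStruct)
```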
For multi-agent training, savedAgentResultStruct contains structure fields with training results for all of the trained agents.