runEpisode
Syntax
output = runEpisode(env,policy)
output = runEpisode(env,agent)
output = runEpisode(___,Name=Value)
Description
output = runEpisode(env,policy) simulates the environment env for one episode using the policy policy.
output = runEpisode(env,agent) simulates the environment env for one episode using the agent agent.
output = runEpisode(___,Name=Value) specifies nondefault simulation options using one or more name-value arguments.
Examples
Simulate Environment and Agent
Create a reinforcement learning environment and extract its observation and action specifications.
env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
To approximate the Q-value function within the critic, use a neural network. Create a network as an array of layer objects.
net = [...
featureInputLayer(obsInfo.Dimension(1))
fullyConnectedLayer(24)
reluLayer
fullyConnectedLayer(24)
reluLayer
fullyConnectedLayer(2)
softmaxLayer];
Convert the network to a dlnetwork object and display the number of learnable parameters (weights).
net = dlnetwork(net);
summary(net)
   Initialized: true

   Number of learnables: 770

   Inputs:
      1   'input'   4 features
Create a discrete categorical actor using the network.
actor = rlDiscreteCategoricalActor(net,obsInfo,actInfo);
Check your actor with a random observation.
act = getAction(actor,{rand(obsInfo.Dimension)})
act = 1x1 cell array
{[-10]}
Create a policy object from the actor.
policy = rlStochasticActorPolicy(actor);
Create an experience buffer.
buffer = rlReplayMemory(obsInfo,actInfo);
Set up the environment for running multiple simulations. For this example, configure the environment to log any errors rather than send them to the command window.
setup(env,StopOnError="off")
Simulate multiple episodes using the environment and policy. After each episode, append the experiences to the buffer. For this example, run 100 episodes.
for i = 1:100
    output = runEpisode(env,policy,MaxSteps=300);
    append(buffer,output.AgentData.Experiences)
end
Clean up the environment.
cleanup(env)
Sample a mini-batch of experiences from the buffer. For this example, sample 10 experiences.
batch = sample(buffer,10);
You can then learn from the sampled experiences and update the policy and actor.
Input Arguments
env
— Reinforcement learning environment
environment object | ...
Reinforcement learning environment, specified as one of the following objects.
- rlFunctionEnv — Environment defined using custom functions
- SimulinkEnvWithAgent — Simulink® environment created using rlSimulinkEnv or createIntegratedEnv
- rlMDPEnv — Markov decision process environment
- rlNeuralNetworkEnvironment — Environment with deep neural network transition models
- Predefined environment created using rlPredefinedEnv
- Custom environment created from a template (rlCreateEnvTemplate)
policy
— Policy
policy object | array of policy objects
Policy object, specified as one of the following objects.
- rlDeterministicActorPolicy
- rlAdditiveNoisePolicy
- rlEpsilonGreedyPolicy
- rlMaxQPolicy
- rlStochasticActorPolicy
If env is a Simulink environment configured for multi-agent training, specify policy as an array of policy objects. The order of the policies in the array must match the agent order used to create env.
For more information on a policy object, at the MATLAB® command line, type help followed by the policy object name.
agent
— Reinforcement learning agent
agent object | array of agent objects
Reinforcement learning agent, specified as one of the following objects.
- Custom agent — For more information, see Create Custom Reinforcement Learning Agents.
If env is a Simulink environment configured for multi-agent training, specify agent as an array of agent objects. The order of the agents in the array must match the agent order used to create env.
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Example: MaxSteps=1000
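For instance, the following call (an illustrative sketch that combines only name-value arguments documented on this page) runs a longer episode, logs experiences, and skips environment cleanup:
output = runEpisode(env,policy, ...
    MaxSteps=1000, ...
    LogExperiences=true, ...
    CleanupPostSim=false);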
MaxSteps
— Maximum simulation steps
500
(default) | positive integer
Maximum simulation steps, specified as a positive integer.
ProcessExperienceFcn
— Function for processing experiences
function handle | cell array of function handles
Function for processing experiences and updating the policy or agent based on each experience as it occurs during the simulation, specified as a function handle with the following signature.
[updatedPolicy,updatedData] = myFcn(experience,episodeInfo,policy,data)
Here:
- experience is a structure that contains a single experience. For more information on the structure fields, see output.Experiences.
- episodeInfo contains data about the current episode and corresponds to output.EpisodeInfo.
- policy is the policy or agent object being simulated.
- data contains experience processing data. For more information, see ProcessExperienceData.
- updatedPolicy is the updated policy or agent.
- updatedData is the updated experience processing data, which is used as the data input when processing the next experience.
If env is a Simulink environment configured for multi-agent training, specify ProcessExperienceFcn as a cell array of function handles. The order of the function handles in the array must match the agent order used to create env.
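As a hedged sketch, a processing function with the documented signature might append each experience to a replay memory passed in as the data input. The function name appendExperienceFcn and the use of an rlReplayMemory object as data are illustrative assumptions, not requirements stated on this page.
function [policy,data] = appendExperienceFcn(experience,episodeInfo,policy,data)
    % Assumption: data is an rlReplayMemory object supplied through
    % ProcessExperienceData. Store the experience and return the policy unchanged.
    append(data,experience);
end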
ProcessExperienceData
— Experience processing data
any MATLAB data type | cell array
Experience processing data, specified as any MATLAB data, such as an array or structure. Use this data to pass additional parameters or information to the experience processing function.
You can also update this data within the experience processing function to use different parameters when processing the next experience. The data values that you specify when you call runEpisode are used to process the first experience in the simulation.
If env is a Simulink environment configured for multi-agent training, specify ProcessExperienceData as a cell array. The order of the array elements must match the agent order used to create env.
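Continuing the sketch above, you might supply the processing function and its data together in the runEpisode call (an illustrative pattern, not a prescribed one):
buffer = rlReplayMemory(obsInfo,actInfo);
output = runEpisode(env,policy, ...
    ProcessExperienceFcn=@appendExperienceFcn, ...
    ProcessExperienceData=buffer);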
CleanupPostSim
— Option to clean up environment
true (default) | false
Option to clean up the environment after the simulation, specified as true or false. When CleanupPostSim is true, runEpisode calls cleanup(env) when the simulation ends.
To run multiple episodes without cleaning up the environment, set CleanupPostSim to false. You can then call cleanup(env) after running your simulations.
If env is a SimulinkEnvWithAgent object and the associated Simulink model is configured to use fast restart, then the model remains in a compiled state between simulations when CleanupPostSim is false.
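For example, a minimal sketch of running several episodes before a single cleanup call:
setup(env)
for i = 1:5
    runEpisode(env,policy,CleanupPostSim=false);
end
cleanup(env)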
LogExperiences
— Option to log experiences
true (default) | false
Option to log experiences for each policy or agent, specified as true or false. When LogExperiences is true, the experiences of the policy or agent are logged in output.Experiences.
Output Arguments
output
— Simulation output
structure | Future object
Simulation output, returned as a structure with the fields AgentData and SimulationInfo.
The AgentData field is a structure array containing data for each agent or policy. Each AgentData structure has the following fields.
| Field | Description |
|---|---|
| Experiences | Logged experiences of the policy or agent, returned as a structure array. |
| Time | Simulation times of the experiences, returned as a vector. |
| EpisodeInfo | Episode information, returned as a structure. |
| ProcessExperienceData | Experience processing data. |
| Agent | Policy or agent used in the simulation. |
The SimulationInfo field is one of the following:
- For MATLAB environments — Structure containing the field SimulationError. This structure contains any errors that occurred during simulation.
- For Simulink environments — Simulink.SimulationOutput object containing simulation data. Recorded data includes any signals and states that the model is configured to log, simulation metadata, and any errors that occurred.
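As a brief sketch for a MATLAB environment, you can read the documented fields of the output structure directly:
output = runEpisode(env,policy,MaxSteps=300);
experiences = output.AgentData.Experiences;           % logged experiences
simErrors   = output.SimulationInfo.SimulationError;  % errors that occurred, if any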
If env is configured to run simulations on parallel workers, then output is a Future object, which supports deferred outputs for environment simulations that run on workers.
Tips
You can speed up episode simulation by using parallel computing. To do so, use the setup function and set the UseParallel argument to true.
setup(env,UseParallel=true)
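A hedged sketch of this workflow follows; the fetchOutputs call is an assumption about how deferred Future outputs are retrieved and is not documented on this page.
setup(env,UseParallel=true)
for i = 1:4
    futures(i) = runEpisode(env,policy,CleanupPostSim=false);  % returns Future objects
end
% Assumption: fetchOutputs retrieves the deferred simulation outputs from the workers.
outputs = fetchOutputs(futures);
cleanup(env)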
Version History
Introduced in R2022a
See Also
Objects
Functions