Options for PPO agent
Use an rlPPOAgentOptions object to specify options for
proximal policy optimization (PPO) agents. To create a PPO agent, use rlPPOAgent.
For more information on PPO agents, see Proximal Policy Optimization Agents.
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
opt = rlPPOAgentOptions
opt = rlPPOAgentOptions creates an rlPPOAgentOptions object for use as an argument when creating a PPO
agent that uses all default settings. You can modify the object properties using dot notation.
ExperienceHorizon — Number of steps the agent interacts with the environment before learning
512 (default) | positive integer
Number of steps the agent interacts with the environment before learning from its experience, specified as a positive integer.
The ExperienceHorizon value must be greater than or equal to the MiniBatchSize value.
MiniBatchSize — Mini-batch size
128 (default) | positive integer
Mini-batch size used for each learning epoch, specified as a positive integer. When the agent uses a recurrent neural network,
MiniBatchSize is treated as the training trajectory length.
The MiniBatchSize value must be less than or equal to the ExperienceHorizon value.
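For example, you can set a longer horizon together with a compatible mini-batch size using dot notation. The values below are illustrative only.
opt = rlPPOAgentOptions;
opt.ExperienceHorizon = 1024;   % steps collected before each learning phase
opt.MiniBatchSize = 256;        % must not exceed ExperienceHorizon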
ClipFactor — Clip factor
0.2 (default) | positive scalar less than 1
Clip factor for limiting the change in each policy update step, specified as a
positive scalar less than 1.
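As a rough sketch of how the clip factor enters the PPO surrogate objective (not the toolbox implementation; the probabilities and advantages below are toy values):
% Illustrative sketch of the clipped surrogate objective.
epsilon = 0.2;                                 % ClipFactor
oldPolicyProb = [0.30 0.50 0.20];              % toy action probabilities under the old policy
newPolicyProb = [0.45 0.40 0.15];              % toy action probabilities under the new policy
advantage = [1.2 -0.4 0.3];                    % toy advantage estimates
r = newPolicyProb ./ oldPolicyProb;            % probability ratio per sample
rClipped = min(max(r, 1-epsilon), 1+epsilon);  % ratio limited by the clip factor
surrogate = mean(min(r.*advantage, rClipped.*advantage))  % objective to maximize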
EntropyLossWeight — Entropy loss weight
0.01 (default) | scalar value between 0 and 1
Entropy loss weight, specified as a scalar value between 0 and
1. A higher entropy loss weight value promotes agent exploration by
applying a penalty for being too certain about which action to take. Doing so can help
the agent move out of local optima.
When gradients are computed during training, an additional gradient component is computed for minimizing this loss function. For more information, see Entropy Loss.
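As a rough sketch of the entropy term for a discrete action space (not the toolbox implementation; the probabilities below are toy values):
% Illustrative sketch of the entropy loss term.
w = 0.01;                                       % EntropyLossWeight
actionProb = [0.7 0.2 0.1];                     % toy action probabilities for one observation
entropy = -sum(actionProb .* log(actionProb));  % larger when the policy is less certain
entropyLoss = -w * entropy                      % minimizing this loss encourages exploration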
NumEpoch — Number of epochs
3 (default) | positive integer
Number of epochs for which the actor and critic networks learn from the current experience set, specified as a positive integer.
AdvantageEstimateMethod — Method for estimating advantage values
"gae" (default) |
Method for estimating advantage values, specified as one of the following:
"gae"— Generalized advantage estimator
"finite-horizon"— Finite horizon estimation
For more information on these methods, see the training algorithm information in Proximal Policy Optimization Agents.
GAEFactor — Smoothing factor for generalized advantage estimator
0.95 (default) | scalar value between 0 and 1
Smoothing factor for the generalized advantage estimator, specified as a scalar value between 0 and
1, inclusive. This option applies only when the
AdvantageEstimateMethod option is "gae".
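As a rough sketch of generalized advantage estimation (not the toolbox implementation; the rewards and critic values below are toy data), the estimator accumulates exponentially weighted one-step temporal-difference errors:
% Illustrative sketch of generalized advantage estimation.
gamma = 0.99;                      % DiscountFactor
lambda = 0.95;                     % GAEFactor
rewards = [1 0 0 1];               % toy rewards
values = [0.5 0.4 0.3 0.2 0.1];    % toy critic values, including the final state
T = numel(rewards);
adv = zeros(1,T);
gaeSum = 0;
for t = T:-1:1
    delta = rewards(t) + gamma*values(t+1) - values(t);  % one-step TD error
    gaeSum = delta + gamma*lambda*gaeSum;                % exponentially weighted sum
    adv(t) = gaeSum;
end
adv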
NormalizedAdvantageMethod — Method for normalizing advantage function
"none" (default) |
Method for normalizing advantage function values, specified as one of the following:
"none"— Do not normalize advantage values
"current"— Normalize the advantage function using the mean and standard deviation for the current mini-batch of experiences.
"moving"— Normalize the advantage function using the mean and standard deviation for a moving window of recent experiences. To specify the window size, set the
In some environments, you can improve agent performance by normalizing the advantage function during training. The agent normalizes the advantage function by subtracting the mean advantage value and scaling by the standard deviation.
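As a rough sketch of this normalization (not the toolbox implementation; the values below are toy data):
% Illustrative sketch of advantage normalization.
adv = [1.2 -0.4 0.3 2.0];                % toy advantage values for one mini-batch
advNorm = (adv - mean(adv)) / std(adv)   % subtract the mean, scale by the standard deviation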
AdvantageNormalizingWindow — Window size for normalizing advantage function
1e6 (default) | positive integer
Window size for normalizing advantage function values, specified as a positive integer. Use this option when the
NormalizedAdvantageMethod option is "moving".
ActorOptimizerOptions — Actor optimizer options
Actor optimizer options, specified as an
rlOptimizerOptions object. It allows you to specify training parameters of
the actor approximator such as learning rate, gradient threshold, as well as the
optimizer algorithm and its parameters. For more information, see rlOptimizerOptions.
CriticOptimizerOptions — Critic optimizer options
Critic optimizer options, specified as an
rlOptimizerOptions object. It allows you to specify training parameters of
the critic approximator such as learning rate, gradient threshold, as well as the
optimizer algorithm and its parameters. For more information, see rlOptimizerOptions.
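For example, you can configure both optimizers with custom learning rates and gradient thresholds and assign them using dot notation. The values below are illustrative only.
actorOpts = rlOptimizerOptions("LearnRate",1e-4,"GradientThreshold",1);
criticOpts = rlOptimizerOptions("LearnRate",1e-3,"GradientThreshold",1);
opt = rlPPOAgentOptions;
opt.ActorOptimizerOptions = actorOpts;
opt.CriticOptimizerOptions = criticOpts;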
SampleTime — Sample time of agent
1 (default) | positive scalar | -1
Sample time of agent, specified as a positive scalar or as
-1. Setting this parameter to
-1 allows for event-based simulations.
Within a Simulink® environment, the RL Agent block
in which the agent is specified executes every SampleTime seconds
of simulation time. If SampleTime is -1, the
block inherits the sample time from its parent subsystem.
Within a MATLAB® environment, the agent is executed every time the environment advances. In
this case, SampleTime is the time interval between consecutive
elements in the output experience returned by sim or train. If SampleTime is
-1, the time interval between
consecutive elements in the returned output experience reflects the timing of the event
that triggers the agent execution.
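For example, you can specify either a fixed sample time or event-based execution using dot notation. The value 0.1 is illustrative only.
opt = rlPPOAgentOptions;
opt.SampleTime = 0.1;    % execute the agent every 0.1 seconds of simulation time
% opt.SampleTime = -1;   % event-based execution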
DiscountFactor — Discount factor
0.99 (default) | positive scalar less than or equal to 1
Discount factor applied to future rewards during training, specified as a positive scalar less than or equal to 1.
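As a rough sketch of how the discount factor weights future rewards (not the toolbox implementation; the reward sequence is toy data):
% Illustrative sketch of a discounted return.
gamma = 0.99;                                     % DiscountFactor
rewards = [1 1 1 1 1];                            % toy reward sequence
G = sum(gamma.^(0:numel(rewards)-1) .* rewards)   % discounted return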
rlPPOAgent | Proximal policy optimization reinforcement learning agent
Create PPO Agent Options Object
Create a PPO agent options object, specifying the experience horizon.
opt = rlPPOAgentOptions('ExperienceHorizon',256)
opt = 
  rlPPOAgentOptions with properties:

             ExperienceHorizon: 256
                 MiniBatchSize: 128
                    ClipFactor: 0.2000
             EntropyLossWeight: 0.0100
                      NumEpoch: 3
       AdvantageEstimateMethod: "gae"
                     GAEFactor: 0.9500
     NormalizedAdvantageMethod: "none"
    AdvantageNormalizingWindow: 1000000
         ActorOptimizerOptions: [1x1 rl.option.rlOptimizerOptions]
        CriticOptimizerOptions: [1x1 rl.option.rlOptimizerOptions]
                    SampleTime: 1
                DiscountFactor: 0.9900
                    InfoToSave: [1x1 struct]
You can modify options using dot notation. For example, set the agent sample time to 0.5.
opt.SampleTime = 0.5;