Options for DDPG agent
Use an rlDDPGAgentOptions object to specify options for deep
deterministic policy gradient (DDPG) agents. To create a DDPG agent, use rlDDPGAgent.
For more information, see Deep Deterministic Policy Gradient Agents.
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
opt = rlDDPGAgentOptions creates an options
object for use as an argument when creating a DDPG agent using all default options. You
can modify the object properties using dot notation.
NoiseOptions— Noise model options
Noise model options, specified as an OrnsteinUhlenbeckActionNoise
object. For more information on the noise model, see Noise Model.
For an agent with multiple actions, if the actions have different ranges and units, it is likely that each action requires different noise model parameters. If the actions have similar ranges and units, you can set the noise parameters for all actions to the same value.
For example, for an agent with two actions, set the variance of each action to a different value while using the same decay rate for both variances.
opt = rlDDPGAgentOptions;
opt.NoiseOptions.Variance = [0.1 0.2];
opt.NoiseOptions.VarianceDecayRate = 1e-4;
TargetSmoothFactor— Smoothing factor for target actor and critic updates
1e-3 (default) | positive scalar less than or equal to 1
Smoothing factor for target actor and critic updates, specified as a positive scalar less than or equal to 1. For more information, see Target Update Methods.
TargetUpdateFrequency— Number of steps between target actor and critic updates
1 (default) | positive integer
Number of steps between target actor and critic updates, specified as a positive integer. For more information, see Target Update Methods.
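For example, the following sketch (not from the original documentation) configures a smoothing update and a periodic update of the target actor and critic using only these two options; the periodic frequency value of 4 is an illustrative assumption.
% Smoothing update (default behavior): blend a small fraction of the learned
% parameters into the target actor and critic at every step.
optSmooth = rlDDPGAgentOptions;
optSmooth.TargetSmoothFactor = 1e-3;
optSmooth.TargetUpdateFrequency = 1;

% Periodic update: copy the learned parameters completely, but only every 4 steps.
optPeriodic = rlDDPGAgentOptions;
optPeriodic.TargetSmoothFactor = 1;
optPeriodic.TargetUpdateFrequency = 4;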
ResetExperienceBufferBeforeTraining— Flag for clearing the experience buffer
true (default) | false
Flag for clearing the experience buffer before training, specified as a logical value.
SaveExperienceBufferWithAgent— Flag for saving the experience buffer
false (default) | true
Flag for saving the experience buffer data when saving the agent, specified as a
logical value. This option applies both when saving candidate agents during training and
when saving agents using the save function.
For some agents, such as those with a large experience buffer and image-based
observations, the memory required for saving their experience buffer is large. In such
cases, to not save the experience buffer data, set SaveExperienceBufferWithAgent to false.
If you plan to further train your saved agent, you can start training with the
previous experience buffer as a starting point. In this case, set SaveExperienceBufferWithAgent to true.
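The following sketch shows one way to combine these two flags when you plan to save an agent and resume training later. The flag values are the point of the example, not defaults.
% Keep the collected experiences with the saved agent so that a later
% training session can resume from the same buffer.
opt = rlDDPGAgentOptions;
opt.SaveExperienceBufferWithAgent = true;          % store the buffer when saving the agent
opt.ResetExperienceBufferBeforeTraining = false;   % reuse the stored buffer when training resumes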
MiniBatchSize— Size of random experience mini-batch
64 (default) | positive integer
Size of random experience mini-batch, specified as a positive integer. During each training episode, the agent randomly samples experiences from the experience buffer when computing gradients for updating the critic properties. Large mini-batches reduce the variance when computing gradients but increase the computational effort.
NumStepsToLookAhead— Number of steps ahead
1 (default) | positive integer
Number of future rewards used to estimate the value of the policy, specified as a positive integer. See [1], Chapter 7, for more detail.
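As an illustration of the lookahead horizon (not toolbox code), the following sketch computes an N-step return: the first NumStepsToLookAhead rewards are summed with discounting, and the value of the state reached after N steps is bootstrapped from a critic estimate. All numeric values are hypothetical.
% Hypothetical values for illustration only
gamma = 0.99;           % DiscountFactor
N     = 3;              % NumStepsToLookAhead
r     = [1 0.5 0.25];   % rewards collected over the next N steps
Qnext = 2;              % critic estimate of the value after N steps

G = sum(gamma.^(0:N-1).*r) + gamma^N*Qnext;   % N-step return estimate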
ExperienceBufferLength— Experience buffer size
10000 (default) | positive integer
Experience buffer size, specified as a positive integer. During training, the agent updates the actor and critic using a mini-batch of experiences randomly sampled from the buffer.
SampleTime— Sample time of agent
1 (default) | positive scalar
Sample time of agent, specified as a positive scalar.
Within a Simulink environment, the agent gets executed every
SampleTime seconds of simulation time.
Within a MATLAB environment, the agent gets executed every time the environment
advances. In this case, SampleTime is the time interval between
consecutive elements in the output experience returned by sim or train.
DiscountFactor— Discount factor
0.99 (default) | positive scalar less than or equal to 1
Discount factor applied to future rewards during training, specified as a positive scalar less than or equal to 1.
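As a rough rule of thumb (an illustration, not part of the toolbox documentation), the discount factor sets an effective planning horizon: a reward received k steps in the future is weighted by DiscountFactor^k.
gamma = 0.99;                          % DiscountFactor
horizon = 1/(1 - gamma)                % roughly 100 steps contribute meaningfully
halfWeightStep = log(0.5)/log(gamma)   % step at which the reward weight drops to 0.5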
rlDDPGAgent | Deep deterministic policy gradient reinforcement learning agent
This example shows how to create a DDPG agent option object.
Create an rlDDPGAgentOptions object that specifies the mini-batch size.
opt = rlDDPGAgentOptions('MiniBatchSize',48)
opt = 
  rlDDPGAgentOptions with properties:

                           NoiseOptions: [1x1 rl.option.OrnsteinUhlenbeckActionNoise]
                     TargetSmoothFactor: 1.0000e-03
                  TargetUpdateFrequency: 1
    ResetExperienceBufferBeforeTraining: 1
          SaveExperienceBufferWithAgent: 0
                          MiniBatchSize: 48
                    NumStepsToLookAhead: 1
                 ExperienceBufferLength: 10000
                             SampleTime: 1
                         DiscountFactor: 0.9900
You can modify options using dot notation. For example, set the agent sample time to 0.5.
opt.SampleTime = 0.5;
DDPG agents use an Ornstein-Uhlenbeck action noise model for exploration. The OrnsteinUhlenbeckActionNoise object has the following numeric value properties.
| Property | Description |
| InitialAction | Initial value of action for noise model |
| Mean | Noise model mean |
| MeanAttractionConstant | Constant specifying how quickly the noise model output is attracted to the mean |
| VarianceDecayRate | Decay rate of the variance |
| Variance | Noise model variance |
At each sample time step, the noise model is updated using the following formula, where
Ts is the agent sample time.
x(k) = x(k-1) + MeanAttractionConstant.*(Mean - x(k-1)).*Ts + Variance.*randn(size(Mean)).*sqrt(Ts)
At each sample time step, the variance decays as shown in the following code.
decayedVariance = Variance.*(1 - VarianceDecayRate);
Variance = max(decayedVariance,VarianceMin);
You can calculate how many samples it will take for the variance to be halved using this simple formula.
halflife = log(0.5)/log(1-VarianceDecayRate);
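The following sketch (illustrative only; the agent performs these updates internally) applies the update and decay equations above to a scalar action and reports the variance half-life. The parameter values are arbitrary assumptions.
% Assumed noise model parameters for a scalar action
Mean = 0; MeanAttractionConstant = 0.15;
Variance = 0.3; VarianceDecayRate = 1e-5; VarianceMin = 0.01;
Ts = 0.05;      % agent sample time
x  = 0;         % InitialAction

for k = 1:1000
    % Ornstein-Uhlenbeck pull toward the mean plus scaled Gaussian noise
    x = x + MeanAttractionConstant.*(Mean - x).*Ts + Variance.*randn(size(Mean)).*sqrt(Ts);
    % Variance decay, floored at VarianceMin
    decayedVariance = Variance.*(1 - VarianceDecayRate);
    Variance = max(decayedVariance,VarianceMin);
end

halflife = log(0.5)/log(1 - VarianceDecayRate)   % samples until the variance is halved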
For continuous action signals, it is important to set the noise variance appropriately
to encourage exploration. It is common to set Variance*sqrt(Ts) to a value
between 1% and 10% of your action range.
If your agent converges on local optima too quickly, promote agent exploration by
increasing the amount of noise; that is, by increasing the variance. Also, to increase
exploration, you can reduce the VarianceDecayRate.
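The sketch below shows one way to apply the 1% to 10% guideline when choosing the initial variance; the action range, sample time, and target fraction are assumptions for illustration.
% Assumed environment characteristics
actionRange    = 2;      % e.g., action limits of [-1, 1]
Ts             = 0.05;   % agent sample time
targetFraction = 0.05;   % aim for Variance*sqrt(Ts) around 5% of the action range

opt = rlDDPGAgentOptions('SampleTime',Ts);
opt.NoiseOptions.Variance = targetFraction*actionRange/sqrt(Ts);
opt.NoiseOptions.VarianceDecayRate = 1e-5;   % slower decay keeps exploration active longer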
Behavior changed in R2020a
Target update method settings for DDPG agents have changed. The following changes require updates to your code:
The TargetUpdateMethod option has been removed. Now, DDPG agents
determine the target update method based on the
TargetUpdateFrequency and TargetSmoothFactor option values.
The default value of TargetUpdateFrequency has changed from 4 to 1.
To use one of the following target update methods, set the
TargetUpdateFrequency and TargetSmoothFactor properties as indicated.
| Update Method | TargetUpdateFrequency | TargetSmoothFactor |
| Smoothing (default) | 1 | Less than 1 |
| Periodic | Greater than 1 | 1 |
| Periodic smoothing (new method in R2020a) | Greater than 1 | Less than 1 |
The default target update configuration, which is a smoothing update with a
TargetSmoothFactor value of
0.001, remains the same.
This table shows some typical uses of
rlDDPGAgentOptions and how
to update your code to use the new option configuration.
| Not Recommended | Recommended |
| opt = rlDDPGAgentOptions('TargetUpdateMethod',"smoothing"); | opt = rlDDPGAgentOptions; |
| opt = rlDDPGAgentOptions('TargetUpdateMethod',"periodic"); | opt = rlDDPGAgentOptions; opt.TargetUpdateFrequency = 4; opt.TargetSmoothFactor = 1; |
| opt = rlDDPGAgentOptions; opt.TargetUpdateMethod = "periodic"; opt.TargetUpdateFrequency = 5; | opt = rlDDPGAgentOptions; opt.TargetUpdateFrequency = 5; opt.TargetSmoothFactor = 1; |
 Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. Second edition. Adaptive Computation and Machine Learning. Cambridge, Mass: The MIT Press, 2018.