Options for DQN agent
Use an rlDQNAgentOptions object to specify options for deep Q-network (DQN) agents. To create a DQN agent, use rlDQNAgent.
For more information, see Deep Q-Network (DQN) Agents.
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
opt = rlDQNAgentOptions

opt = rlDQNAgentOptions creates an options object for use as an argument when creating a DQN agent using all default settings. You can modify the object properties using dot notation.
UseDoubleDQN — Flag for using double DQN
true (default) | false
Flag for using double DQN for value function target updates, specified as a logical value. For most applications, set this option to true. For more information, see Deep Q-Network (DQN) Agents.
EpsilonGreedyExploration — Options for epsilon-greedy exploration
Options for epsilon-greedy exploration, specified as an
EpsilonGreedyExploration object with the following properties.

|Property|Description|Default Value|
|Epsilon|Probability threshold to either randomly select an action or select the action that maximizes the state-action value function. A larger value of Epsilon means that the agent randomly explores the action space at a higher rate.|1|
|EpsilonMin|Minimum value of Epsilon|0.01|
|EpsilonDecay|Decay rate|0.005|
At the end of each training time step, if Epsilon is greater than EpsilonMin, then it is updated using the following formula.

Epsilon = Epsilon*(1-EpsilonDecay)
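As an illustration, the following sketch applies this decay rule outside the toolbox, starting from the default values listed above:

Epsilon = 1;           % initial exploration probability (default)
EpsilonMin = 0.01;     % exploration floor (default)
EpsilonDecay = 0.005;  % per-step decay rate (default)
for step = 1:1000
    if Epsilon > EpsilonMin
        Epsilon = Epsilon*(1 - EpsilonDecay);
    end
end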
If your agent converges on local optima too quickly, you can promote agent exploration by increasing Epsilon.
To specify exploration options, use dot notation after creating the rlDQNAgentOptions object opt. For example, set the epsilon value to 0.9.
opt.EpsilonGreedyExploration.Epsilon = 0.9;
CriticOptimizerOptions — Critic optimizer options
Critic optimizer options, specified as an
rlOptimizerOptions object. It allows you to specify training parameters of
the critic approximator such as the learning rate and the gradient threshold, as well as the
optimizer algorithm and its parameters. For more information, see rlOptimizerOptions.
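For example, the following sketch adjusts two common optimizer settings (LearnRate and GradientThreshold, both rlOptimizerOptions properties) through dot notation:

opt = rlDQNAgentOptions;
opt.CriticOptimizerOptions.LearnRate = 1e-3;      % critic learning rate
opt.CriticOptimizerOptions.GradientThreshold = 1; % clip gradients above 1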
TargetSmoothFactor — Smoothing factor for target critic updates
1e-3 (default) | positive scalar less than or equal to 1
Smoothing factor for target critic updates, specified as a positive scalar less than or equal to 1. For more information, see Target Update Methods.
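Conceptually, the smoothing method blends the critic parameters into the target critic parameters at each update. A pseudocode sketch, not the toolbox internals, with hypothetical parameter values:

tau = 1e-3;                    % TargetSmoothFactor
criticParams = [0.5 -1.2 0.3]; % hypothetical critic parameters
targetParams = [0.4 -1.0 0.2]; % hypothetical target critic parameters
targetParams = tau*criticParams + (1 - tau)*targetParams;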
TargetUpdateFrequency — Number of steps between target critic updates
1 (default) | positive integer
Number of steps between target critic updates, specified as a positive integer. For more information, see Target Update Methods.
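For example, the following sketch performs a target update every 4 steps while keeping the default smoothing factor, which yields the periodic smoothing method described in the version history below:

opt = rlDQNAgentOptions;
opt.TargetUpdateFrequency = 4; % perform a target update every 4 steps
% TargetSmoothFactor keeps its 1e-3 default, so each update smooths
% rather than directly copies the critic parameters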
ResetExperienceBufferBeforeTraining — Option for clearing the experience buffer
true (default) | false
Option for clearing the experience buffer before training, specified as a logical value.
SequenceLength — Maximum batch-training trajectory length when using RNN
1 (default) | positive integer
Maximum batch-training trajectory length when using a recurrent neural network for
the critic, specified as a positive integer. This value must be greater than
1 when using a recurrent neural network for the critic and 1 otherwise.
MiniBatchSize — Size of random experience mini-batch
64 (default) | positive integer
Size of random experience mini-batch, specified as a positive integer. During each training episode, the agent randomly samples experiences from the experience buffer when computing gradients for updating the critic properties. Large mini-batches reduce the variance when computing gradients but increase the computational effort.
When using a recurrent neural network for the critic,
MiniBatchSize is the number of experience trajectories in a
batch, where each trajectory has length equal to SequenceLength.
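For example, the following sketch sets batch options for a recurrent critic; each gradient computation then samples 32 trajectories of 20 consecutive experiences each:

opt = rlDQNAgentOptions;
opt.SequenceLength = 20; % trajectory length; must exceed 1 for an RNN critic
opt.MiniBatchSize = 32;  % number of trajectories per mini-batch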
NumStepsToLookAhead — Number of future rewards used to estimate the value of the policy
1 (default) | positive integer
Number of future rewards used to estimate the value of the policy, specified as a positive integer. For more information, see chapter 7 of [1].
N-step Q-learning is not supported when using a recurrent neural network for the
critic. In this case, NumStepsToLookAhead must be 1.
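As a standalone illustration of the N-step estimate described in [1], the following sketch accumulates N discounted rewards and bootstraps from the maximum next-state Q-value; rewards and maxQnext are hypothetical placeholders:

N = 3;             % NumStepsToLookAhead
gamma = 0.99;      % discount factor
rewards = [1 0 2]; % hypothetical rewards r(t), ..., r(t+N-1)
maxQnext = 5;      % hypothetical max over actions of Q(s(t+N), a)
G = 0;
for k = 1:N
    G = G + gamma^(k-1)*rewards(k); % discounted N-step reward sum
end
G = G + gamma^N*maxQnext;           % bootstrapped tail estimate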
ExperienceBufferLength — Experience buffer size
10000 (default) | positive integer
Experience buffer size, specified as a positive integer. During training, the agent computes updates using a mini-batch of experiences randomly sampled from the buffer.
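For example, the following sketch enlarges the buffer, trading memory for a longer history of experiences to sample from:

opt = rlDQNAgentOptions;
opt.ExperienceBufferLength = 1e6; % retain up to one million experiences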
SampleTime — Sample time of agent
1 (default) | positive scalar | -1

Sample time of agent, specified as a positive scalar or as
-1. Setting this parameter to
-1 allows for event-based simulations.

Within a Simulink® environment, the RL Agent block
in which the agent is specified executes every SampleTime seconds
of simulation time. If SampleTime is -1, the
block inherits the sample time from its parent subsystem.

Within a MATLAB® environment, the agent is executed every time the environment advances. In
this case, SampleTime is the time interval between consecutive
elements in the output experience returned by sim or
train. If SampleTime is
-1, the time interval between
consecutive elements in the returned output experience reflects the timing of the event
that triggers the agent execution.
DiscountFactor — Discount factor
0.99 (default) | positive scalar less than or equal to 1
Discount factor applied to future rewards during training, specified as a positive scalar less than or equal to 1.
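As a rule of thumb (an interpretation, not part of the toolbox documentation), a discount factor gamma weights rewards over an effective horizon of roughly 1/(1 - gamma) time steps:

gamma = 0.99;           % default DiscountFactor
horizon = 1/(1 - gamma) % approximately 100 steps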
|rlDQNAgent|Deep Q-network (DQN) reinforcement learning agent|
Create DQN Agent Options Object
This example shows how to create a DQN agent options object.
Create an rlDQNAgentOptions object that specifies the agent mini-batch size.
opt = rlDQNAgentOptions('MiniBatchSize',48)
opt = 
  rlDQNAgentOptions with properties:

                           UseDoubleDQN: 1
               EpsilonGreedyExploration: [1x1 rl.option.EpsilonGreedyExploration]
                 CriticOptimizerOptions: [1x1 rl.option.rlOptimizerOptions]
                     TargetSmoothFactor: 1.0000e-03
                  TargetUpdateFrequency: 1
    ResetExperienceBufferBeforeTraining: 1
                         SequenceLength: 1
                          MiniBatchSize: 48
                    NumStepsToLookAhead: 1
                 ExperienceBufferLength: 10000
                             SampleTime: 1
                         DiscountFactor: 0.9900
                             InfoToSave: [1x1 struct]
You can modify options using dot notation. For example, set the agent sample time to 0.5.
opt.SampleTime = 0.5;
[1] Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. Second edition. Adaptive Computation and Machine Learning. Cambridge, MA: The MIT Press, 2018.
Version History

Introduced in R2019a
R2020a: Target update method settings for DQN agents have changed
Target update method settings for DQN agents have changed. The following changes require updates to your code:
The TargetUpdateMethod option has been removed. Now, DQN agents determine the target update method based on the TargetUpdateFrequency and TargetSmoothFactor property values.
The default value of TargetUpdateFrequency has changed from 4 to 1.
To use one of the following target update methods, set the TargetUpdateFrequency and TargetSmoothFactor properties as indicated.
|Update Method|TargetUpdateFrequency|TargetSmoothFactor|
|Smoothing|1|Less than 1|
|Periodic|Greater than 1|1|
|Periodic smoothing (new method in R2020a)|Greater than 1|Less than 1|
The default target update configuration, which is a smoothing update with a TargetSmoothFactor value of 0.001, remains the same.
The following sketch shows a typical use of the removed TargetUpdateMethod option and how to update your code to use the new option configuration.
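A minimal migration sketch, assuming code that previously selected the periodic method through the removed option:

% Before R2020a (no longer works):
% opt = rlDQNAgentOptions('TargetUpdateMethod',"periodic");

% R2020a and later: select the periodic method through the remaining
% two properties instead.
opt = rlDQNAgentOptions;
opt.TargetUpdateFrequency = 4; % update the target critic every 4 steps
opt.TargetSmoothFactor = 1;    % copy the critic parameters directly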