rlPGAgentOptions
Options for PG agent
Description
Use an rlPGAgentOptions object to specify options when creating a policy gradient (PG) agent. To create a PG agent, use rlPGAgent.
For more information on PG agents, see REINFORCE Policy Gradient (PG) Agent.
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
Creation
Description
opt = rlPGAgentOptions creates an rlPGAgentOptions object for use as an argument when creating a PG agent using all default settings. You can modify the object properties using dot notation.
opt = rlPGAgentOptions(Name=Value) creates the options object opt and sets its properties using one or more name-value arguments. For example, rlPGAgentOptions(DiscountFactor=0.95) creates an options object with a discount factor of 0.95. You can specify multiple name-value arguments.
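For example, a minimal sketch that creates an options object with several nondefault settings and assigns it to a default PG agent, assuming the predefined "CartPole-Discrete" environment is available in your installation:

% Create a predefined environment (assumed available) to obtain
% observation and action specifications.
env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

% Create an options object with several nondefault settings.
opt = rlPGAgentOptions( ...
    DiscountFactor=0.95, ...
    EntropyLossWeight=0.01);

% Create a default PG agent and assign the options using dot notation.
agent = rlPGAgent(obsInfo,actInfo);
agent.AgentOptions = opt;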
Properties
Sample time of the agent, specified as a positive scalar or as -1
.
Within a MATLAB® environment, the agent is executed every time the environment advances, so SampleTime does not affect the timing of the agent execution.
If SampleTime
is set to -1
, in MATLAB environments, the time interval between consecutive elements in the
returned output experience is considered equal to 1
.
Within a Simulink® environment, the RL Agent block
that uses the agent object executes every SampleTime
seconds of
simulation time. If SampleTime
is set to -1, the block inherits the sample time from its input signals. Set
SampleTime
to -1
when the block is a child
of an event-driven subsystem.
Set SampleTime
to a positive scalar when the block is not a child
of an event-driven subsystem. Doing so ensures that the block executes at appropriate
intervals when input signal sample times change due to model variations. If
SampleTime
is a positive scalar, this value is also the time
interval between consecutive elements in the output experience returned by sim
or
train
,
regardless of the type of environment.
If SampleTime
is set to -1
, in Simulink environments, the time interval between consecutive elements in the
returned output experience reflects the timing of the events that trigger the RL Agent block
execution.
This property is shared between the agent and the agent options object within the agent. If you change this property in the agent options object, it also changes in the agent, and vice versa.
Example: SampleTime=-1
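For example, a minimal sketch that sets the sample time for the two typical Simulink cases described above:

opt = rlPGAgentOptions;

% Inherit the sample time from the input signals (use this when the
% RL Agent block is a child of an event-driven subsystem).
opt.SampleTime = -1;

% Otherwise, execute the block every 0.1 seconds of simulation time.
opt.SampleTime = 0.1;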
Discount factor applied to future rewards during training, specified as a nonnegative scalar less than or equal to 1.
Example: DiscountFactor=0.9
Entropy loss weight, specified as a scalar value between 0
and
1
. A higher entropy loss weight value promotes agent exploration by
applying a penalty for being too certain about which action to take. Doing so can help the
agent move out of local optima.
When gradients are computed during training, an additional gradient component is computed for minimizing this loss function.
Example: EntropyLossWeight=0.01
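For example, a minimal sketch that raises the entropy loss weight to encourage exploration (the value shown is illustrative):

opt = rlPGAgentOptions;

% A larger entropy penalty discourages the policy from becoming
% overly certain about a single action too early in training.
opt.EntropyLossWeight = 0.05;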
Option to use baseline for learning, specified as a logical value. When
UseBaseline
is true
, you must specify a critic
network as the baseline function approximator.
In general, PG agents work better without a baseline for simpler problems and when using a small actor network.
Example: UseBaseline=false
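For example, assuming actor is an actor approximator and critic is a baseline critic approximator that you have already created for your environment, a sketch of creating PG agents with and without a baseline:

% With a baseline: pass both the actor and the baseline critic.
% (actor and critic are assumed to exist in the workspace.)
opt = rlPGAgentOptions(UseBaseline=true);
agentWithBaseline = rlPGAgent(actor,critic,opt);

% Without a baseline: pass the actor only.
opt.UseBaseline = false;
agentNoBaseline = rlPGAgent(actor,opt);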
Actor optimizer options, specified as an rlOptimizerOptions
object. Use it to specify training parameters of the actor approximator, such as the learning rate and gradient threshold, as well as the optimizer algorithm and its parameters. For more information, see rlOptimizerOptions
and rlOptimizer
.
Example: ActorOptimizerOptions =
rlOptimizerOptions(LearnRate=2e-3)
Critic optimizer options, specified as an rlOptimizerOptions
object. Use it to specify training parameters of the critic approximator, such as the learning rate and gradient threshold, as well as the optimizer algorithm and its parameters. For more information, see rlOptimizerOptions
and rlOptimizer
.
Example: CriticOptimizerOptions =
rlOptimizerOptions(LearnRate=5e-3)
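For example, a minimal sketch that sets different learning rates and a gradient threshold for the actor and the critic (baseline) approximators:

opt = rlPGAgentOptions;

% Configure the actor optimizer.
opt.ActorOptimizerOptions = rlOptimizerOptions( ...
    LearnRate=2e-3, ...
    GradientThreshold=1);

% Configure the critic (baseline) optimizer.
opt.CriticOptimizerOptions = rlOptimizerOptions( ...
    LearnRate=5e-3, ...
    GradientThreshold=1);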
Options to save additional agent data, specified as a structure containing a
field named Optimizer
.
You can save an agent object using one of these methods:
- Use the save command.
- Specify saveAgentCriteria and saveAgentValue in an rlTrainingOptions object.
- Specify an appropriate logging function within a FileLogger object.
When you save an agent using any method, the fields in the
InfoToSave
structure determine whether the
corresponding data saves with the agent. For example, if you set the
PolicyState
field to true
,
then the policy state saves along with the agent.
You can modify the InfoToSave
property only after you
create the agent options object.
Example: options.InfoToSave.Optimizer=true
Option to save the actor and critic optimizers,
specified as a logical value. If you set the
Optimizer
field to
false
, then the actor and
critic optimizers (which are hidden properties of
the agent and can contain internal states) are not saved along with the agent, thereby saving disk space and memory. However, when the optimizers
contain internal states, the state of the saved
agent is not identical to the state of the original
agent.
Example: true
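For example, assuming agent is a PG agent you have already created, a sketch that excludes the optimizers from the saved agent:

% agent is assumed to be a previously created rlPGAgent object.
agent.AgentOptions.InfoToSave.Optimizer = false;

% Save the agent; the actor and critic optimizers are not stored.
save("savedAgent.mat","agent")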
Object Functions
rlPGAgent | Policy gradient (PG) reinforcement learning agent
Examples
This example shows how to create and modify a PG agent options object.
Create a PG agent options object, specifying the discount factor.
opt = rlPGAgentOptions(DiscountFactor=0.9)
opt = 
  rlPGAgentOptions with properties:

                SampleTime: 1
            DiscountFactor: 0.9000
         EntropyLossWeight: 0
               UseBaseline: 1
     ActorOptimizerOptions: [1×1 rl.option.rlOptimizerOptions]
    CriticOptimizerOptions: [1×1 rl.option.rlOptimizerOptions]
                InfoToSave: [1×1 struct]
You can modify options using dot notation. For example, set the agent sample time to 0.5
.
opt.SampleTime = 0.5;
Version History
Introduced in R2019a

The property UseDeterministicExploitation
of the
rlPGAgentOptions
object will be removed in a future release. Use the
UseExplorationPolicy
property of rlPGAgent
instead.
Previously, you set UseDeterministicExploitation
as follows.
Force the agent to always select the action with maximum likelihood, thereby using a greedy deterministic policy for simulation and deployment.
agent.AgentOptions.UseDeterministicExploitation = true;
Allow the agent to select its action by sampling its probability distribution for simulation and policy deployment, thereby using a stochastic policy that explores the observation space.
agent.AgentOptions.UseDeterministicExploitation = false;
Starting in R2022a, set UseExplorationPolicy
as follows.
Force the agent to always select the action with maximum likelihood, thereby using a greedy deterministic policy for simulation and deployment.
agent.UseExplorationPolicy = false;
Allow the agent to select its action by sampling its probability distribution for simulation and policy deployment, thereby using a stochastic policy that explores the observation space.
agent.UseExplorationPolicy = true;
Similarly to UseDeterministicExploitation
,
UseExplorationPolicy
affects only simulation and deployment; it does
not affect training.