rlDiscreteCategoricalActor
Stochastic categorical actor with a discrete action space for reinforcement learning agents
Since R2022a
Description
This object implements a function approximator to be used as a stochastic actor within a reinforcement learning agent with a discrete action space. A discrete categorical actor takes an environment observation as input and returns as output a random action sampled from a categorical (also known as Multinoulli) probability distribution, thereby implementing a parametrized stochastic policy. After you create an rlDiscreteCategoricalActor object, use it to create a suitable agent, such as rlACAgent or rlPGAgent. For more information on creating actors and critics, see Create Policies and Value Functions.
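For example, assuming actor is an rlDiscreteCategoricalActor that has already been created (as described in the Creation section below) and obsInfo is its observation specification, the following sketch illustrates the stochastic behavior: evaluate returns the action probabilities computed by the approximator, and getAction returns an action sampled from that distribution. The variable names here are illustrative assumptions.

obs = rand(obsInfo.Dimension);   % example observation

% Probability of each possible discrete action
prob = evaluate(actor,{obs});
prob{1}

% Random action sampled from the categorical distribution above
act = getAction(actor,{obs});
act{1}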
Creation
Syntax
actor = rlDiscreteCategoricalActor(net,observationInfo,actionInfo)
actor = rlDiscreteCategoricalActor({basisFcn,W0},observationInfo,actionInfo)
actor = rlDiscreteCategoricalActor(___,Name=Value)
Description
actor = rlDiscreteCategoricalActor(net,observationInfo,actionInfo) creates a stochastic actor with a discrete action space, using the deep neural network net as the underlying approximation model. For this actor, actionInfo must specify a discrete action space. The network input layers are automatically associated with the environment observation channels according to the dimension specifications in observationInfo. The network must have a single output layer with as many elements as the number of possible discrete actions, as specified in actionInfo (each element of the output layer must return the probability of one action). This function sets the ObservationInfo and ActionInfo properties of actor to the inputs observationInfo and actionInfo, respectively.
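For example, the following is a minimal sketch of this syntax. The observation and action specifications and the layer sizes are illustrative assumptions; the network output layer has one element per possible discrete action.

% Example specifications: 4-dimensional observation, two possible actions
obsInfo = rlNumericSpec([4 1]);
actInfo = rlFiniteSetSpec([-1 1]);

% Network with one output element per possible discrete action
net = [
    featureInputLayer(prod(obsInfo.Dimension))
    fullyConnectedLayer(16)
    reluLayer
    fullyConnectedLayer(numel(actInfo.Elements))
    ];
net = dlnetwork(net);

actor = rlDiscreteCategoricalActor(net,obsInfo,actInfo);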
actor = rlDiscreteCategoricalActor({basisFcn,W0},observationInfo,actionInfo) creates a stochastic actor with a discrete action space, using a custom basis function as the underlying approximation model. The first input argument is a two-element cell array whose first element is the handle basisFcn to a custom basis function and whose second element is the initial weight matrix W0. This function sets the ObservationInfo and ActionInfo properties of actor to the inputs observationInfo and actionInfo, respectively.
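For example, the following is a minimal sketch of this syntax. The specifications and the basis function are illustrative assumptions; W0 is sized here with one row per basis-function output and one column per possible discrete action.

% Example specifications: 2-dimensional observation, three possible actions
obsInfo = rlNumericSpec([2 1]);
actInfo = rlFiniteSetSpec([-1 0 1]);

% Custom basis function returning a column vector of features
basisFcn = @(obs) [obs(1); obs(2); obs(1)*obs(2); 1];

% Initial weights: one row per basis-function output,
% one column per possible discrete action
W0 = rand(4,numel(actInfo.Elements));

actor = rlDiscreteCategoricalActor({basisFcn,W0},obsInfo,actInfo);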
actor = rlDiscreteCategoricalActor(___,Name=Value) specifies the names of the observation input layers (for network-based approximators) or sets the UseDevice property using one or more name-value arguments. Specifying the input layer names allows you to explicitly associate the layers of your network approximator with specific environment channels. For all types of approximators, you can specify the device where computations for actor are executed, for example UseDevice="gpu".
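For example, the following sketch reuses the network-based setup from the earlier sketch but names the observation input layer so that it can be explicitly associated with the observation channel (shown here with the ObservationInputNames argument), and runs the actor computations on a GPU. The layer name "obsInLyr" is an illustrative assumption, and the GPU option requires a supported device.

% Observation input layer with an explicit name (obsInfo and actInfo as above)
net = [
    featureInputLayer(prod(obsInfo.Dimension),Name="obsInLyr")
    fullyConnectedLayer(16)
    reluLayer
    fullyConnectedLayer(numel(actInfo.Elements))
    ];
net = dlnetwork(net);

% Associate the named layer with the observation channel and use the GPU
actor = rlDiscreteCategoricalActor(net,obsInfo,actInfo, ...
    ObservationInputNames="obsInLyr", ...
    UseDevice="gpu");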
Input Arguments
Properties
Object Functions
rlACAgent | Actor-critic (AC) reinforcement learning agent
rlPGAgent | Policy gradient (PG) reinforcement learning agent
rlPPOAgent | Proximal policy optimization (PPO) reinforcement learning agent
rlSACAgent | Soft actor-critic (SAC) reinforcement learning agent
getAction | Obtain action from agent, actor, or policy object given environment observations
evaluate | Evaluate function approximator object given observation (or observation-action) input data
gradient | (Not recommended) Evaluate gradient of function approximator object given observation and action input data
accelerate | (Not recommended) Option to accelerate computation of gradient for approximator object based on neural network
getLearnableParameters | Obtain learnable parameter values from agent, function approximator, or policy object
setLearnableParameters | Set learnable parameter values of agent, function approximator, or policy object
setModel | Set approximation model in function approximator object
getModel | Get approximation model from function approximator object
Examples
Version History
Introduced in R2022a