rlDeterministicActorRepresentation
(Not recommended) Deterministic actor representation for reinforcement learning agents
rlDeterministicActorRepresentation is not recommended. Use rlContinuousDeterministicActor instead. For more information, see "rlDeterministicActorRepresentation is not recommended."
Description
This object implements a function approximator to be used as a deterministic actor
within a reinforcement learning agent with a continuous action space. A
deterministic actor takes observations as inputs and returns as outputs the action that
maximizes the expected cumulative long-term reward, thereby implementing a deterministic
policy. After you create an rlDeterministicActorRepresentation object, use it
to create a suitable agent, such as an rlDDPGAgent agent. For
more information on creating representations, see Create Policies and Value Functions.
Creation
Syntax
Description
actor = rlDeterministicActorRepresentation(net,observationInfo,actionInfo,'Observation',obsName,'Action',actName) creates a deterministic actor using the deep neural network net as approximator. This syntax sets the ObservationInfo and ActionInfo properties of actor to the inputs observationInfo and actionInfo, which contain the specifications for observations and actions, respectively. actionInfo must specify a continuous action space; discrete action spaces are not supported. obsName must contain the names of the input layers of net that are associated with the observation specifications. The action names actName must be the names of the output layers of net that are associated with the action specifications.
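For example, a minimal sketch of this syntax (the dimensions, layer names, and network architecture below are illustrative assumptions, not requirements of the interface):

% Specifications: 4-D continuous observation, 2-D continuous action in [-1,1]
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([2 1],'LowerLimit',-1,'UpperLimit',1);

% Network mapping observations to actions; the input and output layer
% names must match those passed as 'Observation' and 'Action'
net = [
    featureInputLayer(4,'Normalization','none','Name','state')
    fullyConnectedLayer(16,'Name','fc')
    reluLayer('Name','relu')
    fullyConnectedLayer(2,'Name','fcAct')
    tanhLayer('Name','action')   % bounds the action within [-1,1]
    ];

actor = rlDeterministicActorRepresentation(net,obsInfo,actInfo, ...
    'Observation',{'state'},'Action',{'action'});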
actor = rlDeterministicActorRepresentation({basisFcn,W0},observationInfo,actionInfo) creates a deterministic actor using a custom basis function as underlying approximator. The first input argument is a two-element cell array in which the first element contains the handle basisFcn to a custom basis function, and the second element contains the initial weight matrix W0. This syntax sets the ObservationInfo and ActionInfo properties of actor to the inputs observationInfo and actionInfo, respectively.
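As a sketch of this syntax: the actor computes the action as W'*B, where B is the column vector returned by basisFcn, so W0 needs one row per basis output and one column per action dimension. The feature map and dimensions below are assumptions chosen for illustration:

% Specifications: 3-D observation, 2-D action
obsInfo = rlNumericSpec([3 1]);
actInfo = rlNumericSpec([2 1]);

% Custom basis function returning a 6-element feature column vector
basisFcn = @(obs) [obs; obs.^2];

% Initial weights: 6 basis outputs by 2 action dimensions
W0 = rand(6,2);

actor = rlDeterministicActorRepresentation({basisFcn,W0},obsInfo,actInfo);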
actor = rlDeterministicActorRepresentation(___,options) creates a deterministic actor using the additional options set options, which is an rlRepresentationOptions object. This syntax sets the Options property of actor to the options input argument. You can use this syntax with any of the previous input-argument combinations.
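For instance, continuing the deep-network sketch above (the option values shown are arbitrary choices, not defaults):

% Custom optimization options for the representation
opts = rlRepresentationOptions('LearnRate',1e-3,'GradientThreshold',1);
actor = rlDeterministicActorRepresentation(net,obsInfo,actInfo, ...
    'Observation',{'state'},'Action',{'action'},opts);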
Input Arguments
Properties
Object Functions
rlDDPGAgent | Deep deterministic policy gradient (DDPG) reinforcement learning agent
rlTD3Agent | Twin-delayed deep deterministic (TD3) policy gradient reinforcement learning agent
getAction | Obtain action from agent, actor, or policy object given environment observations
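Continuing the deep-network sketch above, a quick check of the actor with getAction (the observation is a random placeholder):

% Evaluate the deterministic policy for one observation; for
% representation objects the action is returned in a cell array
act = getAction(actor,{rand(4,1)});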