rlACAgent

Actor-critic reinforcement learning agent

Description

Actor-critic (AC) agents implement actor-critic algorithms such as A2C and A3C, which are model-free, online, on-policy reinforcement learning methods. The actor-critic agent optimizes the policy (actor) directly and uses a critic to estimate the return or future rewards. The action space can be either discrete or continuous.

For more information, see Actor-Critic Agents. For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

Creation

Description

Create Agent from Observation and Action Specifications

agent = rlACAgent(observationInfo,actionInfo) creates an actor-critic agent for an environment with the given observation and action specifications, using default initialization options. The actor and critic in the agent use default deep neural networks built from the observation specification observationInfo and the action specification actionInfo. The ObservationInfo and ActionInfo properties of agent are set to the observationInfo and actionInfo input arguments, respectively.
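
For example, a minimal sketch of this syntax, assuming you take the specifications from one of the predefined environments (the environment choice here is only an illustration):

% Sketch: create a default AC agent from environment specifications.
env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
agent = rlACAgent(obsInfo,actInfo);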

agent = rlACAgent(observationInfo,actionInfo,initOpts) creates an actor-critic agent for an environment with the given observation and action specifications. The agent uses default networks in which each hidden fully connected layer has the number of units specified in the initOpts object. Actor-critic agents do not support recurrent neural networks. For more information on the initialization options, see rlAgentInitializationOptions.
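
For example, a minimal sketch, assuming you reuse the specifications obtained above and want 64 units (an illustrative value) in each hidden fully connected layer:

% Sketch: create a default AC agent whose hidden layers have 64 units.
initOpts = rlAgentInitializationOptions(NumHiddenUnit=64);
agent = rlACAgent(obsInfo,actInfo,initOpts);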

Create Agent from Actor and Critic

agent = rlACAgent(actor,critic) creates an actor-critic agent with the specified actor and critic, using the default options for the agent.

Specify Agent Options

agent = rlACAgent(___,agentOptions) creates an actor-critic agent and sets the AgentOptions property to the agentOptions input argument. Use this syntax after any of the input arguments in the previous syntaxes.
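
For example, a minimal sketch, assuming existing observation and action specifications and illustrative option values:

% Sketch: create an AC agent and set its options at creation time.
agentOpts = rlACAgentOptions(NumStepsToLookAhead=32,DiscountFactor=0.99);
agent = rlACAgent(obsInfo,actInfo,agentOpts);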

Input Arguments

Agent initialization options, specified as an rlAgentInitializationOptions object. Actor-critic agents do not support recurrent neural networks.

Actor that implements the policy, specified as an rlDiscreteCategoricalActor or rlContinuousGaussianActor function approximator object. For more information on creating actor approximators, see Create Policies and Value Functions.

Critic that estimates the discounted long-term reward, specified as an rlValueFunction object. For more information on creating critic approximators, see Create Policies and Value Functions.

Properties

Observation specifications, specified as a reinforcement learning specification object or an array of specification objects defining properties such as dimensions, data type, and names of the observation signals.

If you create the agent by specifying an actor and critic, the value of ObservationInfo matches the value specified in the actor and critic objects.

You can extract observationInfo from an existing environment or agent using getObservationInfo. You can also construct the specifications manually using rlFiniteSetSpec or rlNumericSpec.
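
For example, a minimal sketch of a manually constructed specification for a hypothetical four-element continuous observation vector:

% Sketch: define a continuous observation specification manually.
obsInfo = rlNumericSpec([4 1],LowerLimit=-inf,UpperLimit=inf);
obsInfo.Name = "observations";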

Action specifications, specified as a reinforcement learning specification object defining properties such as dimensions, data type, and names of the action signals.

For a discrete action space, you must specify actionInfo as an rlFiniteSetSpec object.

For a continuous action space, you must specify actionInfo as an rlNumericSpec object.

If you create the agent by specifying an actor and critic, the value of ActionInfo matches the value specified in the actor and critic objects.

You can extract actionInfo from an existing environment or agent using getActionInfo. You can also construct the specification manually using rlFiniteSetSpec or rlNumericSpec.
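
For example, a minimal sketch of a manually constructed specification for a hypothetical discrete action with two possible force values:

% Sketch: define a discrete action specification manually.
actInfo = rlFiniteSetSpec([-10 10]);
actInfo.Name = "force";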

Agent options, specified as an rlACAgentOptions object.

Option to use exploration policy when selecting actions, specified as one of the following logical values.

  • true — Use the base agent exploration policy when selecting actions in sim and generatePolicyFunction. In this case, the agent selects its actions by sampling its probability distribution. The policy is therefore stochastic, and the agent explores its observation space.

  • false — Use the base agent greedy policy (the action with maximum likelihood) when selecting actions in sim and generatePolicyFunction. In this case, the simulated agent and generated policy behave deterministically.

Note

This option affects only simulation and deployment; it does not affect training.
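
For example, a minimal sketch, assuming agent and env already exist:

% Sketch: compare stochastic and greedy behavior at simulation time.
agent.UseExplorationPolicy = true;   % sample actions from the policy distribution
expStochastic = sim(env,agent);
agent.UseExplorationPolicy = false;  % always pick the maximum-likelihood action
expGreedy = sim(env,agent);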

Sample time of agent, specified as a positive scalar or as -1. Setting this parameter to -1 allows for event-based simulations. The value of SampleTime matches the value specified in AgentOptions.

Within a Simulink® environment, the RL Agent block in which the agent is specified executes every SampleTime seconds of simulation time. If SampleTime is -1, the block inherits the sample time from its parent subsystem.

Within a MATLAB® environment, the agent is executed every time the environment advances. In this case, SampleTime is the time interval between consecutive elements in the output experience returned by sim or train. If SampleTime is -1, the time interval between consecutive elements in the returned output experience reflects the timing of the event that triggers the agent execution.
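
For example, a minimal sketch, assuming existing observation and action specifications and an illustrative sample time:

% Sketch: set the agent sample time through the agent options.
agentOpts = rlACAgentOptions(SampleTime=0.1);
agent = rlACAgent(obsInfo,actInfo,agentOpts);

% Alternatively, modify an existing agent using dot notation.
agent.AgentOptions.SampleTime = 0.1;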

Object Functions

train - Train reinforcement learning agents within a specified environment
sim - Simulate trained reinforcement learning agents within specified environment
getAction - Obtain action from agent, actor, or policy object given environment observations
getActor - Get actor from reinforcement learning agent
setActor - Set actor of reinforcement learning agent
getCritic - Get critic from reinforcement learning agent
setCritic - Set critic of reinforcement learning agent
generatePolicyFunction - Generate function that evaluates policy of an agent or policy object

Examples

Create an environment with a discrete action space, and obtain its observation and action specifications. For this example, load the environment used in the example Create Agent Using Deep Network Designer and Train Using Image Observations. This environment has two observations: a 50-by-50 grayscale image and a scalar (the angular velocity of the pendulum). The action is a scalar with five possible elements (a torque of either -2, -1, 0, 1, or 2 Nm applied to a swinging pole).

env = rlPredefinedEnv("SimplePendulumWithImage-Discrete");

Obtain observation and action specifications.

obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

The agent creation function initializes the actor and critic networks randomly. Ensure reproducibility by fixing the seed of the random generator.

rng(0)

Create an actor-critic agent from the environment observation and action specifications.

agent = rlACAgent(obsInfo,actInfo);

To check your agent, use getAction to return the action from random observations.

getAction(agent,{rand(obsInfo(1).Dimension),rand(obsInfo(2).Dimension)})
ans = 1x1 cell array
    {[-2]}

You can now test and train the agent within the environment. You can also use getActor and getCritic to extract the actor and critic, respectively, and getModel to extract the approximator model (by default a deep neural network) from the actor or critic.
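
For example, a minimal sketch of these extraction steps:

% Sketch: extract the actor and critic, then their neural network models.
actor = getActor(agent);
critic = getCritic(agent);
actorNet = getModel(actor);
criticNet = getModel(critic);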

Create an environment with a continuous action space and obtain its observation and action specifications. For this example, load the environment used in the example Train DDPG Agent to Swing Up and Balance Pendulum with Image Observation. This environment has two observations: a 50-by-50 grayscale image and a scalar (the angular velocity of the pendulum). The action is a scalar representing a torque ranging continuously from -2 to 2 Nm.

% load predefined environment
env = rlPredefinedEnv("SimplePendulumWithImage-Continuous");

% obtain observation and action specifications
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

Create an agent initialization option object, specifying that each hidden fully connected layer in the network must have 128 neurons (instead of the default number, 256). Actor-critic agents do not support recurrent networks, so setting the UseRNN option to true generates an error when the agent is created.

initOpts = rlAgentInitializationOptions(NumHiddenUnit=128);

The agent creation function initializes the actor and critic networks randomly. You can ensure reproducibility by fixing the seed of the random generator.

rng(0)

Create an actor-critic agent from the environment observation and action specifications.

agent = rlACAgent(obsInfo,actInfo,initOpts);

Extract the deep neural networks from both the agent actor and critic.

actorNet = getModel(getActor(agent));
criticNet = getModel(getCritic(agent));

Display the layers of the critic network, and verify that each hidden fully connected layer has 128 neurons.

criticNet.Layers
ans = 
  11x1 Layer array with layers:

     1   'concat'         Concatenation     Concatenation of 2 inputs along dimension 1
     2   'relu_body'      ReLU              ReLU
     3   'fc_body'        Fully Connected   128 fully connected layer
     4   'body_output'    ReLU              ReLU
     5   'input_1'        Image Input       50x50x1 images
     6   'conv_1'         2-D Convolution   64 3x3x1 convolutions with stride [1  1] and padding [0  0  0  0]
     7   'relu_input_1'   ReLU              ReLU
     8   'fc_1'           Fully Connected   128 fully connected layer
     9   'input_2'        Feature Input     1 features
    10   'fc_2'           Fully Connected   128 fully connected layer
    11   'output'         Fully Connected   1 fully connected layer

Plot the actor and critic networks.

plot(layerGraph(actorNet))

Figure contains an axes object. The axes object contains an object of type graphplot.

plot(layerGraph(criticNet))

Figure contains an axes object. The axes object contains an object of type graphplot.

To check your agent, use getAction to return the action from a random observation.

getAction(agent,{rand(obsInfo(1).Dimension),rand(obsInfo(2).Dimension)})
ans = 1x1 cell array
    {[0.9228]}

You can now test and train the agent within the environment.

Create an environment with a discrete action space and obtain its observation and action specifications. For this example, load the environment used in the example Train DQN Agent to Balance Cart-Pole System. This environment has a four-dimensional observation vector (cart position and velocity, pole angle, and pole angle derivative), and a scalar action with two possible elements (a force of either -10 or +10 N applied on the cart).

env = rlPredefinedEnv("CartPole-Discrete");

Obtain observation and action specifications.

obsInfo = getObservationInfo(env)
obsInfo = 
  rlNumericSpec with properties:

     LowerLimit: -Inf
     UpperLimit: Inf
           Name: "CartPole States"
    Description: "x, dx, theta, dtheta"
      Dimension: [4 1]
       DataType: "double"

actInfo = getActionInfo(env)
actInfo = 
  rlFiniteSetSpec with properties:

       Elements: [-10 10]
           Name: "CartPole Action"
    Description: [0x0 string]
      Dimension: [1 1]
       DataType: "double"

The agent creation function initializes the actor and critic networks randomly. You can ensure reproducibility by fixing the seed of the random generator.

rng(0)

For actor-critic agents, the critic estimates a value function. Therefore, it must take the observation signal as input and return a scalar value.

To approximate the value function within the critic, use a deep neural network. Define the network as an array of layer objects. Get the dimensions of the observation space from the environment specification objects.

cnet = [
    featureInputLayer(prod(obsInfo.Dimension))
    fullyConnectedLayer(50)
    reluLayer
    fullyConnectedLayer(1)];

Convert the network to a dlnetwork object, and display the number of weights.

cnet = dlnetwork(cnet);
summary(cnet)
   Initialized: true

   Number of learnables: 301

   Inputs:
      1   'input'   4 features

Create the critic. Actor-critic agents use an rlValueFunction object to implement the critic.

critic = rlValueFunction(cnet,obsInfo);

Check your critic with a random observation input.

getValue(critic,{rand(obsInfo.Dimension)})
ans = single
    -0.1411

Create a deep neural network to be used as the approximation model within the actor. For actor-critic agents, the actor executes a stochastic policy, which for discrete action spaces is implemented by a discrete categorical actor. In this case, the network must take the observation signal as input and return a probability for each action. Therefore, the output layer must have as many elements as the number of possible actions.

anet = [
    featureInputLayer(prod(obsInfo.Dimension))
    fullyConnectedLayer(50)
    reluLayer
    fullyConnectedLayer(numel(actInfo.Elements))];

Convert the network to a dlnetwork object, and display the number of weights.

anet = dlnetwork(anet);
summary(anet)
   Initialized: true

   Number of learnables: 352

   Inputs:
      1   'input'   4 features

Create the actor. Actor-critic agents use an rlDiscreteCategoricalActor object to implement the actor for discrete action spaces.

actor = rlDiscreteCategoricalActor(anet,obsInfo,actInfo);

Check your actor with a random observation input.

getAction(actor,{rand(obsInfo.Dimension)})
ans = 1x1 cell array
    {[-10]}

Create the AC agent using the actor and the critic.

agent = rlACAgent(actor,critic)
agent = 
  rlACAgent with properties:

            AgentOptions: [1x1 rl.option.rlACAgentOptions]
    UseExplorationPolicy: 1
         ObservationInfo: [1x1 rl.util.rlNumericSpec]
              ActionInfo: [1x1 rl.util.rlFiniteSetSpec]
              SampleTime: 1

Specify some options for the agent, including training options for the actor and critic.

agent.AgentOptions.NumStepsToLookAhead=32;
agent.AgentOptions.DiscountFactor=0.99;
agent.AgentOptions.CriticOptimizerOptions.LearnRate=8e-3;
agent.AgentOptions.CriticOptimizerOptions.GradientThreshold=1;
agent.AgentOptions.ActorOptimizerOptions.LearnRate=8e-3;
agent.AgentOptions.ActorOptimizerOptions.GradientThreshold=1;

Check your agent with a random observation.

getAction(agent,{rand(obsInfo.Dimension)})
ans = 1x1 cell array
    {[-10]}

You can now test and train the agent within the environment.

Create an environment with a continuous action space, and obtain its observation and action specifications. For this example, load the double integrator continuous action space environment used in the example Train DDPG Agent to Control Double Integrator System.

env = rlPredefinedEnv("DoubleIntegrator-Continuous");

Obtain observation and action specifications.

obsInfo = getObservationInfo(env)
obsInfo = 
  rlNumericSpec with properties:

     LowerLimit: -Inf
     UpperLimit: Inf
           Name: "states"
    Description: "x, dx"
      Dimension: [2 1]
       DataType: "double"

actInfo = getActionInfo(env)
actInfo = 
  rlNumericSpec with properties:

     LowerLimit: -Inf
     UpperLimit: Inf
           Name: "force"
    Description: [0x0 string]
      Dimension: [1 1]
       DataType: "double"

In this example, the action is a scalar value representing a force ranging from -2 to 2 N. To make sure that the output from the agent is in this range, you perform an appropriate scaling operation. Set these limits in the action specification so you can easily access them later.

% Make sure action space upper and lower limits are finite
actInfo.LowerLimit=-2;
actInfo.UpperLimit=2;

The actor and critic networks are initialized randomly. You can ensure reproducibility by fixing the seed of the random generator.

rng(0)

For actor-critic agents, the critic estimates a value function. Therefore, it must take the observation signal as input and return a scalar value. To approximate the value function within the critic, use a deep neural network.

Define the network as an array of layer objects, and get the dimensions of the observation space from the environment specification object.

cNet = [
    featureInputLayer(prod(obsInfo.Dimension))
    fullyConnectedLayer(50)
    reluLayer
    fullyConnectedLayer(1)];

Convert the network to a dlnetwork object and display the number of weights.

cNet = dlnetwork(cNet);
summary(cNet)
   Initialized: true

   Number of learnables: 201

   Inputs:
      1   'input'   2 features

Create the critic using cNet. Actor-critic agents use an rlValueFunction object to implement the critic.

critic = rlValueFunction(cNet,obsInfo);

Check your critic with a random input observation.

getValue(critic,{rand(obsInfo.Dimension)})
ans = single
    -0.0969

To approximate the policy within the actor, use a deep neural network. For actor-critic agents, the actor executes a stochastic policy, which for continuous action spaces is implemented by a continuous Gaussian actor. In this case, the network must take the observation signal as input and return both a mean value and a standard deviation value for each action. Therefore, it must have two output layers (one for the mean values and the other for the standard deviation values), each having as many elements as the dimension of the action space.

Note that standard deviations must be nonnegative and mean values must fall within the range of the action. Therefore the output layer that returns the standard deviations must be a softplus or ReLU layer, to enforce nonnegativity, while the output layer that returns the mean values must be a scaling layer, to scale the mean values to the output range.

Define each network path as an array of layer objects. Get the dimensions of the observation and action spaces from the environment specification objects, and specify a name for the input and output layers, so you can later explicitly associate them with the appropriate channel.

% Input path
inPath = [ 
    featureInputLayer(prod(obsInfo.Dimension),Name="netObsIn")
    fullyConnectedLayer(prod(actInfo.Dimension),Name="infc") 
    ];

% Mean value path
meanPath = [ 
    tanhLayer(Name="tanhMean");
    fullyConnectedLayer(50)
    reluLayer
    fullyConnectedLayer(prod(actInfo.Dimension));
    scalingLayer( ...
    Name="netMout", ...
    Scale=actInfo.UpperLimit)  % scale to range
    ];

% Standard deviation path
sdevPath = [ 
    tanhLayer(Name="tanhStdv");
    fullyConnectedLayer(50)
    reluLayer
    fullyConnectedLayer(prod(actInfo.Dimension));
    softplusLayer(Name="netSDout")  % nonnegative
    ];

% Add layers to network object
aNet = layerGraph;
aNet = addLayers(aNet,inPath);
aNet = addLayers(aNet,meanPath);
aNet = addLayers(aNet,sdevPath);

% Connect layers
aNet = connectLayers(aNet,"infc","tanhMean/in");
aNet = connectLayers(aNet,"infc","tanhStdv/in");

% Plot network
plot(aNet)

Figure contains an axes object. The axes object contains an object of type graphplot.

Convert the network to a dlnetwork object and display the number of learnable parameters (weights).

aNet = dlnetwork(aNet);
summary(aNet)
   Initialized: true

   Number of learnables: 305

   Inputs:
      1   'netObsIn'   2 features

Create the actor. Actor-critic agents use an rlContinuousGaussianActor object to implement the actor for continuous action spaces.

actor = rlContinuousGaussianActor(aNet, obsInfo, actInfo, ...
    ActionMeanOutputNames="netMout",...
    ActionStandardDeviationOutputNames="netSDout",...
    ObservationInputNames="netObsIn");

Check your actor with a random input observation.

getAction(actor,{rand(obsInfo.Dimension)})
ans = 1x1 cell array
    {[-1.2332]}

Create the AC agent using the actor and the critic.

agent = rlACAgent(actor,critic);

Specify agent options, including training options for its actor and critic.

agent.AgentOptions.NumStepsToLookAhead = 32;
agent.AgentOptions.DiscountFactor=0.99;

agent.AgentOptions.CriticOptimizerOptions.LearnRate=8e-3;
agent.AgentOptions.CriticOptimizerOptions.GradientThreshold=1;

agent.AgentOptions.ActorOptimizerOptions.LearnRate=8e-3;
agent.AgentOptions.ActorOptimizerOptions.GradientThreshold=1;

Check your agent using a random input observation.

getAction(agent,{rand(obsInfo.Dimension)})
ans = 1x1 cell array
    {[-1.5401]}

You can now test and train the agent within the environment.

For this example load the predefined environment used for the Train DQN Agent to Balance Cart-Pole System example. This environment has a four-dimensional observation vector (cart position and velocity, pole angle, and pole angle derivative), and a scalar action with two possible elements (a force of either -10 or +10 N applied on the cart).

env = rlPredefinedEnv("CartPole-Discrete");

Get observation and action information.

obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

The agent creation function initializes the actor and critic networks randomly. Ensure reproducibility by fixing the seed of the random generator.

rng(0)

For actor-critic agents, the critic estimates a value function. Therefore, it must take the observation signal as input and return a scalar value.

To approximate the value function within the critic, use a recurrent deep neural network. Define the network as an array of layer objects, and get the dimensions of the observation space from the environment specification object. To create a recurrent network, use a sequenceInputLayer as the input layer and include an lstmLayer as one of the other network layers.

cNet = [
    sequenceInputLayer(prod(obsInfo.Dimension))
    lstmLayer(10)
    reluLayer
    fullyConnectedLayer(1)];

Convert the network to a dlnetwork object and display the number of learnable parameters (weights).

cNet = dlnetwork(cNet);
summary(cNet)
   Initialized: true

   Number of learnables: 611

   Inputs:
      1   'sequenceinput'   Sequence input with 4 dimensions

Create the critic using cNet. Actor-critic agents use an rlValueFunction object to implement the critic.

critic = rlValueFunction(cNet,obsInfo);

Check the critic with a random input observation.

getValue(critic,{rand(obsInfo.Dimension)})
ans = single
    -0.0344

Since the critic has a recurrent network, the actor must use a recurrent network too. For actor-critic agents, the actor executes a stochastic policy, which for discrete action spaces is implemented by a discrete categorical actor. In this case, the network must take the observation signal as input and return a probability for each action. Therefore, the output layer must have as many elements as the number of possible actions.

aNet = [
    sequenceInputLayer(prod(obsInfo.Dimension))
    lstmLayer(20)
    reluLayer
    fullyConnectedLayer(numel(actInfo.Elements))];

Convert the network to a dlnetwork object and display the number of weights.

aNet = dlnetwork(aNet);
summary(aNet)
   Initialized: true

   Number of learnables: 2k

   Inputs:
      1   'sequenceinput'   Sequence input with 4 dimensions

Create the actor using aNet. Actor-critic agents use an rlDiscreteCategoricalActor object to implement the actor for discrete action spaces.

actor = rlDiscreteCategoricalActor(aNet,obsInfo,actInfo);

Check the actor with a random input observation.

getAction(actor,{rand(obsInfo.Dimension)})
ans = 1x1 cell array
    {[10]}

Set some training options for the critic.

criticOpts = rlOptimizerOptions( ...
    LearnRate=8e-3,GradientThreshold=1);

Set some training options for the actor.

actorOpts = rlOptimizerOptions( ...
    LearnRate=8e-3,GradientThreshold=1);

Specify agent options, and create an AC agent using the actor, the critic, and the agent options object. Since the agent uses recurrent neural networks, NumStepsToLookAhead is treated as the training trajectory length.

agentOpts = rlACAgentOptions( ...
    NumStepsToLookAhead=32, ...
    DiscountFactor=0.99, ...
    CriticOptimizerOptions=criticOpts, ...
    ActorOptimizerOptions=actorOpts);
agent = rlACAgent(actor,critic,agentOpts);

To check your agent, return the action from a random observation.

getAction(agent,{rand(obsInfo.Dimension)})
ans = 1x1 cell array
    {[10]}

You can now test and train the agent within the environment.

To train an agent using the asynchronous advantage actor-critic (A3C) method, you must set the agent and parallel training options appropriately.

When creating the AC agent, set the NumStepsToLookAhead value to be greater than 1. Common values are 64 and 128.

agentOpts = rlACAgentOptions(NumStepsToLookAhead=64);

Use agentOpts when creating your agent. Alternatively, create your agent first and then modify its options, including the actor and critic options later using dot notation.

Configure the training algorithm to use asynchronous parallel training.

trainOpts = rlTrainingOptions(UseParallel=true);
trainOpts.ParallelizationOptions.Mode = "async";

Configure the workers to return gradient data to the host. Also, set the number of steps before the workers send data back to the host to match the number of steps to look ahead.

trainOpts.ParallelizationOptions.DataToSendFromWorkers = ...
    "gradients";
trainOpts.ParallelizationOptions.StepsUntilDataIsSent = ...
    agentOpts.NumStepsToLookAhead;

Use trainOpts when training your agent.

For an example on asynchronous advantage actor-critic agent training, see Train AC Agent to Balance Cart-Pole System Using Parallel Computing.

Tips

  • For continuous action spaces, the rlACAgent object does not enforce the constraints set by the action specification, so you must enforce action space constraints within the environment, as in the sketch below.
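
The following minimal sketch shows one way to do this inside a custom environment step function (myStepFunction and the force limit are hypothetical, and the dynamics are placeholders):

function [nextObs,reward,isDone,loggedSignals] = myStepFunction(action,loggedSignals)
% Sketch: saturate the action before using it in the environment dynamics.
maxForce = 2;                                 % illustrative limit
action = max(min(action,maxForce),-maxForce); % enforce the action constraint
% ... compute nextObs, reward, and isDone from the saturated action ...
nextObs = loggedSignals.State;                % placeholder dynamics
reward = 0;                                   % placeholder reward
isDone = false;                               % placeholder termination flag
end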

Version History

Introduced in R2019a