Actor-Critic (AC) Agent

Actor-critic (AC) agents implement actor-critic algorithms such as A2C and A3C, which are on-policy, policy-gradient reinforcement learning methods for environments with a discrete or continuous action space. The actor-critic agent directly optimizes a stochastic policy and uses a value function critic to estimate the value of the policy [1]. The AC agent is similar to a REINFORCE policy gradient agent with a baseline, with the difference that the AC agent uses a bootstrapping critic (that is, a critic that updates the value estimate of a state based on the value estimates of subsequent states). For continuous action spaces, this agent does not enforce constraints set in the action specification; therefore, if you need to enforce action constraints, you must do so within the environment.

In Reinforcement Learning Toolbox™, an actor-critic agent is implemented by an rlACAgent object.

Note

AC agents generally offer no functional advantages over more recent agents such as PPO agents, and are provided mostly for educational purposes.

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

Actor-critic agents can be trained in environments with the following observation and action spaces.

Observation Space: Discrete or continuous
Action Space: Discrete or continuous

Actor-critic agents use the following actor and critic.

Critic: Value function critic V(S), which you create using rlValueFunction

Actor: Stochastic policy actor π(S), which you create using rlDiscreteCategoricalActor (for discrete action spaces) or rlContinuousGaussianActor (for continuous action spaces)

During training, an actor-critic agent:

  • Estimates probabilities of taking each action in the action space and randomly selects actions based on the probability distribution.

  • Interacts with the environment for multiple steps using the current policy before updating the actor and critic properties.

If the UseExplorationPolicy option of the agent is set to false, the action with maximum likelihood is always used in sim and generatePolicyFunction. As a result, the simulated agent and the generated policy behave deterministically.

If the UseExplorationPolicy option is set to true, the agent selects its actions by sampling from its probability distribution. As a result, the policy is stochastic and the agent explores its observation space.

This option affects only simulation and deployment; it does not affect training.
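
For example, the following minimal sketch toggles the option on an existing agent before simulation. Here, agent is assumed to be a previously created rlACAgent object and env a matching environment object.

    % Assumes agent is an rlACAgent object and env is a matching environment.
    agent.UseExplorationPolicy = false;   % always take the maximum-likelihood action
    greedyExperience = sim(env, agent);

    agent.UseExplorationPolicy = true;    % sample actions from the policy distribution
    exploringExperience = sim(env, agent);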

Actor and Critic Function Approximators

To estimate the policy and value function, an actor-critic agent maintains two function approximators.

  • Actor π(A|S;θ) — The actor, with parameters θ, outputs the conditional probability of taking each action A when in state S as one of the following:

    • Discrete action space — The probability of taking each discrete action. The sum of these probabilities across all actions is 1.

    • Continuous action space — The mean and standard deviation of the Gaussian probability distribution for each continuous action.

  • Critic V(S;ϕ) — The critic, with parameters ϕ, takes observation S and returns the corresponding expectation of the discounted long-term reward.

For more information on creating actors and critics for function approximation, see Create Policies and Value Functions.
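
As an illustration, the following sketch creates both approximators for an environment with a vector observation space and a discrete action space, using Deep Learning Toolbox™ layers. The hidden-layer size is an arbitrary choice for the example, and obsInfo and actInfo are assumed to be the specification objects returned by getObservationInfo and getActionInfo. (For a continuous action space, you would instead build a network with separate mean and standard deviation output paths and use rlContinuousGaussianActor.)

    % Assumes a vector observation space and a discrete action space.
    obsDim = obsInfo.Dimension(1);
    numAct = numel(actInfo.Elements);

    % Critic V(S;ϕ): observation in, scalar state-value estimate out.
    criticNet = dlnetwork([
        featureInputLayer(obsDim)
        fullyConnectedLayer(64)          % hidden-layer size is an arbitrary example value
        reluLayer
        fullyConnectedLayer(1)]);
    critic = rlValueFunction(criticNet, obsInfo);

    % Actor π(A|S;θ): observation in, one output per discrete action.
    actorNet = dlnetwork([
        featureInputLayer(obsDim)
        fullyConnectedLayer(64)
        reluLayer
        fullyConnectedLayer(numAct)]);
    actor = rlDiscreteCategoricalActor(actorNet, obsInfo, actInfo);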

During training, the agent tunes the parameter values in θ and ϕ. After training, the parameters remain at their tuned values and the trained actor function approximator is stored in π(A|S).

Agent Creation

You can create an actor-critic agent with a default actor and critic based on the observation and action specifications from the environment. To do so, perform the following steps (a minimal example follows the list).

  1. Create observation specifications for your environment. If you already have an environment object, you can obtain these specifications using getObservationInfo.

  2. Create action specifications for your environment. If you already have an environment object, you can obtain these specifications using getActionInfo.

  3. If needed, specify the number of neurons in each learnable layer of the default network or whether to use an LSTM layer. To do so, create an agent initialization option object using rlAgentInitializationOptions.

  4. If needed, specify agent options using an rlACAgentOptions object.

  5. Create the agent using an rlACAgent object.
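
For example, here is a minimal sketch of this workflow using one of the predefined environments that ships with the toolbox; the option values shown are arbitrary example values.

    % Steps 1-2: create or load an environment and get its specifications.
    env = rlPredefinedEnv("CartPole-Discrete");
    obsInfo = getObservationInfo(env);
    actInfo = getActionInfo(env);

    % Step 3 (optional): configure the default networks.
    initOpts = rlAgentInitializationOptions(NumHiddenUnit=128);

    % Step 4 (optional): configure the agent.
    agentOpts = rlACAgentOptions(NumStepsToLookAhead=32, DiscountFactor=0.99);

    % Step 5: create the agent with a default actor and critic.
    agent = rlACAgent(obsInfo, actInfo, initOpts, agentOpts);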

Alternatively, you can create the actor and critic yourself and use these objects to create your agent (a sketch follows the list). In this case, ensure that the input and output dimensions of the actor and critic match the corresponding observation and action specifications of the environment.

  1. Create an actor using an rlDiscreteCategoricalActor (for discrete action spaces) or an rlContinuousGaussianActor (for continuous action spaces) object.

  2. Create a critic using an rlValueFunction object.

  3. Specify agent options using an rlACAgentOptions object (alternatively, you can skip this step and then modify the agent options later using dot notation).

  4. Create the agent using an rlACAgent object.
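
Continuing the actor and critic sketch from the previous section, the following lines illustrate steps 3 and 4; the learning rates are arbitrary example values.

    % Step 3: agent options, including one optimizer option set per approximator.
    agentOpts = rlACAgentOptions( ...
        ActorOptimizerOptions=rlOptimizerOptions(LearnRate=1e-3), ...
        CriticOptimizerOptions=rlOptimizerOptions(LearnRate=1e-2));

    % Step 4: create the agent from the previously created actor and critic.
    agent = rlACAgent(actor, critic, agentOpts);

    % You can also modify the agent options later using dot notation.
    agent.AgentOptions.EntropyLossWeight = 0.01;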

For more information on creating actors and critics for function approximation, see Create Policies and Value Functions.

Training Algorithm

Actor-critic agents use the following training algorithm. To configure the training algorithm, specify options using an rlACAgentOptions object.

  1. Initialize the actor π(A|S;θ) with random parameter values θ.

  2. Initialize the critic V(S;ϕ) with random parameter values ϕ.

  3. Generate N experiences by following the current policy. The episode experience sequence is

    S_{ts}, A_{ts}, R_{ts+1}, S_{ts+1}, …, S_{ts+N−1}, A_{ts+N−1}, R_{ts+N}, S_{ts+N}

    Here, S_t is a state observation, A_t is an action taken from that state, S_{t+1} is the next state, and R_{t+1} is the reward received for moving from S_t to S_{t+1}.

    When in state S_t, the agent computes the probability of taking each action in the action space using π(A|S_t;θ) and randomly selects action A_t based on the probability distribution.

    ts is the starting time step of the current set of N experiences. At the beginning of the training episode, ts = 1. For each subsequent set of N experiences in the same training episode, ts = ts + N.

    If the current set of N experiences does not contain a terminal state, N is equal to the NumStepsToLookAhead option value. Otherwise, N is less than NumStepsToLookAhead and S_{ts+N} is the terminal state.

  4. For each episode step t = ts, ts+1, …, ts+N, compute the return G_t, which is the sum of the reward for that step and the discounted future reward. If S_{ts+N} is not a terminal state, the discounted future reward includes the discounted state value function, computed using the critic network V.

    G_t = Σ_{k=t+1}^{ts+N} γ^{k−t−1} R_k + b·γ^{ts+N−t}·V(S_{ts+N};ϕ)

    Here, b is 0 if S_{ts+N} is a terminal state and 1 otherwise.

    To specify the discount factor γ, use the DiscountFactor option.

  5. Compute the advantage function D_t.

    D_t = G_t − V(S_t;ϕ)

  6. Accumulate the gradients for the actor network by following the policy gradient to maximize the expected discounted cumulative long-term reward.

    dθ = Σ_{t=1}^{N} ∇_θ ln π(A_t|S_t;θ)·D_t

  7. Accumulate the gradients for the critic network by minimizing the mean squared error loss between the estimated value function V(S_t;ϕ) and the computed target return G_t across all N experiences. If the EntropyLossWeight option is greater than zero, then additional gradients are accumulated to minimize the entropy loss function.

    dϕ = Σ_{t=1}^{N} ∇_ϕ (G_t − V(S_t;ϕ))^2

  8. Update the actor parameters by applying the gradients.

    θ = θ + α·dθ

    Here, α is the learning rate of the actor. Specify the learning rate by setting the LearnRate property of the rlOptimizerOptions object assigned to the ActorOptimizerOptions property of the agent options object.

  9. Update the critic parameters by applying the gradients.

    ϕ = ϕ + β·dϕ

    Here, β is the learning rate of the critic. Specify the learning rate by setting the LearnRate property of the rlOptimizerOptions object assigned to the CriticOptimizerOptions property of the agent options object.

  10. Repeat steps 3 through 9 for each training episode until training is complete.

For simplicity, the actor and critic updates in this algorithm description show a gradient update using basic stochastic gradient descent. The actual gradient update method depends on the optimizers you specify using the rlOptimizerOptions objects assigned to the ActorOptimizerOptions and CriticOptimizerOptions properties of the agent options object.
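
To make steps 4 and 5 concrete, the following illustrative sketch (not the toolbox implementation) computes the returns G_t and advantages D_t backwards over one set of N experiences. The function name and its inputs are hypothetical: R holds the N collected rewards, V the critic values V(S_t;ϕ) of the states where the actions were taken, Vlast the bootstrap value V(S_{ts+N};ϕ), and isTerminal is true if S_{ts+N} is a terminal state (b = 0 in step 4).

    % Illustrative only: N-step returns (step 4) and advantages (step 5).
    function [G, D] = nStepReturnsAndAdvantages(R, V, Vlast, gamma, isTerminal)
        N = numel(R);
        G = zeros(N, 1);
        D = zeros(N, 1);
        runningReturn = ~isTerminal * Vlast;   % b*V(S_{ts+N};ϕ), with b = 0 for a terminal state
        for i = N:-1:1                         % accumulate the discounted sum backwards in time
            runningReturn = R(i) + gamma*runningReturn;
            G(i) = runningReturn;              % G_t for t = ts+i-1
            D(i) = G(i) - V(i);                % advantage D_t = G_t - V(S_t;ϕ)
        end
    end

The actor gradient in step 6 then weights each ∇_θ ln π(A_t|S_t;θ) term by the corresponding advantage D(i), and the critic gradient in step 7 regresses V(S_t;ϕ) toward G(i).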

References

[1] Mnih, Volodymyr, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. “Asynchronous Methods for Deep Reinforcement Learning.” ArXiv:1602.01783 [Cs], February 4, 2016. https://arxiv.org/abs/1602.01783.

[2] Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. Second edition. Adaptive Computation and Machine Learning. Cambridge, Mass: The MIT Press, 2018.
