
Trust Region Policy Optimization (TRPO) Agents

Trust Region Policy Optimization (TRPO) is a model-free, online, on-policy, policy gradient reinforcement learning algorithm. TRPO alternates between sampling data through environmental interaction and updating the policy parameters by solving a constrained optimization problem. The KL-divergence between the old policy and the new policy is used as a constraint during optimization. As a result, this algorithm prevents significant performance drops compared to standard policy gradient methods by keeping the updated policy within a trust region close to the current policy [1].

Note

For TRPO agents, you can only use actors and critics with deep networks that support calculating higher-order derivatives. Actors and critics that use recurrent networks, custom basis functions, or tables are not supported.

PPO is a simplified version of TRPO. Specifically, PPO has fewer hyperparameters, which makes it easier to tune, and it is less computationally expensive than TRPO. For more information on PPO agents, see Proximal Policy Optimization (PPO) Agents.

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

TRPO agents can be trained in environments with the following observation and action spaces.

Observation Space          Action Space
Discrete or continuous     Discrete or continuous

TRPO agents use the following actor and critic.

Critic: Value function critic V(S), which you create using rlValueFunction.

Actor: Stochastic policy actor π(S), which you create using rlDiscreteCategoricalActor (for discrete action spaces) or rlContinuousGaussianActor (for continuous action spaces).

During training, a TRPO agent:

  • Estimates probabilities of taking each action in the action space and randomly selects actions based on the probability distribution.

  • Interacts with the environment for multiple steps using the current policy before using mini-batches to update the actor and critic properties over multiple epochs.

If the UseExplorationPolicy option of the agent is set to false, the action with maximum likelihood is always used in sim and generatePolicyFunction. As a result, the simulated agent and generated policy behave deterministically.

If the UseExplorationPolicy option is set to true, the agent selects its actions by sampling from its probability distribution. As a result, the policy is stochastic and the agent explores its observation space.

This option affects only simulation and deployment; it does not affect training.
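
For example, assuming you have already created a TRPO agent and an environment (the variable names agent and env are placeholders), a minimal sketch of switching between the two behaviors is:

    % Hedged sketch: agent and env are assumed to exist already.
    agent.UseExplorationPolicy = false;   % maximum-likelihood actions: deterministic behavior
    expDeterministic = sim(env,agent);    % simulation now behaves deterministically

    agent.UseExplorationPolicy = true;    % sample actions from the probability distribution
    expStochastic = sim(env,agent);       % simulation now explores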

Actor and Critic Function Approximators

To estimate the policy and value function, a TRPO agent maintains two function approximators.

  • Actor π(A|S;θ) — The actor, with parameters θ, outputs the conditional probability of taking each action A when in state S as one of the following:

    • Discrete action space — The probability of taking each discrete action. The sum of these probabilities across all actions is 1.

    • Continuous action space — The mean and standard deviation of the Gaussian probability distribution for each continuous action.

  • Critic V(S;ϕ) — The critic, with parameters ϕ, takes observation S and returns the corresponding expectation of the discounted long-term reward.

For more information on creating actors and critics for function approximation, see Create Policies and Value Functions.

During training, the agent tunes the parameter values in θ and ϕ. After training, the parameters remain at their tuned values and the trained actor function approximator is stored in π(A|S).

Agent Creation

You can create and train TRPO agents at the MATLAB® command line or using the Reinforcement Learning Designer app. For more information on creating agents using Reinforcement Learning Designer, see Create Agents Using Reinforcement Learning Designer.

At the command line, you can create a TRPO agent with default actor and critic based on the observation and action specifications from the environment. To do so, perform the following steps.

  1. Create observation specifications for your environment. If you already have an environment interface object, you can obtain these specifications using getObservationInfo.

  2. Create action specifications for your environment. If you already have an environment interface object, you can obtain these specifications using getActionInfo.

  3. If needed, specify the number of neurons in each learnable layer. To do so, create an agent initialization options object using rlAgentInitializationOptions.

  4. Specify agent options using an rlTRPOAgentOptions object.

  5. Create the agent using an rlTRPOAgent object.
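
Putting these steps together, the following is a minimal sketch that creates a default TRPO agent for a predefined cart-pole environment. The environment and all option values are illustrative choices, not requirements.

    % Hedged sketch: default TRPO agent from environment specifications.
    env = rlPredefinedEnv("CartPole-Discrete");
    obsInfo = getObservationInfo(env);                            % step 1
    actInfo = getActionInfo(env);                                 % step 2

    initOpts = rlAgentInitializationOptions("NumHiddenUnit",128); % step 3 (optional)
    agentOpts = rlTRPOAgentOptions("ExperienceHorizon",1024, ...
        "MiniBatchSize",64,"KLDivergenceLimit",0.01);             % step 4 (optional)

    agent = rlTRPOAgent(obsInfo,actInfo,initOpts,agentOpts);      % step 5

You can then train the agent using the train function or simulate it using the sim function.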

Alternatively, you can create actor and critic objects and use them to create your agent. In this case, ensure that the input and output dimensions of the actor and critic match the corresponding observation and action specifications of the environment.

  1. Create an actor using an rlDiscreteCategoricalActor object (for discrete action spaces) or an rlContinuousGaussianActor object (for continuous action spaces).

  2. Create a critic using an rlValueFunction object.

  3. If needed, specify agent options using an rlTRPOAgentOptions object.

  4. Create the agent using the rlTRPOAgent function.
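
The following sketch shows this workflow for a discrete action space. The observation and action specifications, layer sizes, and options are illustrative.

    % Hedged sketch: TRPO agent from custom actor and critic (discrete action space).
    obsInfo = rlNumericSpec([4 1]);                    % four-dimensional observation
    actInfo = rlFiniteSetSpec([-1 1]);                 % two possible actions

    % Critic network: observation in, scalar state value out.
    criticNet = dlnetwork([
        featureInputLayer(prod(obsInfo.Dimension))
        fullyConnectedLayer(64)
        reluLayer
        fullyConnectedLayer(1)]);
    critic = rlValueFunction(criticNet,obsInfo);                    % step 2

    % Actor network: observation in, one output per discrete action.
    actorNet = dlnetwork([
        featureInputLayer(prod(obsInfo.Dimension))
        fullyConnectedLayer(64)
        reluLayer
        fullyConnectedLayer(numel(actInfo.Elements))]);
    actor = rlDiscreteCategoricalActor(actorNet,obsInfo,actInfo);   % step 1

    agent = rlTRPOAgent(actor,critic,rlTRPOAgentOptions);           % steps 3 and 4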

TRPO agents do not support actors and critics that use recurrent deep neural networks as function approximators. TRPO agents also do not support deep neural networks that use a quadraticLayer.

For more information on creating actors and critics for function approximation, see Create Policies and Value Functions.

Trust Region Policy Optimization

Trust region policy optimization finds the actor parameters that minimize the following actor loss function.

L_{actor}(\theta) = -\frac{1}{M}\sum_{i=1}^{M}\left(\frac{\pi(A_i|S_i;\theta)}{\pi(A_i|S_i;\theta_{old})}\,D_i + w\,\mathcal{H}_i(\theta,S_i)\right)

Here:

  • M is the mini-batch size.

  • Di is an advantage function.

  • π(Ai|Si;θ) is the probability of taking action Ai when in state Si, following the current policy. For a discrete action space, this is the probability of the action; for a continuous action space, it is the value of the probability density function for the action.

  • π(Ai|Si;θold) is the probability of taking action Ai when in state Si, following the old policy.

  • wℋi(θ,Si) is an entropy loss term, where w is the entropy loss weight and ℋi(θ,Si) is the entropy. For more information, see Entropy Loss.

This minimization is subject to the following constraint.

\frac{1}{M}\sum_{i=1}^{M} D_{KL}(\theta_{old},\theta,S_i) \le \delta

Here:

  • DKL(θold,θ,Si) is the Kullback-Leibler (KL) divergence between the old policy π(A|Si;θold) and current policy π(A|Si;θ). DKL measures how much the probability distributions of the old and new policies differ. DKL is zero when the two distributions are identical.

  • δ is the limit for DKL and controls how much the new policy can deviate from the old policy.

For agents with discrete action spaces, DKL is computed as follows, where P is the number of actions.

D_{KL}(\theta_{old},\theta,S_i) = \sum_{k=1}^{P}\pi(A_k|S_i;\theta_{old})\,\ln\!\left(\frac{\pi(A_k|S_i;\theta_{old})}{\pi(A_k|S_i;\theta)}\right)

For agents with continuous action spaces, DKL is computed as follows.

D_{KL}(\theta_{old},\theta,S_i) = \frac{1}{P}\sum_{k=1}^{P}\left(\ln(\sigma_{\theta,k}) - \ln(\sigma_{\theta_{old},k}) + \frac{\sigma_{\theta_{old},k}^{2} + (\mu_{\theta_{old},k}-\mu_{\theta,k})^{2}}{2\sigma_{\theta,k}^{2}} - 0.5\right)

Here:

  • μθ,k and σθ,k are the mean and standard deviation for the kth action output by the current actor policy π(Ak|Si;θ).

  • μθold,k and σθold,k are the mean and standard deviation for the kth action output by the old policy π(Ak|Si;θold).
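
As a numerical illustration of the two expressions above, the following sketch evaluates both divergences for made-up policy parameters (all values are placeholders).

    % Hedged sketch: evaluate the discrete and continuous KL expressions numerically.
    piOld = [0.5 0.3 0.2];                        % old discrete action probabilities
    piNew = [0.4 0.4 0.2];                        % current discrete action probabilities
    klDiscrete = sum(piOld .* log(piOld ./ piNew));

    muOld = [0.0 1.0];     sigmaOld = [1.0 0.5];  % old Gaussian policy parameters
    muNew = [0.1 0.8];     sigmaNew = [0.9 0.6];  % current Gaussian policy parameters
    P = numel(muNew);
    klContinuous = (1/P) * sum( log(sigmaNew) - log(sigmaOld) ...
        + (sigmaOld.^2 + (muOld - muNew).^2) ./ (2*sigmaNew.^2) - 0.5 );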

To approximate this optimization problem, the TRPO agent uses a linear approximation of Lactor(θ) and a quadratic approximation of DKL(θold,θ,Si). The approximations are computed by taking the Taylor series expansions around θ = θold.

\min_\theta\, L_{actor}(\theta) \approx g^{T}(\theta-\theta_{old}), \qquad g = \nabla_\theta L_{actor}(\theta)\big|_{\theta=\theta_{old}}

\text{subject to}\quad \frac{1}{2}(\theta_{old}-\theta)^{T} H\,(\theta_{old}-\theta) \le \delta, \qquad H = \nabla_\theta^{2}\, \frac{1}{M}\sum_{i=1}^{M} D_{KL}(\theta_{old},\theta,S_i)\bigg|_{\theta=\theta_{old}}

The analytical solution to this approximate optimization problem is as follows.

\theta = \theta_{old} + \alpha\,\sqrt{\frac{2\delta}{x^{T} H x}}\; x

Here, x = −H⁻¹g is the search direction computed from the gradient g and the Hessian H, and α is a coefficient that ensures the updated policy improves while satisfying the KL-divergence constraint.
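
As a toy illustration of this step (not the toolbox implementation), the following sketch applies the formula to a hand-built two-parameter problem; g, H, δ, and θold are made-up values.

    % Hedged sketch: trust-region step on a toy two-parameter problem.
    thetaOld = [0.1; -0.3];         % current actor parameters (made up)
    g = [0.4; -0.2];                % gradient of the actor loss at thetaOld (made up)
    H = [2.0 0.3; 0.3 1.0];         % Hessian of the mean KL divergence (made up)
    delta = 0.01;                   % KL-divergence limit

    x = -H \ g;                                  % search direction x = -H^(-1)*g
    stepSize = sqrt(2*delta / (x.' * H * x));    % largest step within the quadratic KL constraint
    alpha = 1;                                   % reduced by the line search if needed
    thetaNew = thetaOld + alpha * stepSize * x;  % candidate parameter update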

Training Algorithm

TRPO agents use the following training algorithm. To configure the training algorithm, specify options using an rlTRPOAgentOptions object.

  1. Initialize the actor π(A|S;θ) with random parameter values θ.

  2. Initialize the critic V(S;ϕ) with random parameter values ϕ.

  3. Generate N experiences by following the current policy. The experience sequence is

    S_{t_s}, A_{t_s}, R_{t_s+1}, S_{t_s+1}, \ldots, S_{t_s+N-1}, A_{t_s+N-1}, R_{t_s+N}, S_{t_s+N}

    Here, St is a state observation, At is an action taken from that state, St+1 is the next state, and Rt+1 is the reward received for moving from St to St+1.

    When in state St, the agent computes the probability of taking each action in the action space using π(A|St;θ) and randomly selects action At based on the probability distribution.

    ts is the starting time step of the current set of N experiences. At the beginning of the training episode, ts = 1. For each subsequent set of N experiences in the same training episode, ts ← ts + N.

    For each experience sequence that does not contain a terminal state, N is equal to the ExperienceHorizon option value. Otherwise, N is less than ExperienceHorizon and Sts+N is the terminal state.

  4. For each episode step t = ts, ts+1, …, ts+N-1, compute the return and advantage function using the method specified by the AdvantageEstimateMethod option.

    • Finite Horizon (AdvantageEstimateMethod = "finite-horizon") — Compute the return Gt, which is the sum of the reward for that step and the discounted future reward [2].

      G_t = \sum_{k=t+1}^{t_s+N}\left(\gamma^{\,k-t-1} R_k\right) + b\,\gamma^{\,t_s+N-t}\, V(S_{t_s+N};\phi)

      Here, b is 0 if Sts+N is a terminal state and 1 otherwise. That is, if Sts+N is not a terminal state, the discounted future reward includes the discounted state value function, computed using the critic network V.

      Compute the advantage function Dt.

      D_t = G_t - V(S_t;\phi)

    • Generalized Advantage Estimator (AdvantageEstimateMethod = "gae") — Compute the advantage function Dt, which is the discounted sum of temporal difference errors [3].

      D_t = \sum_{k=t}^{t_s+N-1}(\gamma\lambda)^{\,k-t}\,\delta_k, \qquad \delta_k = R_{k+1} + b\,\gamma\, V(S_{k+1};\phi) - V(S_k;\phi)

      Here, b is 0 if Sts+N is a terminal state and 1 otherwise. λ is a smoothing factor specified using the GAEFactor option.

      Compute the return Gt.

      G_t = D_t + V(S_t;\phi)

    To specify the discount factor γ for either method, use the DiscountFactor option.

  5. Learn from mini-batches of experiences over K epochs. To specify K, use the NumEpoch option. For each learning epoch:

    1. Sample a random mini-batch data set of size M from the current set of experiences. To specify M, use the MiniBatchSize option. Each element of the mini-batch data set contains a current experience and the corresponding return and advantage function values.

    2. Update the critic parameters by minimizing the loss Lcritic across all sampled mini-batch data.

      L_{critic}(\phi) = \frac{1}{2M}\sum_{i=1}^{M}\left(G_i - V(S_i;\phi)\right)^{2}

    3. Normalize the advantage values Di based on recent unnormalized advantage values.

      • If the NormalizedAdvantageMethod option is 'none', do not normalize the advantage values.

        \hat{D}_i \leftarrow D_i

      • If the NormalizedAdvantageMethod option is 'current', normalize the advantage values based on the unnormalized advantages in the current mini-batch.

        \hat{D}_i \leftarrow \frac{D_i - \mathrm{mean}(D_1, D_2, \ldots, D_M)}{\mathrm{std}(D_1, D_2, \ldots, D_M)}

      • If the NormalizedAdvantageMethod option is 'moving', normalize the advantage values based on the unnormalized advantages for the N most recent advantages, including the current advantage value. To specify the window size N, use the AdvantageNormalizingWindow option.

        \hat{D}_i \leftarrow \frac{D_i - \mathrm{mean}(D_1, D_2, \ldots, D_N)}{\mathrm{std}(D_1, D_2, \ldots, D_N)}

    4. Update the actor parameters by solving the constrained optimization problem.

      1. Compute the policy gradient.

        g = \nabla_\theta L_{actor}(\theta) = -\nabla_\theta\, \frac{1}{M}\sum_{i=1}^{M}\left(\frac{\pi(A_i|S_i;\theta)}{\pi(A_i|S_i;\theta_{old})}\,\hat{D}_i + w\,\mathcal{H}_i(\theta,S_i)\right)

      2. Apply the conjugate gradient (CG) method to find an approximate solution to the following equation, where H is the Hessian of the KL-divergence between the old and new policies.

        x \approx -H^{-1} g

        To configure the termination conditions for the CG algorithm, use the NumIterationsConjugateGradient and ConjugateGradientResidualTolerance options. To stabilize the numerical computation for the CG algorithm, use the ConjugateGradientDamping option.

      3. Using a line search algorithm, find the largest α that satisfies the following constraints.

        \theta = \theta_{old} + \alpha\,\sqrt{\frac{2\delta}{x^{T} H x}}\; x
        L_{actor}(\theta) - L_{actor}(\theta_{old}) < 0
        \frac{1}{M}\sum_{i=1}^{M} D_{KL}(\theta_{old},\theta,S_i) \le \delta
        \alpha \in \left\{1, \frac{1}{2}, \frac{1}{2^{2}}, \ldots, \frac{1}{2^{\,n-1}}\right\}

        Here, δ is the KL-divergence limit, which you set using the KLDivergenceLimit option. n is the number of line search iterations, which you set using the NumIterationsLineSearch option.

      4. If a valid value of α exists, update the parameters of the actor network to θ. If a valid value of α does not exist, do not update the actor parameters.

  6. Repeat steps 3 through 5 until the training episode reaches a terminal state.
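
To make the advantage computation in step 4 concrete, the following sketch evaluates both estimators for a short made-up segment of N = 5 experiences with no terminal state; the rewards, critic values, discount factor, and GAE factor are all illustrative.

    % Hedged sketch: finite-horizon and GAE advantages for one N-step segment.
    gamma  = 0.99;                          % DiscountFactor
    lambda = 0.95;                          % GAEFactor
    R = [1 0 0 1 0];                        % rewards R(ts+1) ... R(ts+N)
    V = [0.9 0.8 0.7 0.9 0.6 0.5];          % critic values V(S(ts)) ... V(S(ts+N))
    N = numel(R);
    b = 1;                                  % 0 if S(ts+N) is a terminal state

    % Finite-horizon return and advantage for each step t in the segment.
    Gfh = zeros(1,N);
    for t = 1:N
        k = t:N;                            % remaining rewards in the segment
        Gfh(t) = sum(gamma.^(k-t) .* R(k)) + b * gamma^(N-t+1) * V(N+1);
    end
    Dfh = Gfh - V(1:N);

    % Generalized advantage estimate: discounted sum of temporal difference errors.
    tdErr = R + b * gamma * V(2:N+1) - V(1:N);
    Dgae = zeros(1,N);
    for t = 1:N
        k = t:N;
        Dgae(t) = sum((gamma*lambda).^(k-t) .* tdErr(k));
    end
    Ggae = Dgae + V(1:N);                   % corresponding returns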

Entropy Loss

To promote agent exploration, you can add an entropy loss term wℋi(θ,Si) to the actor loss function, where w is the entropy loss weight and ℋi(θ,Si) is the entropy.

The entropy value is higher when the agent is more uncertain about which action to take next. Therefore, maximizing the entropy loss term (that is, minimizing the negative entropy loss) increases this uncertainty, which encourages exploration. To promote additional exploration, which can help the agent move out of local optima, you can specify a larger entropy loss weight.
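
For example, assuming you already have an rlTRPOAgentOptions object, a hedged sketch of raising the weight (the value shown is illustrative) is:

    % Hedged sketch: increase the entropy loss weight to encourage more exploration.
    agentOpts = rlTRPOAgentOptions;
    agentOpts.EntropyLossWeight = 0.05;    % larger values keep the policy more stochastic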

For a discrete action space, the agent uses the following entropy value. In this case, the actor outputs the probability of taking each possible discrete action.

\mathcal{H}_i(\theta,S_i) = -\sum_{k=1}^{P} \pi(A_k|S_i;\theta)\,\ln\pi(A_k|S_i;\theta)

Here:

  • P is the number of possible discrete actions.

  • π(Ak|Si;θ) is the probability of taking action Ak when in state Si following the current policy.

For a continuous action space, the agent uses the following entropy value. In this case, the actor outputs the mean and standard deviation of the Gaussian distribution for each continuous action.

\mathcal{H}_i(\theta,S_i) = \frac{1}{2}\sum_{k=1}^{C} \ln\!\left(2\pi e\,\sigma_{k,i}^{2}\right)

Here:

  • C is the number of continuous actions output by the actor.

  • σk,i is the standard deviation for action k when in state Si following the current policy.
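
The following sketch evaluates both entropy expressions for made-up actor outputs, purely as a numerical illustration.

    % Hedged sketch: discrete and continuous entropy values.
    p = [0.7 0.2 0.1];                              % discrete action probabilities from the actor
    Hdiscrete = -sum(p .* log(p));

    sigma = [0.5 1.2];                              % standard deviations for C = 2 continuous actions
    Hcontinuous = 0.5 * sum(log(2*pi*exp(1) * sigma.^2));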

References

[1] Schulman, John, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. “Trust Region Policy Optimization.” Proceedings of the 32nd International Conference on Machine Learning, pp. 1889–1897. 2015.

[2] Mnih, Volodymyr, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. “Asynchronous Methods for Deep Reinforcement Learning.” ArXiv:1602.01783 [Cs], February 4, 2016. https://arxiv.org/abs/1602.01783.

[3] Schulman, John, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. “High-Dimensional Continuous Control Using Generalized Advantage Estimation.” ArXiv:1506.02438 [Cs], October 20, 2018. https://arxiv.org/abs/1506.02438.
