The soft actor-critic (SAC) algorithm is a model-free, online, off-policy, actor-critic reinforcement learning method. The SAC algorithm computes an optimal policy that maximizes both the long-term expected reward and the entropy of the policy. The policy entropy is a measure of policy uncertainty given the state. A higher entropy value promotes more exploration. Maximizing both the expected cumulative long-term reward and the entropy balances exploration and exploitation of the environment.
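In standard maximum-entropy reinforcement learning notation, this objective can be sketched as

$$\pi^{*} = \arg\max_{\pi}\; \mathbb{E}\!\left[\sum_{t}\gamma^{t}\Big(R_{t} + \alpha\,H\big(\pi(\cdot\mid S_{t})\big)\Big)\right]$$

where γ is the discount factor, H denotes the policy entropy, and α is the entropy weight (for the underlying soft actor-critic formulation, see [1]).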
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
The implementation of the SAC agent in Reinforcement Learning Toolbox™ software uses two Q-value function critics, which prevents overestimation of the value function. Other implementations of the SAC algorithm use an additional value function critic.
SAC agents can be trained in environments with the following observation and action spaces.
Observation Space | Action Space |
---|---|
Discrete or continuous | Continuous |
SAC agents use the following actor and critic representations.
Critic | Actor |
---|---|
Q-value function critic Q(S,A), which you create using rlQValueRepresentation | Stochastic policy actor π(S), which you create using rlStochasticActorRepresentation |
During training, a SAC agent:
Updates the actor and critic properties at regular intervals during learning.
Estimates the mean and standard deviation of a Gaussian probability distribution for the continuous action space, then randomly selects actions based on the distribution.
Updates an entropy weight term that balances the expected return and the entropy of the policy.
Stores past experience using a circular experience buffer. The agent updates the actor and critic using a mini-batch of experiences randomly sampled from the buffer.
If the UseDeterministicExploitation option in rlSACAgentOptions is set to true, the action with maximum likelihood is always used in sim and generatePolicyFunction. This causes the simulated agent and the generated policy to behave deterministically.
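For example, assuming agent is an existing rlSACAgent object, you can request this behavior as follows.

% Use the maximum-likelihood action in sim and generatePolicyFunction.
agent.AgentOptions.UseDeterministicExploitation = true;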
To estimate the policy and value function, a SAC agent maintains the following function approximators:
Stochastic actor μ(S) — The actor takes observation S and returns the action probability density function. The agent randomly selects actions based on this density function.
One or two Q-value critics Qk(S,A) — The critics take observation S and action A as inputs and return the corresponding expectation of the value function, which includes both the long-term reward and entropy.
One or two target critics Q'k(S,A) — To improve the stability of the optimization, the agent periodically updates the target critics based on the latest parameter values of the critics. The number of target critics matches the number of critics.
When you use two critics, Q1(S,A) and Q2(S,A), each critic can have a different structure. When the critics have the same structure, they must have different initial parameter values.
For each critic, Qk(S,A) and Q'k(S,A) have the same structure and parameterization.
When training is complete, the trained optimal policy is stored in actor μ(S).
The actor in a SAC agent generates mean and standard deviation outputs. To select an action, the actor first randomly selects an unbounded action from a Gaussian distribution with these parameters. During training, the SAC agent uses the unbounded probability distribution to compute the entropy of the policy for the given observation.
If the action space of the SAC agent is bounded, the actor generates bounded actions by applying tanh and scaling operations to the unbounded action.
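The following MATLAB sketch illustrates this bounding scheme; the agent performs these operations internally, and the values shown are hypothetical.

actionMean = 0.3;                 % example mean returned by the actor
actionStd = 0.8;                  % example standard deviation returned by the actor
u = actionMean + actionStd*randn; % unbounded action sampled from the Gaussian distribution
lowerLimit = -2;                  % example lower action limit from the action specification
upperLimit = 2;                   % example upper action limit from the action specification
boundedAction = lowerLimit + (tanh(u) + 1)*(upperLimit - lowerLimit)/2;  % tanh squashing followed by scaling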
You can create a SAC agent with default actor and critic representations based on the observation and action specifications from the environment. To do so, perform the following steps.
Create observation specifications for your environment. If you already have an environment interface object, you can obtain these specifications using getObservationInfo.
Create action specifications for your environment. If you already have an environment interface object, you can obtain these specifications using getActionInfo.
If needed, specify the number of neurons in each learnable layer or whether to use a recurrent neural network. To do so, create an agent initialization option object using rlAgentInitializationOptions.
If needed, specify agent options using an rlSACAgentOptions object.
Create the agent using an rlSACAgent object.
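For example, the following sketch performs these steps, assuming env is an existing environment interface object with a continuous action space.

obsInfo = getObservationInfo(env);                            % observation specifications
actInfo = getActionInfo(env);                                 % action specifications
initOpts = rlAgentInitializationOptions('NumHiddenUnit',128); % optional: size of hidden layers
agent = rlSACAgent(obsInfo,actInfo,initOpts);                 % agent with default actor and critics
agent.AgentOptions.DiscountFactor = 0.99;                     % optional: adjust agent options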
Alternatively, you can create actor and critic representations and use these representations to create your agent. In this case, ensure that the input and output dimensions of the actor and critic representations match the corresponding action and observation specifications of the environment.
Create a stochastic actor using an rlStochasticActorRepresentation object. For SAC agents, the actor network must not contain a tanhLayer and scalingLayer in the mean output path.
Create one or two critics using rlQValueRepresentation objects.
Specify agent options using an rlSACAgentOptions object.
Create the agent using an rlSACAgent object.
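For example, assuming actor, critic1, and critic2 have already been created with dimensions that match the environment specifications, the final steps can be sketched as follows.

agentOpts = rlSACAgentOptions('MiniBatchSize',64);      % example agent options
agent = rlSACAgent(actor,[critic1 critic2],agentOpts);  % SAC agent with two critics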
SAC agents do not support actors and critics that use recurrent deep neural networks as function approximators.
For more information on creating actors and critics for function approximation, see Create Policy and Value Function Representations.
SAC agents use the following training algorithm, in which they periodically update their actor and critic models and entropy weight. To configure the training algorithm, specify options using an rlSACAgentOptions object. Here, K = 2 is the number of critics and k is the critic index.
Initialize each critic Qk(S,A) with random parameter values θQk, and initialize each target critic with the same random parameter values: θQ'k = θQk.
Initialize the actor μ(S) with random parameter values θμ.
Perform a warm start by taking a sequence of actions following the initial random policy in μ(S). For each action, store the experience in the experience buffer. To specify the number of warm up actions, use the NumWarmStartSteps option.
For each training time step:
1. For the current observation S, select action A using the policy in μ(S).
2. Execute action A. Observe the reward R and next observation S'.
3. Store the experience (S,A,R,S') in the experience buffer.
4. Sample a random mini-batch of M experiences (Si,Ai,Ri,S'i) from the experience buffer. To specify M, use the MiniBatchSize option.
5. Every DC time steps, update the parameters of each critic by minimizing the loss Lk across all sampled experiences. To specify DC, use the CriticUpdateFrequency option.
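In standard soft actor-critic form, this loss is a mean squared error between each critic value and a target yi (defined below):

$$L_{k} = \frac{1}{M}\sum_{i=1}^{M}\Big(y_{i} - Q_{k}\big(S_{i},A_{i};\theta_{Q_{k}}\big)\Big)^{2}$$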
If S'i is a terminal state, the value function target yi is equal to the experience reward Ri. Otherwise, the value function target is the sum of Ri, the minimum discounted future reward from the critics, and the weighted entropy H.
Here:
A'i is the bounded action derived from the unbounded output of the actor μ(S'i).
γ is the discount factor, which you specify using the DiscountFactor option.
H is the policy entropy, which is computed for the unbounded output of the actor.
α is the entropy tuning weight, which the SAC agent tunes during training.
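A standard soft actor-critic sketch of the non-terminal target described above is

$$y_{i} = R_{i} + \gamma\Big(\min_{k} Q'_{k}\big(S'_{i},A'_{i};\theta_{Q'_{k}}\big) + \alpha\,H\Big)$$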
6. Every DA time steps, update the actor parameters by minimizing the following objective function. To set DA, use the PolicyUpdateFrequency option.
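A standard sketch of this objective over the sampled mini-batch is

$$J_{\mu} = \frac{1}{M}\sum_{i=1}^{M}\Big(\alpha\,\ln\pi\big(A_{i}\mid S_{i}\big) - \min_{k} Q_{k}\big(S_{i},A_{i};\theta_{Q_{k}}\big)\Big)$$

In standard practice, Ai here is a bounded action resampled from the current policy for observation Si.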
7. Every DA time steps, also update the entropy weight by minimizing the following loss function.
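A standard sketch of this loss, using the same mini-batch, is

$$L_{\alpha} = \frac{1}{M}\sum_{i=1}^{M}\Big(-\alpha\,\ln\pi\big(A_{i}\mid S_{i}\big) - \alpha\,H'\Big)$$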
Here, H' is the target entropy, which you specify using the EntropyWeightOptions.TargetEntropy option.
8. Every DT steps, update the target critics depending on the target update method. To specify DT, use the TargetUpdateFrequency option. For more information, see Target Update Methods.
9. Repeat steps 4 through 8 NG times, where NG is the number of gradient steps, which you specify using the NumGradientStepsPerUpdate option.
SAC agents update their target critic parameters using one of the following target update methods.
Smoothing — Update the target critic parameters at every time step using smoothing factor τ. To specify the smoothing factor, use the TargetSmoothFactor option.
Periodic — Update the target critic parameters periodically without smoothing (TargetSmoothFactor = 1). To specify the update period, use the TargetUpdateFrequency parameter.
Periodic smoothing — Update the target parameters periodically with smoothing.
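With smoothing, each target critic parameter update has the standard form

$$\theta_{Q'_{k}} \leftarrow \tau\,\theta_{Q_{k}} + (1-\tau)\,\theta_{Q'_{k}}$$

where τ is the TargetSmoothFactor value; setting τ = 1 reduces this to a periodic copy of the critic parameters.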
To configure the target update method, create an rlSACAgentOptions object, and set the TargetUpdateFrequency and TargetSmoothFactor parameters as shown in the following table.
Update Method | TargetUpdateFrequency | TargetSmoothFactor |
---|---|---|
Smoothing (default) | 1 | Less than 1 |
Periodic | Greater than 1 | 1 |
Periodic smoothing | Greater than 1 | Less than 1 |
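For example, a periodic smoothing configuration can be sketched as follows; the numeric values are illustrative only.

agentOpts = rlSACAgentOptions;
agentOpts.TargetUpdateFrequency = 4;  % update the target critics every 4 steps
agentOpts.TargetSmoothFactor = 1e-3;  % smoothing factor (tau) less than 1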
[1] Haarnoja, Tuomas, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, et al. "Soft Actor-Critic Algorithms and Applications." Preprint, submitted January 29, 2019. https://arxiv.org/abs/1812.05905.
See Also: rlSACAgent | rlSACAgentOptions