rlTD3Agent
Twin-delayed deep deterministic (TD3) policy gradient reinforcement learning agent
Since R2020a
Description
The twin-delayed deep deterministic (TD3) policy gradient algorithm is an off-policy actor-critic method for environments with a continuous action space. A TD3 agent learns a deterministic policy while also using two critics to estimate the value of the optimal policy. It features a target actor and target critics, as well as an experience buffer. TD3 agents support offline training (training from saved data, without an environment).
Use rlTD3Agent to create one of the following types of agents.
Twin-delayed deep deterministic policy gradient (TD3) agent with two Q-value functions. This agent prevents overestimation of the value function by learning two Q-value functions and using the minimum of the two value estimates for policy updates.
Delayed deep deterministic policy gradient (delayed DDPG) agent with a single Q-value function. This agent is a DDPG agent with target policy smoothing and delayed policy and target updates.
For more information, see Twin-Delayed Deep Deterministic (TD3) Policy Gradient Agent. For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
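As a quick, hedged check of this structure, the sketch below creates a default TD3 agent for the predefined "DoubleIntegrator-Continuous" environment (assuming Reinforcement Learning Toolbox is installed) and extracts its critics with getCritic, which for a TD3 agent should return two Q-value function approximators.

env = rlPredefinedEnv("DoubleIntegrator-Continuous");   % predefined continuous-action environment
agent = rlTD3Agent(getObservationInfo(env),getActionInfo(env));

% getCritic on a TD3 agent returns its critics; expect two of them
% (a delayed DDPG agent would carry a single critic instead).
critics = getCritic(agent);
disp(numel(critics))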
Creation
Syntax
Description
Create Agent from Observation and Action Specifications
agent = rlTD3Agent(observationInfo,actionInfo) creates a TD3 agent for an environment with the given observation and action specifications, using default initialization options. The actor and critics in the agent use default deep neural networks built from the observation specification observationInfo and the action specification actionInfo. The ObservationInfo and ActionInfo properties of agent are set to the observationInfo and actionInfo input arguments, respectively.
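For example, the following minimal sketch (the specification sizes and action limits are illustrative, not tied to any particular environment) creates a default TD3 agent directly from numeric specifications and evaluates the initial deterministic policy on a random observation.

% Illustrative specifications: a 4-element observation and a scalar action in [-1,1].
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([1 1],LowerLimit=-1,UpperLimit=1);

% Create a TD3 agent with default actor and critic networks.
agent = rlTD3Agent(obsInfo,actInfo);

% Evaluate the untrained policy on a random observation.
action = getAction(agent,{rand(obsInfo.Dimension)});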
agent = rlTD3Agent(observationInfo,actionInfo,initOpts) creates a TD3 agent for an environment with the given observation and action specifications. The agent uses default networks configured using the options specified in the initOpts object. For more information on the initialization options, see rlAgentInitializationOptions.
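As a sketch of this syntax (the hidden-layer size of 64 is an arbitrary illustrative value), the following creates an agent whose default networks are configured through an rlAgentInitializationOptions object.

obsInfo = rlNumericSpec([4 1]);                              % illustrative observation spec
actInfo = rlNumericSpec([1 1],LowerLimit=-1,UpperLimit=1);   % illustrative action spec

% Request smaller default hidden layers than the standard default size.
initOpts = rlAgentInitializationOptions(NumHiddenUnit=64);

agent = rlTD3Agent(obsInfo,actInfo,initOpts);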
Create Agent from Actor and Critic
Specify Agent Options
agent = rlTD3Agent(___,agentOptions) creates a TD3 agent and sets the AgentOptions property to the agentOptions input argument. Use this syntax after any of the input arguments in the previous syntaxes.
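As an illustrative sketch (the option values here are arbitrary placeholders, not recommendations), the following configures a sample time and discount factor through an rlTD3AgentOptions object and passes it as the trailing argument.

obsInfo = rlNumericSpec([4 1]);                              % illustrative observation spec
actInfo = rlNumericSpec([1 1],LowerLimit=-1,UpperLimit=1);   % illustrative action spec

% Example agent options; values are placeholders.
agentOpts = rlTD3AgentOptions(SampleTime=0.1,DiscountFactor=0.99);

% Create the agent; its AgentOptions property is set to agentOpts.
agent = rlTD3Agent(obsInfo,actInfo,agentOpts);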
Input Arguments
Properties
Object Functions
train | Train reinforcement learning agents within a specified environment
sim | Simulate trained reinforcement learning agents within a specified environment
getAction | Obtain action from agent, actor, or policy object given environment observations
getActor | Extract actor from reinforcement learning agent
setActor | Set actor of reinforcement learning agent
getCritic | Extract critic from reinforcement learning agent
setCritic | Set critic of reinforcement learning agent
generatePolicyFunction | Generate MATLAB function that evaluates policy of an agent or policy object
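For instance, the short sketch below (reusing illustrative specifications like those in the creation examples) shows how several of these functions fit together: getActor and getCritic extract the function approximators, getAction evaluates the policy, and setActor puts a (possibly modified) actor back into the agent.

obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([1 1],LowerLimit=-1,UpperLimit=1);
agent = rlTD3Agent(obsInfo,actInfo);

actor = getActor(agent);                              % deterministic actor
critics = getCritic(agent);                           % critic(s) of the agent
action = getAction(agent,{rand(obsInfo.Dimension)});  % evaluate the policy

% Put the actor back into the agent (for example, after modifying it).
agent = setActor(agent,actor);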
Examples
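A minimal end-to-end sketch, assuming the predefined "DoubleIntegrator-Continuous" environment and using illustrative, untuned training settings:

% Create a predefined continuous-action environment and a default TD3 agent.
env = rlPredefinedEnv("DoubleIntegrator-Continuous");
agent = rlTD3Agent(getObservationInfo(env),getActionInfo(env));

% Illustrative (untuned) training settings.
trainOpts = rlTrainingOptions( ...
    MaxEpisodes=200, ...
    MaxStepsPerEpisode=500, ...
    StopTrainingCriteria="AverageReward", ...
    StopTrainingValue=-70);

% Train the agent, then simulate the trained policy.
trainingStats = train(agent,env,trainOpts);
experience = sim(env,agent,rlSimulationOptions(MaxSteps=500));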
Version History
Introduced in R2020a
See Also
Apps
Functions
getAction | getActor | getCritic | getModel | generatePolicyFunction | generatePolicyBlock | getActionInfo | getObservationInfo
Objects
rlTD3AgentOptions | rlAgentInitializationOptions | rlQValueFunction | rlContinuousDeterministicActor | rlContinuousGaussianActor | rlDDPGAgent | rlSACAgent | rlPPOAgent