Train PG Agent to Balance Discrete Cart-Pole System
This example shows how to train a policy gradient (PG) agent to balance a discrete action space cart-pole system modeled in MATLAB®. For more information on PG agents, see REINFORCE Policy Gradient (PG) Agent.
For an example that trains a PG agent with a baseline, see Train PG Agent with Custom Networks to Control Discrete Double Integrator.
Fix Random Seed Generator to Improve Reproducibility
The example code may involve computation of random numbers at various stages such as initialization of the agent, creation of the actor and critic, resetting the environment during simulations, initializing the environment state, generating observations (for stochastic environments), generating exploration actions, and sampling mini-batches of experiences for learning. Fixing the random number stream preserves the sequence of the random numbers every time you run the code and improves reproducibility of results. You will fix the random number stream at various locations in the example.
Fix the random number stream with the seed 0 and the Mersenne Twister random number algorithm. For more information on random number generation, see rng.
previousRngState = rng(0,"twister");
The output previousRngState is a structure that contains information about the previous state of the stream. You will restore this state at the end of the example.
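If you want to confirm what will be restored later, you can display the saved settings. This is an optional check; the exact contents depend on your previous generator state.
% Display the saved generator settings (Type, Seed, and State fields).
disp(previousRngState)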
Discrete Action Space Cart-Pole MATLAB Environment
The reinforcement learning environment for this example is a pole attached to an unactuated joint on a cart, which moves along a frictionless track. The training goal is to make the pendulum stand upright without falling over.
For this environment:
The upward balanced pendulum position is 0 radians, and the downward hanging position is pi radians.
The pendulum starts upright with an initial angle between –0.05 and 0.05 radians.
The force action signal from the agent to the environment is either –10 or 10 N.
The observations from the environment are the position and velocity of the cart, the pendulum angle, and the pendulum angle derivative.
The episode terminates if the pole is more than 12 degrees from vertical or if the cart moves more than 2.4 m from the original position.
A reward of +1 is provided for every time step that the pole remains upright. A penalty of –5 is applied when the pendulum falls.
For more information on this model, see Load Predefined Control System Environments.
Create Environment Object
Create a predefined environment interface for the pendulum.
env = rlPredefinedEnv("CartPole-Discrete")
env = 
  CartPoleDiscreteAction with properties:

                  Gravity: 9.8000
                 MassCart: 1
                 MassPole: 0.1000
                   Length: 0.5000
                 MaxForce: 10
                       Ts: 0.0200
    ThetaThresholdRadians: 0.2094
               XThreshold: 2.4000
      RewardForNotFalling: 1
        PenaltyForFalling: -5
                    State: [4x1 double]
The interface has a discrete action space where the agent can apply one of two possible force values to the cart, –10 or 10 N.
Obtain the observation and action information from the environment interface.
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
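Optionally, inspect the specification objects to confirm the size of the observation vector and the available force values. This is a quick check; the expected values follow from the environment description above.
% Expected: a 4-element observation and the two force values -10 and 10 N.
obsInfo.Dimension
actInfo.Elements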
Create PG Agent with Custom Networks
For policy gradient agents, the actor executes a stochastic policy, which for discrete action spaces is approximated by a discrete categorical actor. This actor must take the observation signal as input and return a probability for each action.
Because a simple parametrized policy is sufficient to stabilize the pole on the cart, create a PG agent without a baseline. To implement the parametrized policy within the actor, use a simple neural network with one hidden layer containing 10 neurons.
Define the network as an array of layer objects, and get the dimension of the observation space and the number of possible actions from the environment specification objects. When you create the network, its parameters are initialized with random values. Fix the random number stream so that the agent is always initialized with the same parameter values.
rng(0,"twister");
actorNet = [
featureInputLayer(prod(obsInfo.Dimension))
fullyConnectedLayer(10)
reluLayer
fullyConnectedLayer(numel(actInfo.Elements))
softmaxLayer
];
For more information on creating a deep neural network policy representation, see Create Policies and Value Functions.
Convert the network to a dlnetwork object and display the number of learnable parameters.
actorNet = dlnetwork(actorNet);
summary(actorNet)
   Initialized: true

   Number of learnables: 72

   Inputs:
      1   'input'   4 features
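The learnable count follows from the layer sizes: the first fully connected layer contributes 4×10 weights plus 10 biases, and the second contributes 10×2 weights plus 2 biases, for a total of 72. As a quick check, you can reproduce this count directly from the sizes used earlier in the example.
% 4 observations -> 10 hidden units -> 2 action probabilities
nLearnables = 4*10 + 10 + 10*2 + 2   % returns 72, matching the summary output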
Create the actor using the specified deep neural network and the environment specification objects. For more information, see rlDiscreteCategoricalActor.
actor = rlDiscreteCategoricalActor(actorNet,obsInfo,actInfo);
To return the probability distribution of the possible actions as a function of a random observation, given the current network weights, use evaluate.
prb = evaluate(actor,{rand(obsInfo.Dimension)});
prb{1}
ans = 2x1 single column vector

    0.7229
    0.2771
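Because the actor outputs a categorical distribution over the two force values, the returned probabilities should sum to 1 (up to floating-point precision). You can verify this with a quick optional check.
% The action probabilities form a valid distribution.
sum(prb{1})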
Create the agent using the actor. For more information, see rlPGAgent.
agent = rlPGAgent(actor);
Check the agent with a random observation input.
getAction(agent,{rand(obsInfo.Dimension)})
ans = 1x1 cell array
    {[-10]}
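By default the agent follows a stochastic exploration policy, so repeated calls to getAction with the same observation can return different actions. The following optional sketch illustrates this; the observation obs is random and used only for illustration.
% Sample several actions for one fixed observation; the results can differ
% between iterations because the policy is stochastic.
obs = rand(obsInfo.Dimension);
for k = 1:3
    act = getAction(agent,{obs});
    disp(act{1})
end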
Specify training options for the actor using dot notation on the agent options. Alternatively, you can use rlPGAgentOptions and rlOptimizerOptions objects (see the sketch after the following code).
For this training:
Set the actor learning rate to 5e-3. A large learning rate causes drastic updates that can lead to divergent behavior, while a low value can require many updates before reaching the optimal point.
Use a gradient threshold of 1 to clip the gradients. Clipping the gradients can improve training stability.
agent.AgentOptions.ActorOptimizerOptions = ...
    rlOptimizerOptions(LearnRate=5e-3, ...
    GradientThreshold=1);
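As noted above, you can achieve an equivalent configuration by creating the options objects before the agent. The following sketch assumes you construct the agent from these options instead of using dot notation; it is shown for reference only and is not needed if you already ran the previous command.
% Equivalent configuration using explicit options objects (for reference).
actorOpts = rlOptimizerOptions(LearnRate=5e-3,GradientThreshold=1);
agentOpts = rlPGAgentOptions(ActorOptimizerOptions=actorOpts);
% agent = rlPGAgent(actor,agentOpts);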
Train Agent
To train the agent, first specify the training options. For this example, use the following options.
Run the training for at most 1000 episodes, with each episode lasting at most 500 time steps.
Display the training progress in the Reinforcement Learning Training Monitor dialog box (set the Plots option) and disable the command-line display (set the Verbose option to false).
Evaluate the performance of the greedy policy every 20 training episodes, averaging the cumulative reward of 10 simulations.
Stop training when the evaluation score reaches 500, the maximum return for a 500-step episode with a reward of +1 per step. At this point, the agent can balance the cart-pole system in the upright position.
For more information, see rlTrainingOptions.
% training options
trainOpts = rlTrainingOptions(...
    MaxEpisodes=1000, ...
    MaxStepsPerEpisode=500, ...
    Verbose=false, ...
    Plots="training-progress",...
    StopTrainingCriteria="EvaluationStatistic",...
    StopTrainingValue=500);

% agent evaluator
evl = rlEvaluator(EvaluationFrequency=20,NumEpisodes=10);
Fix the random stream for reproducibility.
rng(0,"twister");
Train the agent using the train function. Training this agent is a computationally intensive process that takes several minutes to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.
doTraining = false;
if doTraining
    % Train the agent.
    trainingStats = train(agent,env,trainOpts,Evaluator=evl);
else
    % Load the pretrained agent for the example.
    load("MATLABCartpolePG.mat","agent");
end
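If you train the agent yourself, you can save the result so that later runs can load it instead of retraining. This is an optional sketch using the standard save function; the file name MyCartpolePGAgent.mat is only a placeholder.
% Save your own trained agent for later reuse (optional).
if doTraining
    save("MyCartpolePGAgent.mat","agent");   % placeholder file name
end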
A snapshot of the training progress is shown below. Your results can differ because of randomness in the training process.
Simulate PG Agent
Fix the random stream for reproducibility.
rng(0,"twister");
You can visualize the cart-pole system by using the plot function.
plot(env)
Use a deterministic policy for simulation.
agent.UseExplorationPolicy = false;
To validate the performance of the trained agent, simulate it within the cart-pole environment. For more information on agent simulation, see rlSimulationOptions and sim.
simOptions = rlSimulationOptions(MaxSteps=500);
experience = sim(env,agent,simOptions);
The agent can balance the cart-pole system. Display the total reward obtained during the simulation.
totalReward = sum(experience.Reward)
totalReward = 500
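Because each time step contributes a reward of +1, a total reward of 500 implies that the episode ran for the full 500 steps. You can confirm the step count with the following optional check, assuming the logged reward is stored as a timeseries in the experience structure.
% Number of simulation steps (one +1 reward per step).
nSteps = numel(experience.Reward.Data)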
Restore the random number stream using the information stored in previousRngState.
rng(previousRngState);
Related Examples
- Train AC Agent to Balance Discrete Cart-Pole System
- Train PG Agent with Custom Networks to Control Discrete Double Integrator
- Train Reinforcement Learning Agents