getLearnableParameters

Obtain learnable parameter values from agent, function approximator, or policy object

Description

Agent

params = getLearnableParameters(agent) returns the learnable parameter values from the agent object agent.
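
For example, the following sketch (which assumes the trained agent saved in DoubleIntegDDPG.mat, as in the examples below) returns the learnable parameter values of the agent as a cell array.

% Sketch only: assumes a trained agent saved in DoubleIntegDDPG.mat.
load("DoubleIntegDDPG.mat","agent")
agentParams = getLearnableParameters(agent);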

Actor or Critic

params = getLearnableParameters(fcnAppx) returns the learnable parameter values from the actor or critic function approximator object fcnAppx. This is equivalent to params = fcnAppx.Learnables.
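
For example, assuming critic is an existing critic function approximator object (such as one returned by getCritic), both of the following calls are expected to return the same parameter values.

% Sketch only: "critic" is assumed to be an existing critic object.
paramsFromFunction = getLearnableParameters(critic);
paramsFromProperty = critic.Learnables;
isequal(paramsFromFunction,paramsFromProperty)   % expected: logical 1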

Policy

params = getLearnableParameters(policy) returns the learnable parameter values from the policy object policy.
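
For example, assuming agent is an existing agent and your release provides getGreedyPolicy, you can extract a policy object from the agent and query its parameters. A policy object created directly, such as an rlMaxQPolicy object, works the same way.

% Sketch only: assumes "agent" exists and getGreedyPolicy is available.
policy = getGreedyPolicy(agent);
policyParams = getLearnableParameters(policy);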

Examples

Modify Critic Parameter Values

Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Compare DDPG Agent to LQR Controller.

load("DoubleIntegDDPG.mat","agent") 

Obtain the critic from the agent.

critic = getCritic(agent);

For approximator objects, you can access the Learnables property using dot notation.

First, display the parameters.

critic.Learnables{1}
ans = 
  1x6 single dlarray

   -5.0017   -1.5513   -0.3424   -0.1116   -0.0506   -0.0047

Modify the parameter values. For this example, simply multiply all of the parameters by 2.

critic.Learnables{1} = critic.Learnables{1}*2;

Display the new parameters.

critic.Learnables{1}
ans = 
  1x6 single dlarray

  -10.0034   -3.1026   -0.6848   -0.2232   -0.1011   -0.0094

Alternatively, you can use getLearnableParameters and setLearnableParameters.

First, obtain the learnable parameters from the critic.

params = getLearnableParameters(critic)
params=2×1 cell array
    {[-10.0034 -3.1026 -0.6848 -0.2232 -0.1011 -0.0094]}
    {[                                               0]}

Modify the parameter values. For this example, simply divide all of the parameters by 2.

modifiedParams = cellfun(@(x) x/2,params,"UniformOutput",false);

Set the parameter values of the critic to the new modified values.

critic = setLearnableParameters(critic,modifiedParams);

Set the critic in the agent to the new modified critic.

setCritic(agent,critic);

Display the new parameter values.

getLearnableParameters(getCritic(agent))
ans=2×1 cell array
    {[-5.0017 -1.5513 -0.3424 -0.1116 -0.0506 -0.0047]}
    {[                                              0]}

Modify Actor Parameter Values

Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Compare DDPG Agent to LQR Controller.

load("DoubleIntegDDPG.mat","agent") 

Obtain the actor function approximator from the agent.

actor = getActor(agent);

For approximator objects, you can access the Learnables property using dot notation.

First, display the parameters.

actor.Learnables{1}
ans = 
  1x2 single dlarray

  -15.4663   -7.2746

Modify the parameter values. For this example, simply divide all of the parameters by 2.

actor.Learnables{1} = actor.Learnables{1}/2;

Display the new parameters.

actor.Learnables{1}
ans = 
  1x2 single dlarray

   -7.7331   -3.6373

Alternatively, you can use getLearnableParameters and setLearnableParameters.

Obtain the learnable parameters from the actor.

params = getLearnableParameters(actor)
params=2×1 cell array
    {[-7.7331 -3.6373]}
    {[              0]}

Modify the parameter values. For this example, simply multiply all of the parameters by 2.

modifiedParams = cellfun(@(x) x*2,params,"UniformOutput",false);

Set the parameter values of the actor to the new modified values.

actor = setLearnableParameters(actor,modifiedParams);

Set the actor in the agent to the new modified actor.

setActor(agent,actor);

Display the new parameter values.

getLearnableParameters(getActor(agent))
ans=2×1 cell array
    {[-15.4663 -7.2746]}
    {[               0]}

Input Arguments

agent — Reinforcement learning agent, specified as one of the following objects:

fcnAppx — Function approximator object, specified as one of the following:

To create an actor or critic function object, use one of the following methods.

  • Create a function object directly (a minimal sketch follows this list).

  • Obtain the existing critic from an agent using getCritic.

  • Obtain the existing actor from an agent using getActor.
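
For example, the following sketch creates a simple critic directly from a small network and then reads its learnable parameters. The observation specification and network architecture are illustrative assumptions only.

% Sketch only: illustrative observation specification and network.
obsInfo = rlNumericSpec([2 1]);
net = dlnetwork([
    featureInputLayer(2)
    fullyConnectedLayer(1)]);
critic = rlValueFunction(net,obsInfo);
params = getLearnableParameters(critic);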

policy — Reinforcement learning policy, specified as one of the following objects:

Output Arguments

params — Learnable parameter values for the agent, function approximator, or policy object, returned as a cell array. You can modify these parameter values and set them in the original object, or in a different compatible object, using the setLearnableParameters function.

Tips

  • You can also obtain and modify the learnable parameters of function approximator objects, such as actors and critics, by accessing their Learnables property using dot notation.

Version History

Introduced in R2019a
