Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from the Compare DDPG Agent to LQR Controller example.
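For instance, assuming the agent from that example is stored in a MAT-file (the file name DoubleIntegDDPG.mat is an assumption based on that example):

load("DoubleIntegDDPG.mat","agent")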
Obtain the critic from the agent.
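You can use the getCritic function:

critic = getCritic(agent);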
For approximator objects, you can access the Learnables property using dot notation.
First, display the parameters.
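For example, assuming the critic weights are stored in the first element of the Learnables cell array:

critic.Learnables{1}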
ans =
  1x6 single dlarray

   -5.0017   -1.5513   -0.3424   -0.1116   -0.0506   -0.0047
Modify the parameter values. For this example, simply multiply all of the parameters by 2.
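One possible sketch, again assuming the weights are in the first Learnables element:

critic.Learnables{1} = critic.Learnables{1}*2;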
Display the new parameters.
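For example:

critic.Learnables{1}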
ans =
  1x6 single dlarray

  -10.0034   -3.1026   -0.6848   -0.2232   -0.1011   -0.0094
Alternatively, you can use getLearnableParameters and setLearnableParameters.
First, obtain the learnable parameters from the critic.
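For example:

params = getLearnableParameters(critic)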
params=2×1 cell array
    {[-10.0034 -3.1026 -0.6848 -0.2232 -0.1011 -0.0094]}
    {[                                               0]}
Modify the parameter values. For this example, simply divide all of the parameters by 2.
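A minimal sketch that halves every parameter array in the cell array:

modifiedParams = cellfun(@(x) x/2,params,"UniformOutput",false);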
Set the parameter values of the critic to the new modified values.
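For example:

critic = setLearnableParameters(critic,modifiedParams);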
Set the critic in the agent to the new modified critic.
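For example:

agent = setCritic(agent,critic);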
Display the new parameter values.
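For example, extract the critic from the agent again and query its parameters:

getLearnableParameters(getCritic(agent))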
ans=2×1 cell array
    {[-5.0017 -1.5513 -0.3424 -0.1116 -0.0506 -0.0047]}
    {[                                              0]}