How can I display the trained network weights in a reinforcement learning agent?

Hello,
I trained a DDPG agent using the Reinforcement Learning Toolbox.
I wanted to see the trained weights in the agent, so after training finished I checked the agent variables in the workspace.
However, I couldn't find any weight values in the variables, not even in the 'agent' and 'env' variables.
I know it is possible to check network weights in the Neural Network Toolbox, but is it possible to access the weights in the Reinforcement Learning Toolbox?
What should I do?

Answer (1)

Anh Tran on 2020-2-21
Edited: Anh Tran on 2020-2-21
Hi Ru SeokHun,
In MATLAB R2019b and below, there is a 2-step process:
  1. Use the getActor and getCritic functions to obtain the actor and critic representations from the trained agent.
  2. Use the getLearnableParameterValues function to get the weights and biases of the neural network representation.
See the code below to get the parameters of the trained actor. You can compare these values with those of an untrained agent. Assume you have a DDPG agent named 'agent'.
% get the agent's actor, which predicts the next action given the current observation
actor = getActor(agent);
% get the actor's parameters (neural network weights)
actorParams = getLearnableParameterValues(actor);
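For completeness, a similar sketch for the critic, again assuming your trained agent is stored in a variable named 'agent'. The returned parameters are a cell array, with one entry per layer's weights or biases, so you can display them directly.
% get the agent's critic, which estimates the long-term reward
% for a given observation-action pair
critic = getCritic(agent);
% get the critic's parameters (neural network weights and biases)
criticParams = getLearnableParameterValues(critic);
% display the contents of each cell, e.g. for the actor
celldisp(actorParams)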
