PPO Agent - Initialization of actor and critic networks

Whenever a PPO agent is initialized in MATLAB, according to the documentation the parameters of both the actor and the critic are set randomly. However, I know that this is not the only possible choice: other initialization schemes exist (e.g. orthogonal initialization), and they can sometimes improve the eventual performance of the agent.
  • Is there a reason why the random initialization has been chosen as the default method here?
  • Is it possible to specify a different initialization method easily in the context of Reinforcement Learning Toolbox, without starting from scratch?

Accepted Answer

Venu on 2024-3-19
Random initialization encourages initial exploration by starting the policy and value functions from a non-deterministic state.
It also requires no tuning or assumptions about the model architecture, which makes it a sensible default choice.
MATLAB's Reinforcement Learning Toolbox does not expose an option to choose the initialization method for the actor and critic networks of a PPO agent (or other agents) through its high-level functions or agent options.
However, you do not have to start completely from scratch. When you create the actor and critic networks yourself using MATLAB's Deep Learning Toolbox (e.g., with layerGraph, dlnetwork, or similar functions), you can specify the initializer for each layer manually, for example through the WeightsInitializer property of layers such as fullyConnectedLayer. After defining the networks with your desired initialization, pass them to the PPO agent creation function, as in the sketch below.
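Here is a minimal sketch of that approach, assuming a discrete-action environment (the predefined "CartPole-Discrete" environment, the layer sizes, and the choice of orthogonal initialization are purely illustrative):

% Build actor and critic networks with orthogonal weight initialization
% and pass them to rlPPOAgent. Environment and layer sizes are examples.
env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

% Critic: state-value network, learnable layers initialized orthogonally
criticNet = [
    featureInputLayer(obsInfo.Dimension(1))
    fullyConnectedLayer(64, WeightsInitializer="orthogonal")
    reluLayer
    fullyConnectedLayer(1, WeightsInitializer="orthogonal")];
critic = rlValueFunction(dlnetwork(criticNet), obsInfo);

% Actor: categorical policy network with the same initialization scheme
actorNet = [
    featureInputLayer(obsInfo.Dimension(1))
    fullyConnectedLayer(64, WeightsInitializer="orthogonal")
    reluLayer
    fullyConnectedLayer(numel(actInfo.Elements), WeightsInitializer="orthogonal")];
actor = rlDiscreteCategoricalActor(dlnetwork(actorNet), obsInfo, actInfo);

% The agent now starts training from the custom initialization
agent = rlPPOAgent(actor, critic);

For a fully custom scheme, WeightsInitializer also accepts a function handle of the form @(sz) ... that returns an array of the requested size, so any initialization rule can be plugged in the same way.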
For a comparison of different initializers, see the "Compare Layer Weight Initializers" example in the Deep Learning Toolbox documentation, which compares three initializers when training an LSTM network.
Hope this helps to an extent!

More Answers (0)
