Tuning ExperienceHorizon hyperparameter for PPO agent (Reinforcement Learning)
17 views (last 30 days)
Hello everyone,
I'm trying to train a PPO agent, and I would like to change the value of the ExperienceHorizon hyperparameter (Options for PPO agent - MATLAB - MathWorks Switzerland).
When I set any value other than the default, the agent waits for the end of the episode to update its policy. For example, ExperienceHorizon = 1024 doesn't work for me, despite the episode length being more than 1024 steps. I'm also not using parallel training.
I also get the same issue if I change the MiniBatchSize from its default value.
Is there anything I've missed about this parameter?
More info on the PPO algorithm: Proximal Policy Optimization (PPO) Agents - MATLAB & Simulink - MathWorks Switzerland
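For context, here is roughly how I'm configuring the agent (a minimal sketch; the actor/critic construction is omitted, and `actor`/`critic` are placeholders for my networks):

```matlab
% Create PPO agent options and override the defaults in question
opt = rlPPOAgentOptions;
opt.ExperienceHorizon = 1024;  % expected: policy update every 1024 steps
opt.MiniBatchSize = 128;       % changing this also triggers the issue

% Build the agent from existing actor/critic representations (placeholders)
agent = rlPPOAgent(actor, critic, opt);
```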
If anyone could help, that would be very nice!
Thanks a lot in advance,
Nicolas
0 comments
Answers (0)