Can we use a learn rate drop factor with the Adam optimizer in DDPG or not?
Can we use a learn rate drop factor with the Adam optimizer in DDPG or not? I would like to decay the learning rate during training steps, if that is possible. The options only provide OptimizerParameters, which does not contain a learn rate drop factor.
Answers (1)
Yash
2024-12-23
The learning rate for training the actor or critic function approximator can be specified as a positive scalar by setting the LearnRate property within rlOptimizerOptions. Additionally, OptimizerParameters allows for the configuration of sub-parameters such as Momentum, Epsilon, GradientDecayFactor, and SquaredGradientDecayFactor. Specifically for the Adam solver, adjustments can be made to the Epsilon, GradientDecayFactor, and SquaredGradientDecayFactor parameters.
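As a minimal sketch (assuming the usual DDPG setup, with the actor and critic networks and the environment created elsewhere), these properties could be set like this before creating the agent:

% Adam is the default algorithm for rlOptimizerOptions
criticOpts = rlOptimizerOptions(LearnRate=1e-3, GradientThreshold=1);
criticOpts.OptimizerParameters.Epsilon = 1e-8;
criticOpts.OptimizerParameters.GradientDecayFactor = 0.9;
criticOpts.OptimizerParameters.SquaredGradientDecayFactor = 0.999;

actorOpts = rlOptimizerOptions(LearnRate=1e-4, GradientThreshold=1);

% Pass the optimizer options to the DDPG agent options
agentOpts = rlDDPGAgentOptions(ActorOptimizerOptions=actorOpts, ...
    CriticOptimizerOptions=criticOpts);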
Adam (Adaptive Moment Estimation) is an adaptive optimization algorithm: it scales the step taken for each parameter using running estimates of the first and second moments of the gradients. This behavior is controlled by the initial learning rate (LearnRate), the exponential decay rate for the first-moment estimates (GradientDecayFactor), the exponential decay rate for the second-moment estimates (SquaredGradientDecayFactor), and Epsilon. By fine-tuning these hyperparameters, you can indirectly influence the effective learning rate, even though they do not implement an explicit drop schedule.
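For reference, a simplified form of the Adam update (omitting the bias-correction terms), written with the rlOptimizerOptions parameter names, is:

% g is the current gradient, theta the parameters being updated
m = GradientDecayFactor*m + (1 - GradientDecayFactor)*g;                   % first-moment estimate
v = SquaredGradientDecayFactor*v + (1 - SquaredGradientDecayFactor)*g.^2;  % second-moment estimate
theta = theta - LearnRate * m ./ (sqrt(v) + Epsilon);                      % parameter update

LearnRate scales every update, while the decay factors and Epsilon only shape how the step adapts to the gradient history; none of these parameters implement a drop schedule on their own.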
For more detailed information on learning rate schedules (LearnRateSchedule, LearnRateDropFactor, and LearnRateDropPeriod) for stochastic solvers such as sgdm, adam, and rmsprop, refer here: https://www.mathworks.com/help/deeplearning/ref/trainingoptions.html#bu59f0q-LearnRateSchedule. Note that these options belong to trainingOptions in Deep Learning Toolbox, not to rlOptimizerOptions.
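Since rlOptimizerOptions has no LearnRateDropFactor, one possible workaround is to approximate a piecewise drop schedule manually: train the DDPG agent in segments and reduce LearnRate between segments. The following is only a sketch; it assumes agent and env already exist, that dropFactor and episodesPerSegment are values you choose (analogous to LearnRateDropFactor and LearnRateDropPeriod), and that your release allows modifying the agent's optimizer options through agent.AgentOptions after the agent has been created.

dropFactor = 0.5;          % assumed value, analogous to LearnRateDropFactor
episodesPerSegment = 200;  % assumed value, analogous to LearnRateDropPeriod
trainOpts = rlTrainingOptions(MaxEpisodes=episodesPerSegment, ...
    StopTrainingCriteria="EpisodeCount", StopTrainingValue=episodesPerSegment);

for segment = 1:5
    % Each call continues training the same agent for one segment
    train(agent, env, trainOpts);
    % Drop the actor and critic learning rates before the next segment
    agent.AgentOptions.ActorOptimizerOptions.LearnRate = ...
        agent.AgentOptions.ActorOptimizerOptions.LearnRate * dropFactor;
    agent.AgentOptions.CriticOptimizerOptions.LearnRate = ...
        agent.AgentOptions.CriticOptimizerOptions.LearnRate * dropFactor;
end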