Trying to include learning rate and momentum in sgdmupdate function under multiple GPUs
I am working on modifying the example given in
"https://www.mathworks.com/help/deeplearning/ug/train-network-in-parallel-with-custom-training-loop.html?searchHighlight=sgdmupdate%20multiple%20gpu&s_tid=srchtitle_sgdmupdate%2520multiple%2520gpu_3".
The modification is that I am trying to pass an explicit learning rate and momentum to the sgdmupdate function. That is, I replaced line 108 in the example,
[dlnet.Learnables,workerVelocity] = sgdmupdate(dlnet.Learnables,workerGradients,workerVelocity);
with
[dlnet.Learnables,workerVelocity] = sgdmupdate(dlnet.Learnables,workerGradients,workerVelocity,learnRate,momentum);
However, that modification results in an error when using 4 GPUs (there is no error with a single GPU).
Could anyone help me with this?
Thanks very much!
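For reference, a minimal sketch of the modified update step is shown below. The names learnRate and momentum are assumptions (scalars defined before the spmd block so they are broadcast to every worker); the surrounding loop structure follows the linked example and is abbreviated here.

```matlab
% Hyperparameters: assumed to be defined before the spmd block so that
% they are sent to every worker as ordinary scalar values.
learnRate = 0.01;
momentum  = 0.9;

spmd
    % ... forward/backward pass and gradient aggregation producing
    % workerGradients, as in the linked example ...

    % Apply SGDM with explicit hyperparameters instead of the defaults.
    [dlnet.Learnables,workerVelocity] = sgdmupdate( ...
        dlnet.Learnables,workerGradients,workerVelocity,learnRate,momentum);
end
```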
3 Comments
Joss Knight
2022-3-10
Also make sure all your gradients are finite with something like assert(all(cellfun(@(x)all(isfinite(x(:))),workerGradients))). Sometimes when you mess with the learn rate you can get NaNs and infinities in the gradients.
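This check as written assumes workerGradients is a plain cell array; in the linked example the gradients come back as a Learnables-style table whose Value column holds the dlarrays. Under that assumption, the check could be adapted as follows (a sketch, not verified against the exact example code):

```matlab
% workerGradients is assumed to be a gradients table (as returned by
% dlfeval in the example), whose Value column is a cell array of dlarrays.
gradValues = workerGradients.Value;

% Assert that every gradient element is finite; extractdata converts each
% dlarray to a numeric array so isfinite can be applied elementwise.
assert(all(cellfun(@(g) all(isfinite(extractdata(g)),'all'), gradValues)), ...
    "Non-finite gradient detected: check the learning rate.");
```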
Answers (0)