Adam stochastic gradient descent optimization
`fmin_adam` is an implementation of the Adam optimisation algorithm from Kingma and Ba [1]: gradient descent with adaptive per-parameter learning rates and momentum. Adam is designed for stochastic gradient descent problems, i.e. when only small batches of data are used to estimate the gradient on each iteration, or when stochastic dropout regularisation is used [2].
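For readers unfamiliar with the update rule, the sketch below walks through the core Adam step from [1] in plain MATLAB. It is illustrative only and is not the `fmin_adam` source: the toy quadratic cost, its gradient, the iteration count and the variable names are all assumptions; the hyperparameter defaults are taken from the paper.

```matlab
% Illustrative sketch of one Adam optimisation loop (not the fmin_adam source).
% The toy cost sum((x - 3).^2) and its gradient are assumptions; the
% hyperparameter defaults follow Kingma and Ba [1].
stepSize = 0.001; beta1 = 0.9; beta2 = 0.999; epsilon = 1e-8;
x = randn(5, 1);                      % initial parameter estimate (toy problem)
m = zeros(size(x));                   % first moment estimate (momentum term)
v = zeros(size(x));                   % second moment estimate (per-parameter scale)
for t = 1:1000
    g = 2 .* (x - 3);                             % gradient of the toy cost
    m = beta1 .* m + (1 - beta1) .* g;            % update biased first moment
    v = beta2 .* v + (1 - beta2) .* g.^2;         % update biased second moment
    mHat = m ./ (1 - beta1.^t);                   % bias-corrected first moment
    vHat = v ./ (1 - beta2.^t);                   % bias-corrected second moment
    x = x - stepSize .* mHat ./ (sqrt(vHat) + epsilon);  % per-parameter step
end
```

The division by `sqrt(vHat)` is what gives each parameter its own effective learning rate, while `mHat` carries the momentum; `fmin_adam` wraps this update in a complete optimiser with minibatch support and an options structure (see the function help).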
See the GitHub repository for examples:
https://github.com/DylanMuir/fmin_adam
Usage:
`[x, fval, exitflag, output] = fmin_adam(fun, x0 <, stepSize, beta1, beta2, epsilon, nEpochSize, options>)`
See the function help for a detailed reference; the GitHub repository has a couple of complete examples, and a minimal usage sketch follows below.
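The snippet below is a minimal, hypothetical call. The convention of the objective returning both the cost and its gradient as two outputs is an assumption here; check the function help for the exact interface `fmin_adam` expects from `fun`.

```matlab
% Hypothetical usage sketch; the cost/gradient calling convention is an
% assumption -- consult the function help for the exact interface.
fun = @(x) deal(sum((x - 3).^2), 2 .* (x - 3));   % quadratic cost and its gradient
x0 = zeros(4, 1);                                 % initial parameter estimate
stepSize = 0.01;                                  % Adam learning rate
[x, fval, exitflag, output] = fmin_adam(fun, x0, stepSize);
```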
References:
[1] Diederik P. Kingma, Jimmy Ba. "Adam: A Method for Stochastic Optimization", ICLR 2015. [https://arxiv.org/abs/1412.6980](https://arxiv.org/abs/1412.6980)
[2] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever and Ruslan R. Salakhutdinov. "Improving neural networks by preventing co-adaptation of feature detectors." arXiv preprint arXiv:1207.0580, 2012. [https://arxiv.org/abs/1207.0580](https://arxiv.org/abs/1207.0580)
Cite As
Dylan Muir (2025). Adam stochastic gradient descent optimization (https://github.com/DylanMuir/fmin_adam), GitHub. Retrieved .
Platform Compatibility
Windows, macOS, Linux
Versions based on the GitHub default branch cannot be downloaded.
| Version | Published | Release Notes |
|---|---|---|
| 1.0.0.0 | | Updated title. Updated description. |
