How can I train a neural network on big data (30k samples) for a fitting problem? How do I set the mini-batch size?
Hi
I have an input vector of 518 numbers and an output of 20 numbers, with 30 thousand samples. I found that Bayesian regularization gives good performance, but training on this many samples is too slow.
Is there any way to solve this problem?
I guess using the Deep Learning Toolbox and setting a mini-batch could help, but I do not know how to do this.
Answers (1)
Prateek Rai
2021-7-29
To my understanding, you are training with Bayesian regularization and want to use mini-batches to speed up training. You can set the mini-batch size with the 'MiniBatchSize' name-value pair argument of the 'trainingOptions' function in the Deep Learning Toolbox. You can also set the maximum number of epochs and the data-shuffling behavior with the 'MaxEpochs' and 'Shuffle' name-value pair arguments.
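The options above can be sketched as follows. This is a minimal example, not your exact network: the hidden-layer size, solver, and placeholder data are assumptions you would replace with your own.

```matlab
% Placeholder data matching the question's dimensions:
% 30k samples, 518 input features, 20 regression targets.
X = rand(30000, 518);
Y = rand(30000, 20);

% A simple feedforward regression network (hidden size is illustrative).
layers = [
    featureInputLayer(518)
    fullyConnectedLayer(100)
    reluLayer
    fullyConnectedLayer(20)
    regressionLayer];

% Mini-batch training options: 128 samples per iteration,
% up to 30 epochs, reshuffling the data every epoch.
options = trainingOptions('adam', ...
    'MiniBatchSize', 128, ...
    'MaxEpochs', 30, ...
    'Shuffle', 'every-epoch');

net = trainNetwork(X, Y, layers, options);
```

Smaller mini-batches use less memory per iteration; larger ones make better use of vectorized hardware, so the value is worth tuning.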
Please refer to the trainingOptions MathWorks documentation page for more on the trainingOptions function. You can also refer to the Deep Learning Using Bayesian Optimization MathWorks documentation page to learn about applying Bayesian optimization to deep learning.