Mini-batch size changing value during gradient descent

6 views (last 30 days)
Hello everyone,
I am currently working on multimodal deep learning, with a neural network classifier receiving two time-dependent inputs: videos and a set of given features. Videos are 4-D matrices of size width x height x depth x frames, and features are 2-D matrices of size number of features x frames.
I've been trying to classify the inputs based on the examples given below, as well as on some of my previous work.
During training, I have come across a puzzling situation. The mini-batch size, which I had initially set to 16, was decreased to 9. This produced an error, as the layers inside the dlfeval() call were expecting batches of 16.
I haven't found anything related to this problem on here, so I was wondering if any of you would have a piece of advice or a solution for me.
Thank you for your help!

Answers (1)

Shubham 2023-9-27
I understand that while training the neural network, you found that the mini-batch size, which was initially set to 16, was later decreased to 9.
Please check whether the total number of samples in your dataset is divisible by 16. If it is not, the final mini-batch of each epoch contains only the remaining samples, so its size is automatically reduced to accommodate them. The mini-batch size can also be affected by the available memory. Also look for any inconsistencies in your data preprocessing or network architecture.
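As a quick check, if the training set happened to contain 153 observations (a hypothetical count chosen so the arithmetic matches the sizes you reported), the last mini-batch of each epoch would hold only the remainder:

```matlab
numObservations = 153;   % hypothetical count consistent with the reported behavior
miniBatchSize   = 16;

numFullBatches = floor(numObservations / miniBatchSize);  % 9 full batches of 16
lastBatchSize  = mod(numObservations, miniBatchSize);     % 9 leftover samples -> the size you observed
```

Printing mod(numObservations, miniBatchSize) for your actual dataset will tell you immediately whether a trailing partial batch is the cause.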
You may also try using a mini-batch datastore or a minibatchqueue for reading data in batches.
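For instance, a minibatchqueue (available since R2020b, the release shown on this page) can be told to discard the trailing partial mini-batch, so that every batch passed to dlfeval has exactly MiniBatchSize observations. This is only a sketch: ds is a placeholder for your own combined video/feature datastore, and the MiniBatchFormat string must be adapted to your data layout.

```matlab
% ds is a placeholder for your combined datastore of videos and features.
mbq = minibatchqueue(ds, ...
    'MiniBatchSize', 16, ...
    'PartialMiniBatch', 'discard', ...  % drop the final undersized batch instead of returning it
    'MiniBatchFormat', 'SSCB');         % adjust to your data's dimension layout
```

With 'PartialMiniBatch' set to 'discard', the leftover samples are skipped each epoch; set it to 'return' (the default) only if your model code can handle a smaller final batch.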
Hope this helps!

Release

R2020b
