How to know the GPU memory needed when training a detector network like faster R-CNN?
I have a GPU with only 6 GB of memory. When training a Faster R-CNN detector, even with the input size set to 224*224*3, the MiniBatchSize can only be set to 2. If I set MiniBatchSize to 4, 8, or larger, I get errors that the device is out of memory and that the data no longer exists on the device. Now I want to buy a new GPU that lets me train on 1920*1080 images with MiniBatchSize set to 64 or 128, but I don't know how to compute the memory and other specifications the GPU would need. How can I decide which GPU to choose?
Accepted Answer
Mahesh Taparia
2020-7-14
Hi
I think a 6 GB GPU is enough for your code. Check whether the code is running on the CPU or the GPU. To run the code on the GPU, set 'ExecutionEnvironment' to 'gpu' if you are using the trainingOptions and trainNetwork functions for training; refer to the trainingOptions documentation for details. For a custom training loop, you need to convert the arrays to gpuArray; for more information, refer to the gpuArray documentation.
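For the custom training loop case, the conversion might look like the minimal sketch below. The names trainingDatastore, dlnet, and modelLoss are hypothetical placeholders, not from your code; only the gpuArray/dlarray calls illustrate the point.
% Minimal sketch of moving a mini-batch onto the GPU in a custom training loop.
% trainingDatastore, dlnet, and modelLoss are placeholders for your own
% datastore, dlnetwork, and loss function.
numEpochs = 4;
for epoch = 1:numEpochs
    reset(trainingDatastore);
    while hasdata(trainingDatastore)
        data = read(trainingDatastore);
        X = data{1};                      % image batch, H-by-W-by-C-by-B
        X = gpuArray(single(X));          % move the batch onto the GPU
        dlX = dlarray(X,'SSCB');          % label dimensions for dlfeval
        [loss,gradients] = dlfeval(@modelLoss,dlnet,dlX);
        % ... update dlnet with sgdmupdate or adamupdate as usual ...
    end
end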
2 Comments
Mahesh Taparia
2020-7-17
Hi
If you are using 1920*1080 (HD) images, try reducing their size before starting the training; larger images require more memory. Also, set 'ExecutionEnvironment' to 'gpu' in the training options in your code, i.e.
options = trainingOptions('sgdm', ...
    'MaxEpochs',4, ...
    'MiniBatchSize',2, ...
    'InitialLearnRate',1e-3, ...
    'CheckpointPath',tempdir, ...
    'ValidationData',validationData, ...
    'ExecutionEnvironment','gpu');
and check whether the problem is resolved.
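To see how much memory your current (or a new) GPU actually offers, you can query it with gpuDevice, and you can shrink the HD images by transforming the datastore before training. A small sketch, assuming imds is your image datastore and a 2x downscale is acceptable (both assumptions, adjust to your data):
% Report the current GPU's total and available memory.
g = gpuDevice;
fprintf('GPU: %s, total %.1f GB, available %.1f GB\n', ...
    g.Name, g.TotalMemory/1e9, g.AvailableMemory/1e9);

% Downscale images on the fly so larger mini-batches fit in memory.
% The 0.5 scale factor is only an illustrative choice.
resizedImds = transform(imds, @(img) imresize(img, 0.5));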
More Answers (0)