Warning: GPU is low on memory
7 views (last 30 days)
Dear All,
I got this error when training my data with unet3dLayers.
Warning: GPU is low on memory, which can slow performance due to additional data transfers with main memory. Try reducing the
'MiniBatchSize' training option. This warning will not appear again unless you run the command:
warning('on','nnet_cnn:warning:GPULowOnMemory').
Error using trainNetwork
GPU out of memory. Try reducing 'MiniBatchSize' using the trainingOptions function.
Caused by:
Error using gpuArray/cat
Out of memory on device. To view more detail about available memory on the GPU, use 'gpuDevice()'. If the problem persists,
reset the GPU by calling 'gpuDevice(1)'.
This is my code below:
clc
clear all
close all
%testDataimages
DATASetDir = fullfile('C:\Users\USER\Downloads\NEW 3D U NET 128X128');
IMAGEDir = fullfile(DATASetDir,'ImagesTr');
volReader = @(x) matRead(x);
volds = imageDatastore(IMAGEDir, ...
'FileExtensions','.mat','ReadFcn',volReader);
% labelReader = @(x) matread(x);
matFileDir = fullfile('C:\Users\USER\Downloads\NEW 3D U NET 128X128\LabelsTr');
classNames = ["background", "tumor"];
pixelLabelID = [0 1];
% pxds = (LabelDirr,classNames,pixelLabelID, ...
% 'FileExtensions','.mat','ReadFcn',labelReader);
pxds = pixelLabelDatastore(matFileDir,classNames,pixelLabelID, ...
'FileExtensions','.mat','ReadFcn',@matRead);
volume = preview(volds);
label = preview(pxds);
up1 = uipanel;
h = labelvolshow(label, volume, 'Parent', up1);
h.CameraPosition = [4 2 -3.5];
h.LabelVisibility(1) = 0;
h.VolumeThreshold = 0.5;
volumeViewer(volume, label)
patchSize = [128 128 64];
patchPerImage = 16;
miniBatchSize = 8;
patchds = randomPatchExtractionDatastore(volds,pxds,patchSize, ...
'PatchesPerImage',patchPerImage);
patchds.MiniBatchSize = miniBatchSize;
dsTrain = transform(patchds,@augment3dPatch);
volLocVal = fullfile('C:\Users\USER\Downloads\NEW 3D U NET 128X128\imagesVal');
voldsVal = imageDatastore(volLocVal, ...
'FileExtensions','.mat','ReadFcn',volReader);
lblLocVal = fullfile('C:\Users\USER\Downloads\NEW 3D U NET 128X128\labelsVal');
pxdsVal = pixelLabelDatastore(lblLocVal,classNames,pixelLabelID, ...
'FileExtensions','.mat','ReadFcn',volReader);
dsVal = randomPatchExtractionDatastore(voldsVal,pxdsVal,patchSize, ...
'PatchesPerImage',patchPerImage);
dsVal.MiniBatchSize = miniBatchSize;
inputSize = [128 128 64];
numClasses = 2;
encoderDepth = 2;
lgraph = unet3dLayers(inputSize,numClasses,'EncoderDepth',encoderDepth,'NumFirstEncoderFilters',16)
figure,plot(lgraph);
%analyzeNetwork(lgraph1)
%analyzeNetwork(lgraph2)
maxEpochs = 100;
options = trainingOptions('adam', ...
'MaxEpochs',maxEpochs, ...
'InitialLearnRate',1e-3, ...
'LearnRateSchedule','piecewise', ...
'LearnRateDropPeriod',5, ...
'LearnRateDropFactor',0.97, ...
'ValidationData',dsVal, ...
'ValidationFrequency',200, ...
'Plots','training-progress', ...
'Verbose',false, ...
'MiniBatchSize',miniBatchSize);
doTraining = true;
if doTraining
modelDateTime = datestr(now,'dd-mmm-yyyy-HH-MM-SS');
[net,info] = trainNetwork(dsTrain,lgraph,options);
save(['trained3DUNet-' modelDateTime '-Epoch-' num2str(maxEpochs) '.mat'],'net');
else
load('trained3DVNet-07-Jun-2022-13-45-30-Epoch-250.mat');
end
Can anyone help me? I think my GPU memory should be sufficient.
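The error message itself suggests checking how much memory is actually free on the card with gpuDevice. A minimal check (assuming a single GPU at device index 1) would be:
% Query the GPU referred to in the error message ('gpuDevice()' / 'gpuDevice(1)').
g = gpuDevice(1);   % select GPU 1; per the error message, gpuDevice(1) can also be used to reset it
fprintf('Total GPU memory:     %.1f GB\n', double(g.TotalMemory)/1e9);
fprintf('Available GPU memory: %.1f GB\n', double(g.AvailableMemory)/1e9);
% If AvailableMemory is far below TotalMemory before training even starts,
% something else is already holding memory on the device.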
3 Comments
Jan
2022-6-21
@mohd akmal masud: Then the conclusion is that the GPU in the laptop has more RAM than the machine that causes the error. The solution is to use the laptop for this job, or to install a graphics card with more RAM.
Did you follow the advice to use the 'MiniBatchSize' option?
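For example, that would mean lowering the value already passed in the question's trainingOptions call, e.g. (the value 2 is only an illustration):
% Illustration only: a much smaller mini-batch; keep the other options from the question.
options = trainingOptions('adam','MiniBatchSize',2);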
Sorry, this answer is trivial, but I do not see a chance to create a more useful answer.
Accepted Answer
Joss Knight
2022-6-22
A 3-D U-net is a very large model. Try reducing patchSize, patchPerImage, miniBatchSize and inputSize.
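For example, reusing the variable names from the question (the smaller values below are only illustrative placeholders to sketch the idea, not tuned settings):
% Illustrative only: smaller patches, fewer patches per image, and a smaller
% mini-batch all reduce the peak GPU memory needed during training.
patchSize     = [64 64 32];   % was [128 128 64]
patchPerImage = 8;            % was 16
miniBatchSize = 2;            % was 8
inputSize     = patchSize;    % the network input size must match the patch size

patchds = randomPatchExtractionDatastore(volds,pxds,patchSize, ...
    'PatchesPerImage',patchPerImage);
patchds.MiniBatchSize = miniBatchSize;

lgraph = unet3dLayers(inputSize,numClasses, ...
    'EncoderDepth',encoderDepth,'NumFirstEncoderFilters',16);
% ...then rebuild dsTrain/dsVal from patchds as in the question and pass
% 'MiniBatchSize',miniBatchSize to trainingOptions.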
0 Comments
More Answers (1)
Image Analyst
2022-6-21
Edited: Image Analyst
2022-6-21
You can try reducing the MiniBatchSize to 4 or 2.
I had the same problem, and a MathWorks deep learning expert helped me figure it out. I had a GPU (two, actually) in my computer and 'ExecutionEnvironment' was set to 'auto', the default, so training tried to use the GPU. The problem was that my GPU had only 16 GB, and we got the "Out of memory" error even with mini-batches of size 2. So we changed the 'ExecutionEnvironment' option to 'cpu' and it worked. Even though I had only 32 GB of RAM on my computer, I effectively had hundreds of GB of memory, because when the CPU runs out of RAM it uses disk space as virtual memory.
% Train network - 7000 epochs and 10 augmentations per mask.
options = trainingOptions('adam', ...
'InitialLearnRate', 1e-3, ...
'MaxEpochs', 7000, ...
'VerboseFrequency', 10, ...
'MiniBatchSize', 16, ...
'Shuffle', 'every-epoch', ...
'Plots', 'training-progress', ...
'ExecutionEnvironment','cpu');
% 'CheckpointPath', checkpointPath );
5 Comments
Image Analyst
2022-6-25
OK, well, at least it runs. I had to do the same thing yesterday for another deep learning problem I was training. It seems sad and ironic that you have what you think is a fairly powerful GPU for deep learning, and then you can't use it because it doesn't have enough memory. When the MathWorks engineer helped me with my app, we discovered it was using around 128 GB of RAM and virtual memory. I had only 16 GB of RAM on my GPU and 32 GB of RAM on my laptop. I don't even know if they make 128 GB graphics cards for laptops! So it seems I'm stuck with the CPU on my laptop, or I have to move the program to a more powerful desktop with a massive GPU card. I have one with an NVIDIA Titan card, but I'll have to ask someone how much RAM is on that card. Maybe even that can't handle it, and I'd have to try some cloud/grid computing solution.