Different deep learning training behavior between MATLAB 2020a and 2021b

I have been using this code to train semantic segmentation networks:
function train_deeplab(pth,classes,classNames,sz)
pthTrain=[pth,'training\'];
pthVal=[pth,'validation\'];
% make training datastore
Trainim=[pthTrain,'im\'];
Trainlabel=[pthTrain,'label\'];
imdsTrain = imageDatastore(Trainim);
pxdsTrain = pixelLabelDatastore(Trainlabel,classNames,classes);
pximdsTrain = pixelLabelImageDatastore(imdsTrain,pxdsTrain);
tbl = countEachLabel(pxdsTrain);
% make validation datastore
Valim=[pthVal,'im\'];
Vallabel=[pthVal,'label\'];
imdsVal = imageDatastore(Valim);
pxdsVal = pixelLabelDatastore(Vallabel,classNames,classes);
pximdsVal = pixelLabelImageDatastore(imdsVal,pxdsVal);
% set training options
options = trainingOptions('adam',...
    'MaxEpochs',8,...
    'MiniBatchSize',5,...
    'Shuffle','every-epoch',...
    'ValidationData',pximdsVal,...
    'ValidationPatience',6,...
    'InitialLearnRate',0.0005,...
    'LearnRateSchedule','piecewise',...
    'LearnRateDropPeriod',1,...
    'LearnRateDropFactor',0.75,...
    'ValidationFrequency',128,...
    'ExecutionEnvironment','gpu',...
    'Plots','training-progress',...
    'OutputFcn', @(info)savetrainingplot(info,pth));
% design network
numclass = numel(classes);
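% class weights via median frequency balancing: median(freq)/freq, so
% under-represented classes get larger weights in the weighted cross-entropy loss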
imageFreq = tbl.PixelCount ./ tbl.ImagePixelCount;
classWeights = median(imageFreq) ./ imageFreq;
lgraph = deeplabv3plusLayers([sz sz 3],numclass,"resnet50");
pxLayer = pixelClassificationLayer('Name','labels','Classes',tbl.Name,'ClassWeights',classWeights);
lgraph = replaceLayer(lgraph,"classification",pxLayer);
% train
[net, info] = trainNetwork(pximdsTrain,lgraph,options);
save([pth,'net.mat'],'net','info');
end
% save a png of training progress when finished
function stop=savetrainingplot(info,pthSave)
stop=false;
if info.State=='done'
    exportapp(findall(groot, 'Type', 'Figure'),[pthSave,'training_process_21.png'])
end
end
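For context, train_deeplab expects a root folder containing training\ and validation\ subfolders (each with im\ and label\), the numeric pixel label IDs, the matching class names, and the square network input size. A hypothetical call might look like the following (the folder, IDs, names, and size below are placeholders, not values from my data):
classNames = ["background","tissue"];          % placeholder class names
classes    = [0 255];                          % placeholder pixel label IDs used in the label images
train_deeplab('D:\segdata\', classes, classNames, 512);   % 512x512x3 input to deeplabv3plusLayers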
Since switching from MATLAB 2020a to 2021b, something strange is happening with the validation loss. My training and validation accuracy are very similar, but my validation loss is orders of magnitude higher than the training loss. To illustrate the problem, below are training-progress plots for a sample network trained with the code above on identical training and validation datasets in MATLAB 2020a and in 2021b.
Trained using MATLAB 2020a (training and validation loss/accuracy are similar):
Trained using MATLAB 2021b (validation loss is much higher than training loss while accuracies remain similar):
I appreciate any help!

Answers (1)

yanqi liu 2022-2-26
Yes, sir, maybe use rng('default') or rand('seed', 0) to make the run environment the same.
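A minimal sketch of what that would look like, run just before calling the training function (gpurng seeds the GPU generator and needs Parallel Computing Toolbox; rand('seed',0) is the legacy syntax, and rng is the recommended interface):
rng('default');       % reset the CPU random number generator
gpurng(0);            % seed the GPU random number generator (Parallel Computing Toolbox)
train_deeplab(pth, classes, classNames, sz);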
1 Comment
Ashley 2022-2-28
Hi Yanqi,
I don't think that's the problem. I tried training in 2021b after presetting the random number generator as you suggested (rng('default'); rand('seed',0)), but the validation loss in 2021b still behaves strangely: my validation accuracy is still similar to the training accuracy, while the validation loss is orders of magnitude higher than the training loss.
I don't think this is a discrepancy between different initializations of the deep learning training; it could be a difference in how MATLAB 2020a and 2021b compute the loss function.
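One way to test that would be to compute the weighted cross-entropy by hand on a validation image and compare it with the validation loss that trainNetwork reports. A rough sketch, assuming the net trained in 2021b, one validation image I with its categorical ground-truth label image L read from pxdsVal, and the same classNames and classWeights used for training (mean-per-pixel weighted cross-entropy is only an assumption about how the reported loss is normalized):
[~,~,allScores] = semanticseg(I, net);        % H x W x numClasses softmax scores
onehot = zeros(size(allScores));
for k = 1:numel(classNames)
    onehot(:,:,k) = (L == classNames{k});     % one-hot encode the ground truth
end
w = reshape(classWeights, 1, 1, []);          % broadcast class weights over the 3rd dimension
pixelLoss = -sum(w .* onehot .* log(allScores + eps), 3);
manualValLoss = mean(pixelLoss(:))            % compare with the reported validation loss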
Thank you
