Increase Image Resolution Using Deep Learning (VDSR) issue with custom training loop

5 views (last 30 days)
I am trying to customize the "Increase Image Resolution Using Deep Learning" (VDSR) example. I prepared the setup according to the recipe and the training works nicely.
However, I am trying to combine it with a custom training loop, as in the "Train Network with Multiple Outputs" example (just using the SGDM optimizer).
The problem is that it fails when evaluating the network with data from the datastore/minibatchqueue:
K>> [Y] = predict(net, X);
Error using DAGNetwork/predict
Invalid 2-D image data. Specify image data as a 3-D numeric array
containing a single image, a 4-D numeric array containing multiple
images, a datastore, or a table containing image file paths or images in
the first column.
Something is strange about the network input data here.
Following my understanding of the deep learning infrastructure, I also tested it with synthetic data, with a similar result:
>> XX = dlarray(rand([41 41 1 1],'single'),'SSCB');
>> [Y] = predict(net,XX);
Error using DAGNetwork/predict
Invalid 2-D image data. Specify image data as a 3-D numeric array
containing a single image, a 4-D numeric array containing multiple
images, a datastore, or a table containing image file paths or images in
the first column.
Error in SeriesNetwork/predict (line 320)
The built-in trainNetwork function works with the data, but how it handles it is hard to see. I tried to debug it, but that is difficult with so much object-oriented code :(. I believe it has something to do with randomPatchExtractionDatastore.
Any help would be appreciated.
Przemek
The loop looks like this (the rest is identical to the example):
%% preparations
numEpochs = 10;
miniBatchSize = 128;
initialLearnRate = 0.01;
decay = 0.01;
momentum = 0.9;
velocity = [];
epoch = 0;
iteration = 0;
% Format mini-batches as 'SSCB' so next(mbq) returns formatted dlarray data
% for both the input and response patches.
mbq = minibatchqueue(dsTrain, ...
    MiniBatchSize=miniBatchSize, ...
    MiniBatchFormat=["SSCB","SSCB"]);
numObservationsTrain = dsTrain.NumObservations;
numIterationsPerEpoch = ceil(numObservationsTrain / miniBatchSize);
numIterations = numEpochs * numIterationsPerEpoch;
monitor = trainingProgressMonitor(Metrics="Loss",Info=["Epoch","LearnRate"],XLabel="Iteration");
%% main loop
while epoch < numEpochs && ~monitor.Stop
    epoch = epoch + 1;
    shuffle(mbq);
    % Loop over mini-batches.
    while hasdata(mbq) && ~monitor.Stop
        iteration = iteration + 1;
        % Prepare mini-batch.
        [X,T] = next(mbq);
        % Evaluate model loss and gradients.
        [loss,gradients] = dlfeval(@modelLoss,net,X,T); % <- fails HERE
        % Determine learning rate for time-based decay learning rate schedule.
        learnRate = initialLearnRate/(1 + decay*iteration);
        % Update learnable parameters.
        [net,velocity] = sgdmupdate(net,gradients,velocity,learnRate,momentum);
        % Update the training progress monitor.
        recordMetrics(monitor,iteration,Loss=loss);
        updateInfo(monitor,Epoch=epoch,LearnRate=learnRate);
        monitor.Progress = 100 * iteration/numIterations;
    end
end
function [loss,gradients] = modelLoss(net,X,T)
    % Forward data through network.
    Y = predict(net,X); % << FAILS HERE !!!
    % Calculate MSE loss.
    loss = mse(Y,T);
    % Calculate gradients of loss with respect to learnable parameters.
    gradients = dlgradient(loss,net.Learnables);
end
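For comparison, the "Train Network with Multiple Outputs" pattern this loop follows uses a dlnetwork, whose predict method does accept formatted dlarray inputs, whereas DAGNetwork/predict does not. A minimal sketch of that conversion, assuming the VDSR layers are available as a layerGraph named lgraph and that the final regression layer is named 'FinalRegressionLayer' (both names are illustrative, not taken from the question):

```matlab
% Sketch: convert a layerGraph to a dlnetwork for custom training loops.
% dlnetwork objects have no output layers, so the regression layer must
% be removed and the loss computed manually (as modelLoss above does).
lgraph = removeLayers(lgraph,'FinalRegressionLayer'); % illustrative name
net = dlnetwork(lgraph);

% dlnetwork/predict accepts formatted dlarray inputs:
X = dlarray(rand(41,41,1,1,'single'),'SSCB');
Y = predict(net,X);
```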
1 comment
Nayan 2023-4-18
As I understand it, you are not able to train a deep neural network due to errors in the line "[loss,gradients] = dlfeval(@modelLoss,net,X,T)". The error occurs because Y = predict(net,X) requires the image data as either a 3-D numeric array containing a single image, a 4-D numeric array containing multiple images, a datastore, or a table containing image file paths or images in the first column.
I would need you to share your dataset and model to help you debug this issue.
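To illustrate the accepted forms, DAGNetwork/predict expects plain numeric arrays rather than dlarray objects. A sketch using the 41-by-41-by-1 patch size from the question (net is assumed to be the trained DAGNetwork):

```matlab
% A single patch: 3-D numeric array (height x width x channels).
X1 = rand(41,41,1,'single');
Y1 = predict(net,X1);

% A mini-batch of patches: 4-D numeric array (H x W x C x N).
X8 = rand(41,41,1,8,'single');
Y8 = predict(net,X8);

% A formatted dlarray such as dlarray(X1,'SSCB') is NOT accepted by
% DAGNetwork/predict; dlarray inputs require a dlnetwork instead.
```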


Answers (0)

Release: R2022b
