Normalizing input data for the Deep Learning network trainer takes a very long time

Hey guys,
I'm trying to train a network on R2022a using the Deep Learning network trainer, with a custom datastore set as the training input. The datastore has ~50,000 observations (~0.6 MB each) with some simple augmentation methods integrated.
The problem is that training gets stuck in the "Normalizing input data" stage for a very long time (30 minutes or so) every time I start a training session. Why is that, and how can I improve it?

4 Comments

What's the input size and target? What exactly is the problem?
@KSSV Sure, here are some details. The input size is 224-by-224-by-2 double and the response is 1-by-1-by-2 double (a normalized plane coordinate). The network has ~45 layers with ~2M parameters. Data augmentation and shuffling are implemented within my custom minibatchable datastore.
I've checked my datastore against the documentation and it seems to work fine, but it still takes a long time to prepare the data before training starts.


Answers (2)

Trainer>Trainer.doComputeStatistics
I ran the code profiler: the function above accounts for most of the training-initialization time. An entire epoch of data is pre-loaded from disk every time before I start a training session.
Does anyone know how to make this step faster, or avoid it altogether?

1 Comment

You'd think a lot of this could be pre-calculated and saved if you're not changing data sets constantly.
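A minimal sketch of that idea, assuming the statistics are computed from the datastore once and saved for later sessions (the variable names, the `inputStats.mat` file, and the `batch{1}` indexing are illustrative and depend on what your custom datastore's `read` actually returns):

```matlab
% Sketch: precompute normalization statistics once, save them, and build
% an input layer that already carries them. Assumes ds is the custom
% minibatchable datastore with 224x224x2 observations.
reset(ds);
runningSum = zeros(224, 224, 2);
runningSqSum = zeros(224, 224, 2);
n = 0;
while hasdata(ds)
    batch = read(ds);
    X = batch{1};   % adjust to match your datastore's read output format
    runningSum = runningSum + sum(X, 4);
    runningSqSum = runningSqSum + sum(X.^2, 4);
    n = n + size(X, 4);
end
mu = runningSum / n;
sigma = sqrt(runningSqSum / n - mu.^2);
save('inputStats.mat', 'mu', 'sigma');   % reuse in later training sessions

% An input layer with the statistics already set, so trainNetwork can
% skip its own "Normalizing input data" pass:
inputLayer = imageInputLayer([224 224 2], 'Name', 'input', ...
    'Normalization', 'zscore', 'Mean', mu, 'StandardDeviation', sigma);
opts = trainingOptions('sgdm', 'ResetInputNormalization', false);
```

With `ResetInputNormalization` set to false, trainNetwork uses the statistics stored on the input layer instead of streaming the whole datastore before training.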


You can set the ResetInputNormalization training option to false to prevent the input statistics from being recomputed every time. You will need to provide an input layer that already has the appropriate values set: if you have an output network from an earlier training attempt, you can extract its input layer and put it in place of the input layer in the layer graph you are training.
Here is how to do it for the code example in the trainNetwork help:
[XTrain, YTrain] = digitTrain4DArrayData;
layers = [
    imageInputLayer([28 28 1], 'Name', 'input')
    convolution2dLayer(5, 20, 'Name', 'conv_1')
    reluLayer('Name', 'relu_1')
    convolution2dLayer(3, 20, 'Padding', 1, 'Name', 'conv_2')
    reluLayer('Name', 'relu_2')
    convolution2dLayer(3, 20, 'Padding', 1, 'Name', 'conv_3')
    reluLayer('Name', 'relu_3')
    additionLayer(2, 'Name', 'add')
    fullyConnectedLayer(10, 'Name', 'fc')
    softmaxLayer('Name', 'softmax')
    classificationLayer('Name', 'classoutput')];
lgraph = layerGraph(layers);
lgraph = connectLayers(lgraph, 'relu_1', 'add/in2');
% Perform a single epoch of training which will initialize the input layer
initOptions = trainingOptions('sgdm', 'Plots', 'training-progress', 'MaxEpochs', 1);
[net,info] = trainNetwork(XTrain, YTrain, lgraph, initOptions);
% Transfer the initialized input layer to the untrained layergraph
input = net.Layers(1);
lgraph = replaceLayer(lgraph, input.Name, input);
% Perform training without reinitializing the input layer
options = trainingOptions('sgdm', 'Plots', 'training-progress', 'ResetInputNormalization', false);
[net,info] = trainNetwork(XTrain, YTrain, lgraph, options);

Release: R2022a
Asked: 2022-7-25
Answered: 2022-12-7
