Mixed input (Image/Feature Data) question (Deep Learning Toolbox)
Hello,
I've been following this example:
However, I have not managed to tweak the code for my purposes.
Input: Image data (90 x 90 x 1 x numOfSamples), feature_1 (numOfSamples x 1), feature_2 (numOfSamples x 1)
Output: feature_1, feature_2
The input is the data from the previous time step, and I want to train the model against the current time step so that I get a smooth transition from one time step to the next.
Data formatting (this is my first time using "arrayDatastore", so I might be completely wrong here):
trainingData = {imgs_(:,:,:,2:numTrain), feat_1(1:numTrain-1), feat_2(1:numTrain-1)};
trainingTargets = {[feat_1(2:numTrain), feat_2(2:numTrain)]};
x1 = arrayDatastore(trainingData{1}, 'IterationDimension', 4);
x2 = arrayDatastore(trainingData{2}, 'IterationDimension', 2);
x3 = arrayDatastore(trainingData{3}, 'IterationDimension', 2);
y = arrayDatastore(trainingTargets{1}, 'IterationDimension', 2);
dsTrain = combine(x1, x2, x3, y);
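As a quick sanity check (a hedged sketch, assuming the default arrayDatastore read settings above), one read of the combined datastore shows what trainNetwork receives per observation; the last cell is the response it compares against the final layer's output size:
firstObs = read(dsTrain); % 1-by-4 cell: {image, feat_1, feat_2, target}
cellfun(@size, firstObs, 'UniformOutput', false) % target element should be one observation's response, not the whole series
reset(dsTrain); % rewind before training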
The net (early stage, just trying to get the model to train):
layers = [
imageInputLayer([90 90 1], Normalization="none")
convolution2dLayer(3, 32, 'Padding', 'same')
batchNormalizationLayer
reluLayer
fullyConnectedLayer(50)
flattenLayer
concatenationLayer(1, 2, Name="cat")
fullyConnectedLayer(2)
regressionLayer
];
lgraph = layerGraph(layers);
featInput = featureInputLayer(2,Name="features");
lgraph = addLayers(lgraph,featInput);
lgraph = connectLayers(lgraph,"features","cat/in2");
options = trainingOptions("sgdm", ...
MaxEpochs=15, ...
InitialLearnRate=0.01, ...
Plots="training-progress", ...
Verbose=0);
net = trainNetwork(dsTrain, lgraph, options);
The error that I get:
Invalid training data. The output size (2) of the last layer does not match the response size (7040).
The issue could be the data or the model. I can't tell for sure.
Any pointers would be very helpful.
1 Comment
Vinayak
2024-2-12
Hi Hendric,
It would not be possible to help without the data you are using. Please use the paperclip icon to attach some sample data that reproduces the error.
Answers (1)
Venu
2024-2-15
Edited: Venu
2024-2-15
Hi @Hendric
Based on the problem statement you provided, it appears you are trying to train a neural network using a combination of image and feature data with a regression output. I've reviewed your code and made some corrections.
1. Data Preparation: Organized the image and feature data into inputs (imgsInput, feat1Input, feat2Input) and targets (feat1Target, feat2Target), keeping each feature as a separate input.
2. Datastores: Transposed feature data for correct "arrayDatastore" creation.
3. Network Architecture: Added layer names and changed the concatenation layer to accept three inputs.
4. Feature Input Layers: Added separate "featureInputLayer" for each feature.
5. Layer Connections: Connected the new feature input layers to the concatenation layer.
% Prepare input data
imgsInput = imgs_(:,:,:,1:numTrain-1); % Images from the first time step to one before the last training sample
feat1Input = feat_1(1:numTrain-1); % Feature 1 from the first time step to one before the last training sample
feat2Input = feat_2(1:numTrain-1); % Feature 2 from the first time step to one before the last training sample
% Prepare target data
feat1Target = feat_1(2:numTrain); % Feature 1 from the second time step to the last training sample (next time step)
feat2Target = feat_2(2:numTrain); % Feature 2 from the second time step to the last training sample (next time step)
targets = [feat1Target, feat2Target];
% Create datastores
x1 = arrayDatastore(imgsInput, 'IterationDimension', 4);
x2 = arrayDatastore(feat1Input', 'IterationDimension', 2);
x3 = arrayDatastore(feat2Input', 'IterationDimension', 2);
y = arrayDatastore(targets', 'IterationDimension', 2);
% Combine datastores
dsTrain = combine(x1, x2, x3, y);
% Define the network architecture (as provided in your code)
layers = [
imageInputLayer([90 90 1], 'Name', 'imageInput', 'Normalization', 'none')
convolution2dLayer(3, 32, 'Padding', 'same', 'Name', 'conv1')
batchNormalizationLayer('Name', 'bn1')
reluLayer('Name', 'relu1')
fullyConnectedLayer(50, 'Name', 'fc1')
flattenLayer('Name', 'flatten')
concatenationLayer(1, 3, 'Name', 'cat') % 3 inputs for concatenation
fullyConnectedLayer(2, 'Name', 'fc2')
regressionLayer('Name', 'output')
];
lgraph = layerGraph(layers);
featInput = featureInputLayer(1, 'Name', 'features1'); % One feature at a time
lgraph = addLayers(lgraph, featInput);
lgraph = connectLayers(lgraph, 'features1', 'cat/in2');
featInput2 = featureInputLayer(1, 'Name', 'features2'); % One feature at a time
lgraph = addLayers(lgraph, featInput2);
lgraph = connectLayers(lgraph, 'features2', 'cat/in3');
% Define the training options
options = trainingOptions("sgdm", ...
MaxEpochs=15, ...
InitialLearnRate=0.01, ...
Plots="training-progress", ...
Verbose=0);
% Train the network
net = trainNetwork(dsTrain, lgraph, options);
Run the corrected code to train your network using the "trainNetwork" function with the combined "dsTrain" datastore and the updated layer graph "lgraph".
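Once training completes, a minimal sketch of stepping the model forward in time might look like the following (the previous-step variable names are hypothetical, and "predict" expects the inputs in the order given by net.InputNames):
% Predict the features at the next time step from the previous step's data
imgPrev = imgs_(:,:,:,numTrain); % previous-step image, 90-by-90-by-1
f1Prev = feat_1(numTrain); % previous-step feature 1
f2Prev = feat_2(numTrain); % previous-step feature 2
predNext = predict(net, imgPrev, f1Prev, f2Prev); % 1-by-2 row: predicted [feat_1, feat_2]
Feeding each prediction back in as the next step's feature input is what gives the smooth step-to-step transition described in the question.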
Hope this helps!