Error using minibatchqueue with featureInputLayer
Hello everyone,
I'm working on enhancing my neural network by incorporating additional features, similar to the approach described in the MathWorks guide on Training Network on Image and Feature Data.
The existing network, which doesn't yet include these extra features, works well. It's a straightforward sequence classification model that uses a wordEmbeddingLayer and BiLSTM layers to predict the class of an entire sequence.
However, I'm encountering some challenges when adding an additional feature layer. I would appreciate any advice or insights on how to effectively integrate this layer into my model. If anyone has experience or suggestions to share, it would be of great help!
This is the code without the additional features, and it works well (or at least well enough):
inputSize = 1;
embeddingDimension = 30;
numHiddenUnits = 128;
numWords = enc.NumWords;
numClasses = numel(categories(YTrain));
l2reg = 0.001;
layers0 = [ ...
    sequenceInputLayer(inputSize)
    wordEmbeddingLayer(embeddingDimension,numWords)
    bilstmLayer(numHiddenUnits,'OutputMode','sequence')
    batchNormalizationLayer
    dropoutLayer(0.2)
    bilstmLayer(numHiddenUnits,'OutputMode','last')
    batchNormalizationLayer
    dropoutLayer(0.2)
    fullyConnectedLayer(64)
    reluLayer
    dropoutLayer(0.2)
    fullyConnectedLayer(32)
    reluLayer
    fullyConnectedLayer(numClasses, 'WeightLearnRateFactor', 1, 'BiasLearnRateFactor', 1, 'WeightL2Factor', l2reg)
    softmaxLayer
    classificationLayer]
options0 = trainingOptions('adam', ...
    "MaxEpochs", 200, ...
    "MiniBatchSize", 256, ...
    "ValidationFrequency", 111, ...
    "ExecutionEnvironment", "gpu", ...
    "GradientThreshold", 1, ...
    "SequenceLength", "longest", ...
    "Shuffle", "every-epoch", ...
    'ValidationPatience', 30, ...
    'ValidationData', {SeqVal, YVal}, ...
    'Plots', 'training-progress', ...
    'Verbose', false);
net0 = trainNetwork(SeqTrain,YTrain,layers0,options0);
However, when I try adding an additional feature layer:
layers1 = [ ...
    sequenceInputLayer(inputSize)
    wordEmbeddingLayer(embeddingDimension,numWords)
    bilstmLayer(numHiddenUnits,'OutputMode','sequence')
    batchNormalizationLayer
    dropoutLayer(0.2)
    bilstmLayer(numHiddenUnits,'OutputMode','last')
    batchNormalizationLayer
    dropoutLayer(0.2)
    concatenationLayer(1,2,Name="cat")
    fullyConnectedLayer(64)
    reluLayer
    dropoutLayer(0.2)
    fullyConnectedLayer(32)
    reluLayer
    fullyConnectedLayer(numClasses, 'WeightLearnRateFactor', 1, 'BiasLearnRateFactor', 1, 'WeightL2Factor', l2reg)
    softmaxLayer
    classificationLayer]
lgraph = layerGraph(layers1); % layers1, not layers0: the graph needs the "cat" layer
numFeatures=400;
featInput = featureInputLayer(numFeatures,Name="features");
lgraph = addLayers(lgraph,featInput);
lgraph = connectLayers(lgraph,"features","cat/in2");
% Prepare the data for training
X2pTrainm = cell2mat(X2pTrain);
X2pValm = cell2mat(X2pVal);
dsSeqTrain = arrayDatastore(SeqTrain, IterationDimension=2);
dsX2pTrain = arrayDatastore(X2pTrainm);
dsYTrain = arrayDatastore(YTrain);
dsTrain = combine(dsSeqTrain, dsX2pTrain, dsYTrain);
dsSeqVal = arrayDatastore(SeqVal, IterationDimension=2);
dsX2pVal = arrayDatastore(X2pValm);
dsYVal = arrayDatastore(YVal);
dsVal = combine(dsSeqVal, dsX2pVal, dsYVal);
analyzeNetwork(lgraph)
options1 = trainingOptions('adam', ...
    "MaxEpochs", 200, ...
    "MiniBatchSize", 256, ...
    "ValidationFrequency", 111, ...
    "ExecutionEnvironment", "gpu", ...
    "GradientThreshold", 1, ...
    "SequenceLength", "longest", ...
    "Shuffle", "every-epoch", ...
    'ValidationPatience', 30, ...
    'ValidationData', dsVal, ...
    'Plots', 'training-progress', ...
    'Verbose', false);
% Train the network
net1 = trainNetwork(dsTrain, lgraph, options1);
I got this error:
Error using trainNetwork
Unable to apply function specified by 'MiniBatchFcn' value.
Caused by:
Error using minibatchqueue
Unable to apply function specified by 'MiniBatchFcn' value.
Error using padsequences
Input sequences must be numeric or categorical arrays.
When I analyze the network using analyzeNetwork(lgraph), everything appears fine. My sequence data vary in length, but that wasn't an issue for the network without additional features (net0); only the network with the additional features (net1) fails. I've tried padding the data before training to equalize the sequence lengths, but that hasn't helped. My additional features are stored in an N×400 matrix, and everything seems fine with it. Does anyone have any suggestions or ideas?
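One way to pin down why padsequences complains is to inspect a single read of the combined datastore. A diagnostic sketch, assuming the dsTrain variable defined above:

```matlab
% padsequences only accepts numeric or categorical sequences, so the first
% output of the combined datastore must be numeric, not a nested cell.
sample = read(dsTrain);                          % one observation: {sequence, features, label}
cellfun(@class, sample, 'UniformOutput', false)  % first entry should be numeric (e.g. 'double'), not 'cell'
reset(dsTrain)                                   % rewind the datastore before training
```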
2 comments
Matt J
2023-12-16
What about when you view the network in deepNetworkDesigner? Does it all look the way you intend?
Answer (1)
Gayathri
2024-10-7
I understand that you are getting the "minibatchqueue" error when adding a feature input layer to the neural network.
I created sample data with the dimensions you specified and ran into the same issue described in the question. The error says "Input sequences must be numeric or categorical arrays." To fix this, first convert the "SeqTrain" and "SeqVal" cell arrays into numeric matrices of type double, as in the code below.
convertToNumeric = @(cellArray) cellfun(@(x) double(x), cellArray, 'UniformOutput', false);
% Convert cell arrays to numeric arrays
SeqTrainNumeric = cell2mat(convertToNumeric(SeqTrain));
SeqValNumeric = cell2mat(convertToNumeric(SeqVal));
With the above changes, the "minibatchqueue" error was resolved, but an additional error appeared concerning the input dimensions. Please ensure that "X2pTrainm" has dimensions in the format "numFeatures x numSamples". The input data can then be converted to "arrayDatastore" format as shown below.
dsSeqTrain = arrayDatastore(SeqTrainNumeric,IterationDimension=1);
dsX2pTrain = arrayDatastore(X2pTrainm,IterationDimension=2);
dsYTrain = arrayDatastore(YTrain);
dsTrain = combine(dsSeqTrain, dsX2pTrain, dsYTrain);
dsSeqVal = arrayDatastore(SeqValNumeric, IterationDimension=1);
dsX2pVal = arrayDatastore(X2pValm,IterationDimension=2);
dsYVal = arrayDatastore(YVal);
dsVal = combine(dsSeqVal, dsX2pVal, dsYVal);
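If the feature matrix was originally built as N-by-400 (one sample per row, as described in the question), a transpose before creating the datastores above puts it into the numFeatures-by-numSamples orientation that IterationDimension=2 expects. A hypothetical snippet, assuming the variable names used earlier:

```matlab
% Hypothetical: only needed if samples are currently in rows (N-by-400).
X2pTrainm = X2pTrainm.';   % now numFeatures-by-numSamples (400-by-N)
X2pValm   = X2pValm.';
```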
This prepares the input data for the neural network in the expected format. Please verify that read(dsTrain) returns a 1-by-3 cell array containing one sequence, one feature vector, and one label.
When specifying "options1" for training, "dsVal" must be supplied as the "ValidationData" option, because the validation data must now be provided in the same combined datastore format as the training data.
options1 = trainingOptions('adam', ...
    "MaxEpochs", 200, ...
    "MiniBatchSize", 256, ...
    "ValidationFrequency", 111, ...
    "GradientThreshold", 1, ...
    "SequenceLength", "longest", ...
    "Shuffle", "every-epoch", ...
    'ValidationPatience', 30, ...
    'ValidationData', dsVal, ...
    'Plots', 'training-progress', ...
    'Verbose', false);
By incorporating the above changes, we can train the neural network with the additional feature input.
For more information, please refer to the documentation for "arrayDatastore" and the "Train Network on Image and Feature Data" example.
Hope you find this information helpful.