Error in a MATLAB included deep learning example

I am trying to run the MATLAB example
openExample('nnet/SeqToSeqClassificationUsing1DConvAndModelFunctionExample')
in R2019b, but when I change it to train the network on the GPU, the example shows me the error below. Please help me run it, or give me a workaround for training on the GPU.
Error using gpuArray/subsasgn
Attempt to grow array along ambiguous dimension.
Error in deep.internal.recording.operations.ParenAssignOp/forward (line 45)
x(op.Index{:}) = rhs;
Error in deep.internal.recording.RecordingArray/parenAssign (line 29)
x = recordBinary(x,rhs,op);
Error in dlarray/parenAssign (line 39)
objdata(varargin{:}) = rhsdata;
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>maskedCrossEntropyLoss (line 484)
loss(i) = crossentropy(dlY(:,i,idx),dlT(:,i,idx),'DataFormat','CBT');
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>modelGradients (line 469)
loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps);
Error in deep.internal.dlfeval (line 18)
[varargout{1:nout}] = fun(x{:});
Error in dlfeval (line 40)
[varargout{1:nout}] = deep.internal.dlfeval(fun,varargin{:});
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample (line 284)
[gradients, loss] = dlfeval(@modelGradients,dlX,Y,parameters,hyperparameters,numTimeSteps);
Thanks!

1 Comment

Thanks for reporting this - I can reproduce the problem using R2019b here, I shall forward this to the development team...


Accepted Answer

There is a bug in this example, which will be rectified. Thanks for reporting it. As a workaround, initialize the loss variable in the maskedCrossEntropyLoss function:
function loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps)
    numObservations = size(dlY,2);
    loss = zeros([1,1],'like',dlY); % Add this line
    for i = 1:numObservations
        idx = 1:numTimeSteps(i);
        loss(i) = crossentropy(dlY(:,i,idx),dlT(:,i,idx),'DataFormat','CBT');
    end
end
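As a side note, an equivalent variant (an untested sketch using the same function and arguments as above) preallocates one loss element per observation, so the array never needs to grow inside the loop at all:

```matlab
function loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps)
    % Preallocate one loss element per observation, matching dlY's
    % type (dlarray backed by gpuArray), so loss(i) never grows the array.
    numObservations = size(dlY,2);
    loss = zeros(1, numObservations, 'like', dlY);
    for i = 1:numObservations
        idx = 1:numTimeSteps(i);
        loss(i) = crossentropy(dlY(:,i,idx), dlT(:,i,idx), 'DataFormat','CBT');
    end
end
```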

6 Comments

When I changed the miniBatchSize to 2, I got the following error. I am trying to understand the example, so it would help to be able to change the miniBatchSize to a different value.
Are there any workarounds?
Thanks!
That's odd - I'll get back to you on that.
I appreciate your support. I just changed the miniBatchSize to 2 and I get the following error:
Index exceeds the number of array elements (1).
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>maskedCrossEntropyLoss (line 486)
idx = 1:numTimeSteps(i);
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>modelGradients (line 472)
loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps);
Error in deep.internal.dlfeval (line 18)
[varargout{1:nout}] = fun(x{:});
Error in dlfeval (line 40)
[varargout{1:nout}] = deep.internal.dlfeval(fun,varargin{:});
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample (line 287)
[gradients, loss] = dlfeval(@modelGradients,dlX,Y,parameters,hyperparameters,numTimeSteps);
There are some small issues in the example script that prevent you from setting miniBatchSize > 1. The fix is pretty simple, though.
1) Replace the modelGradients function with the following:
function [gradients,loss] = modelGradients(dlX,T,parameters,hyperparameters,numTimeSteps)
    dlY = model(dlX,parameters,hyperparameters,true);
    dlY = softmax(dlY,'DataFormat','CBT');
    dlT = dlarray(T,'CBT');
    loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps);
    gradients = dlgradient(mean(loss),parameters); % This line was changed to compute the mean loss
end
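The mean(loss) change matters because dlgradient differentiates a scalar, so with miniBatchSize > 1 the per-observation loss vector must be reduced first. A minimal illustration of the pattern (an untested sketch; requires the Deep Learning Toolbox):

```matlab
x = dlarray([1 2 3]);
% dlgradient must be called inside a function evaluated via dlfeval,
% and its first argument must be a scalar - hence the mean().
f = @(x) dlgradient(mean(x.^2), x);
grad = dlfeval(f, x);  % gradient of mean(x.^2) with respect to x
```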
2) Replace the transformSequences function with the following:
function [XTransformed, YTransformed, numTimeSteps] = transformSequences(X,Y)
    % Removed the line that computed numTimeSteps up front; we compute it in the loop below
    miniBatchSize = numel(X);
    numFeatures = size(X{1},1);
    sequenceLength = max(cellfun(@(sequence) size(sequence,2),X));
    classes = categories(Y{1});
    numClasses = numel(classes);
    sz = [numFeatures miniBatchSize sequenceLength];
    XTransformed = zeros(sz,'single');
    sz = [numClasses miniBatchSize sequenceLength];
    YTransformed = zeros(sz,'single');
    numTimeSteps = zeros(1,miniBatchSize); % Preallocate so the vector does not grow in the loop
    for i = 1:miniBatchSize
        predictors = X{i};
        % Create dummy labels.
        numTimeSteps(i) = size(predictors,2); % This line now sets the time steps for the i-th observation
        responses = zeros(numClasses, numTimeSteps(i), 'single'); % This line also uses the i-th observation's numTimeSteps
        for c = 1:numClasses
            responses(c,Y{i}==classes(c)) = 1;
        end
        % Left pad.
        XTransformed(:,i,:) = leftPad(predictors,sequenceLength);
        YTransformed(:,i,:) = leftPad(responses,sequenceLength);
    end
end
Note, however, that depending on your GPU you might run into out-of-memory issues even with a small miniBatchSize. I have a GeForce GTX 1080 and I already run into this issue with a miniBatchSize of 3.
We will work on updating the example to fix these issues as soon as possible. Apologies for the inconvenience!
Thanks, I can change miniBatchSize now.
I found another solution for
"Error using gpuArray/subsasgn
Attempt to grow array along ambiguous dimension."
In dlarray/parenAssign.m, located at "\R2019b\toolbox\nnet\deep\@dlarray\parenAssign.m", line 15 reads:
obj = zeros(0, 0, 'like', rhs);
Replace line 15 with the following 2 lines:
szrhs = size(rhs);
obj = zeros(szrhs(1), szrhs(2), 'like', rhs);
Users cannot directly edit this file, so I backed it up and replaced it with a new file.
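For context, the underlying restriction that both workarounds sidestep can be reproduced in isolation. A minimal sketch (assumes a supported GPU and R2019b, where this indexing pattern errors):

```matlab
g = gpuArray(0);  % a 1x1 gpuArray
% Growing a 1x1 gpuArray via a linear index is ambiguous in R2019b:
% the result could be 1x2 or 2x1, so this assignment throws
% "Attempt to grow array along ambiguous dimension."
g(2) = 1;
```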


More Answers (2)

Thank you for reporting the issue. The error you are getting is related to an attempt to grow a gpuArray using linear indexing assignment.
For more information, please refer to the following bug report:

1 Comment

Linda,
I just changed the miniBatchSize to 2 in the same example and I get the following error. Could you please help me with that? I think this is a bug, because miniBatchSize is offered as a parameter in the example but you cannot change it.
Index exceeds the number of array elements (1).
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>maskedCrossEntropyLoss (line 486)
idx = 1:numTimeSteps(i);
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>modelGradients (line 472)
loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps);
Error in deep.internal.dlfeval (line 18)
[varargout{1:nout}] = fun(x{:});
Error in dlfeval (line 40)
[varargout{1:nout}] = deep.internal.dlfeval(fun,varargin{:});
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample (line 287)
[gradients, loss] = dlfeval(@modelGradients,dlX,Y,parameters,hyperparameters,numTimeSteps);

