Deep Learning Toolbox Multiply by [] in Second Learning Iteration?
In the following code for a conditional variational autoencoder, I get the error below on the second training iteration. I don't see any unusual values in the learnables of encoderNet or decoderNet. The error does not occur when I run the same code in R2022a using the "Run" button on this page.
Output in R2021a:
>> minimum_cvae
iteration =
     1
loss =
  1(C) × 1(B) single dlarray
    6.6875
iteration =
     2
loss =
  1(C) × 1(B) single dlarray
    6.6573
Error using .*
Arrays have incompatible sizes for this operation.
Error in tp26c2fafb_59b5_412b_af58_81dab1a42f5d (line 36)
tmp_33 = ((tmp_30.*[]).*constants{2});
Error in deep.internal.recording.convert.tapeToFunction>@(varargin)fcnWithConstantsInput(varargin{:},constants) (line 37)
fcn = @(varargin)fcnWithConstantsInput(varargin{:},constants);
Error in deep.internal.recording.CodegenExtensionMethod/backward (line 61)
[varargout{1:meth.NumGradients}] = meth.BackwardFunction(varargin{:});
Error in deep.internal.dlarray.ExtensionOperation/backward (line 46)
[varargout{1:nargout}] = backward(op.Method, varargin{:});
Error in deep.internal.recording.RecordingArray/backwardPass (line 70)
grad = backwardTape(tm,{y},{initialAdjoint},x,retainData,false);
Error in dlarray/dlgradient (line 83)
[grad,isTracedGrad] = backwardPass(y,xc,pvpairs{:});
Error in minimum_cvae>modelGradients (line 64)
[decodeGrad, encodeGrad] = dlgradient(loss, decoderNet.Learnables, ...
Error in deep.internal.dlfeval (line 18)
[varargout{1:nout}] = fun(x{:});
Error in dlfeval (line 41)
[varargout{1:nout}] = deep.internal.dlfeval(fun,varargin{:});
Error in minimum_cvae (line 47)
[encodeGrad, decodeGrad] = dlfeval(...
Code:
% Generate some fake data
XTrain = dlarray(randn(10,100),'CB'); % autoencoder inputs
YTrain = dlarray(randn(2,100),'CB');  % conditional inputs
N_input = size(XTrain,1);
N_latent = 9;             % number of latent variables
N_cond = size(YTrain,1);  % number of conditional inputs

% Define the encoder with two inputs
encoderLG = layerGraph([
    featureInputLayer(N_input,'Name','input_encoder')
    concatenationLayer(1,2,'Name','concat1')
    fullyConnectedLayer(5,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(2 * N_latent, 'Name', 'fc_encoder')
    ]);
encoderLG = addLayers(encoderLG,featureInputLayer(N_cond,'Name','cond_input'));
encoderLG = connectLayers(encoderLG,"cond_input","concat1/in2");
% Define the decoder
decoderLG = layerGraph([
    featureInputLayer(N_latent,'Name','input_decoder')
    concatenationLayer(1,2,'Name','concat1')
    fullyConnectedLayer(5,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(N_input, 'Name', 'fc_decoder')
    ]);
decoderLG = addLayers(decoderLG,featureInputLayer(N_cond,'Name','cond_input'));
decoderLG = connectLayers(decoderLG,"cond_input","concat1/in2");
encoderNet = dlnetwork(encoderLG);
decoderNet = dlnetwork(decoderLG);
numEpochs = 10;
lr = 1e-3;
iteration=0;
avgGradientsEncoder = [];
avgGradientsSquaredEncoder = [];
avgGradientsDecoder = [];
avgGradientsSquaredDecoder = [];
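% The Adam state variables start empty; adamupdate initializes the moving
% averages on the first call and updates them on each subsequent call.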
for epoch = 1:numEpochs
    iteration = iteration + 1
    [encodeGrad, decodeGrad] = dlfeval(...
        @modelGradients, encoderNet, decoderNet, XTrain, YTrain);
    [decoderNet.Learnables, avgGradientsDecoder, avgGradientsSquaredDecoder] = ...
        adamupdate(decoderNet.Learnables, ...
        decodeGrad, avgGradientsDecoder, avgGradientsSquaredDecoder, iteration, lr);
    [encoderNet.Learnables, avgGradientsEncoder, avgGradientsSquaredEncoder] = ...
        adamupdate(encoderNet.Learnables, ...
        encodeGrad, avgGradientsEncoder, avgGradientsSquaredEncoder, iteration, lr);
end
function [encodeGrad, decodeGrad] = modelGradients(encoderNet, decoderNet, x, y)
    [z, zMean, zLogvar] = sampling(encoderNet, x, y);
    xPred = sigmoid(forward(decoderNet, z, y));
    loss = ELBOloss(x, xPred, zMean, zLogvar)
    [decodeGrad, encodeGrad] = dlgradient(loss, decoderNet.Learnables, ...
        encoderNet.Learnables);
end
function elbo = ELBOloss(x, xPred, zMean, zLogvar)
    squares = 0.5*(xPred - x).^2;
    weight = 10;
    reconstructionLoss = mean(squares(:));
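    % Closed-form KL divergence between N(zMean, exp(zLogvar)) and the
    % standard normal prior, averaged over latent dimensions and the batch.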
    KL = mean(-.5 * mean(1 + zLogvar - zMean.^2 - exp(zLogvar), 1));
    elbo = weight*reconstructionLoss + KL;
end
function [zSampled, zMean, zLogvar] = sampling(encoderNet, x, y)
    encoded = forward(encoderNet, x, y);
    d = size(encoded,1)/2;
    zMean = encoded(1:d,:);
    zLogvar = encoded(1+d:end,:);
    sz = size(zMean);
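    % Reparameterization trick: draw epsilon ~ N(0, I), then scale and shift
    % it so the random draw stays differentiable w.r.t. zMean and zLogvar.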
    epsilon = randn(sz);
    sigma = exp(.5 * zLogvar);
    z = epsilon .* sigma + zMean;
    zSampled = dlarray(z, 'CB');
end
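For reference, the objective the code minimizes (the notation here is mine; w = 10 is the reconstruction weight, d = N_latent, B is the batch size) is a weighted reconstruction term plus the closed-form Gaussian KL:

$$\mathcal{L} \;=\; w\cdot\operatorname{mean}\!\Big(\tfrac{1}{2}(\hat{x}-x)^2\Big) \;+\; \frac{1}{B}\sum_{b=1}^{B}\Big(-\frac{1}{2d}\sum_{j=1}^{d}\big(1+\log\sigma_{jb}^{2}-\mu_{jb}^{2}-\sigma_{jb}^{2}\big)\Big)$$

where $\mu$ is zMean and $\log\sigma^{2}$ is zLogvar. Despite the variable name elbo, this is a loss to be minimized.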
Answers (1)
Abhaya on 5 Sep 2024
Hello Travis,
I encountered a similar error while using the Deep Learning Toolbox in MATLAB R2021a.
The error occurs during the call to the ‘backwardPass’ function in the automatic-differentiation backward tape. When I run the same code in MATLAB R2021b, however, it completes without any issues.
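If upgrading is not immediately possible, a small guard can at least flag the affected release before the custom training loop starts. This is only a sketch based on the versions reported in this thread (fails in R2021a, works in R2021b/R2022a); the warning identifier is made up for illustration, and isMATLABReleaseOlderThan requires R2020b or newer:
if isMATLABReleaseOlderThan("R2021b")
    % The backward pass through dlgradient was observed to fail on the
    % second iteration in R2021a; warn before training starts.
    warning("minimum_cvae:releaseBug", ...
        "dlgradient backward pass may fail on this release; consider R2021b or newer.");
end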