The batchnorm() function inputs trainedMean and trainedVar have no effect on the result?

Why does batchnorm() output the same result for random mean and variance (dlY is always the same)?
height = 4;
width = 4;
channels = 3;
observations = 1;
X = rand(height,width,channels,observations);
dlX = dlarray(X,'SSCB');
offset = zeros(channels,1);
scaleFactor = ones(channels,1);
[dlY,mu,sigmaSq] = batchnorm(dlX,offset,scaleFactor)
useMean = rand(channels,1);
useVar = rand(channels,1);
[dlY,mu,sigmaSq] = batchnorm(dlX,offset,scaleFactor,useMean,useVar) % dlY is always the same???

Accepted Answer

Katja Mogalle, 2021-6-30
Hello cui,
If I understand correctly, you're wondering why the normalized data returned by batchnorm is the same regardless of whether you specify mean (mu) and variance (sigmaSq) values as inputs.
There are basically two modes in which batchnorm is used in deep learning: training mode and inference mode.
Training mode
In training mode, the mean and variance are computed directly from the current input data (the "mini-batch") and are used to normalize that mini-batch. Over the course of training, many different mini-batches are processed, and running values of the mean and variance statistics are accumulated so that we end up with approximate statistics for the entire data set.
For training mode, you can make use of the following two syntaxes:
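Per the batchnorm reference page, those are:
[dlY,popMu,popSigmaSq] = batchnorm(dlX,offset,scaleFactor) % normalize with the mini-batch statistics and return them
[dlY,updatedMu,updatedSigmaSq] = batchnorm(dlX,offset,scaleFactor,runningMu,runningSigmaSq) % additionally update the running statistics
With the default 'MeanDecay'/'VarianceDecay' of 0.1, the second syntax returns updatedMu = 0.1*mu + 0.9*runningMu (and likewise for the variance, as verified in the comment below), but dlY is still normalized with the mini-batch statistics mu and sigmaSq; that is why your dlY did not change.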
Inference mode
In inference mode, we want to normalize each mini-batch in exactly the same way, using the same mu and sigmaSq, namely the statistics of the entire training data set.
For inference mode, you can make use of this syntax:
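Per the reference page, that is:
dlY = batchnorm(dlX,offset,scaleFactor,trainedMu,trainedSigmaSq) % one output: trainedMu/trainedSigmaSq ARE used to normalize dlX
Per channel, this computes dlY = scaleFactor.*(dlX - trainedMu)./sqrt(trainedSigmaSq + epsilon) + offset, so the supplied statistics directly affect the result.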
In conclusion ... I suspect you wanted to try out the inference-mode syntax (five input arguments, one output argument) instead of the second training-mode syntax mentioned above (five input arguments, three output arguments).
I hope this helps.
1 Comment
cui,xingxing, 2021-7-5
Thank you for your answer! Based on your example, here is a little more code to verify the behavior.
%% test a sample
height = 4;
width = 4;
channels = 3;
observations = 1;
X = rand(height,width,channels,observations);
dlX = dlarray(X,'SSCB');
offset = zeros(channels,1);
scaleFactor = ones(channels,1);
[dlY1,mu1,sigmaSq1] = batchnorm(dlX,offset,scaleFactor);
%% Manual calculation
cal_mu1 = mean(dlX,[1,2,4]);
cal_sigmaSq1 = var(dlX,1,[1,2,4]);
cal_Y1 = (dlX - cal_mu1)./sqrt(cal_sigmaSq1);
% validate equality (batchnorm adds a small epsilon, default 1e-5, inside the sqrt, so compare with a tolerance)
tol = 1e-3;
assert(all(abs(mu1-squeeze(cal_mu1))<tol));
assert(all(abs(sigmaSq1-squeeze(cal_sigmaSq1))<tol));
assert(all(abs(dlY1-cal_Y1)<tol,'all'));
%% 5 inputs, 3 outputs
useMean = rand(channels,1);
useVar = rand(channels,1);
[dlY2,mu2,sigmaSq2] = batchnorm(dlX,offset,scaleFactor,useMean,useVar); % dlY2 is the same as dlY1: training mode, "useMean" is not used for normalization!
[dlY3,mu3,sigmaSq3] = batchnorm(dlX,offset,scaleFactor,useMean,useVar); % dlY3 is the same as dlY1: training mode, "useMean" is not used for normalization!
dlY4 = batchnorm(dlX,offset,scaleFactor,useMean,useVar); % dlY4 differs from dlY2/dlY3: inference mode, "useMean" IS used for normalization!
%% Manual calculation
decay = 0.1; % same as the official default ('MeanDecay'/'VarianceDecay')
cal_mu2 = decay*mean(dlX,[1,2,4]) + (1-decay)*reshape(useMean,[1,1,channels,1]);
cal_sigmaSq2 = decay*var(dlX,1,[1,2,4]) + (1-decay)*reshape(useVar,[1,1,channels,1]);
cal_Y2 = (dlX - mean(dlX,[1,2,4]))./sqrt(var(dlX,1,[1,2,4])); % "cal_mu2" and "cal_sigmaSq2" are NOT used for the normalization!
% validate equality
tol = 1e-3;
assert(all(abs(mu2-squeeze(cal_mu2))<tol));
assert(all(abs(sigmaSq2-squeeze(cal_sigmaSq2))<tol));
assert(all(abs(dlY2-cal_Y2)<tol,'all'));
% validate dlY4
cal_Y4 = (dlX - reshape(useMean,[1,1,channels,1]))./sqrt(reshape(useVar,[1,1,channels,1]));
assert(all(abs(dlY4-cal_Y4)<tol,'all'));
Validation passed!


More Answers (0)
