Why is the number of filters numF = round(16/sqrt(SectionDepth)), and why is numF doubled in each convolution block, in "Deep Learning Using Bayesian Optimization" in the MATLAB documentation?

imageSize = [32 32 3];
numClasses = numel(unique(YTrain));
numF = round(16/sqrt(optVars.SectionDepth));
layers = [
    imageInputLayer(imageSize)

    % The spatial input and output sizes of these convolutional
    % layers are 32-by-32, and the following max pooling layer
    % reduces this to 16-by-16.
    convBlock(3,numF,optVars.SectionDepth)
    maxPooling2dLayer(3,'Stride',2,'Padding','same')

    % The spatial input and output sizes of these convolutional
    % layers are 16-by-16, and the following max pooling layer
    % reduces this to 8-by-8.
    convBlock(3,2*numF,optVars.SectionDepth)
    maxPooling2dLayer(3,'Stride',2,'Padding','same')

    % The spatial input and output sizes of these convolutional
    % layers are 8-by-8. The global average pooling layer averages
    % over the 8-by-8 inputs, giving an output of size
    % 1-by-1-by-4*initialNumFilters. With a global average
    % pooling layer, the final classification output is only
    % sensitive to the total amount of each feature present in the
    % input image, but insensitive to the spatial positions of the
    % features.
    convBlock(3,4*numF,optVars.SectionDepth)
    averagePooling2dLayer(8)

    % Add the fully connected layer and the final softmax and
    % classification layers.
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
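The listing calls a helper function convBlock, which is defined later in the same documentation example. For reference, a sketch of that helper is below; the layer pattern (convolution, batch normalization, ReLU, repeated numConvLayers times) follows the example, but treat it as a paraphrase rather than a verbatim copy:

```matlab
function layers = convBlock(filterSize,numFilters,numConvLayers)
% Create a block of numConvLayers identical convolutional layers,
% each followed by batch normalization and a ReLU activation.
layers = [
    convolution2dLayer(filterSize,numFilters,'Padding','same')
    batchNormalizationLayer
    reluLayer];
layers = repmat(layers,numConvLayers,1);
end
```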

Accepted Answer

Sripranav Mannepalli
Hi,
The number of filters (numF) is proportional to 1/sqrt(SectionDepth), so that networks of different depths have roughly the same number of parameters and require about the same amount of computation per iteration.
Further, each time the spatial dimensions are downsampled by a factor of two by a max pooling layer, the number of filters is doubled, so that the amount of computation required in each convolutional layer remains roughly the same.
For more information, refer to the "Define the Convolutional Neural Network Architecture" subsection of the Deep Learning Using Bayesian Optimization example.
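To make the first point concrete: a 3-by-3 convolutional layer with numF input and numF output channels has about 9*numF^2 weights, and each section stacks SectionDepth such layers, so the weight count per section scales with SectionDepth*numF^2. Substituting numF = 16/sqrt(SectionDepth) makes that product approximately constant, about 16^2 = 256. A quick numeric sketch (not part of the original example):

```matlab
% Check that SectionDepth*numF^2, which governs the number of
% convolutional weights per section, stays roughly constant.
for sectionDepth = 1:4
    numF = round(16/sqrt(sectionDepth));
    fprintf('SectionDepth = %d: numF = %2d, SectionDepth*numF^2 = %d\n', ...
        sectionDepth, numF, sectionDepth*numF^2);
end
% Prints 256, 242, 243, 256 for depths 1 through 4.
```

The same bookkeeping explains the doubling: max pooling halves each spatial dimension, cutting the number of activations by a factor of four, while doubling both the input and output channel counts multiplies the per-layer multiply count by four, so the two effects roughly cancel.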
1 Comment
Jyoti Nautiyal 2021-7-16
Why do we need networks of different depths to have the same number of parameters and the same amount of computation per iteration? Is it problematic if the number of parameters and the amount of computation are not the same?


More Answers (0)

Release

R2020b
