Help regarding the transposed convolution layer

Hi,
Please provide help regarding how transposedConv2dLayer works.
I am struggling to understand the following helper function:
function out = createUpsampleTransponseConvLayer(factor,numFilters)
    % Create a transposed convolution layer that upsamples the input by 'factor'.
    filterSize = 2*factor - mod(factor,2);   % 2*factor for even factors, 2*factor-1 for odd
    cropping = (factor-mod(factor,2))/2;     % symmetric cropping applied to the output
    numChannels = 1;
    out = transposedConv2dLayer(filterSize,numFilters, ...
        'NumChannels',numChannels,'Stride',factor,'Cropping',cropping);
end
How do the filter size and stride affect the output of this layer?
What is the difference between this layer and a simple upsampling layer?
Are the weights somehow transposed, or are they learned from scratch?

Accepted Answer

Srivardhan Gadila 2020-3-17
An upsampling layer uses a fixed, predefined interpolation method to upsample its input, whereas a transposed convolution layer learns its weights from scratch. Starting in R2019a, the software initializes the weights of this layer with the Glorot initializer by default. This behavior helps stabilize training and usually reduces the training time of deep networks.
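As a rough illustration (a sketch, assuming R2019a or later and the property names listed in the transposedConv2dLayer documentation), you can see that the layer carries learnable weights rather than a fixed interpolation kernel:

% Sketch: inspect the learnable parameters of a transposed convolution layer.
% The specific filter size, number of filters, stride, and cropping below are
% example values only.
layer = transposedConv2dLayer(4,64,'NumChannels',64,'Stride',2,'Cropping',1);

disp(layer.WeightsInitializer)   % 'glorot' - the default weight initializer (R2019a+)
disp(layer.Weights)              % [] - weights are created when the network is
                                 % initialized and are then learned during training,
                                 % unlike a fixed interpolation kernel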
See the References section of transposedConv2dLayer for more information.
Refer to the Input Arguments section of transposedConv2dLayer for an explanation of each input parameter.
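To make the filter size/stride question concrete, here is a small worked check. It is only a sketch: it assumes the standard output-size relation for a transposed convolution with symmetric cropping, outSize = stride*(inSize-1) + filterSize - 2*cropping, and all variable names and values are illustrative.

% Sketch: why filterSize = 2*factor - mod(factor,2) and
% cropping = (factor - mod(factor,2))/2 give an output exactly 'factor' times
% larger than the input.
factor = 3;                                 % desired upsampling factor (example value)
inSize = 10;                                % input spatial size (example value)

filterSize = 2*factor - mod(factor,2);      % 5 when factor is 3
cropping   = (factor - mod(factor,2))/2;    % 1 when factor is 3

% Output size of a transposed convolution with symmetric cropping:
outSize = factor*(inSize-1) + filterSize - 2*cropping;
fprintf('input %d -> output %d\n', inSize, outSize);   % input 10 -> output 30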
