How to declare the weight and bias values for a convolution layer?
Greetings,
I want to add a convolution layer to the existing SqueezeNet network. However, I get the following errors:
Error using assembleNetwork (line 47)
Invalid network.
Error in trainyolov3 (line 80)
newbaseNetwork = assembleNetwork(lgraph); % for tiny-yolov3-coco
Caused by:
Layer 'add_conv': Empty Weights property. Specify a nonempty value for the Weights property.
Layer 'add_conv': Empty Bias property. Specify a nonempty value for the Bias property.
How do I declare the weights and bias values for that layer? My code is shown below:
lgraph = disconnectLayers(lgraph,'fire8-concat','fire9-squeeze1x1');
layer = [
maxPooling2dLayer([3 3],"Name","pool6","Padding","same","Stride",[2 2])
convolution2dLayer([1 1],512,"Name","add_conv","Padding",[1 1 1 1],"Stride",[2 2])
reluLayer("Name","relu_add_conv")];
lgraph = addLayers(lgraph, layer);
lgraph = connectLayers(lgraph,'fire8-concat','pool6');
lgraph = connectLayers(lgraph,'relu_add_conv','fire9-squeeze1x1');
newbaseNetwork = assembleNetwork(lgraph);
Answers (1)
Manish
2024-8-27
Edited: Manish
2024-8-27
Hi,
The error you're encountering occurs because the weights and biases of the new convolution layer are not initialized. You can verify this by inspecting the layer properties with "layer(2).Weights". In the code provided, the "Weights" and "Bias" properties are empty.
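A minimal check (assuming Deep Learning Toolbox is installed): a convolution2dLayer created without explicit Weights/Bias reports empty learnables, which is exactly what makes assembleNetwork fail.

```matlab
% Recreate the layers from the question; the conv layer is built
% without explicit Weights or Bias, so its learnables are empty.
layer = [
    maxPooling2dLayer([3 3], "Name", "pool6", "Padding", "same", "Stride", [2 2])
    convolution2dLayer([1 1], 512, "Name", "add_conv", "Padding", [1 1 1 1], "Stride", [2 2])
    reluLayer("Name", "relu_add_conv")];

% Both are [] until weights are assigned or the network is trained/initialized
isempty(layer(2).Weights)  % true
isempty(layer(2).Bias)     % true
```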
To resolve this issue, you can explicitly initialize the weights and biases and apply them when defining the “add_conv” layer.
Refer to the code snippet below that demonstrates how to initialize weights and biases:
% Load the network and convert to layer graph
net = squeezenet('Weights', 'imagenet');
lgraph = layerGraph(net);
% Disconnect the specified layers
lgraph = disconnectLayers(lgraph, "fire8-concat", "fire9-squeeze1x1");
% Define the input channels based on the architecture
inputChannels = 512; % This should match the number of output channels from the previous layer
% Initialize weights and biases
weights = randn([1, 1, inputChannels, 512]) * 0.01; % Random initialization
bias = zeros([1, 1, 512]); % Zero initialization
% Define the new layers with initialized weights and biases
layer = [
maxPooling2dLayer([3 3], "Name", "pool6", "Padding", "same", "Stride", [2 2])
convolution2dLayer([1 1], 512, "Name", "add_conv", "Padding", [1 1 1 1], "Stride", [2 2], ...
'Weights', weights, 'Bias', bias)
reluLayer("Name", "relu_add_conv")
];
% Add the new layers to the graph
lgraph = addLayers(lgraph, layer);
% Connect the layers
lgraph = connectLayers(lgraph, 'fire8-concat', 'pool6');
lgraph = connectLayers(lgraph, 'relu_add_conv', 'fire9-squeeze1x1');
% Assemble the network
newbaseNetwork = assembleNetwork(lgraph);
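The key detail is the shape convention for the learnables: "Weights" is filterHeight-by-filterWidth-by-numChannels-by-numFilters, and "Bias" is 1-by-1-by-numFilters. A standalone sketch (assuming 512 input channels, matching the output of 'fire8-concat') shows that a layer built this way carries nonempty learnables of the expected sizes:

```matlab
% Build a 1x1 convolution layer with explicitly provided learnables
weights = randn([1, 1, 512, 512]) * 0.01;  % H x W x inChannels x numFilters
bias = zeros([1, 1, 512]);                 % 1 x 1 x numFilters
conv = convolution2dLayer([1 1], 512, "Name", "add_conv", ...
    "Weights", weights, "Bias", bias);

size(conv.Weights)  % [1 1 512 512]
size(conv.Bias)     % [1 1 512]
```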
For more information on explicitly initializing weights and biases, refer to the "Parameters and Initialization" section of the convolution2dLayer documentation.
Hope it helps!