Can I get output data from a CNN convolution layer without training?

I created a CNN and assigned it to a variable named layers. I want to get the output of a layer, such as a convolution2dLayer, without training the network. For example, I want to feed the network one image and get the output data of the "pool4" layer. Is that possible? I tried the activations function but I get errors, both with an augmented image datastore and with a normal datastore. Put simply, all I want to do is apply mathematical operations such as convolution and pooling to images.
layers = [
    imageInputLayer(inputSize,"Name","data")
    convolution2dLayer([11 11],96,"Name","conv1","BiasLearnRateFactor",2,"Stride",[4 4])
    reluLayer("Name","relu1")
    maxPooling2dLayer([3 3],"Name","pool1","Stride",[2 2])
    convolution2dLayer([5 5],128,"Name","conv2","BiasLearnRateFactor",2,"Padding",[2 2 2 2])
    reluLayer("Name","relu2")
    maxPooling2dLayer([3 3],"Name","pool2","Stride",[2 2])
    convolution2dLayer([3 3],256,"Name","conv3","BiasLearnRateFactor",2,"Padding",[1 1 1 1])
    reluLayer("Name","relu3")
    maxPooling2dLayer([3 3],"Name","pool3","Stride",[2 2])
    convolution2dLayer([3 3],384,"Name","conv4","BiasLearnRateFactor",2,"Padding",[1 1 1 1])
    reluLayer("Name","relu4")
    maxPooling2dLayer([3 3],"Name","pool4","Stride",[2 2])
    fullyConnectedLayer(4096,"Name","fc1","BiasLearnRateFactor",2)
    reluLayer("Name","relu6")
    dropoutLayer(0.5,"Name","drop1")
    fullyConnectedLayer(4096,"Name","fc2","BiasLearnRateFactor",2)
    reluLayer("Name","relu7")
    dropoutLayer(0.5,"Name","drop2")
    fullyConnectedLayer(2,"Name","fc3","BiasLearnRateFactor",2)
    softmaxLayer("Name","prob")
    classificationLayer("Name","output")];
>> featuresTrain = activations(layers,augimdsTrain,'pool4','OutputAs','rows');
Check for incorrect argument data type or missing argument in call to function 'activations'.
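The error occurs because activations expects a trained network object (a SeriesNetwork or DAGNetwork), not a plain Layer array. One way to get an intermediate layer's output without any training is to wrap the layers in a dlnetwork, which initializes the learnable parameters with random values, and request a named layer through the Outputs option of predict. Below is a minimal sketch under two assumptions: inputSize was set to [227 227 3] for illustration (a random placeholder image stands in for real data), and the final classificationLayer is dropped because dlnetwork does not accept output layers.
% Sketch: build an initialized but untrained network from the layer array above
% (assumes inputSize = [227 227 3] was set before constructing layers)
net = dlnetwork(layers(1:end-1));            % drop classificationLayer; weights are random
img = rand(227,227,3,'single');              % placeholder image; substitute your own data
dlX = dlarray(img,'SSC');                    % spatial, spatial, channel
pool4Out = predict(net,dlX,Outputs="pool4"); % output of the "pool4" layer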
1 Comment
MFK 2024-12-20
I think I found the answer. Is the code below correct for this purpose?
layers = [
    imageInputLayer([512 512 3],Normalization="none")
    convolution2dLayer([11 11],96,"Name","conv1","BiasLearnRateFactor",2,"Stride",[4 4])
    reluLayer("Name","relu1")
    maxPooling2dLayer([3 3],"Name","pool1","Stride",[2 2])
    convolution2dLayer([5 5],128,"Name","conv2","BiasLearnRateFactor",2,"Padding",[2 2 2 2])
    reluLayer("Name","relu2")
    maxPooling2dLayer([3 3],"Name","pool2","Stride",[2 2])
    convolution2dLayer([3 3],256,"Name","conv3","BiasLearnRateFactor",2,"Padding",[1 1 1 1])
    reluLayer("Name","relu3")
    maxPooling2dLayer([3 3],"Name","pool3","Stride",[2 2])
    convolution2dLayer([3 3],384,"Name","conv4","BiasLearnRateFactor",2,"Padding",[1 1 1 1])
    reluLayer("Name","relu4")
    maxPooling2dLayer([3 3],"Name","pool4","Stride",[2 2])
    reluLayer("Name","relu6")
    flattenLayer('Name','flatten1')];
net = dlnetwork(layers);
dlX = dlarray(double(img),'SSC');
feature = forward(net,dlX);
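One follow-up note on the sketch above: forward and predict return a formatted dlarray. If a plain numeric array is needed for further processing, it can be unwrapped with extractdata:
featureData = extractdata(feature); % plain numeric array, dlarray wrapper removed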


Answer (1)

Matt J 2024-12-23
Edited: Matt J 2024-12-23
When not training, it is better to use predict() rather than forward(): predict runs the network in inference mode, whereas forward is meant for training (for example, dropout layers stay active in forward).
layers = [
    imageInputLayer([512 512 3],Normalization="none")
    convolution2dLayer([11 11],96,"Name","conv1","BiasLearnRateFactor",2,"Stride",[4 4])
    reluLayer("Name","relu1")
    maxPooling2dLayer([3 3],"Name","pool1","Stride",[2 2])
    convolution2dLayer([5 5],128,"Name","conv2","BiasLearnRateFactor",2,"Padding",[2 2 2 2])
    reluLayer("Name","relu2")
    maxPooling2dLayer([3 3],"Name","pool2","Stride",[2 2])
    convolution2dLayer([3 3],256,"Name","conv3","BiasLearnRateFactor",2,"Padding",[1 1 1 1])
    reluLayer("Name","relu3")
    maxPooling2dLayer([3 3],"Name","pool3","Stride",[2 2])
    convolution2dLayer([3 3],384,"Name","conv4","BiasLearnRateFactor",2,"Padding",[1 1 1 1])
    reluLayer("Name","relu4")
    maxPooling2dLayer([3 3],"Name","pool4","Stride",[2 2])
    reluLayer("Name","relu6")
    flattenLayer('Name','flatten1')];
net = dlnetwork(layers);
dlX = dlarray(rand(512,512,3),'SSC');
feature = predict(net,dlX);
whos feature
  Name          Size        Bytes  Class      Attributes

  feature       13824x1     55300  dlarray
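If several images need to be processed in one call, they can be stacked along a fourth dimension and labeled with a batch ('B') dimension; predict then returns one feature column per image. A minimal sketch with random placeholder data (the batch size of 8 is an arbitrary choice):
batch = rand(512,512,3,8,'single'); % 8 placeholder images stacked along dim 4
dlXB = dlarray(batch,'SSCB');       % spatial, spatial, channel, batch
featB = predict(net,dlXB);          % 13824x8 dlarray, one column per image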

Release

R2022a
