Test or validate a model using the same number of images from each subfolder
Hi, I have three subfolders (good, moderate, and severe) containing 3608, 406, and 200 images respectively.
1. The accuracy is 98.18%. Is that OK?
2. After training my model, how do I take 100 images from each subfolder to test (validate) the model?
3. What is the code for a confusion matrix?
Thanks.
My code is below:
imds = imageDatastore('C:\Users\Rayan\Desktop\9_8_balance_data\R_9_1_GSM', ...
'IncludeSubfolders',true, ...
'LabelSource','foldernames');
[imdsTrain,imdsValidation] = splitEachLabel(imds,0.85,'randomized');
numTrainImages = numel(imdsTrain.Labels);
idx = randperm(numTrainImages,16);
net = resnet50;
inputSize = net.Layers(1).InputSize;
lgraph = layerGraph(net);
% findLayersToReplace is a helper function shipped with the MATLAB transfer learning example;
% to view it: edit(fullfile(matlabroot,'examples','nnet','main','findLayersToReplace.m'))
[learnableLayer,classLayer] = findLayersToReplace(lgraph);
[learnableLayer,classLayer] %#ok<NOPTS>
numClasses = numel(categories(imdsTrain.Labels));
if isa(learnableLayer,'nnet.cnn.layer.FullyConnectedLayer')
    newLearnableLayer = fullyConnectedLayer(numClasses, ...
        'Name','new_fc', ...
        'WeightLearnRateFactor',10, ...
        'BiasLearnRateFactor',10);
elseif isa(learnableLayer,'nnet.cnn.layer.Convolution2DLayer')
    newLearnableLayer = convolution2dLayer(1,numClasses, ...
        'Name','new_conv', ...
        'WeightLearnRateFactor',10, ...
        'BiasLearnRateFactor',10);
end
lgraph = replaceLayer(lgraph,learnableLayer.Name,newLearnableLayer);
newClassLayer = classificationLayer('Name','new_classoutput');
lgraph = replaceLayer(lgraph,classLayer.Name,newClassLayer);
layers = lgraph.Layers;
connections = lgraph.Connections;
layers(1:20) = freezeWeights(layers(1:20));
lgraph = createLgraphUsingConnections(layers,connections);
augimdsTrain = augmentedImageDatastore(inputSize(1:2),imdsTrain);
augimdsValidation = augmentedImageDatastore(inputSize(1:2),imdsValidation);
miniBatchSize = 10;
valFrequency = floor(numel(augimdsTrain.Files)/miniBatchSize);
options = trainingOptions('sgdm', ...
    'MiniBatchSize',miniBatchSize, ...
    'MaxEpochs',10, ...
    'InitialLearnRate',0.0007, ...
    'Shuffle','every-epoch', ...
    'ValidationFrequency',valFrequency, ...
    'ValidationData',augimdsValidation, ...
    'Verbose',false, ...
    'Plots','training-progress');
net = trainNetwork(augimdsTrain,lgraph,options);
[YPred,probs] = classify(net,augimdsValidation);
accuracy = mean(YPred == imdsValidation.Labels);
idx = randperm(numel(imdsValidation.Files),100);
R = 1;
for j = 1:25
    figure(j)
    for i = 1:4
        subplot(2,2,i)
        I = readimage(imdsValidation,idx(R));
        imshow(I)
        label = YPred(idx(R));
        title(string(label) + ", " + num2str(100*max(probs(idx(R),:)),3) + "%");
        R = R + 1;
    end
end
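For question 2, splitEachLabel also accepts a count instead of a fraction, so exactly 100 images per class can be held out for testing. The split has to be done before training so the network never sees the test images. A minimal sketch, assuming the same folder path and the trained net and inputSize from the code above (note the severe class has only 200 images, so this leaves just 100 of them for training):
% Hold out 100 images per class for testing, BEFORE training the network
imds = imageDatastore('C:\Users\Rayan\Desktop\9_8_balance_data\R_9_1_GSM', ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');
[imdsTest,imdsTrain] = splitEachLabel(imds,100,'randomized'); % imdsTest gets 100 files per label
% After training on imdsTrain (as in the code above), classify the held-out set
augimdsTest = augmentedImageDatastore(inputSize(1:2),imdsTest);
[YPredTest,probsTest] = classify(net,augimdsTest);
testAccuracy = mean(YPredTest == imdsTest.Labels)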
Accepted Answer
Image Analyst
2022-7-4
Regarding accuracy: it might be OK. But suppose you were building a dog-and-cat classifier and trained it on 10,000 images of dogs and 30 images of cats. Then, once trained, you give it a test set of both dogs and cats (9,818 dog images and 182 cat images) and it calls every single one a dog. That is 98.18% accurate. Is it good enough? It didn't find a single cat, yet it was 98.18% accurate.
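For question 3, a confusion matrix makes exactly this failure mode visible, because it counts, for each true class, how many images were predicted as each class. A short sketch using the variables from the code above (confusionchart for the plot, confusionmat for the raw counts):
% Confusion matrix of the validation predictions: rows = true class, columns = predicted class
figure
confusionchart(imdsValidation.Labels,YPred, ...
    'RowSummary','row-normalized', ...     % per-class recall
    'ColumnSummary','column-normalized');  % per-class precision
% Numeric counts, if preferred
cm = confusionmat(imdsValidation.Labels,YPred)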