Why is the validation accuracy high (99%) while the test accuracy is only 21%, and only 27% (slightly higher) with a weighted loss?

cui 2019-5-9
Commented: Raza Ali, 2019-10-9
When I train a typical digit image recognition example, why is the validation accuracy high (99%) while the test accuracy is only 21%, and only 27% (slightly higher) with the weighted loss? All my code is here:
imds = imageDatastore('D:\test_video\digitRec\train',... % train data and val data
    'IncludeSubfolders',true,...
    'LabelSource','foldernames','ReadFcn',@IMAGERESIZE);
testData = imageDatastore('D:\test_video\digitRec\test',... % test data
    'IncludeSubfolders',true,...
    'LabelSource','foldernames','ReadFcn',@IMAGERESIZE);
%% baseline: no weighted classification layer
trainNumFiles = 0.8; % fraction of each label's files used for training
[trainDigitData,valDigitData] = splitEachLabel(imds,trainNumFiles,'randomized');
layers = [
    imageInputLayer([28,28,1])
    convolution2dLayer(3,32,'Padding',1)
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];
options = trainingOptions('sgdm',...
    'MiniBatchSize',25,...
    'MaxEpochs',5, ...
    'ValidationData',valDigitData,...
    'ValidationFrequency',30,...
    'Verbose',false,...
    'Plots','training-progress');
net = trainNetwork(trainDigitData,layers,options);
predictedLabels = classify(net,valDigitData);
valLabels = valDigitData.Labels;
val_accuracy = sum(predictedLabels == valLabels)/numel(valLabels) % high accuracy
predictedLabels = classify(net,testData);
testLabels = testData.Labels;
test_accuracy = sum(predictedLabels == testLabels)/numel(testLabels) % low accuracy?
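% Sketch: a per-class breakdown (confusionchart, available since R2018b) can
% show whether the 21% test accuracy is spread evenly or concentrated in a
% few digits.
figure
confusionchart(testLabels,predictedLabels)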
%% calculate classWeights
CLAS = categories(trainDigitData.Labels);
classWeights = ones(1,length(CLAS));
for i = 1:length(CLAS)
    len = sum(string(trainDigitData.Labels) == string(CLAS{i}));
    classWeights(i) = length(trainDigitData.Labels)/len; % inverse class frequency
    fprintf('class %s count: %d\n',string(CLAS{i}),len);
end
classWeights = classWeights./sum(classWeights); % normalize so the weights sum to 1
fprintf('total: %d\n',length(trainDigitData.Labels));
formatSp = repmat('%.2f ',1,length(CLAS));
fprintf(['classWeights: [',formatSp,']\n'],classWeights);
%% train weighted classification net
layers(end) = weightedClassificationLayer1(classWeights,'weightedClass');
net_weight = trainNetwork(trainDigitData,layers,options);
predictedLabels = classify(net_weight,testData);
testLabels = testData.Labels;
test_weight_accuracy = sum(predictedLabels == testLabels)/numel(testLabels) % also low accuracy?
My training, validation, and test data are in the attachment.
The weighted classification layer is here:
classdef weightedClassificationLayer1 < nnet.layer.ClassificationLayer
    properties
        % Row vector of class weights, one weight per class.
        ClassWeights;
    end
    methods
        function layer = weightedClassificationLayer1(ClassWeights,name)
            % Construct a weighted cross-entropy classification layer.
            layer.ClassWeights = ClassWeights;
            if nargin == 2
                layer.Name = name;
            end
            layer.Description = "weighted cross entropy";
        end
        function loss = forwardLoss(layer, Y, T)
            % Return the loss between the predictions Y and the
            % training targets T.
            %
            % Inputs:
            %   layer - Output layer
            %   Y     - Predictions made by network (1-by-1-by-K-by-N)
            %   T     - Training targets (same size as Y)
            %
            % Output:
            %   loss  - Loss between Y and T (note: the loss is a scalar)
            N = size(Y,4);
            Y = squeeze(Y); % K-by-N
            T = squeeze(T); % K-by-N
            W = layer.ClassWeights; % 1-by-K
            % Class-weighted cross entropy averaged over the N observations:
            % loss = -(1/N) * sum over n,k of W(k)*T(k,n)*log(Y(k,n))
            loss = -sum(W*(T.*log(Y)))/N;
        end
        function dLdY = backwardLoss(layer, Y, T)
            % Backward propagate the derivative of the loss function.
            %
            % Inputs:
            %   layer - Output layer
            %   Y     - Predictions made by network
            %   T     - Training targets
            %
            % Output:
            %   dLdY  - Derivative of the loss with respect to the
            %           predictions Y; Y, T, and dLdY all have the same size
            [height,width,K,N] = size(Y);
            Y = squeeze(Y); % K-by-N
            T = squeeze(T); % K-by-N
            W = layer.ClassWeights;
            % W' (K-by-1), not W (1-by-K): after squeeze, Y and T are K-by-N,
            % so the transposed weight vector expands along the observation
            % dimension and scales each class row.
            dLdY = -(W'.*T./Y)/N;
            dLdY = reshape(dLdY,[height,width,K,N]);
        end
    end
end
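Before training with the custom layer, it can be validated in isolation with checkLayer from Deep Learning Toolbox (a minimal sketch, assuming classWeights is the 1-by-10 vector computed above; the valid input size mirrors the 1-by-1-by-K-by-N predictions that forwardLoss receives):

layer = weightedClassificationLayer1(classWeights,'weightedClass');
validInputSize = [1 1 numel(classWeights)]; % one 1-by-1-by-K observation
checkLayer(layer,validInputSize,'ObservationDimension',4);

checkLayer exercises forwardLoss and backwardLoss with random inputs, so it should also reproduce errors like the backwardLoss failure reported in the comment below, outside of training.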
The IMAGERESIZE read function:
function output = IMAGERESIZE(input)
% Read an image file, convert RGB to grayscale, and resize it to 28-by-28.
input = imread(input);
if numel(size(input)) == 3
    % input = cat(3,input,input,input); % convert to 3 channels instead
    input = rgb2gray(input);
end
output = imresize(input,[28,28]);
end
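As an aside, the resizing and grayscale conversion can also be done without a custom ReadFcn by wrapping each datastore in an augmentedImageDatastore (a sketch; augTrain is a name introduced here, and the datastores would then be created without the 'ReadFcn' argument):

% Resize to 28-by-28 and convert RGB inputs to grayscale on the fly.
augTrain = augmentedImageDatastore([28 28],trainDigitData, ...
    'ColorPreprocessing','rgb2gray');
net = trainNetwork(augTrain,layers,options); % ValidationData needs the same wrapping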
The PyTorch version is here.
1 Comment
Raza Ali 2019-10-9
Hi, I am facing a similar problem. My input image size is 512 x 512 x 3. When I use weightedClassificationLayer, at training time I receive the error message "Error using 'backwardLoss' in Layer weightedClassificationLayer. The function threw an error and could not be executed."
Can you help me solve this issue?


Answers (1)

cui 2019-5-18
Edited: cui 2019-5-18
Thank you for your reply. My guess is that the mismatch between the class distributions of the training set and the test set makes the network fail on the test set.
Unrelated to the problem itself: in PyTorch, removing the softmax layer speeds up convergence, but MATLAB does not need that.
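A quick way to check this (a sketch using countEachLabel on the datastores defined in the question):

countEachLabel(trainDigitData) % per-class file counts in the training split
countEachLabel(valDigitData)   % validation split
countEachLabel(testData)       % test set

If the per-class test counts differ sharply from the training counts, or the test images come from a different source, re-weighting the training loss alone cannot close the gap.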

Release: R2019a