Why is there a difference between output of neural network by inbuilt test function (ANN Toolbox) and custom designed test function? (Test Function: One that checks accuracy of network after training)
Here is the code for the network.
%%%%%%%%%%%%%%%%%%%%%%%Code %%%%%%%%%%%%%%%%%%%%%%%%
% train_input  -- 224 x 320 matrix: 320 training samples, each with 224 features
% test_input   -- 224 x 80 matrix: 80 test samples, each with 224 features
% train_target -- 40 x 320 target matrix for the 320 training samples
% test_target  -- 40 x 80 target matrix for the 80 test samples
setdemorandstream(491218382);
net = patternnet(44);
net.performFcn = 'mse';
net.trainFcn = 'trainscg';
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'tansig';
net.divideParam.trainRatio = 1.0; % fraction of data used for training
net.divideParam.valRatio = 0.0;   % fraction used for validation
net.divideParam.testRatio = 0.0;  % fraction used for testing
net.trainParam.epochs = 300;
net.trainParam.showWindow = 0;
[net,tr] = train(net,train_input,train_target);
%%%%%%%%%%%%%%%%%%%%Inbuilt Testing of the Network %%%%%%%%%%%%%%%%%%%%%%%%
testY = net(test_input);
[c,cm] = confusion(test_target,testY);
fprintf('Percentage Correct Classification : %f%%\n', 100*(1-c));
%%%%%%Output : Percentage Correct Classification : 95 % %%%%%%
%%%%%%%%%%%%%%%%%%%Custom Designed Testing of the Network %%%%%%%%%%%%%%%%%%%
wb = formwb(net,net.b,net.IW,net.LW);
[b,iw,lw] = separatewb(net,wb);
weight_input = iw{1,1};
weight_hidden = lw{2,1};
bias_input = b{1,1};
bias_hidden = b{2,1};
test_input = mapminmax(test_input);
test_input = removeconstantrows(test_input);
hidden = [];
output = [];
indx = [];
for j = 1:80
    %%%%% 1st Layer Calculation %%%%%
    for k = 1:44
        weighted_sum = sum(times(test_input(:,j),weight_input(k,:)'));
        hidden(k,j) = 2/(1+exp(-2*(weighted_sum + bias_input(k))))-1; % tansig function
    end
    %%%%% 2nd Layer Calculation %%%%%
    for k = 1:40
        weighted_sum = sum(times(hidden(:,j),weight_hidden(k,:)'));
        output(k,j) = 2/(1+exp(-2*(weighted_sum + bias_hidden(k))))-1; % tansig function
    end
end
output = mapminmax(output);
[c,cm] = confusion(test_target,output);
fprintf('Percentage Correct Classification : %f%%\n', 100*(1-c));
%%%%%%Output : Percentage Correct Classification : 90 % %%%%%%
Why is there a difference in percentage of correct classification when both are expected to be equal?
Accepted Answer
Greg Heath
2015-4-7
MAPMINMAX is not used correctly:
1. The parameters obtained from the training input should be used on the test input.
2. The inverse parameters obtained from the training target should be used on the test output.
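In code, the fix looks roughly like this (a minimal sketch; the variable names ps_in, ps_out and ps_rc are illustrative and not from the original post, and it assumes mapminmax and removeconstantrows are the only preprocessing steps being replicated by hand):
% Compute the normalization settings ONCE, from the training data
[~, ps_in]  = mapminmax(train_input);                  % settings from the TRAINING input
[~, ps_out] = mapminmax(train_target);                 % settings from the TRAINING target
test_norm   = mapminmax('apply', test_input, ps_in);   % re-apply (do not recompute) on test data
% ... run the manual layer calculations on test_norm to produce "output" ...
output = mapminmax('reverse', output, ps_out);         % map outputs back to the target scale
% removeconstantrows needs the same treatment:
% [~, ps_rc] = removeconstantrows(train_input);
% test_norm  = removeconstantrows('apply', test_norm, ps_rc);
Equivalently, the settings the trained network already stores (e.g. in net.inputs{1}.processSettings; the exact index depends on the order of the processFcns) can be reused, so nothing needs to be recomputed from train_input at all.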
Thank you for formally accepting my answer
Greg