Good test error but wrong corresponding output

I have a problem with my neural network. I use MATLAB's fitnet with trainlm, validation stopping, and no pre- or post-processing of the data. The test error is very good, so I assumed the outputs for new inputs would be correct, but they were not. I don't need to normalize new inputs because there are no pre- and post-processing functions. After training I feed one new input at a time to generate an output, because I can only know one new input at a time. Has anybody had the same problem? The code seems to be correct:
net = fitnet(hiddenLayerSize);
net.inputs{1}.processFcns = {};    % no input pre-processing (e.g. no mapminmax)
net.outputs{2}.processFcns = {};   % no output post-processing
net.divideFcn = 'divideblock';     % contiguous train/val/test blocks
[net,tr] = train(net,inputs,targets);
outputs = net(inputs);
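For example, after training, a single new input is applied directly (a minimal sketch; xnew here is a hypothetical m-by-1 column standing in for a genuinely new observation):

xnew = inputs(:,end);   % stand-in for one new sample with the same row count as inputs
ynew = net(xnew)        % no manual normalization needed, since processFcns are empty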

Accepted Answer

Greg Heath, 2015-3-30
What are the results of something like the following? (Notice the deliberate omission of semicolons, so the values print.)
[ I N ] = size(x)              % input dimension and number of samples
[ O N ] = size(t)              % output dimension
MSE00 = mean(var(t',1))        % reference MSE of a constant-output model
minmaxxt = minmax([x;t])       % per-row ranges of inputs and targets
figure(1)
plot(x,t)
hold on
net = fitnet(H);
[ net, tr, y, e ] = train(net,x,t);   % y = outputs, e = errors = t - y
NMSE = mse(e)/MSE00            % normalized MSE (0 = perfect, 1 = naive model)
plot(x,y,'r.')
Next, use the index fields in tr (tr.trainInd, tr.valInd, tr.testInd) to extract each subset, then repeat the summary statistics for each subset i = trn, val, tst:
ind = { tr.trainInd, tr.valInd, tr.testInd };   % subset indices from train
figure
hold on
for i = 1:3
    xi = x(:,ind{i});
    ti = t(:,ind{i});
    yi = y(:,ind{i});
    ei = e(:,ind{i});
    [ Ii Ni ] = size(xi)
    [ Oi Ni ] = size(ti)
    MSE00i = mean(var(ti',1))
    minmaxxiti = minmax([xi;ti])
    NMSEi = mse(ei)/MSE00i
    plot(xi,ti)
    plot(xi,yi,'r.')
end
The purpose of this is to make sure the data is stationary, i.e., that the statistics of the 3 subsets are comparable.
THEN, if any new data appears to come from the same source, this can be verified by comparing it with each of the three subsets.
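For example, a minimal sketch of that verification (xnew and tnew are hypothetical new input and target matrices with the same row dimensions as x and t):

minmaxnew = minmax([xnew;tnew])           % should lie inside the subset ranges minmaxxiti
MSE00new = mean(var(tnew',1))             % should be comparable to each MSE00i
NMSEnew = mse(tnew - net(xnew))/MSE00new  % should be comparable to each NMSEi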
Hope this helps.
Thank you for formally accepting my answer
Greg
  1 Comment
Emiliano Rosso, 2015-3-31
Thanks for your participation. I tried your suggestion and everything looks good, but my problem is not resolved. When I say the corresponding output is wrong, I mean that the output computed on the same test set does not match the error previously obtained. I started with this code for 2000 inputs:
net = fitnet(hiddenLayerSize);
net.inputs{1}.processFcns = {};
net.outputs{2}.processFcns = {};
net.divideFcn = 'divideblock';
net.divideParam.trainRatio = 65/100;   % division ratios must be set
net.divideParam.valRatio = 10/100;     % BEFORE calling train, otherwise
net.divideParam.testRatio = 25/100;    % the defaults (70/15/15) are used
[net,tr] = train(net,inputs,targets);
% Extract the final 25% block as the test set (divideblock keeps it contiguous)
[m,n] = size(inputs);
scartN = round(n/100*75) + 1;          % first test column (1501 when n = 2000)
inputstest = inputs(:,scartN:n);
targetstest = targets(1,scartN:n);
outputstest = net(inputstest);
errors = gsubtract(targetstest,outputstest);
figure, ploterrhist(errors);
I decided not to use something like:
inputtest = cell2mat(gmultiply(inputs,tr.testMask));
to avoid NaNs in the array, and I checked the train/val/test division in tr to verify the indices.
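For reference, the index-based equivalent avoids the NaNs entirely; a minimal sketch using the tr.testInd field returned by train:

inputstest = inputs(:,tr.testInd);     % no NaN padding, no manual copy loop
targetstest = targets(:,tr.testInd);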
Then I performed the same training using 1500 inputs:
net.divideParam.trainRatio = 87/100;
net.divideParam.valRatio = 13/100;
net.divideParam.testRatio = 0;
The sizes of the training and validation sets are almost the same, and the data are the same. The training is not affected by the different initial random weights, because I trained the network many times and it works well every time. The only difference is that I omitted the test set from training and replaced it with single-step predictions over the old test set (the data are the same), using:
newoutputs = net(newinputs);   % 500 times (25% of 2000)
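For concreteness, a minimal sketch of that single-step loop (assuming inputstest holds the 500 held-out columns from the 2000-sample run):

Ntest = size(inputstest,2);                  % 500 = 25% of 2000
newoutputs = zeros(1,Ntest);
for k = 1:Ntest
    newoutputs(1,k) = net(inputstest(:,k));  % one-step prediction per column
end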
This should not affect the performance of the neural network. I could retrieve the errors and outputs directly from:
[net,tr,Y,E,Pf,Af] = train(net,P,T,Pi,Ai)
but I want to test the code for future real use. Now, the problem is that, for my purpose, the test error is very good but the output is not. I can't understand why the output doesn't match the target.
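For example, one quick consistency check (a sketch, assuming train was called with its extra outputs, e.g. [net,tr,Y,E] = train(net,inputs,targets), in the 2000-sample run):

Ytest_train = Y(:,tr.testInd);                  % outputs reported by train on the test block
Ytest_apply = net(inputs(:,tr.testInd));        % outputs recomputed by applying the trained net
maxdiff = max(abs(Ytest_train - Ytest_apply))   % should be near zero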
I hope somebody can help me...


More Answers (0)
