Inconsistent test results with neural network
Hello,
I have just used the GUI tool of the Neural Network Toolbox for fitting. This app automatically divides the input data into training, validation and test datasets. After training, I get the performance for these three subsets, and they are all fine (MSE ~50). In particular, the built-in test set shows good results, so the network appears to generalise.
Unfortunately, these good results vanish when I test the neural network manually. When I feed unused test data into the trained network, I get really bad performance values (~1000), so the network does not generalise at all. This confuses me, because the two test options, 'included in the tool' and 'manual', should give almost the same performance values!
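For reference, this is roughly how I run the manual test (a minimal sketch; net is the trained network exported from the app, and xTest/tTest are placeholder names for the unused inputs and targets, one sample per column):

yTest = net(xTest);                       % forward pass through the trained network
perfManual = perform(net, tTest, yTest)   % MSE, the same measure the app reports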
Can anyone please tell me why there is such a huge difference in the results? The tool tells me that the neural network is good, but when I use it myself, it sucks. Why?
I am grateful for every answer!
Kind regards, Detlef
Answers (2)
Walter Roberson
2015-7-13
How are you initializing the weights? By default, the network initializes the weights randomly.
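For example, a minimal sketch of making the random initialization repeatable, assuming the app's default fitnet with 10 hidden neurons (x and t are placeholder names for the inputs and targets):

rng(0);                        % fix the random seed so the weight initialization is repeatable
net = fitnet(10);              % fitting network with the app's default hidden layer size
[net, tr] = train(net, x, t);  % train initializes the weights using the fixed seed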
Greg Heath
2015-7-13
The only time that should happen is when the two sets do not appear to come from the same probability distribution.
You don't give enough information. Assuming the data are 1-dimensional, post the following for each subset (1. training, 2. validation, 3. test1, 4. test2):
a. size(subset) =
b. mean(subset) =
c. var(subset) =
d. (mean(test1) - mean(test2)) / sqrt(var(test1) + var(test2))
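A minimal MATLAB sketch of these checks, assuming the four 1-D subsets are stored in vectors named xTrain, xVal, xTest1 and xTest2 (the names are placeholders):

subsets = {xTrain, xVal, xTest1, xTest2};   % placeholder names for the four 1-D subsets
for k = 1:numel(subsets)
    s = subsets{k};
    fprintf('subset %d: N = %d, mean = %.4g, var = %.4g\n', k, numel(s), mean(s), var(s));
end
% normalized difference between the two test-set means (item d):
d = (mean(xTest1) - mean(xTest2)) / sqrt(var(xTest1) + var(xTest2))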
Hope this helps
Greg