validation performance or test performance?

If one divides the data into 3 subsets (training/validation/test), which of these can be used as the criterion to select the network for regression?
1- validation performance
2- test performance
I really would appreciate any advice.
  2 Comments
Muhammad Usman Saleem
Edited: Muhammad Usman Saleem 2016-4-7
What is your climate data? Also tell me the source (e.g. ECMWF), and tell me what you want the validation for (missing data or something else).
I ask these questions in order to suggest a better solution.


Accepted Answer

Greg Heath
Greg Heath 2016-4-8
I have posted MANY detailed explanations of the separate data-division roles of the training, validation and testing subsets in BOTH the NEWSGROUP and ANSWERS.
Try searching with
greg nomenclature
greg nondesign
greg nontraining
  2 Comments
Rita
Rita 2016-4-8
Thanks, Greg, for your precious posts. I have read most of your posts about hidden neurons. I ran an ANN with 10 trials for each of 19 different hidden-neuron counts and ranked the results by validation performance; based on your posts, I should take the lowest H with the lowest validation performance.
Just one quick question: do I need to verify that the performance of the network did not significantly improve (at the 95% significance level) beyond, for example, 4 neurons, and therefore select 4 hidden neurons? Or is ranking the hidden-neuron counts by validation performance and taking the lowest H enough?
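For the mechanical part of this question, here is a minimal Python sketch of ranking candidate hidden-node counts by validation MSE and taking the smallest H whose best result is close to the overall best. The random numbers, variable names, and the 5% tolerance are illustrative stand-ins only (the tolerance is not a real 95% significance test):

```python
import numpy as np

# Hypothetical results: valmse[i, j] = validation MSE of random-start trial i
# for Hset[j] hidden nodes (10 trials x 19 candidate H, as in the question).
rng = np.random.default_rng(1)
Hset = np.arange(1, 20)
valmse = rng.uniform(0.01, 0.10, size=(10, Hset.size))

best = valmse.min(axis=0)                   # best trial for each candidate H
tol = 1.05 * best.min()                     # crude stand-in for a significance test
H_selected = Hset[np.argmax(best <= tol)]   # smallest H that is "good enough"
```

A proper check at the 95% level would replace the fixed tolerance with a test on the per-trial MSE distributions of neighboring H values.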
Greg Heath
Greg Heath 2016-4-9
Do I need to? Well, it depends on whom you want to impress.
My goals are, typically,
1. Obtain an unbiased estimate of Rsqtst >= 0.99 using as few hidden nodes
as possible.
2. Summarize the design details via an Ntrials-by-numhiddennodes matrix
of Rsq results for each of the 3 data-division subsets.
The training results are adjusted for the loss in degrees of freedom via
dividing SSEtrn by Ndof = Ntrneq-Nw (instead of Ntrneq) where
Nw is the number of estimated weights.
Rsqtrn = 1 - MSEtrn/mean(var(ttrn',1))
Rsqval = 1 - MSEval/mean(var(tval',1))
Rsqtst = 1 - MSEtst/mean(var(ttst',1))
Rsqtrna = 1 - MSEtrna/mean(var(ttrn',0))
3. Summarize final results via a 3-colored plot with the best Rsq values
for the trn, val, and tst subsets vs number of hidden nodes.
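The degrees-of-freedom adjustment in the formulas above can be sketched in Python/NumPy (a minimal illustration of the same arithmetic, not MATLAB toolbox code; the function name and array layout are my assumptions):

```python
import numpy as np

def rsq_summary(t, y, Nw):
    """DOF-adjusted R^2 following the formulas above.
    Assumed shapes: t (targets) and y (net outputs) are O x N arrays
    (O output dimensions, N examples); Nw = number of estimated weights."""
    O, N = t.shape
    Ntrneq = N * O                 # number of training equations
    Ndof = Ntrneq - Nw             # DOF left after estimating Nw weights
    SSE = np.sum((t - y) ** 2)
    MSEtrn = SSE / Ntrneq          # naive training MSE
    MSEtrna = SSE / Ndof           # adjusted: SSEtrn / (Ntrneq - Nw)
    # MATLAB mean(var(t',1)): biased variance over examples, averaged over outputs
    Rsqtrn = 1 - MSEtrn / np.mean(np.var(t, axis=1, ddof=0))
    # MATLAB mean(var(t',0)): unbiased variance, paired with the adjusted MSE
    Rsqtrna = 1 - MSEtrna / np.mean(np.var(t, axis=1, ddof=1))
    return Rsqtrn, Rsqtrna
```

For the validation and test subsets no adjustment is needed, since their data were not used to estimate the weights: Rsqval = 1 - MSEval/mean(var(tval',1)), and likewise for Rsqtst.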
Hope this helps.
Greg


More Answers (2)

Greg Heath
Greg Heath 2016-4-8
I have posted MANY detailed explanations of the separate data-division roles of the training, validation and testing subsets in BOTH the NEWSGROUP and ANSWERS.
Try searching with (hit counts in the NEWSGROUP and in ANSWERS, respectively):
GREG NOMENCLATURE (5, 3)
GREG NONDESIGN (51, 43)
GREG NONTRAINING (93, 112)
Hope this helps
Thank you for formally accepting my answer
Greg
  2 Comments
Greg Heath
Greg Heath 2016-4-10
The focus is on
1. Obtaining the smallest net that can achieve Rsq >= 0.99 on
unseen data that has the same summary characteristics as the
design data.
2. Presenting supporting evidence that verifies, without a
doubt, the qualifications of the net.
I can think of no better way to accomplish the above than
via multiple design results summarized via
a. Four Ntrials by numhidden Rsq matrices
b. Four curves on a plot of maximum Rsq vs numhidden
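That bookkeeping can be sketched as follows (Python/NumPy, with randomly generated placeholder numbers standing in for real design results):

```python
import numpy as np

Ntrials, Hmax = 10, 19
rng = np.random.default_rng(0)
# Placeholder values: in practice Rsq[i, j] comes from trial i of a net
# with j+1 hidden nodes, evaluated on one of the trn/val/tst subsets.
Rsq = rng.uniform(0.90, 0.999, size=(Ntrials, Hmax))

best_per_H = Rsq.max(axis=0)   # one curve point per hidden-node count
# Build one such matrix per subset, then plot the best_per_H curves
# against 1..Hmax to get the maximum-Rsq-vs-numhidden summary plot.
```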
How could a purchaser of the net be satisfied without supporting evidence from multiple designs?
Hope this helps clarify the need for multiple design results.
Greg



Muhammad Usman Saleem
1- validation performance
  1 Comment
Muhammad Usman Saleem
Edited: Muhammad Usman Saleem 2016-4-7
If you are doing missing-data imputation for climate data, then you may use this method:
(1) Make different interpolations of your data.
(2) Select performance parameters, e.g. RMS error, AME, and R^2, to compare them.
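A minimal Python sketch of step (2), assuming NumPy arrays of held-out observations and their interpolated estimates (the function name, and reading AME as absolute mean error, are my assumptions):

```python
import numpy as np

def imputation_scores(obs, est):
    """Compare one candidate interpolation against held-out observations."""
    e = obs - est
    rmse = np.sqrt(np.mean(e ** 2))   # RMS error
    ame = np.abs(np.mean(e))          # absolute mean error (bias magnitude)
    r2 = 1.0 - np.sum(e ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, ame, r2
```

Score each candidate interpolation this way and keep the one with the lowest RMSE and highest R^2.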

