nnstart Neural Net toolbox and validation ROC

Hi,
I have been training neural networks for classification with nnstart. I get perfect training results: 0% false positives (FP) and 100% true positives (TP). The test set performs noticeably worse but is still acceptable; usually I can get up to 60% TP at 40% FP (and sometimes 80% TP). However, the validation-set ROC is usually very bad, i.e. random or worse. Can someone help me understand what this means? What does it mean when the validation ROC is bad while the training ROC is perfect?
P.S. I use nnstart's classification (pattern recognition) network with about 100-200 samples, split by default into 70% training, 15% validation, and 15% test, and the default parameters: scaled conjugate gradient backpropagation, cross-entropy minimization, 1000 hidden neurons with sigmoid activation, and a softmax output layer.
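
For reference, a minimal sketch of roughly what that configuration looks like in code, assuming patternnet (which is what nnstart creates for pattern recognition) and hypothetical variables x (features, one column per sample) and t (one-hot targets); exact defaults can vary by toolbox release. The training record tr is used at the end to plot the ROC separately for the training, validation, and test samples:

% Minimal sketch of the setup described above; x and t are placeholder data.
net = patternnet(1000, 'trainscg', 'crossentropy');  % 1000 sigmoid hidden neurons, softmax output
net.divideParam.trainRatio = 0.70;                   % default dividerand 70/15/15 split
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;

[net, tr] = train(net, x, t);  % tr records which samples landed in each subset
y = net(x);                    % network outputs (class scores) for all samples

% ROC curves computed separately for the training, validation and test subsets
plotroc(t(:, tr.trainInd), y(:, tr.trainInd), 'Training', ...
        t(:, tr.valInd),   y(:, tr.valInd),   'Validation', ...
        t(:, tr.testInd),  y(:, tr.testInd),  'Test')

Passing the three labelled target/output pairs to a single plotroc call draws all three curves together, so the training, validation, and test ROCs can be compared directly.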
Answers (0)