Validation and test data show NaN, regularization does not work, problems with divideblock.
I am trying to classify large datasets (~80 - 100k samples). The problem is that the data are time series in which neighboring data points can be highly correlated, which seems to make block validation necessary.
I am running multiple trials with an increasing number of hidden neurons. I have googled "Greg Ntrials", but I did not quite understand how those recommendations transfer to a binary classification problem with 6 - 36 inputs and 80k samples.
Ntrials = 0 : 5 : 20;
for n = Ntrials
    net = patternnet(length(X{1}) + n);
    net.trainParam.showWindow = 0;
    net.trainFcn = 'trainlm';
    net.performParam.regularization = 0.1;
    net = train(net, X, T, 'useParallel', 'yes');
end
First of all, I always get the following message, even if I manually specify 'crossentropy' for net.performFcn:
Warning: Performance function replaced with squared error performance.
Second, I sometimes get the following error:
Subscript indices must either be real positive integers or logicals
Error in divideblock>divide_indices (line 108)
Third, the nntraintool window always shows up, and net.trainParam.showWindow is set back to 1 after training.
Fourth, and most importantly, plotconfusion always shows NaN for both the validation and the test data. I have tried various settings, including the defaults, and none of them seem to use validation. Somehow I cannot get the patternnet to show validation results (the confusion matrix shows NaN for the validation and test sets whatever I do), and the validation checks counter in nntraintool shows 0 - 0.
Finally, when I set regularization to 0.1, the training function switches to trainbr. If I turn regularization off, training stays at trainlm, but performFcn still changes to mse.
I tried thinning out the training set (X = X(1:10:end) and T = T(1:10:end)), but the behavior stays the same.
Any help would be much appreciated. Thank you so much.
Accepted Answer
Elizabeth Reese
2017-12-5
1. Based on this documentation, trainlm is limited to a performance function of mse (mean squared error) or sse (sum of squared errors) because of the Jacobian calculations in the algorithm. To use the crossentropy performance function together with regularization, please use a different training function.
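For example, a minimal sketch of this (the hidden-layer size of 10 is a placeholder, and the divideblock split is only a suggestion given the correlated time series; trainscg is the patternnet default and does support crossentropy):
net = patternnet(10, 'trainscg');        % trainscg supports crossentropy
net.performFcn = 'crossentropy';         % patternnet default, restated for clarity
net.performParam.regularization = 0.1;   % regularization is kept
net.divideFcn = 'divideblock';           % contiguous blocks for correlated time series
net.trainParam.showWindow = 0;           % set AFTER trainFcn so it is not reset
net = train(net, X, T);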
2. Without knowing more about when this error occurs, I cannot suggest anything specific. If you see it again, feel free to contact MathWorks Technical Support.
3. The nntraintool window opens because you update trainFcn after you set showWindow to false. Updating trainFcn resets trainParam back to its defaults, which means showWindow becomes true again. Swapping those two lines fixes this, as in the sketch below.
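A minimal sketch of the corrected ordering (the hidden-layer size is again a placeholder):
net = patternnet(10);
net.trainFcn = 'trainlm';         % setting trainFcn resets trainParam to its defaults
net.trainParam.showWindow = 0;    % so suppress the window afterwards, not before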
4. plotconfusion uses regular MATLAB arithmetic to calculate the values, so if there is NaN in the data, you will likely get a NaN result. For example, NaN + 1 = NaN and sum([1:10 NaN]) = NaN.
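A quick way to check for NaN values before training (this assumes X and T are numeric matrices; convert with cell2mat first if they are cell arrays):
% Count NaN entries in the inputs and targets
fprintf('NaNs in X: %d, NaNs in T: %d\n', nnz(isnan(X)), nnz(isnan(T)));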
5. I believe this is linked to the point above about the options available with trainlm. The switch to mse does not occur until training, and the network reverts to the default trainParam for mse at that point.
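One way to observe this (a hypothetical diagnostic, assuming the replacement is written back to the returned network):
net = patternnet(10);
net.trainFcn = 'trainlm';
disp(net.performFcn)      % still 'crossentropy' before training
net = train(net, X, T);   % the replacement warning appears here
disp(net.performFcn)      % 'mse' afterwards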