Early stopping method in neural networks, digits recognition
I'd like to implement early stopping in a neural network in order to improve the network's digit recognition. My example comes from the Coursera online course "Machine Learning" by Andrew Ng; here is a link to the code from the relevant exercise:
https://github.com/zhouxc/Stanford-Machine-Learning-Course/tree/master/Neural%20network%20learning/mlclass-ex4 (not my GitHub account)
The problem is that I cannot figure out how to modify fmincg.m so that after every epoch it compares the result with the output of the predict.m function, which is what I need to implement early stopping.
Instead, I tried using MATLAB's Neural Network Toolbox:
p = xt;                       % inputs
t = yt;                       % targets
plot(p, t, 'o')               % raw data
net = newff(p, t, 25);        % feedforward net with 25 hidden neurons
y1 = sim(net, p);             % output before training
plot(p, t, 'o', p, y1, 'x')
net.trainParam.epochs = 50;   % train for at most 50 epochs
net.trainParam.goal = 0.01;   % or stop when MSE reaches 0.01
net = train(net, p, t);
y2 = sim(net, p);             % output after training
plot(p, t, 'o', p, y1, 'x', p, y2, '*')
where p is 3000x400 and t is 3000x1 (originally they had 5000 rows, but I trimmed them to 3000), and here the problem emerges:
"Error using ==> network.train at 145
Targets are incorrectly sized for network.
Matrix must have 400 columns."
Any idea how to deal with that?
Or can anybody give me a hint on how to modify fmincg.m to perform early stopping?
Thanks a lot in advance
DC
1 Comment
Walter Roberson
2013-10-7
Is it possible you need to pass in p' rather than p ? Is 3000 the number of features or the number of samples ?
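Walter's suggestion can be sketched as follows. This is a hedged example, assuming 3000 is the number of samples and 400 the number of features: MATLAB's train and sim expect data with one column per sample, so the matrices need to be transposed before use.

```matlab
% MATLAB's train/sim expect features x samples matrices (one column per
% sample), so a 3000x400 sample matrix must be transposed first.
p = xt';                  % 400 x 3000: features down the rows
t = yt';                  % 1 x 3000: one target column per sample
net = newff(p, t, 25);    % one hidden layer with 25 neurons
net = train(net, p, t);   % target size now matches the network
```

With p left as 3000x400, the toolbox interprets each of the 400 columns as a sample and therefore demands 400 target columns, which matches the error message quoted above.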
采纳的回答
更多回答(3 个)
Greg Heath
2013-10-8
0 votes
What does monitoring weights after each iteration have to do with early stopping?
The only way to monitor weights every epoch is to loop over single-epoch training runs. The problem with doing this with the latest functions fitnet and feedforwardnet is that the default trainlm mu is initialized every time train is called. Therefore, mu has to be saved after each epoch and used to reinitialize mu before train is called again. Designs are successful; however, they are not the same as if you had trained continuously.
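A minimal sketch of that loop, assuming trainlm records its final mu in the training record field tr.mu (the stopping check inside the loop is left as a placeholder):

```matlab
% Loop over single-epoch calls to train, carrying trainlm's mu across
% calls so the optimizer is not reset to its default each time.
net = fitnet(25, 'trainlm');
net.trainParam.epochs = 1;       % one epoch per call to train
mu = net.trainParam.mu;          % default initial mu
for epoch = 1:50
    net.trainParam.mu = mu;      % restore mu reached in the last epoch
    [net, tr] = train(net, p, t);
    mu = tr.mu(end);             % save mu for the next iteration
    % ... inspect the weights or a validation score here and
    % break out of the loop to stop early ...
end
```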
I am not sure if patternnet's trainscg reinitializes any parameters every time train is called. I'll check. Meanwhile,
I'm pretty sure newfit and newff use trainlm. Does newpr use trainscg?
Hope this helps.
Thanks for formally accepting my answer
Greg
2 Comments
Greg Heath
2013-10-8
You might want to see the help and/or doc explanations of each training function to see if there are any that do not change parameters during training.
Greg Heath
2013-10-8
I don't see any nonconstant parameters in trainscg. Therefore, you have at least one solution.
D C
2013-10-9
0 votes
4 Comments
Greg Heath
2013-10-9
Early stopping is a default in the MATLAB training functions, so I don't see your problem.
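For context, the built-in early stopping Greg refers to works through a validation split; a sketch of the relevant settings (the values shown are the toolbox defaults):

```matlab
% The toolbox splits data into train/validation/test sets and stops
% training when validation error fails to improve max_fail times in a row.
net = patternnet(25);
net.divideFcn = 'dividerand';        % random split (the default)
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;   % validation set drives early stopping
net.divideParam.testRatio  = 0.15;
net.trainParam.max_fail    = 6;      % stop after 6 straight val failures
[net, tr] = train(net, p, t);
tr.best_epoch                        % epoch with the lowest val error
```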
D C
2013-10-9
Greg Heath
2013-10-9
doc getwb
help getwb
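To connect getwb back to the weight-monitoring discussion, a small illustrative sketch (the delta computation is just one possible way to track movement, not anything prescribed in the thread):

```matlab
% getwb returns all of a network's weights and biases as one column
% vector, convenient for checking how far the weights moved per epoch.
wb = getwb(net);                    % snapshot before the next epoch
net = train(net, p, t);
delta = norm(getwb(net) - wb);      % magnitude of the weight update
```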