Unexpected Bayesian Regularization performance
I'm training a network to learn the sine function from 400 noisy samples.
If I use a 1-30-1 feedforwardnet with 'trainlm', the network generalises well. If I use a 1-200-1 feedforwardnet, the network overfits the training data, as expected. My understanding was that 'trainbr' on a network with too many neurons would not overfit. However, if I run 'trainbr' on a 1-200-1 network until convergence (Mu reaches its maximum), the resulting network still seems to overfit the data, despite a strong reduction in the reported "Effective # Param".
This seems like strange behaviour to me. Have I misunderstood Bayesian regularization? Can someone provide an explanation?
I can post my full code if necessary, but first I want to know whether the following is correct:
'trainbr' will not overfit with large networks if run to convergence.
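For reference, my setup is roughly the following sketch; the interval, noise level, and epoch limit shown here are placeholders, not my exact values:

% Assumed setup: uniform grid on [0, 2*pi], Gaussian noise with sigma = 0.1
x = linspace(0, 2*pi, 400);          % 400 input samples
t = sin(x) + 0.1*randn(size(x));     % noisy sine targets

% 1-30-1 with Levenberg-Marquardt: generalises well
net30 = feedforwardnet(30, 'trainlm');
net30 = train(net30, x, t);

% 1-200-1 with Bayesian regularization: run until Mu reaches its maximum
net200 = feedforwardnet(200, 'trainbr');
net200.trainParam.epochs = 1000;     % allow training to run to convergence
net200 = train(net200, x, t);

% Compare both fits against the clean sine
plot(x, t, '.', x, sin(x), 'k-', x, net30(x), 'b-', x, net200(x), 'r-')
legend('noisy data', 'sin(x)', '1-30-1 trainlm', '1-200-1 trainbr')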
Thanks
2 Comments
Greg Heath
2020-8-22
Edited: Greg Heath, 2020-8-22
How many periods are covered by the 400 samples?
What minimum number of samples per period is necessary?
Greg
1 Answer
Shubham Rawat
2020-8-28
Hi Jonathan,
Given your dataset and the number of neurons, it is possible that your model is overfitting.
I have reproduced your code with 20 neurons and the 'trainbr' training function, and it gives the results attached here, with Effective # Param = 18.6.
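In outline, the reproduction looks like this; the data generation is assumed as in the question, and gamk is my assumption for the training-record field where 'trainbr' stores the effective number of parameters:

% Assumed data, as in the question
x = linspace(0, 2*pi, 400);
t = sin(x) + 0.1*randn(size(x));

% 1-20-1 network trained with Bayesian regularization
net20 = feedforwardnet(20, 'trainbr');
[net20, tr] = train(net20, x, t);

% Effective number of parameters reported by trainbr
% (assumed field name in the training record: gamk)
effParam = tr.gamk(end)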
0 Comments