In a learning curve, training error decreases as training data size increases.
I’ve learned and observed that training loss / error increases with training data size, as stated in Dr. Andrew Ng’s ML course.
I’ve recently run into an anomaly: both the training error and the test error curves were decreasing as the training data size increased.
Is this normal?
Some posts say this is because of regularization. In my case I use trainbr (Bayesian regularization backpropagation).
Could that be the reason?
Thank you.
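For reference, here is a minimal sketch of how I generate such a learning curve with trainbr. The built-in simplefit_dataset and the 10-neuron fitnet are only placeholders standing in for my actual data and network:

net = fitnet(10, 'trainbr');

% Minimal learning-curve sketch with trainbr (Bayesian regularization).
% Assumptions: simplefit_dataset stands in for the real data,
% and a 10-neuron fitnet stands in for the real network.
[x, t] = simplefit_dataset;

% Hold out a fixed test set so the curves are comparable across sizes.
nTest = 20;
xTest = x(end-nTest+1:end);  tTest = t(end-nTest+1:end);
xPool = x(1:end-nTest);      tPool = t(1:end-nTest);

sizes    = round(linspace(20, numel(tPool), 6));  % training-set sizes to sweep
trainErr = zeros(size(sizes));
testErr  = zeros(size(sizes));

for k = 1:numel(sizes)
    n   = sizes(k);
    net = fitnet(10, 'trainbr');             % Bayesian regularization backprop
    net.trainParam.showWindow = false;
    net.divideFcn = 'dividetrain';           % use all selected samples for training
    net = train(net, xPool(1:n), tPool(1:n));
    trainErr(k) = perform(net, tPool(1:n), net(xPool(1:n)));
    testErr(k)  = perform(net, tTest,      net(xTest));
end

plot(sizes, trainErr, '-o', sizes, testErr, '-s');
xlabel('Training set size');  ylabel('MSE');
legend('Training error', 'Test error');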
Accepted Answer
Shashank Gupta
2021-2-5
Hi,
This all boils down to the number of learnable parameters versus the training data size. Regularization does have an impact on the loss, and yes, it could well be the cause in your case. There are also other possible reasons. For the loss curves you plotted: are those losses actually optimal, and are all hyperparameters tuned properly? Prof. Andrew Ng's argument assumes that optimality has been reached: if you increase the training data while keeping the same number of learnable parameters, the optimal training loss will be higher. It is a tradeoff. The explanation given in the link you shared also makes sense; there is no denying it.
I hope this insight helps.
Cheers.
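If you want to see the regularization effect directly, one quick check is a small comparison sketch between trainlm (no regularization) and trainbr at a fixed training-set size. The dataset and network size below are assumptions, not your actual setup:

% Sketch comparing an unregularized trainer (trainlm) with Bayesian
% regularization (trainbr) at a fixed training-set size.
% Assumptions: simplefit_dataset and a 10-neuron fitnet as placeholders.
[x, t] = simplefit_dataset;
xTrain = x(1:60);   tTrain = t(1:60);
xTest  = x(61:end); tTest  = t(61:end);

for trainFcn = {'trainlm', 'trainbr'}
    net = fitnet(10, trainFcn{1});
    net.trainParam.showWindow = false;
    net.divideFcn = 'dividetrain';           % no validation split here
    net = train(net, xTrain, tTrain);
    fprintf('%s: train MSE = %.4g, test MSE = %.4g\n', trainFcn{1}, ...
        perform(net, tTrain, net(xTrain)), perform(net, tTest, net(xTest)));
end

Typically the regularized run shows a somewhat higher training error but a comparable or lower test error, which is consistent with the tradeoff described above.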