What do you suggest for improving the results of an ANN whose training stops because mu reaches its maximum value?
As far as I know, increasing the value of mu does not help the network improve; it probably indicates that further training will no longer improve learning. I have also increased the size of the hidden layer, but no significant change happened in the results. I would appreciate any other suggestion or recommendation regarding this issue.
Useful information:
Regression problem. The input and target vectors have 5 and 1 elements, respectively. The data are not normalized (maybe normalizing them could be a possible way of improving). The configuration of the network is shown in the attached image.
Thank you very much in advance.
Accepted Answer
Shrestha Kumar
2018-6-4
Hi,
In Levenberg-Marquardt training (trainlm), mu is increased every time a step fails to reduce the error. Mu reaching its maximum value therefore means the algorithm could no longer find any step that improves performance, so further training would only degrade the network.
In your case the dataset is very small, so if the input values vary a lot, they will cause large variations in the weight values, which causes problems for the network. It is therefore better to normalize the dataset.
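A minimal sketch of that normalization step using mapminmax (the 5-by-N input matrix x, 1-by-N target t, and hidden-layer size 10 are assumptions for illustration):

```
% Assumed data: x is 5 x N (inputs), t is 1 x N (targets).
[xn, xs] = mapminmax(x);           % scale each input row to [-1, 1]
[tn, ts] = mapminmax(t);           % scale the target row to [-1, 1]

net = fitnet(10);                  % one hidden layer of 10 neurons (assumed size)
net = train(net, xn, tn);

yn = net(xn);                      % prediction in normalized units
y  = mapminmax('reverse', yn, ts); % undo the target scaling
```

Note that recent versions of fitnet already apply mapminmax through the network's default input/output processing functions, so explicit normalization mainly matters for custom networks or older toolbox versions.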
Another approach is to increase the size of the hidden layer (which you have already tried) or to add one more hidden layer.
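Adding a second hidden layer can be sketched like this (both layer sizes here are arbitrary choices for illustration, not tuned values):

```
% fitnet accepts a vector of hidden-layer sizes:
net = fitnet([10 10]);   % two hidden layers of 10 neurons each (assumed sizes)
net = train(net, x, t);  % x, t: the same input/target matrices as before
```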
Also, if you want the network to keep training even after the default maximum mu is reached, you can raise the limit yourself (using the command: net.trainParam.mu_max = value).
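A short sketch of raising that limit (the hidden-layer size and the new mu_max value are illustrative choices):

```
net = fitnet(10);                % hidden-layer size assumed
net.trainFcn = 'trainlm';        % Levenberg-Marquardt, the algorithm that uses mu
net.trainParam.mu_max = 1e12;    % default is 1e10; a larger limit allows more epochs
[net, tr] = train(net, x, t);    % tr records why training stopped (tr.stop)
```

Keep in mind that raising mu_max rarely helps on its own: the stop already signals that no improving step could be found, so normalization or a different architecture is usually the more productive fix.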