Why is there so much difference in my CNN regression between training loss and validation loss?

I'm doing position prediction with a 1D deep learning network using EMG signals; "angle estimation" would describe it better. I record signals from 2 muscles simultaneously, along with the corresponding angle information. In other words, my input has 2 channels and the target is the angle values.
As with any classical deep learning workflow for regression, I first prepared my data (normalization, reshaping, etc.), then fed it to the deep learning network I created and ran the training.
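Roughly, the preparation step looks like this (a minimal sketch; emg, ang, and winLen are placeholder names and values, not my exact code):

% emg is [numSamples x 2] raw EMG, ang is [numSamples x 1] target angles
mu    = mean(emg, 1);
sigma = std(emg, 0, 1);
emgN  = (emg - mu) ./ sigma;        % per-channel z-score (ideally using training-split statistics only)

winLen = 200;                       % placeholder window length
numWin = floor(size(emgN, 1) / winLen);
X = cell(numWin, 1);                % each cell: one [2 x winLen] sequence
Y = zeros(numWin, 1);
for k = 1:numWin
    idx  = (k - 1) * winLen + (1:winLen);
    X{k} = emgN(idx, :).';          % channels along rows, time along columns
    Y(k) = ang(idx(end));           % placeholder target: the angle at the window's end
end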
Here is my CNN model:
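(A minimal sketch of the network, assuming a 2-channel sequence input; the filter counts and kernel sizes below are placeholders rather than my exact values:)

layers = [
    sequenceInputLayer(2)                         % 2 EMG channels
    convolution1dLayer(5, 16, 'Padding', 'same')  % placeholder kernel size / filter count
    batchNormalizationLayer
    reluLayer
    maxPooling1dLayer(2, 'Stride', 2)
    convolution1dLayer(5, 32, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    globalAveragePooling1dLayer
    fullyConnectedLayer(1)                        % single output: the angle
    regressionLayer];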
But in the training and validation results (i.e., when I follow the training process with 'Plots','training-progress'), I see a large gap between the training loss and the validation loss.
I can say that the network has thoroughly learned the training data, to the point of memorizing it, while its predictions do not resemble the validation/test data at all.
It felt like overfitting, so I tried the following:
1- I tried both sgdm and adam; it didn't make much difference.
2- I reduced the learning rate (the current learning rate is 1e-2).
3- I reduced the size of the data and downsampled it to avoid over-memorization, because consecutive angle values are nearly identical (for example: 178.50 178.52 178.52 177.88 177.87).
When the data size decreased, the gap between training loss and validation loss did shrink, but a difference remains, so it didn't solve my problem. I can't find where my mistake is. (My training setup is sketched below.)
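For completeness, the training call looks roughly like this (XTrain, YTrain, XVal, and YVal are placeholder names for my training and validation splits, and the epoch/batch values are placeholders):

options = trainingOptions('adam', ...           % I also tried 'sgdm'
    'InitialLearnRate', 1e-2, ...               % my current learning rate
    'MaxEpochs', 100, ...                       % placeholder
    'MiniBatchSize', 64, ...                    % placeholder
    'Shuffle', 'every-epoch', ...
    'ValidationData', {XVal, YVal}, ...         % validation split monitored during training
    'Plots', 'training-progress', ...
    'Verbose', false);
net = trainNetwork(XTrain, YTrain, layers, options);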
If anyone could give some insight on this I would greatly appreciate it.
Thank you.

Answers (0)
