Training error from prediction higher than training error during the training process

Is it correct to expect that the training error at the end of the training process and the training error of the prediction with the fixed net are the same? I am currently using an RNN which, in the training phase, gives an error of 3% on the training set, but when I use it to predict the values of the training set I get an error of 18% (I am using it for prediction, not classification), while the error on the validation set is the same in both cases. Is there any finalization of the network after the last step which might lead to this result?
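For context, this is roughly how I recompute the training-set error on the fixed network after training (net, XTrain and YTrain are placeholder names for my trained RNN and the training sequences/targets):

% Minimal sketch: recompute the training-set error on the fixed (trained) network
% and compare it with the error reported during training.
% net, XTrain, YTrain are placeholders; numeric sequence arrays are assumed.
YPred = predict(net, XTrain);                         % predictions from the frozen network
trainRMSE = sqrt(mean((YPred - YTrain).^2, "all"));   % error on the training set after training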

Answer (1)

Avadhoot on 21 Feb 2024
Hi Claudia,
From your question, I understand that you are getting a very low error rate on your training samples but a high error rate on your test samples, while the validation error rate remains the same.
This looks like a classic example of overfitting, where the model performs extremely well on the training set but fails to perform on unseen data. You can try the following remedies:
  1. Regularization: Add L2 regularization to your loss function to reduce overfitting; this promotes generalization.
  2. Early stopping: Monitor the performance of the model on a validation set and stop training when performance begins to degrade, which indicates overfitting. This can be specified in the training options in MATLAB; see the sketch after this list.
  3. Data augmentation: You can perform data augmentation on your training dataset to create new samples from the existing ones by adding noise, applying temporal distortions, or using techniques like back-translation for text data. This helps the model generalize better. You can find data augmentation options in MATLAB datastores; a small noise-jitter sketch follows the documentation links below.
  4. Reduce model complexity: Use a smaller network (fewer layers or hidden units) so that the model cannot simply memorize the training set.
  5. Weight initialization: Initialize weights using an initialization scheme like Xavier or He initialization.
  6. Hyperparameter optimization: Search for better hyperparameter values using grid search or random search.
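As a rough sketch of points 1 and 2, assuming you train with trainNetwork on sequence data (XTrain, YTrain, XVal, YVal and layers are placeholder names for your data and network):

% Sketch: L2 regularization and early stopping via trainingOptions.
% XTrain/YTrain, XVal/YVal and layers are placeholders for your data and network.
options = trainingOptions("adam", ...
    "MaxEpochs", 100, ...
    "L2Regularization", 1e-4, ...        % point 1: penalize large weights
    "ValidationData", {XVal, YVal}, ...  % point 2: monitor validation error
    "ValidationPatience", 5, ...         % stop after 5 validations without improvement
    "Shuffle", "every-epoch", ...
    "Verbose", false);
net = trainNetwork(XTrain, YTrain, layers, options);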
For more information about data augmentation and early stopping, refer to the documentation below:
  1. https://www.mathworks.com/help/deeplearning/ref/trainingoptions.html
  2. https://www.mathworks.com/help/deeplearning/ref/imagedataaugmenter.html
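As a small sketch of the noise-based augmentation mentioned in point 3, assuming XTrain is a column cell array of channels-by-time numeric sequences and YTrain holds the corresponding targets (both names are placeholders):

% Sketch: augment sequence data by adding small Gaussian noise (point 3).
noiseLevel = 0.01;                                    % relative noise magnitude, tune as needed
XNoisy = cellfun(@(x) x + noiseLevel*std(x,0,2).*randn(size(x)), ...
                 XTrain, "UniformOutput", false);     % per-channel noise
XTrainAug = [XTrain; XNoisy];                         % original plus noisy copies
YTrainAug = [YTrain; YTrain];                         % targets unchanged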
Hope this helps.
