Getting NaN validation loss in Regression Learner app

Hello. After training a deep neural network in the Regression Learner app, I want to see the validation loss after exporting the model, but the validation loss values are NaN, even though I selected holdout validation with 15% of the training data. What is wrong here? Thanks
  2 Comments
Omar Abdel Deen 2021-7-21
Edited: Omar Abdel Deen 2021-7-21
@KSSV I cannot see the training progress when I use Regression Learner, but I do get a validation RMSE after training, so I don't know what the problem is. The training is good: I got a validation RMSE of 4.201, but when I export the model to see how it behaved during training, the values are NaN, as are the validation checks. I already trained on the data using the trainNetwork function, but I wanted to see the difference, so I tried the Regression Learner app; the RMSE is almost the same with the app as with the function.


Answers (1)

prabhat kumar sharma
Hi Omar,
I understand you are getting a validation loss of NaN. When you encounter NaN (Not a Number) values as the validation loss in the MATLAB Regression Learner app, particularly after training a deep neural network, it usually indicates an issue with the data, the model configuration, or the training process itself.
Here are several steps you can take to diagnose and resolve this issue:
1. Check for NaNs or infinities in your data, in both the input features and the target variable.
2. Deep learning models are sensitive to the scale of input data. If your features are on very different scales, consider normalizing or standardizing your data. Common practices include scaling the inputs to have a mean of 0 and a standard deviation of 1, or scaling to a [0, 1] range.
3. Model Configuration
  • Learning Rate: A learning rate that is too high can cause the model to diverge, producing NaN values. Try lowering the learning rate.
  • Regularization: Ensure that regularization parameters (if any) are set appropriately. Too much regularization might lead to underfitting, while too little can cause overfitting or instability in training.
  • Network Architecture: Review your network architecture. Sometimes, overly complex models for the given dataset size can lead to overfitting or numerical instabilities. Try simplifying the model.
4. Ensure your data split is correct and that each partition is representative of the complete data set.
5. You can experiment with different mini-batch sizes for debugging.
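Steps 1 and 2 can be sketched in a few lines of MATLAB. This is a minimal illustration, assuming a numeric predictor matrix `X` and response vector `y` (both are placeholder names for your own variables):

```matlab
% Sketch: find non-finite values in the data, drop the affected rows,
% then standardize the predictors to zero mean and unit std. dev.
X = [1 2; 3 NaN; 5 6];   % placeholder predictor matrix with a NaN
y = [1; 2; Inf];         % placeholder response vector with an Inf

fprintf('Non-finite predictor entries: %d\n', nnz(~isfinite(X)));
fprintf('Non-finite response entries:  %d\n', nnz(~isfinite(y)));

% Keep only rows where every predictor and the response are finite
keep = all(isfinite(X), 2) & isfinite(y);
Xclean = X(keep, :);
yclean = y(keep);

% Standardize: equivalent to (Xclean - mean(Xclean)) ./ std(Xclean)
Xnorm = normalize(Xclean);
```

Cleaning and rescaling before importing the data into the app rules out the most common source of NaN losses.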
I hope this helps resolve your issue.
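As a cross-check of step 4, the app's 15% holdout split can be reproduced outside the app with cvpartition. The table `tbl` below is a placeholder for your own data:

```matlab
% Sketch: reproduce a 15% holdout validation split using cvpartition.
% tbl is a placeholder table of predictors plus a response column Y.
tbl = array2table(rand(100, 4), 'VariableNames', {'x1','x2','x3','Y'});

rng(0)                                          % reproducible split
c = cvpartition(height(tbl), 'HoldOut', 0.15);  % hold out 15% for validation
tblTrain = tbl(training(c), :);                 % 85 rows for training
tblVal   = tbl(test(c), :);                     % 15 rows for validation
```

Training on `tblTrain` with trainNetwork and evaluating on `tblVal` lets you compare the resulting RMSE against the app's reported value directly.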
