The performance value of your NARX neural network changes each time you retrain it for two main reasons:
- Random Initialization: Neural networks typically initialize their weights randomly, so each training run starts from a different point in weight space and can settle in a different local minimum.
- Stochastic Training Process: Algorithms like stochastic gradient descent shuffle or subsample the data, so the sequence of weight updates also differs from run to run (the sketch after this list illustrates both effects).
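A minimal sketch of this run-to-run variability, under illustrative assumptions: the toy series, the lagged design matrix, and scikit-learn's MLPRegressor below all stand in for your actual NARX model and data (if you are using MATLAB's narxnet, the same ideas carry over). Training the same architecture twice on identical data produces different performance values:

```python
# Illustrative only: an MLP on lagged inputs stands in for a NARX model.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy time series: y(t) depends on y(t-1), y(t-2) and an exogenous input x(t).
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * x[t] + 0.05 * rng.normal()

# NARX-style regression matrix: past outputs plus the current exogenous input.
X = np.column_stack([y[1:-1], y[:-2], x[2:]])
target = y[2:]

# Same data, same architecture -- but no fixed seed, so the random weight
# initialization and the minibatch shuffling differ between the two runs,
# and the reported scores differ as well.
for run in range(2):
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000)
    model.fit(X, target)
    print(f"run {run}: R^2 = {model.score(X, target):.4f}")
```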
Solutions to Achieve Consistent Performance
- Set a Fixed Random Seed: Initializing the random number generator to a fixed state makes every run start from identical weights and see the same update order, so results are reproducible (a seeding sketch follows this list).
- Increase Training Epochs: Gives the network more time to converge; well-converged runs tend to differ less from one another than under-trained ones.
- Cross-Validation: Averaging results over multiple data splits gives a more reliable performance estimate than any single training run.
- Ensemble Methods: Training several differently seeded models and averaging their predictions stabilizes the final output (both techniques are sketched below, after the seeding example).
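A sketch of the first fix under the same illustrative assumptions (make_regression data and MLPRegressor stand in for your NARX model and data): with a fixed random_state, retraining reproduces the same performance value exactly.

```python
# Fixing the random seed makes retraining reproducible.
# The data and model here are illustrative stand-ins, not the NARX setup.
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=400, n_features=5, noise=0.1, random_state=0)

for run in range(2):
    # Same random_state -> identical initial weights and shuffling order,
    # so both runs report exactly the same performance value.
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                         random_state=42)  # 42 is an arbitrary fixed seed
    model.fit(X, y)
    print(f"seeded run {run}: R^2 = {model.score(X, y):.4f}")
```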
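And hedged sketches of the last two ideas, again with illustrative data and model choices: cross_val_score averages the score over five splits rather than trusting a single run, and a simple ensemble averages the predictions of five differently seeded networks.

```python
# Cross-validation and a seed-averaged ensemble; all names here are
# illustrative stand-ins, not the original NARX configuration.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=400, n_features=5, noise=0.1, random_state=0)

# 1) Cross-validation: report the mean and spread over 5 splits instead of
#    a single train/test performance value.
scores = cross_val_score(
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000), X, y, cv=5)
print(f"5-fold R^2: {scores.mean():.4f} +/- {scores.std():.4f}")

# 2) Ensemble: train several networks with different seeds and average their
#    predictions, which damps the run-to-run variability of any single net.
models = [MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                       random_state=seed).fit(X, y) for seed in range(5)]
ensemble_pred = np.mean([m.predict(X) for m in models], axis=0)
print("ensemble prediction for first sample:", ensemble_pred[0])
```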