I understand that you are training a "NARX" model with input data of shape "[180000x8]" and target data "[180000x6]" to predict multivariate time series values. You want the sum of each predicted row to closely match the sum of the corresponding actual row, while also preserving the ratios among the six target variables within each row.
Standard "NARX" training with "trainlm" and the default "MSE" loss might not optimize both the sum and ratio constraints simultaneously, since "MSE" penalizes element-wise errors without regard to row structure. Training performance and prediction accuracy may also degrade with such a long, high-dimensional sequence, especially without suitable normalization.
To help resolve the issue, kindly consider the following improvements:
1. Normalize the Targets Row-wise Before Training: This preserves the ratios among target values.
T_mat = cell2mat(Ziel);            % 6 x 180000 matrix (variables x timesteps)
row_sums = sum(T_mat, 1);          % 1 x 180000 per-timestep sums (the "rows" of your [180000x6] data)
T_normalized = T_mat ./ row_sums;  % implicit expansion (R2016b+); each column now sums to 1
T = mat2cell(T_normalized, size(T_normalized, 1), ones(1, size(T_normalized, 2)));  % back to a 1 x 180000 cell array for "preparets"
2. Use "Bayesian Regularization" ("trainbr") Instead of "Levenberg-Marquardt" ("trainlm"): This helps with generalization and handles noisy data better.
trainFcn = 'trainbr';
3. Reduce Delay Ranges for Stability and Efficiency: Instead of "[1:8]", you can use the following:
inputDelays = 1:3;
feedbackDelays = 1:3;
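Steps 2 and 3 come together when the network is created. Below is a minimal sketch, assuming "X" is your 1x180000 cell array of 8x1 inputs and "T" is the row-normalized target cell array from step 1 (both names are placeholders for your own variables):

```matlab
inputDelays    = 1:3;
feedbackDelays = 1:3;
hiddenSize     = 10;   % placeholder; sweep this in your existing loop
% 'open' trains the series-parallel (open-loop) form;
% 'trainbr' enables Bayesian regularization.
net = narxnet(inputDelays, feedbackDelays, hiddenSize, 'open', 'trainbr');
[Xs, Xi, Ai, Ts] = preparets(net, X, {}, T);
net = train(net, Xs, Ts, Xi, Ai);
```

After training, you can convert the network with "closeloop(net)" for multi-step-ahead prediction.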
4. Evaluate the Model with Sum and Ratio Metrics: This will give a clearer measure of performance:
N = size(y2_rescaled, 2);        % "preparets" trims the first max-delay timesteps
T_eval = T_mat(:, end-N+1:end);  % align targets with the predicted span
sum_error = mean(abs(sum(y2_rescaled, 1) - sum(T_eval, 1)));  % mean absolute error of the per-timestep sums
cos_sim = dot(y2_rescaled, T_eval) ./ (vecnorm(y2_rescaled) .* vecnorm(T_eval));  % column-wise cosine similarity
ratio_error = 1 - mean(cos_sim);  % 0 when the ratios are perfectly preserved
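The "y2_rescaled" used in the metrics above is assumed to be the network output mapped back to the original scale. One hedged sketch, assuming "y2" is the cell-array output of the network and the true per-timestep sums ("row_sums" from step 1) are available at evaluation time (in a pure forecasting setting those sums would themselves have to be predicted):

```matlab
y2_mat = cell2mat(y2);              % 6 x N matrix of normalized outputs
y2_mat = y2_mat ./ sum(y2_mat, 1);  % re-normalize so each column sums to 1
% "preparets" drops the first max-delay timesteps, so align with the
% last N columns of the original data before rescaling.
N = size(y2_mat, 2);
y2_rescaled = y2_mat .* row_sums(end-N+1:end);
```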
These enhancements should help your "NARX" network learn both the correct magnitude (via sum recovery) and the internal structure (via ratio preservation). Kindly integrate these suggestions into your existing loop and test various hidden layer sizes.
For further reference, kindly refer to the following official documentation:
- https://www.mathworks.com/help/deeplearning/ref/narxnet.html
- https://www.mathworks.com/help/deeplearning/ref/preparets.html
I hope this helps!

