Using trainbr with narxnet: the predicted value at time t seems to depend in part on the target value at time t. But we don't know that value yet!
Assume we have a time series several thousand points long where we know all of the inputs up to day t, and all of the correct target values up to day t-1.
In scenario one, we apply narxnet with trainbr to the entire series, and look at the predicted value at day t, call it P(t).
In scenario two, we apply narxnet with trainbr to the same series up to day t, and look at the predicted value at day t, call it P*(t).
In general, P(t) ~= P*(t). Why?
In fact, if we substitute in some fictitious value for the time series at day t before training, P*(t) can be wildly different from P(t).
However, also in scenario two, even if we supply the true value of the target at time t, P*(t) is still often not equal to P(t).
Is there a way to deal with this behavior? It seems odd indeed that the target value at time t, the very thing we are trying to predict, influences the network.
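For concreteness, the effect can be reproduced with something like the sketch below: two training runs on the same series that differ only in the value placed at day t give different predictions at day t. The data, delay settings, and network size here are placeholders, not my actual setup.
```matlab
% Placeholder series: inputs known through day t, targets known through day t-1.
N = 2000;                              % day t = day N
X = num2cell(rand(1, N));              % exogenous inputs, days 1..t
T = num2cell(rand(1, N));              % targets, days 1..t-1, plus some value at day t

net = narxnet(1:2, 1:2, 10);           % arbitrary input delays, feedback delays, hidden units
net.trainFcn  = 'trainbr';             % Bayesian regularization
net.divideFcn = 'dividetrain';         % trainbr is normally used without a validation split

% First run: train on the series as-is and read the open-loop output at day t.
rng(1);                                % seed so both runs start from the same initial weights
[Xs, Xi, Ai, Ts] = preparets(net, X, {}, T);
net1 = train(net, Xs, Ts, Xi, Ai);
Y1   = net1(Xs, Xi, Ai);
P_t  = Y1{end};                        % P(t)

% Second run: identical except for the value placed at day t.
T2    = T;
T2{N} = 0;                             % fictitious target for day t
rng(1);
[Xs2, Xi2, Ai2, Ts2] = preparets(net, X, {}, T2);
net2     = train(net, Xs2, Ts2, Xi2, Ai2);
Y2       = net2(Xs2, Xi2, Ai2);
P_star_t = Y2{end};                    % P*(t); generally not equal to P(t)
```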
Answer (1)
Gagan Agarwal
2024-2-26
Hi Kevin,
The discrepancy between P(t) and P*(t) in the scenarios you've described can be attributed to the nature of the Nonlinear Autoregressive Network with Exogenous Inputs (NARX) model and to the training algorithm used, Bayesian regularization backpropagation (trainbr).
Here are some possible reasons for the discrepancy:
- Dynamic nature of NARX networks: a NARX network is a dynamic system that uses previous inputs and previous outputs (the delayed targets, during open-loop training) to predict the next output, so its predictions depend on the data it was trained on.
- Bayesian regularization (trainbr): trainbr updates the weights and biases by minimizing a combination of an error term and a regularization term computed over the entire training set. Any target value included in training, including whatever value sits at day t, therefore influences the final weights and biases, and through them the prediction P*(t), even though the open-loop prediction at day t itself is computed only from earlier inputs and from targets up to day t-1.
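As a small illustration (the data and delay settings below are placeholders), you can check that the value supplied at day t is part of the shifted target sequence that preparets hands to train, so the day-t residual enters the error term that trainbr minimizes:
```matlab
% Placeholder data and arbitrary delays, only to show where the day-t target ends up.
N = 100;
X = num2cell(rand(1, N));              % inputs, days 1..t
T = num2cell(rand(1, N));              % targets; T{N} is whatever value was placed at day t

net = narxnet(1:2, 1:2, 10);
net.trainFcn  = 'trainbr';
net.divideFcn = 'dividetrain';

[Xs, Xi, Ai, Ts] = preparets(net, X, {}, T);

isequal(Ts{end}, T{N})                 % true: the last training target is the day-t value

% Because that value is in Ts, its residual contributes to the error term that
% trainbr minimizes, so changing it changes the trained weights and hence P*(t).
net = train(net, Xs, Ts, Xi, Ai);
```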