How to use a NARX network in closed loop for real prediction
Thanks Greg for your reply. I think the network, evaluated open-loop, gives good results, but for now I do not trust them. My questions are as follows:
1 - I have evaluated the accuracy open-loop; how would I do it closed-loop? For a first closed-loop simulation I created an input matrix containing the last two values of inputSeries plus, say, 30 inputs outside the training sample, giving a 2-by-32 array. To create the matrix newTargets I take the last value of targetSeries and fill the rest with 31 NaNs, giving a 1-by-32 matrix. From here I simulate the network, and the values it returns, plotted together with the real targets, are slightly displaced. (I would like to send a screenshot of that graph, if you tell me how to send it.) I think I am performing the operation correctly, although I would like to evaluate the closed-loop accuracy very strictly.
2 - My second question is about using the net for real. Suppose today is March 4, 2013. From a platform I obtain the data (the RSI and EMA inputs), from which I build another matrix newInputs containing the last 2 values of inputSeries plus today's RSI and EMA values. (I think I am doing well so far.) To create another array newTarget I take the last value of targetSeries and add 2 NaNs, so that newInput (2-by-3) and newTarget (1-by-3) have the same number of columns, and I run the simulation to obtain the predicted value for March 5, 2013. To keep iterating I feed in the March 5 value and obtain the value for March 6, much like a NAR network does. That is, with the preparets command:
[inputs, inputStates, layerStates, targets] = preparets(net, inputSeries, {}, targetSeries); and then evaluate the network outside the sample?
Basically these are my doubts.
Many thanks Greg
0 comments
Accepted Answer
Greg Heath
2013-3-5
Edited: Greg Heath, 2013-3-5
I think the network is evaluated openloop and I think it gives good results,
What are I, N, ID, FD, H, Ntrn, Nval, Ntst, R2trn, R2trna, R2val and R2tst ?
but for now I do not believe them.
Why?
What do you get from the same data using closeloop?
My questions are as follows:
1 - I have evaluated the accuracy open-loop; how would I do it closed-loop? For a first closed-loop simulation I created an input matrix containing the last two values of inputSeries plus, say, 30 inputs outside the training sample, giving a 2-by-32 array. To create the matrix newTargets I take the last value of targetSeries and fill the rest with 31 NaNs, giving a 1-by-32 matrix.
I don't understand where this data is coming from.
From here I simulate the network, and the values it returns, plotted together with the real targets, are slightly displaced. (I would like to send a screenshot of that graph, if you tell me how to send it.) I think I am performing the operation correctly, although I would like to evaluate the closed-loop accuracy very strictly.
There is a way to see plots on ANSWERS. Find out how. Right now I need to see your code.
2 - My second question is about using the net for real. Suppose today is March 4, 2013. From a platform I obtain the data (the RSI and EMA inputs), from which I build another matrix newInputs containing the last 2 values of inputSeries plus today's RSI and EMA values. (I think I am doing well so far.) To create another array newTarget I take the last value of targetSeries and add 2 NaNs, so that newInput (2-by-3) and newTarget (1-by-3) have the same number of columns, and I run the simulation to obtain the predicted value for March 5, 2013. To keep iterating I feed in the March 5 value and obtain the value for March 6, much like a NAR network does. That is, with the preparets command:
[inputs, inputStates, layerStates, targets] = preparets(net, inputSeries, {}, targetSeries); and then evaluate the network outside the sample?
Basically these are my doubts.
I don't understand. Post the code.
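For reference, a minimal sketch of the one-step-ahead iteration the question describes might look as follows. This is an assumption, not code from the thread: the variable todaysInput is hypothetical, and the question's NaN padding is adjusted so the feedback delays are seeded with known target values rather than NaN.

```matlab
% Hypothetical sketch: one-step-ahead prediction with a closed-loop NARX net.
% Assumes a trained open-loop net, inputSeries/targetSeries as 1-by-N cell
% arrays, and todaysInput = today's [RSI; EMA] column (2-by-1).
netc = closeloop(net);                                % closed-loop form

newInputs  = [inputSeries(end-1:end), {todaysInput}]; % last 2 inputs + today
newTargets = [targetSeries(end-1:end), {NaN}];        % seed delays, pad with NaN

% preparets fills the 2 input and 2 feedback delay states from the first
% columns; the remaining column drives one prediction step.
[Xs, Xi, Ai] = preparets(netc, newInputs, {}, newTargets);
Ypred = netc(Xs, Xi, Ai);                             % forecast for tomorrow
```

To keep iterating, append tomorrow's realized input and target to the series and repeat.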
Greg
More Answers (1)
Greg Heath
2013-3-8
Please add a line of English translation beneath your Spanish comments.
% I have two files:
% p (the inputs), consisting of 8 columns and 2200 rows
% t (the targets), consisting of 1 column and 2200 rows
Which have to be transposed ...
% MSE00 = mean(var(t1,1));   % MSE00  = 0.0095
% MSE00a = mean(var(t1,0));  % MSE00a = 0.0095
Not quite. Should remove the initial delays and divide into separate trn/trna/val/tst estimates
% number of layers and delays
% inputDelays = 1:2;
% feedbackDelays = 1:2;
% hiddenLayerSize = 9;
% calculation of other parameters
% [I N] = size(p);  % I = 8,  N = 2200
% [O N] = size(t);  % O = 1,  N = 2200
% Neq = N*O;        % Neq = 2200
No. Should exclude N2 = 100 holdout data and 2 initial delay data points
Neq = (N-100-2)*O = 2098
% Nw=(I+1)*hiddenLayerSize+(hiddenLayerSize+1)*O; % Nw=91
No. Should include the delay weights
Nw = (NID*I+NFD*O+1)*H+(H+1)*O = 181
After configure or train can check with
Nw = net.numWeightElements
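The delay-weight count can be checked by hand. A small sketch using the dimensions from this thread (NID/NFD are the numbers of input and feedback delays):

```matlab
% Weight count for a NARX net: each hidden node sees NID*I delayed inputs,
% NFD*O delayed feedback outputs, and a bias; the output layer adds (H+1)*O.
NID = 2; NFD = 2; I = 8; H = 9; O = 1;
Nw = (NID*I + NFD*O + 1)*H + (H+1)*O   % = 181
```

The original Nw = 91 came from the static formula (I+1)*H + (H+1)*O, which ignores the delay taps.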
% Ntrneq=0.7*Neq; % Ntrneq=1540
Ntrneq = Neq - 2*round(0.15*Neq) % 1468
% Ndof=Ntrneq-Nw; % Ndof=1449
Ndof = 1287
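Putting the corrected counts together, the degrees-of-freedom arithmetic is (assuming the N2 = 100 holdout and 2 initial delay points are excluded, as above):

```matlab
% Degrees-of-freedom bookkeeping for the 2200-point series in this thread.
N = 2200; N2 = 100; O = 1; Nw = 181;
Neq    = (N - N2 - 2)*O            % 2098 usable equations
Ntrneq = Neq - 2*round(0.15*Neq)   % 1468 training equations (70/15/15 split)
Ndof   = Ntrneq - Nw               % 1287 estimation degrees of freedom
```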
% net.divideFcn='divideblock';
'divideind' will also work if you specify the integers
% net.trainParam.goal=0.01*MSE00;
net.trainParam.goal = 0.01*Ndof*MSEtrn00a/Ntrneq
to obtain R2trna ~ 0.990
% training the network
% [net,tr] = train(net,inputs,targets,inputStates,layerStates);
[net, tr, Ys, Es, Xf, Af] = train(net, inputs, targets, inputStates, layerStates);
Ys,Es contain trn/val/test output and error time series
Xf,Af are the final delay data for use with your next N2 = 100 data points.
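A hedged sketch of reusing those final states on the next block of data. The conversion of open-loop states to closed-loop states via closeloop(net,Xf,Af) is an assumption to check against your release's documentation, and X2 is a hypothetical cell array holding the next 100 input columns:

```matlab
% Train open-loop, keep the final delay states, then continue closed-loop
% on the N2 = 100 holdout inputs without re-priming the delays by hand.
[net, tr, Ys, Es, Xf, Af] = train(net, inputs, targets, inputStates, layerStates);

[netc, Xic, Aic] = closeloop(net, Xf, Af);  % closed-loop net + converted states
Y2 = netc(X2, Xic, Aic);                    % multistep prediction over the holdout
```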
% outputs and errors
% outputs = net(inputs,inputStates,layerStates);
% errors = gsubtract(targets,outputs);
% MSE = perform(net,targets,outputs);  % MSE  = 7.7982e-5
% MSEa = Neq*MSE/(Neq-Nw);             % MSEa = 8.1346e-5
% R2  = 1 - MSE/MSE00;                 % R2  = 0.9918
% R2a = 1 - MSEa/MSE00a;               % R2a = 0.9915
% MSEtrn = tr.perf(end);               % MSEtrn = 7.1240e-5
% MSEval = tr.vperf(end);              % MSEval = 9.4383e-5
% MSEtst = tr.tperf(end);              % MSEtst = 9.3139e-5
% R2trn = 1 - MSEtrn/MSE00;            % R2trn = 0.9925
% R2val = 1 - MSEval/MSE00;            % R2val = 0.9901
% R2tst = 1 - MSEtst/MSE00;            % R2tst = 0.9902
% precisiones = [MSE MSEa R2 R2a R2trn R2val R2tst];
Almost.
Need MSEtrna,R2trna and correct use of MSEtrn00, MSEtrn00a, MSEval00 and MSEtst00 .
Now close the loop and test on the original data as I explained in the recent answer to Nicholas.
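A minimal sketch of that closed-loop check, assuming the same inputSeries/targetSeries cell arrays used for the open-loop evaluation:

```matlab
% Close the loop and re-score on the original series.
netc = closeloop(net);
[Xc, Xic, Aic, Tc] = preparets(netc, inputSeries, {}, targetSeries);
Yc   = netc(Xc, Xic, Aic);
MSEc = perform(netc, Tc, Yc);
R2c  = 1 - MSEc/mean(var(cell2mat(Tc), 1))   % closed-loop R^2
```

Expect R2c to be noticeably lower than the open-loop R2; the gap measures how much the net relies on fed-back true targets.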
% we close the loop and check with 50 values of p2
% netc = closeloop(net);
% netc.name = [net.name ' - Closed Loop'];
% view(netc)
% take the number of predictions
% NumberOfPredictions = 50;
Why not use all 100 of p2(:,1:N2) ?
=============== MORE LATER
Greg
3 comments
Greg Heath
2013-3-9
Only the training error needs the DOF adjustment. The test error is unbiased. The val error is somewhat biased if its minimum caused training to stop. However, most of the time it is near the test set error. Besides, I've never seen anyone propose a way to mitigate val-set stopping bias.