MATLAB Neural Network program error

My goal is to optimize my NN for a fitting problem, so I want to test several numbers of neurons in the hidden layer and repeat the simulation with a new initialization (using the initnw function) several times; finally, I'll choose the best architecture.
run('Data.m') % load the data
Nmax = 13; % maximum number of neurons in the hidden layer
s2 = 10; % maximum number of initializations
for i=1:Nmax
com=0;
while true
com=com+1;
inputs = In'; % In dimension 4*576
targets = Out'; % Out dimension 2*576
hiddenLayerSize =i;
net = feedforwardnet(hiddenLayerSize);
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'tansig'; % I chose tansig because I want to use 'trainbr' and 'initnw'
net.initFcn = 'initlay';
net.layers{1}.initFcn = 'initnw';
net.layers{2}.initFcn = 'initnw';
net = init(net);
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};
net.divideFcn = 'dividerand';
net.divideParam.trainRatio = 75/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 10/100;
net.trainFcn = 'trainbr'; % Bayesian Regularization backpropagation
net.performFcn = 'mse'; % Mean squared error
net.plotFcns = {'plotperform','plottrainstate','ploterrhist','plotregression','plotfit'};
[net,tr] = train(net,inputs,targets);
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs);
% What is the importance of the following lines?
trainTargets = targets .* tr.trainMask{1};
valTargets = targets .* tr.valMask{1};
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
save(sprintf('Network_%d',com), 'net') % I save networks
if com>s2
break;
end
end % end of while
end
Unfortunately, I get an error when I run this program. It seems that the problem occurs in: trainTargets = targets .* tr.trainMask{1}; valTargets = targets .* tr.valMask{1};
Could someone help me with this issue?
I also want to know whether this strategy for finding the best NN for my problem is sound. I want to find the optimal number of hidden neurons and a good weight initialization at the same time, so as to reach a global minimum with good generalization.

Accepted Answer

Greg Heath 2013-4-23
1. Always begin with all of the defaults on one of the nndatasets that has approximately the same input and output dimensions as your data. (help nndatasets).
2. Next try your data with as many defaults as possible.
3. Finally, replace defaults, one-by-one until you are satisfied.
4. The only defaults that I typically change are
a. H, the number of hidden nodes
b. net.divideFcn and corresponding ratios (especially with Timeseries)
c. net.trainParam.goal
d. net.trainParam.min_grad
5. If you search the NEWSGROUP and ANSWERS using either Ntrials or Nw, you will find many examples of double-loop design over H and random initial weights.
6. Keep all data loading and preparation outside of the outer loop.
7. Initialize the RNG before the loops so you can duplicate the random data divisions and random weight initializations.
8. You can either save the current best net (min MSEval with mse; min MSEtrn with msereg) or keep track of the initial RNG state for each design. Then you can search the Ntrials-by-numH tabulations to determine the best net, recreate it and investigate it till the cows come home.
9. WARNING: TRAINBR does not use mse. Use its default msereg rather than specifying msereg or the regularization option on mse.
10. You seem to have gotten into trouble with the data division. With TRAINBR there is only a training set and a test set. You seem to have wanted a training set and a validation set. It would be worth your while to take a look at the data-division info in properties like those tabulated below (see the sketch after this list). Unfortunately, tr.divideParam contains what you inputted and is incorrect. See
a. tr.performParam,
b. tr.valInd,
c. tr.valMask,
d. tr.best_vperf,
e. tr.vperf
11. If you practice on the engine_dataset and use rng(0), we can compare results.
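For example, a minimal sketch of point 10 (assuming the toolbox's engine_dataset): train a small TRAINBR net with the same division request as in the question, then inspect the tr fields listed above to see how the data were actually divided.
[x, t] = engine_dataset;
net = feedforwardnet(4, 'trainbr');
net.divideFcn = 'dividerand'; % request train/val/test as in the question
net.divideParam.trainRatio = 0.75;
net.divideParam.valRatio = 0.15;
net.divideParam.testRatio = 0.10;
[net, tr] = train(net, x, t);
tr.divideFcn % division function the training record actually shows
numel(tr.trainInd), numel(tr.valInd), numel(tr.testInd) % actual split sizes
tr.best_vperf % check whether a validation performance was tracked at all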
Hope this helps.
Thank you for formally accepting my answer
Greg
1 Comment
Platon 2013-5-2
Effectively, I made a lot of mistakes. Here is the adjusted code:
Nmax=10;
Iter=65;
for i=4:Nmax
clear net;
for j=1:Iter
net=creeRN(inputs,targets,i);
save(sprintf('Net_%d_%d',i,j), 'net')
end
end
function net = creeRN(inputs,targets,i)
numHiddenNeurons = i;
net = newfit(inputs,targets,numHiddenNeurons);
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'tansig';
net.divideFcn = '';
net.trainFcn = 'trainbr';
net.trainParam.showWindow = 0;
net = train(net,inputs,targets);
end
The question now is how to choose the best model among the Nmax*Iter ones? When I plot the data (predicted vs. actual) with the NN that gives the smallest SSE, the two curves are not bad, but I get SSE and MSE values that are very large, more than 100000. Is it due to my data (they vary from 0 to 10000), or is it possible to find an optimal NN with very good performance? If so, how can I proceed? This is annoying me!!
I can send you my data by email in order to compare results, if you don't mind!!


More Answers (2)

Greg Heath 2013-5-3
% Platon on 23 Apr 2013 at 9:03
% Effectively, I made a lot of mistakes. Here is the adjusted code:
% 1. I asked you to choose a nndata set to make it easier for me.
% 2. You did not. Therefore I chose the engine dataset. Then duplicated rows and deleted columns to get your data set size.
% 3. If you can help it, do not post code that will not run when cut and pasted into the command line. That means do not include function definitions.
% 4. Did you use Iter = 65 because Ntst = 0 and you are trying to mitigate the bias of estimating performance using training data?
% 5. Remove selected semicolons to help understand what is being calculated.
% 6. Unnecessary to clear a net if it is going to be overwritten by a new creation
% 7. Showing the training window helps in debugging; especially in changing the goal or minimum gradient parameters.
% 8. It is very wasteful and highly unnecessary to save every net. My computer cried like a baby and pleaded with me to never run a program like that again!
% 9. Calculate the reference MSE: MSE00 = mean(var(targets',1))
% 10. a. Initialize the RNG before the outer loop.
%     b. 10 iterations per hidden node value is usually sufficient if you use a nontraining test set. Otherwise, 30 might be OK.
%     c. Create the net.
%     d. Why did you override the default transfer functions?
%     e. Train the net using [net, tr] = train(net,x,t);
%     f. Obtain R^2 and other info from tr and store it in training summary matrices to be searched later: R2(i,j) = 1 - tr.perf(tr.best_epoch)/MSE00;
%     g. OPTIONAL: store ONLY the best current net.
close all, clear all, clc
[inputs, targets] = engine_dataset;
[I, N] = size(inputs) % [2 1199]
[O, N] = size(targets) % [2 1199]
whos
N = 576
inputs = [ inputs; inputs];
inputs = inputs(:,1:N);
targets = targets(:,1:N);
whos
% Name Size Bytes Class
% I 1x1 8 double
% N 1x1 8 double
% O 1x1 8 double
% inputs 4x576 18432 double
% targets 2x576 9216 double
Neq = N*O % 1152 training equations
MSE00 = mean(var(targets',1)) % Reference MSE
Hmax = 10
Nwmax = (I+1)*Hmax+(Hmax+1)*O % 72 unknown weights
Niter = 10
rng(0) % INITIALIZE THE RNG
for i = 4:Hmax
for j = 1:Niter
h = i
ntrial = j
numHiddenNeurons = i;
net = newfit(inputs,targets,numHiddenNeurons);
net.divideFcn='';
net.trainFcn = 'trainbr';
[net, tr] = train(net,inputs,targets);
R2(i-3,j) = 1-tr.perf(tr.best_epoch)/MSE00;
end
end
R2 = R2 % display the summary matrix (rows: H = 4:Hmax, columns: trials)
The question now is how to choose the best model among the Nmax*Iter ones?
% Search for my examples using the keyword Ntrials.
When I plot the data (predicted vs. actual) with the NN that gives the smallest SSE, the two curves are not bad, but I get SSE and MSE values that are very large, more than 100000. Is it due to my data (they vary from 0 to 10000)?
% Always standardize your data (zero mean / unit variance): very useful for finding outliers as well as for balancing the coordinate computations. This effectively converts the reference MSE00 to unity.
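For example, a scale-free check (a minimal sketch, assuming net, inputs and targets are already in the workspace):
outputs = net(inputs);
err = targets - outputs;
MSE = mean(err(:).^2) % raw MSE, in squared target units
MSE00 = mean(var(targets',1)) % reference: MSE of the constant-mean model
R2 = 1 - MSE/MSE00 % near 1 means a good fit at any target scale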
Or is it possible to find an optimal NN with very good performance? If so, how can I proceed? This is annoying me!!
% You can keep a frequently updated current minimum MSE or maximum R^2 along with its (i,j) indices. I typically search the summary matrices after the loops.
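A minimal sketch of that search, assuming the R2 summary matrix filled by the loop above (rows correspond to H = 4:Hmax):
[bestR2, k] = max(R2(:)); % best cell of the summary matrix
[ibest, jbest] = ind2sub(size(R2), k);
Hbest = ibest + 3 % winning hidden-node count (rows start at H = 4)
trialbest = jbest % which random initialization won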
Greg
1 Comment
Platon 2013-5-3
Thanks Greg. As you know, a good NN is one that can predict outputs efficiently from inputs even when they differ from the training data (= good generalization). According to the MATLAB NN documentation, the 'trainbr' function helps with this issue, but we can still encounter another problem, which is local minima. To solve it, it is advisable to do a lot of weight initializations. The MATLAB NN toolbox proposes the function initnw (Nguyen-Widrow layer initialization function) as a good strategy to initialize the weights each time we run the simulation for the same network. That was the reason why I chose a large number of initializations, Iter = 65, so I can map the weight space as much as possible. The initialization is done by default by the initnw function, so I did not get why we should use rng(0) and why we put it before the first for loop?
I used your code as proposed and I get a best R² equal to 0,997, but the MSE is still so large, surprising!! In fact, data normalization is included by default in the code, so the data are in the range [-1,1]; therefore how can it be possible to have such a large number (MSE max = 4, because (Yactual - Ypredicted)² = 4 in the worst case)!! You stated that I should standardize my data... standardize the raw data or the normalized ones? Where do I put the command Z = zscore(X)?
Finally, in the R2 matrix, using the (i,j) indices we can find the NN architecture but not the corresponding weights. So, in order to complete the code (and not save the entire networks each time, otherwise the PC will cry like a baby...), I then have to save the weights each time so I can find the complete best NN. Do you agree?



Greg Heath 2013-5-4
Edited: Greg Heath 2013-5-4
> The initialization is done by default by the initnw function, so I did not get why we should use rng(0) and why we put it before the first for loop?
When modifying and debugging, I remove selected semicolons to monitor results. The process is GREATLY facilitated when, because of RNG initialization, the same output appears before (and some places after) modifications. In addition you can monitor exactly how the modification affected the output.
> I used your code as proposed and I get a best R² equal to 0,997, but the MSE is still so large, surprising!!
That comma convention is confusing to Yankees and computers.
> In fact, data normalization is included by default in the code, so the data are in the range [-1,1]; therefore how can it be possible to have such a large number (MSE max = 4, because (Yactual - Ypredicted)² = 4 in the worst case)!!
Default normalization facilitates nonsaturation of sigmoids. The normalization is reversed in the output of train.
That is why some inexperienced designers cannot duplicate net output when they manually use the stored weights but don't realize that they also have to use the reverse transformation.
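A hedged sketch of that pitfall, assuming a trained net and input matrix x in the workspace, both layers using tansig as in the question's code, and the post-R2010b net object (processFcns/processSettings properties); it applies every configured input process function in order, runs the weights manually, then reverses the output processing:
xn = x;
for k = 1:numel(net.inputs{1}.processFcns) % apply each input process step
xn = feval(net.inputs{1}.processFcns{k}, 'apply', xn, net.inputs{1}.processSettings{k});
end
yn = tansig(net.LW{2,1} * tansig(net.IW{1,1}*xn + net.b{1}) + net.b{2});
y = yn;
for k = numel(net.outputs{2}.processFcns):-1:1 % undo each output step, last first
y = feval(net.outputs{2}.processFcns{k}, 'reverse', y, net.outputs{2}.processSettings{k});
end
max(abs(y - net(x)), [], 2) % should be ~0; skipping 'reverse' is the usual mistake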
> You stated that I should standardize my data... standardize the raw data or the normalized ones?
You have no access to the normalized data. Normalization/training/denormalization is all done within train.
> Where do I put the command Z = zscore(X)?
Use the three-output form right after you read the data and check the matrix sizes. Next, use the variance output and minmax to check variable ranges for irregularities. Then use corrcoef on [zx; zt] to check for unusually high or low correlations.
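A minimal sketch of that recipe (zscore is from the Statistics Toolbox; zx and zt are the illustrative names used above; variables are rows and samples are columns, hence the transposes):
[zx, mux, stdx] = zscore(inputs'); zx = zx'; % three-output form
[zt, mut, stdt] = zscore(targets'); zt = zt';
minmax(zx) % scan standardized ranges for outliers/irregularities
stdx, stdt % the spread outputs flag irregular variable scales
corrcoef([zx; zt]') % look for unusually high or low correlations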
> Finally, in the R2 matrix using (i,j) indices we can find the NN architecture but not the corresponding weights. So in order to complete the code (and to do not save the entire networks each time otherwise PC will cry like a baby...) I have then to save the weight each time so I can find the complete best NN do you agree?
No. Use rng(0) and the best (i,j) index to shift to the RNG state of the best design. Then redesign it.
Or, you can just save the net or weights of the current best design.
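A minimal sketch of that replay, assuming the same loop bounds as the code above and the hypothetical Hbest/trialbest indices from the R2 search; repeating the identical creation calls reproduces every random draw, so the winning design can be retrained exactly:
rng(0) % same seed as the original search
for i = 4:Hmax
for j = 1:Niter
net = newfit(inputs, targets, i); % consumes the same RNG draws as before
if i == Hbest && j == trialbest % hypothetical indices from the R2 search
net.divideFcn = '';
net.trainFcn = 'trainbr';
bestnet = train(net, inputs, targets); % the recreated best design
end
end
end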
4 Comments
Platon 2013-5-7
I made a mistake in the previous code: I should have put rng(0) in the outer loop rather than in the inner one. Anyway, wouldn't it be interesting to add another loop over rng(k)? Is there another strategy to improve the neural network's performance without changing the training function trainbr?
Greg Heath 2013-5-7
No need for a loop over RNG seeds. Just make Ntrials larger.
