How can I improve the performance of a feed-forward backpropagation neural network?
Hi, I am working with MATLAB R2013a to build a prediction neural network model. I have tried different training algorithms, activation functions, and numbers of hidden neurons, but I still can't get R above 0.8 for the training, validation, and testing sets. For some networks, R for the training set exceeds 0.8, but the validation and testing sets give low R values (around 0.4~0.5). The code is below. Are there any solutions to improve the performance and the R values?
% inputs is 48x206, targets is 5x206
hiddenLayerSize = 15;
net = fitnet(hiddenLayerSize);
net.layers{1}.transferFcn='tansig';
net.layers{2}.transferFcn='purelin';
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};
net.divideFcn = 'dividerand';
net.divideMode = 'sample';
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
net.trainFcn = 'traincgp';
net.performFcn = 'mse';
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
    'plotregression','plotfit'};
[net,tr] = train(net,inputs,targets);
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
trainTargets = targets .* tr.trainMask{1};
valTargets = targets .* tr.valMask{1};
testTargets = targets .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
view(net)
1 Comment
pepper yuan
2016-3-30
Hi, Jocelyn, have you solved the problem of improving the performance of your neural network? I am dealing with the same problem as you. Can you provide your email, so I can ask you some questions? I would appreciate it if you could reply to me.
Accepted Answer
Greg Heath
2016-3-4
% 1. This is REGRESSION, not PREDICTION.
% 2. Placeholders:
input = randn(48,206);
target = randn(5,48)*input.^2 + randn(5,48)*input + randn(5,206);
[ I N ] = size(input) % [ 48 206 ]
[ O N ] = size(target) % [ 5 206 ]
%3. N = 206 doesn't provide enough information to be using I = 48:
Ntrn = N - 2*round(0.15*N) % 144 training points
Ntrneq = Ntrn*O % 720 No. of training equations
% Nw = (I+1)*H+(H+1)*O No. of weights for an I-H-O net
% Nw > Ntrneq <==> H > Hub
Hub = (0.7*N-O)/(I+O+1) % 2.6 to 2.9 for O = 5 to 1
% Therefore, regardless of O, the net is OVERFIT when H >= 3
% 4. Remedies:
%    a. H <= 2 and/or
%    b. Validation Stopping and/or
%    c. TRAINBR Regularization and/or
%    d. MSEREG Regularization and/or
%    e. Input variable Reduction
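A minimal sketch of remedies 4a–4c, assuming the question's 48x206 `inputs` and 5x206 `targets` are in the workspace ('trainbr' is the Bayesian-regularization trainer in the Neural Network Toolbox):

```matlab
% Sketch: small H (4a) plus Bayesian regularization (4c)
H = 2;                          % keep H at or below Hub to avoid overfitting
net = fitnet(H, 'trainbr');     % trainbr regularizes, so no val/test split is needed
net.divideFcn = 'dividetrain';  % use all data for training
[net, tr] = train(net, inputs, targets);
% Alternative (4b/4d): keep the default 'trainlm' with 'dividerand' validation
% stopping, or set net.performFcn = 'msereg' for explicit regularization.
```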
% 5. I recommend trying 4e first. To get a feel for the data, you could
%    a. Standardize all variables with zscore or mapstd
%    b. Create 48 graphs with the 5 targets plotted vs each input
%    c. Remove or modify outliers
%    d. Obtain the 53 x 53 correlation coefficient matrix (48 inputs + 5 targets)
%    e. Consider multiple single-output models
%    f. Use STEPWISEFIT and/or STEPWISE on a linear model
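The exploratory steps 5a, 5b, and 5d might be sketched as follows (variable names are the question's; the loop assumes the 48x206 inputs and 5x206 targets from the thread):

```matlab
x = zscore(inputs')';           % 5a: standardize each of the 48 input rows
t = zscore(targets')';          %     and each of the 5 target rows
for i = 1:size(x,1)             % 5b: one figure per input, 5 targets vs that input
    figure; plot(x(i,:), t, '.');
    title(sprintf('Targets vs input %d', i));
end
R = corrcoef([x' t']);          % 5d: correlation matrix of all variables
```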
Good Luck.
P.S. I try not to waste my time with statements that just assign default parameter values.
Hope this helps.
Thank you for formally accepting my answer
Greg
1 Comment
Greg Heath
2016-4-20
CORRECTION:
Hub = (Ntrneq - O)/(I + O + 1) % 13.2 for O = 5
I have run 10 initial random number trials for each value of H = 0:13. This resulted in a DECREASE in performance as H increased!!!??? I was quite surprised since I had never experienced that before.
This became somewhat more believable when I plotted the data. The best Linear model (i.e., H=0 ) yielded NMSE = 0.44, Rsq = 1-NMSE = 0.56 and R = sqrt(Rsq) = 0.75.
Next I tried a non-neural quadratic model and hit paydirt!
If this were my problem I would use STEPWISEFIT to see which inputs, crossproducts and squares are really necessary.
Hope this helps.
Greg
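A hedged sketch of the stepwise approach Greg describes, using `stepwisefit` from the Statistics Toolbox on linear plus squared terms (cross-products omitted for brevity; one single-output model at a time, per point 5e above):

```matlab
x = inputs';                    % 206 x 48: observations in rows, as stepwisefit expects
X = [x, x.^2];                  % linear terms plus squared terms (206 x 96)
y = targets(1,:)';              % one target row at a time
[b, se, pval, inmodel] = stepwisefit(X, y);
selected = find(inmodel)        % indices of the retained linear/quadratic terms
```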
More Answers (3)
Jocelyn
2016-3-28
1 Comment
Greg Heath
2016-3-28
1. You are still wasting time, space and attention by keeping statements that merely assign default values
2. How many inputs and outputs are you using after the variable reduction?
3. What happens when you use
net.divideFcn = 'dividetrain'
and try to minimize H using a double for loop as in my posts?
Hope this helps.
Greg
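The double for loop Greg refers to could be sketched as below: an outer loop over candidate H, an inner loop over random weight initializations, keeping the best net. The NMSE normalization by the mean target variance is Greg's usual convention from his posts; the rest is an assumption about his setup.

```matlab
% Hedged sketch: minimize H with dividetrain and repeated random initializations
Hmax = 13; Ntrials = 10;
bestNMSE = Inf;
for H = 1:Hmax
    for trial = 1:Ntrials
        net = fitnet(H);
        net.divideFcn = 'dividetrain';   % use all data for training
        net = train(net, inputs, targets);
        y = net(inputs);
        NMSE = mse(targets - y) / mean(var(targets', 1));
        if NMSE < bestNMSE
            bestNMSE = NMSE; bestNet = net; bestH = H;
        end
    end
end
```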
Jocelyn
2016-4-12
7 Comments
Greg Heath
2016-4-16
If you post the data in *.m or *.txt, I may be able to take a look at it.
Again: My first impression is that you don't have enough data to accurately deal with 48 inputs.
Greg
Jocelyn
2016-4-19
6 Comments
Greg Heath
2016-4-26
For no overfitting,
Ntrneq >= Nw
which leads to
Hub = (Ntrneq - O)/(I + O + 1)
    = (Ntrn*O - O)/(I + O + 1)
    ~ (0.7*N*O - O)/(I + O + 1)
Hope this helps.
Greg
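Plugging the thread's numbers (I = 48, O = 5, N = 206) into this bound confirms the corrected value:

```matlab
I = 48; O = 5; N = 206;
Ntrn   = N - 2*round(0.15*N);       % 144 training points
Ntrneq = Ntrn*O;                    % 720 training equations
Hub    = (Ntrneq - O)/(I + O + 1)   % ~13.2, as in the correction above
```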