Increasing Neural Network Accuracy

13 views (past 30 days)
Hi All,
I am trying to implement a NN for class, and I am somewhat struggling to get it accurate no matter how I change the NN parameters. Even after a few hours I am unsure of WHAT affects its accuracy, and I am having a hard time reaching an acceptable % error (0.5% or lower is my target). On a previous attempt, with the same code but different range parameters, I was able to have the NN match the calculated results at almost 100% accuracy (max error around 0.4%) with two hidden layers of 20 neurons.
The code is as follows:
clear all
close all
clc
%% Actual Result Calculations %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Training patterns
numInputs = 5000; % 5000 training patterns
% Number of inputs for testing
numInputstest = 50; % 50 input tests
% Variables for mathematics
x1 = 4*(rand(1,numInputs)) - 2; % Range 1 = [-2, 2]
x2 = 100*(rand(1,numInputs)); % Range 2 = [0, 100]
x3 = 8*(rand(1,numInputs)) - 5; % Range 3 = [-5, 3]
% Output target calculations (semicolons suppress printing 5000-element vectors)
y1 = 5.*sin(pi.*x1) + 20.*sqrt(x2);
y2 = (0.3.*x1.*x2) + exp(x3);
y3 = 2.*cos((0.2.*x2)-2) + (x1.*x3); % matches the test-set formula for y3t below
%% Neural network parameter initialisation %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
nn = feedforwardnet(200);
nn.trainParam.show = 25;
nn.trainParam.lr = 0.01;
nn.trainParam.mc = 0.9;
nn.trainParam.goal = 0;
nn.trainParam.max_fail = 200;
% Untrained Network - With 1 Epoch
nn.trainParam.epochs = 1;
nn_untrained = train(nn, [x1; x2; x3], [y1; y2; y3]);
save untrained nn_untrained
% Trained Network - Repeat training but with 2000 epochs
nn.trainParam.epochs = 2000;
nn_trained = train(nn, [x1; x2; x3], [y1; y2; y3]);
save trained nn_trained
%% Testing Outline %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Variables for mathematics (test set)
x1t = 4*(rand(1,numInputstest)) - 2; % Range 1 = [-2, 2]
x2t = 100*(rand(1,numInputstest)); % Range 2 = [0, 100]
x3t = 8*(rand(1,numInputstest)) - 5; % Range 3 = [-5, 3]
% output argument calculations
y1t = 5.*sin(pi.*x1t) + 20.*sqrt(x2t);
y2t = (0.3.*x1t.*x2t) + exp(x3t);
y3t = 2.*cos((0.2.*x2t)-2) + (x1t.*x3t);
%% Result Comparison: Actual vs Neural Network results %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Evaluate on the TEST inputs (the original code evaluated on the training
% inputs but compared against the test targets, which is a bug)
resultsuntrainedNN = nn_untrained([x1t; x2t; x3t]);
resultsTrainedNN = nn_trained([x1t; x2t; x3t]);
% Error comparison Y1, Y2 and Y3 untrained
Y1_UntrainedError = y1t - resultsuntrainedNN(1,:);
Y2_UntrainedError = y2t - resultsuntrainedNN(2,:);
Y3_UntrainedError = y3t - resultsuntrainedNN(3,:);
% Error comparison Y1, Y2 and Y3 trained
Y1_TrainedError = y1t - resultsTrainedNN(1,:);
Y2_TrainedError = y2t - resultsTrainedNN(2,:);
Y3_TrainedError = y3t - resultsTrainedNN(3,:);
%% Final Result Plot %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Results after one epoch compared with the calculated results
figure(1)
subplot(311); hold on;
title('y1 (Neural Network after 1 epoch)');
plot(y1t,'g-+','LineWidth',0.1); hold on;
plot(resultsuntrainedNN(1,:),'k-o','LineWidth', 0.1); hold off;
legend('actual Y1', 'predicted Y1'); grid;
subplot(312); hold on;
title('y2 (Neural Network after 1 epoch)');
plot(y2t,'g-+','LineWidth',0.1); hold on;
plot(resultsuntrainedNN(2,:),'k-o','LineWidth', 0.1); hold off;
legend('actual Y2', 'predicted Y2'); grid;
subplot(313); hold on;
title('y3 (Neural Network after 1 epoch)');
plot(y3t,'g-+','LineWidth',0.1); hold on;
plot(resultsuntrainedNN(3,:),'k-o','LineWidth', 0.1); hold off;
legend('actual Y3', 'predicted Y3'); grid;
% Results after 2000 epochs compared with the calculated results
figure(2)
subplot(311); hold on;
title('y1 (Neural Network after 2000 epochs)');
plot(y1t,'b-+','LineWidth',1); hold on;
plot(resultsTrainedNN(1,:),'k-o','LineWidth', 1); hold off;
legend('actual Y1', 'predicted Y1'); grid;
subplot(312); hold on;
title('y2 (Neural Network after 2000 epochs)');
plot(y2t,'b-+','LineWidth',1); hold on;
plot(resultsTrainedNN(2,:),'k-o','LineWidth', 1); hold off;
legend('actual Y2', 'predicted Y2'); grid;
subplot(313); hold on;
title('y3 (Neural Network after 2000 epochs)');
plot(y3t,'b-+','LineWidth',1); hold on;
plot(resultsTrainedNN(3,:),'k-o','LineWidth', 1); hold off;
legend('actual Y3', 'predicted Y3'); grid;
% Results comparing the error after 1 epoch vs after 2000 epochs
figure(3)
subplot(311); hold on;
title('Prediction error after 1 epoch');
plot(Y1_UntrainedError,'k','LineWidth',1); hold on;
plot(Y2_UntrainedError,'r','LineWidth',1); hold on;
plot(Y3_UntrainedError,'g','LineWidth',1); hold off;
legend('Y1 error', 'Y2 error', 'Y3 error'); grid;
subplot(312); hold on;
title('Prediction error after 2000 epochs');
plot(Y1_TrainedError,'k','LineWidth',1); hold on;
plot(Y2_TrainedError,'r','LineWidth',1); hold on;
plot(Y3_TrainedError,'g','LineWidth',1); hold off;
legend('Y1 error', 'Y2 error', 'Y3 error'); grid;
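Since the stated target is a maximum error of 0.5%, it may help to report the percentage error numerically rather than only eyeballing the plots. A minimal sketch (variable names follow the code above; the `eps` guard is just to avoid dividing by zero when a target value is near 0):

```matlab
% Sketch: worst-case percentage error of the trained network relative to
% the calculated test outputs.
pctErrY1 = 100 * abs(Y1_TrainedError) ./ max(abs(y1t), eps);
pctErrY2 = 100 * abs(Y2_TrainedError) ./ max(abs(y2t), eps);
pctErrY3 = 100 * abs(Y3_TrainedError) ./ max(abs(y3t), eps);
fprintf('Max %% error: y1 = %.3f, y2 = %.3f, y3 = %.3f\n', ...
        max(pctErrY1), max(pctErrY2), max(pctErrY3));
```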
I feel like my parameters might be off? I plotted the calculated results with a + marker and the NN results with an O marker, so when displaying the results after the max epochs I can view whether the +'s sit inside the O's.
I am new to this and I am trying to implement it for a class, so any help would be appreciated.
(And any advice on making the results neater would be appreciated too hahaha, I am still learning to be neat with my script and results.)
Thanks in advance for any help.

Answers (1)

Akshat 2024-5-7
Hi Muamin,
As per my understanding, you are getting 99.5% accuracy, which is very high for a neural network model. It is usually very difficult to beat numbers like these, but you can check the following things and try to get better results:
  1. I see that your data is not normalized; one variable varies from -2 to 2, one from 0 to 100, and the last from -5 to 3.
  • In simpler words, a variation of 1 unit in the first variable should have a much larger relative effect than a variation of 1 unit in the second variable.
  • Hence, we need to normalize this data, that is, bring it into similar ranges so that the model understands how the variation of each variable affects the output.
  • You can use the "normalize" function for this: https://www.mathworks.com/help/matlab/ref/double.normalize.html
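A minimal sketch of this normalization step (variable names follow the question's code; note that the same centering and scaling values must be reused on the test inputs):

```matlab
% Sketch: z-score each input row before training.
X = [x1; x2; x3];                % 3 x numInputs input matrix
[Xn, C, S] = normalize(X, 2);    % normalize along dim 2 (across samples),
                                 % keeping the centering (C) and scaling (S)
% Train on the normalized inputs instead of the raw ones:
% nn_trained = train(nn, Xn, [y1; y2; y3]);
% Apply the SAME transformation to the test set:
% Xtn = ([x1t; x2t; x3t] - C) ./ S;
```

(Worth knowing: feedforwardnet already applies mapminmax input/output preprocessing by default, so explicit normalization may have less impact here than in a hand-rolled network.)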
2. You are training for 2000 epochs (according to your code, though your comments say 3000), which might result in overfitting: the model fits the training data so closely that it cannot generalize to other data sets. To overcome this, you can use regularization, cross-validation, etc.
You can refer to this page for more: https://www.mathworks.com/discovery/overfitting.html
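Two standard ways to curb overfitting with feedforwardnet, sketched below (the hidden-layer size of 20 is taken from the question's earlier attempt, not from the current code):

```matlab
% Sketch (a): hold out validation data for early stopping; train() stops
% when performance on the validation set stops improving.
nn = feedforwardnet(20);
nn.divideParam.trainRatio = 0.70;
nn.divideParam.valRatio   = 0.15;
nn.divideParam.testRatio  = 0.15;

% Sketch (b): or switch to Bayesian regularization, which penalizes
% large weights:
% nn = feedforwardnet(20, 'trainbr');
```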
3. Lastly, you can tweak the learning rate. The variable "nn.trainParam.lr" is currently set to 0.01. You can lower it and keep the epochs at 2000. With a lower learning rate the weights change more slowly, so more epochs are needed to reach good results. But this will drastically increase the training time, with little guarantee of improving the accuracy, as it is already at 99.5%.
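One caveat worth noting about the code above: feedforwardnet trains with Levenberg-Marquardt ('trainlm') by default, and trainlm does not use trainParam.lr or trainParam.mc at all. For the learning rate to have any effect, the training function needs to be a gradient-descent variant, e.g.:

```matlab
% Sketch: make trainParam.lr / trainParam.mc actually take effect by
% choosing gradient descent with momentum as the training function.
nn = feedforwardnet(20, 'traingdm');  % or 'traingdx' for an adaptive lr
nn.trainParam.lr = 0.001;             % lower learning rate ...
nn.trainParam.mc = 0.9;               % ... with momentum
nn.trainParam.epochs = 2000;          % more epochs to compensate
```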
Hope this helps!
