Feedforward NN takes a long time on my dataset, and the output does not match the target

1 view (last 30 days)
I am trying to develop a feedforward NN in MATLAB.
1. I have a dataset of 12 inputs and 1 output with 46998 samples. It produces a result quickly for one sample, but for the complete dataset it takes a very long time.
2. Even with a single row of the dataset, the output does not match the target. Can anyone help me with this?
%% Clear Variables, Close Current Figures, and Create Results Directory
clc;
clear;   % 'clear all' also wipes compiled functions; plain 'clear' is enough here
close all;
mkdir('Results'); % Directory for storing results
%% Configurations/Parameters
load 'Heave_dataset'
% dataFileName = 'InputANN.txt';
nbrOfNeuronsInEachHiddenLayer = [24];
nbrOfOutUnits = 1;
unipolarBipolarSelector = -1; % 0 for Unipolar, -1 for Bipolar
learningRate = 0.08;
nbrOfEpochs_max = 46998; % also used as the sample count below, so the nested
                         % loops run 46998^2 iterations - the main source of slowness
for j = 1:nbrOfEpochs_max
Sample = Heave_dataset(j,:); % was Heave_dataset(1,:), which trained on row 1 only
%% Read Data
Input = Sample(:, 1:end-1);
TargetClasses = Sample(:, end);
%% Calculate Number of Input and Output Nodes
nbrOfInputNodes = size(Input,2); % = dimension of any input sample
nbrOfLayers = 2 + length(nbrOfNeuronsInEachHiddenLayer);
nbrOfNodesPerLayer = [nbrOfInputNodes nbrOfNeuronsInEachHiddenLayer nbrOfOutUnits];
%% Adding the Bias as Nodes with a Fixed Activation of 1
% bias for the hidden layer
bias1 = ones(1,1);
% bias for the output layer
bias2 = ones(1,1);
%% Initialize Random Weights Matrices
% weights from input layer to hidden layer
weights_input = rand(nbrOfInputNodes,nbrOfNeuronsInEachHiddenLayer);
% weights from hidden layer to output layer
weights_hidden = rand(nbrOfNeuronsInEachHiddenLayer,nbrOfOutUnits);
% weights for the hidden-layer bias
weights_bias1 = rand(1,nbrOfNeuronsInEachHiddenLayer);
% weights for the output-layer bias
weights_bias2 = rand(1,nbrOfOutUnits);
zeroRMSReached = 0;
nbrOfEpochs_done = 0;
for Epoch = 1:nbrOfEpochs_max
%% Backpropagation Training
% Forward Pass
% net input of each hidden neuron
for i = 1:nbrOfNeuronsInEachHiddenLayer
netinput_hidden{i} = Input*weights_input(:,i) + bias1*weights_bias1(:,i);
% squashing net input of hidden neurons to get output using bipolar sigmoid transfer function
x = netinput_hidden{i};
out_hidden{i} = Activation_func_hidden(x, unipolarBipolarSelector);
% net input to output neuron (bias added once, after summing over hidden units)
netinput_out(i,:) = out_hidden{i}*weights_hidden(i,:);
end
% squashing net input of output neurons to get output using linear transfer function
y = sum(netinput_out) + bias2*weights_bias2; % original added the bias once per hidden neuron
out_output = Activation_func_output(y, unipolarBipolarSelector);
%%Calculating Error
E_total = (0.5*(TargetClasses-out_output)^2);
% %Backward Pass
% % Output layer
% The main objective is to find how each weight affects the total
% error. To do this we apply the chain rule (gradient descent):
% 1. Find the change in error w.r.t. the output
% 2. Find the change in output w.r.t. the input from the hidden layer to the output
% 3. Find the change in input w.r.t. the weights
% % 1. Find the change in error w.r.t. the output
deltaEtotal_out = (out_output-TargetClasses);
% % 2. Find the change in output w.r.t. the input from the hidden layer to the output
deltaout_input = ones(length(TargetClasses),1);
% % 3. Find the change in input w.r.t. the weights
for i = 1:nbrOfNeuronsInEachHiddenLayer
deltainput_wh_i = out_hidden{i};
deltaEtotal_wh_i = deltaEtotal_out * deltaout_input * deltainput_wh_i;
deltaEtotal_wh(i,:) = deltaEtotal_wh_i;
end
% To decrease the error, subtract these values (multiplied by the
% learning rate) from the current weights
weights_hidden_updated = weights_hidden - (learningRate*deltaEtotal_wh);
% % Hidden Layer
% % follow the same procedure as previous
% % 1. Find change in error w.r.t output of hidden neurons
% % i) change in error w.r.t to output
deltaEtotal_out = (out_output-TargetClasses);
% ii) change in output w.r.t input to the output layer from hidden layer
deltaout_input = ones(length(TargetClasses),1);
% iii) change in input to the output layer from hidden layer w.r.t output from hidden layer
for i = 1:nbrOfNeuronsInEachHiddenLayer
deltainputoutput_hiddenoutput{i} = weights_hidden(i,:);
deltaEtotal_hiddenout{i} = deltaEtotal_out * deltaout_input * deltainputoutput_hiddenoutput{i};
% % 2. Find change in output w.r.t input of hidden layer
fx = out_hidden{i};
deltahout_hin{i} = Activation_func_drev(fx, unipolarBipolarSelector);
% 3. Find the change in input w.r.t. the weights
deltainput_wi(i,:) = Input;
deltaEtotal_wi_i = deltaEtotal_hiddenout{i} * deltahout_hin{i} * deltainput_wi(i,:);
deltaEtotal_wi(i,:) = deltaEtotal_wi_i;
end
deltaEtotal_wrtinputweights = deltaEtotal_wi';
% To decrease the error, subtract these values (multiplied by the
% learning rate) from the current weights
weights_input_updated = weights_input - (learningRate*deltaEtotal_wrtinputweights);
% apply both updates - the original computed *_updated but never used them,
% so the network never actually learned
weights_input = weights_input_updated;
weights_hidden = weights_hidden_updated;
%% Evaluation
E_history(Epoch) = E_total; % 'E_total(Epoch) = E_total' clobbered the scalar error
if (E_total <= 1e-25) % an exact '== 1e-25' comparison would almost never fire
zeroRMSReached = 1;
break;
end
end
end
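For reference, the per-neuron loop and the 46998x46998 nested loops above are what make this slow; MATLAB is far faster on whole-matrix operations. Below is a minimal sketch of a vectorized forward pass over the entire dataset at once, assuming samples in rows with the target in the last column. The names X, W1, b1, W2, b2 are illustrative and not from the code above; the implicit expansion in X*W1 + b1 needs R2016b or later (use bsxfun on older releases).

```matlab
X  = Heave_dataset(:, 1:end-1);          % N x 12 inputs
W1 = rand(size(X,2), 24);  b1 = rand(1,24);
W2 = rand(24, 1);          b2 = rand(1,1);
H  = 2 ./ (1 + exp(-(X*W1 + b1))) - 1;   % bipolar sigmoid hidden layer, N x 24
Y  = H*W2 + b2;                          % linear output layer, N x 1
```

One such pass replaces the entire inner neuron loop for all 46998 samples in a single step.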

Accepted Answer

Greg Heath on 24 Oct 2016
With "I"nput and "O"utput target matrices of sizes
[ I N ] = size(input)
[ O N ] = size(target)
remove the offending samples (columns, in this convention) from BOTH matrices if EITHER input or target contains NaNs.
Hope this helps.
Thank you for formally accepting my answer
Greg
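Greg's advice can be sketched as follows, assuming his column convention (one sample per column) and that Heave_dataset stores samples in rows with the target in the last column; the variable names input and target come from his answer:

```matlab
input  = Heave_dataset(:, 1:end-1)';   % I x N (one sample per column)
target = Heave_dataset(:, end)';       % O x N
% columns where either the input or the target contains a NaN
bad = any(isnan(input), 1) | any(isnan(target), 1);
input(:, bad)  = [];                   % remove from BOTH matrices
target(:, bad) = [];
```

After this, every remaining column is a complete input/target pair of the original dimensions.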

More Answers (1)

Greg Heath on 21 Oct 2016
The next time think twice before posting, WITHOUT WARNING, a MEGABYTE data set with NaNs and zero variance rows.
  2 Comments
Tulasi Ram Tammu on 22 Oct 2016
I have some NaN values in the last rows of the matrix, because some inputs are accelerations and velocities, which are 1 and 2 steps shorter, respectively, than the displacements.
Tulasi Ram Tammu on 22 Oct 2016
I tried to remove them using
Heave_dataset(isnan(Heave_dataset)) = [];
but my dataset gets converted into a 1x610964 vector. Could you please suggest what I should do?
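Deleting individual elements with a logical index forces MATLAB to linearize the matrix, since removing scattered elements cannot leave a rectangular array; that is why the result collapses to a 1x610964 vector. A minimal sketch of removing whole rows instead, assuming samples are stored in the rows of Heave_dataset:

```matlab
% Delete every row (sample) that contains at least one NaN, keeping the
% matrix rectangular. any(..., 2) tests along each row.
Heave_dataset(any(isnan(Heave_dataset), 2), :) = [];
```

This drops only the trailing incomplete samples and leaves all 13 columns intact.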

