Neural Network for Classification: Different Results - same parameters

Hello,
attached you can find a few plots showing the results of my neural network. What is the main reason for the mistake on the right side? Is it normal that the separating lines are different in plot 1 and plot 2? Maybe the reason is the number of iterations, the learning rate, or an error in my backpropagation?
Any improvements regarding the code are highly appreciated. This is my code:
clear
clc
%% Input
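% (One thing I am aware of: because the data below comes from rand, every run uses
% different points. Fixing the seed here, e.g. with
%   rng(42);
% would make the runs reproducible; the seed value 42 is arbitrary.)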
x1=0.1.*rand(10,2)+0.6;
x2=0.1.*rand(10,2);
x31=0.1.*rand(10,1);
x32=0.1.*rand(10,1)+0.6;
x3=[x31,x32];
x41=0.1.*rand(10,1)+0.6;
x42=0.1.*rand(10,1);
x4=[x41,x42];
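% (For clarity: x1 and x2 are the class-1 clusters around (0.65,0.65) and (0.05,0.05),
% x3 and x4 are the class-0 clusters around (0.05,0.65) and (0.65,0.05),
% so the four clusters form an XOR-like pattern.)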
x0=-ones(40,1); % bias input of -1 for every sample
in=[x1;x2;x3;x4];
in=[x0,in]'; % complete input matrix
clear x1
clear x2
clear x0
o1=ones(10,1);
o2=zeros(10,1);
output=[o1;o1;o2;o2]'; % correct Output
clear o2
clear o1
weights=2*rand(3,3)-1; % random initial values for the weight matrix
iterations=1000;
coeff=0.5;
err=zeros(iterations,1);
%% Training
for i=1:iterations
    out=zeros(40,1);
    numIn=size(in,2); % number of training samples
    e=0;
    for j=1:numIn
        input=in(:,j);
        desired_out=output(:,j);
        % Forward propagation
        % Hidden layer
        H1=dot(input,weights(:,1)); % net input of hidden neuron 1
        H2=dot(input,weights(:,2)); % net input of hidden neuron 2
        HL(1)=sigmoid(H1); % activation function (first input for the output layer)
        HL(2)=sigmoid(H2); % activation function (second input for the output layer)
        % Output layer
        input_OL=[input(1);HL(1);HL(2)]; % input vector for the output layer (bias and the outputs of the hidden layer)
        O3=dot(input_OL,weights(:,3)); % net input of the output neuron
        OL=sigmoid(O3); % output value of the network
        out(j)=OL;
        % Backward propagation
        % Output layer
        Hout=[-1,HL]'; % hidden-layer outputs including the bias input
        delta=desired_out-OL; % error term; the error function is 0.5*delta^2
        dsigmoid_OL=OL*(1-OL); % derivative of the sigmoid activation function
        % dO3/dw = Hout, derivative of the weighted sum with respect to the weights
        weights(:,3)=weights(:,3)+coeff*delta*dsigmoid_OL.*Hout;
        % Hidden layer
        dsigmoid_HL=Hout(2:3).*(1-Hout(2:3));
        % weights(input->hidden) + coeff * d(error fct.) * d(act. fct. output layer) * input of the output layer * weights(hidden->output) * d(act. fct. hidden layer) * input
        weights(:,1)=weights(:,1)+coeff*delta*dsigmoid_OL*HL(1)*weights(2,3)*dsigmoid_HL(1).*input; % weights of the 1st hidden neuron
        weights(:,2)=weights(:,2)+coeff*delta*dsigmoid_OL*HL(2)*weights(3,3)*dsigmoid_HL(2).*input; % weights of the 2nd hidden neuron
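        % (This is the part I am least sure about: as far as I understand the
        % textbook chain rule, the hidden-layer update would not contain the
        % extra factor HL(1)/HL(2), i.e. it would look something like
        %   weights(:,1)=weights(:,1)+coeff*delta*dsigmoid_OL*weights(2,3)*dsigmoid_HL(1).*input;
        % but I am not sure which version is correct.)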
        e=e+abs(delta); % accumulate the absolute error of every sample
    end
    err(i)=e; % summed absolute error of iteration i
end
weights
out
e
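% (A quick check I could add here to see whether 1000 iterations are enough:
% plot the error curve in its own figure, e.g.
%   figure, plot(err), xlabel('iteration'), ylabel('summed absolute error')
% )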
%% Plot
plotpv(in(2:3,:),output)
hold on
x=(0:0.1:1);
a=weights(2,1)/weights(3,1);
b=weights(1,1)/weights(3,1);
y=-a*x+b; % derive a separating line from the weights of the 1st hidden neuron
plot(x,y) % draw this line
a=weights(2,2)/weights(3,2);
b=weights(1,2)/weights(3,2);
y=-a*x+b; % derive a separating line from the weights of the 2nd hidden neuron
plot(x,y) % draw this line
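% (Alternative idea, assuming plotpc from the same toolbox as plotpv is available:
% the line of e.g. the 1st hidden neuron could be drawn directly from its weights,
% since its net input is weights(2,1)*x + weights(3,1)*y - weights(1,1):
%   plotpc([weights(2,1) weights(3,1)],-weights(1,1));
% )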
%% Sigmoid
function s=sigmoid(x)
s=1./(1+exp(-x)); % elementwise, so the function also works on vectors
end
