This is a program for a linear vector quantisation neural network. While updating the weights, an error occurs when a negative value appears in the result. I want to set those negative values to zero and continue the iterations, but I am unable to do it. Please help.

clc;
clear all;

st = [1 2 2 1 2];                            % class labels (st(2:5) are used as targets)
alpha = 0.6;                                 % initial learning rate
w = [0.2 0.8; 0.6 0.4; 0.5 0.7; 0.9 0.3];    % initial weight vectors, one per column
disp('initial weight matrix');
disp(w);
x = [1 1 0 0; 0 0 0 1; 1 0 0 0; 0 0 1 1];    % training vectors, one per row
disp(x);
t = [st(2); st(3); st(4); st(5)];            % target class for each training vector

e = 1;
while (e <= 3)
    i = 1;
    j = 1;
    k = 1;
    disp('epoch=');
    e
    while (i <= 4)
        % squared Euclidean distance from input i to each weight column
        for j = 1:2
            temp = 0;
            for k = 1:4
                temp = temp + (w(k,j) - x(i,k))^2;
            end
            D(j) = temp;
        end
        if (D(1) < D(2))
            J = 1;
        else
            J = 2;
        end
        disp('The winning unit is');
        J
        disp('weight updation');
        if J == t(i)
            w(:,J) = w(:,J) + alpha*(x(i,:)' - w(:,J));
        else
            w(:,J) = w(:,J) - alpha*(x(i,:)' - w(:,J));
        end
        w
        i = i + 1
    end
    temp = alpha(e);
    e = e + 1;
    alpha(e) = 0.5*temp;                     % alpha becomes a vector here (see the comments below)
    disp('first epoch completed');
    disp('learning rate updated for second epoch');
    alpha(e)
end
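What I am trying to add, right after the weight update inside the loop, is something like the line below, but I am not sure whether it is correct:

    w = max(w, 0);   % set any negative weight back to zero before continuing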
1 Comment
Walter Roberson, 2013-12-4
Learn to use the debugger to figure out where the negative value is coming from.
After one loop iteration your alpha becomes a vector, and then your line
w(:,J)=w(:,J)+alpha*(x(i,:)'-w(:,J))
starts involving matrix multiplication where the "*" is. Are you sure that is what you want, not element-by-element multiplication, the .* operator ?


Accepted Answer

Greg Heath, 2013-12-4
The topic should be LEARNING (NOT linear) VECTOR QUANTIZATION.
Why is st 5-dimensional?
alpha = 0.6 is too high for an initial learning rate.
e = 3 is too low for a maximum number of epochs.
Why are you using loops instead of vectorization?
Since the x and w vectors have the same dimensions, (w(k,j)-x(i,k))^2 is incorrect.
Your treatment of alpha first as a scalar and then as a vector is incorrect.
HTH
Thank you for formally accepting my answer
Greg
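A minimal sketch along the lines of the points above (vectorised distances, a smaller scalar learning rate that stays scalar while it decays; the values 0.1 and 50 are placeholders, not recommendations from this answer), reusing x, w and t as defined in the question:

    alpha = 0.1;                          % smaller initial learning rate (placeholder value)
    for e = 1:50                          % more than 3 epochs (placeholder value)
        for i = 1:4
            % squared Euclidean distance from input i to each weight column
            D = sum((w - repmat(x(i,:)', 1, 2)).^2, 1);
            [~, J] = min(D);              % winning unit
            if J == t(i)
                w(:,J) = w(:,J) + alpha*(x(i,:)' - w(:,J));   % move winner toward the input
            else
                w(:,J) = w(:,J) - alpha*(x(i,:)' - w(:,J));   % move winner away from the input
            end
        end
        alpha = 0.5*alpha;                % decay the learning rate, keeping it scalar
    end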

More Answers (0)
