Set weight in neural network in [0 1]

Hello everyone,
My project is training 3-bit parity. My neural network model has 3 inputs, 6 neurons in the hidden layer, and 1 output. I have a question about the weights: I want to constrain the weight values to [-1, 1] and the biases to [-1, 1]. How can I do that?
This is my code:
clear all; clc
n = input('learning rate, n = ');
emax = input('error max, emax = ');
fprintf('input:')
% bipolar encoding of the 8 parity-3 patterns
% x=[0 0 0;0 0 1;0 1 0;0 1 1;1 0 0;1 0 1;1 1 0;1 1 1]
x = [-1 -1 -1;-1 -1 1;-1 1 -1;-1 1 1;1 -1 -1;1 -1 1;1 1 -1;1 1 1]
fprintf('hidden weight:');
v = [0.3 -0.2 -0.1;-0.5 0 0.7;0.1 -0.5 0.1;0.2 0.3 -0.1;0.1 0.3 -0.7;0.5 -0.5 -0.4]
fprintf('output weight:')
w = [0.3 -0.1 0.6 -0.4 0.3 -0.5]
fprintf('desired output:')
d = [0 1 1 0 1 0 0 1]
% d=[-1 1 1 -1 1 -1 -1 1]
b1 = [0.3 -0.4 0.6 -0.5 0.6 -0.3]   % hidden biases
b2 = 0.8                            % output bias
k = 0;                              % epoch counter
e = 10;                             % total squared error (start above emax)
while (e > emax) && (k < 50000)
    e = 0;
    k = k + 1;
    for i = 1:8
        fprintf('epoch: k=%i\n', k);
        fprintf('Input : i=%i\n', i);
        % forward pass: hidden layer with logsig activation
        for j = 1:6
            net_h(j) = dot(x(i,:), v(j,:)) + b1(j);
            z(j) = logsig(net_h(j));
        end
        disp('Input: x(i,:)=')
        disp(x(i,:))
        net_h
        z
        % forward pass: linear output node
        y = dot(z, w) + b2;
        e = e + 1/2*(d(i) - y)^2;
        fprintf('Output: y=%f\n', y);
        fprintf('Desire: d=%f\n', d(i));
        fprintf('Error: e=%f\n', e);
        disp('update weight');
        % backward pass: output layer (linear, so derivative = 1)
        delta_o = (d(i) - y)*1;
        wtr = w;        % keep the old output weights for the hidden deltas
        b2tr = b2
        w = w + n*delta_o*z
        b2 = b2 + n*delta_o
        % earlier attempt at scaling weights into range: w2=w/4, b22=b2/4
        % backward pass: hidden layer (logsig derivative z*(1-z))
        for m = 1:6
            delta_h(m) = delta_o*wtr(m)*z(m)*(1 - z(m));
            v(m,1) = v(m,1) + n*delta_h(m)*x(i,1);
            v(m,2) = v(m,2) + n*delta_h(m)*x(i,2);
            v(m,3) = v(m,3) + n*delta_h(m)*x(i,3);   % was v(m,2): bug fixed
            b1(m) = b1(m) + n*delta_h(m);
            % earlier attempt at scaling: v1(m,:)=v(m,:)/4, b11=b1/4
        end
        w
        v
    end
end
Hope you can help,
thanks.

Accepted Answer

Greg Heath 2016-7-28
1. Forget about controlling weight ranges. You have several more serious problems:
2. With an I-H-O = 3-6-1 node topology, the number of unknown weights is
Nw = (I+1)*H + (H+1)*O = (3+1)*6 + (6+1)*1 = 31
whereas with N = 8 data points and O = 1 output node, the number of training equations is no larger than
Ntrneq = Ntrn*O <= N*O = 8
3. Using a validation subset to mitigate overtraining an overfit net would result in BOTH Nval and Ntrn being inadequately small.
4. Other well-known remedies (which can be combined):
a. Regularization: add a constant times the sum of weight magnitudes to the performance function.
b. Add simulated noisy data about each original data point within a sphere of radius 0.5.
5. However, before getting all fancy, just try to reduce the number of unknown weights by reducing the number of hidden nodes.
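[Editorial note] If the weights really must end up inside [-1, 1] (the original question), one simple approach, not part of this answer, is projected gradient descent: after each update in the question's loop, clip every parameter back into the box. This is a sketch using the question's own variable names (w, v, b1, b2, the output/hidden weights and biases):

```matlab
% Projected gradient descent: after each weight update,
% clip every parameter back into [-1, 1].
w  = max(min(w,  1), -1);   % output-layer weights
b2 = max(min(b2, 1), -1);   % output bias
v  = max(min(v,  1), -1);   % hidden-layer weight matrix
b1 = max(min(b1, 1), -1);   % hidden-layer biases
```

Clipping keeps the update rule unchanged elsewhere, but it can stall training if the optimum lies outside the box, which is one reason the answer above suggests fixing the overfitting first.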
Hope this helps,
*Thank you for formally accepting my answer*
Greg
2 Comments
nguyen tien 2016-7-28
Thanks for helping.
Actually, I ran it with learning rate = 0.1 and emax = 0.1 and got the result I desired. I also reduced the number of neurons: with 4 neurons in the hidden layer I get the weights v of the input layer and w of the hidden layer, but they include values < -1 and values > 1. So I want to constrain the weights to the range [-1, 1]; I need this because I must realize the weights in a circuit.
Also, I don't understand this expression; can you give me a document about it? Nw = (I+1)*H+(H+1)*O = (3+1)*6+(6+1)*1 = 31
Thanks.
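[Editorial note] The expression counts one weight per connection plus one bias per non-input node: each of the H hidden nodes receives I input weights and 1 bias, and each of the O output nodes receives H weights and 1 bias. For the 3-6-1 topology in the question:

```matlab
I = 3; H = 6; O = 1;          % 3-6-1 topology from the question
Nw = (I+1)*H + (H+1)*O;       % (inputs + bias)*hidden + (hidden + bias)*output
% Nw = 4*6 + 7*1 = 31 unknown weights, fit from only 8 training equations
```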
nguyen tien 2016-7-30
I see why there are 31. But reducing the number of neurons in the hidden layer does not constrain the weight values to the range [-1, 1].
Hope you can help.
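[Editorial note] Since the follow-up needs guaranteed bounds for a circuit, another common trick, a sketch not taken from this thread, is to reparameterize each weight as w = tanh(u): u is trained unconstrained, but the effective weight always lies in (-1, 1). Gradients pass through the chain rule with dw/du = 1 - tanh(u)^2. Using the question's output-layer variables (w, z, delta_o, n):

```matlab
% Reparameterize output weights as w = tanh(u): u is unconstrained,
% but the effective weight used in the forward pass stays in (-1, 1).
u_w   = atanh(w);                          % initialize from weights already in (-1, 1)
w_eff = tanh(u_w);                         % weights actually used in the forward pass
% Backprop: the question's gradient w.r.t. w is delta_o*z; multiply by
% the chain-rule factor dw/du = 1 - tanh(u)^2 and update u, not w:
grad_u = delta_o * z .* (1 - w_eff.^2);
u_w    = u_w + n * grad_u;
```

The same reparameterization applies to v, b1, and b2; after training, the tanh values themselves are the bounded weights to load into the circuit.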


