learnwh
Widrow-Hoff weight/bias learning function
Syntax
[dW,LS] = learnwh(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnwh('code')
Description
learnwh is the Widrow-Hoff weight/bias learning function, and is also known as the delta or least mean squared (LMS) rule.
[dW,LS] = learnwh(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W | S-by-R weight matrix (or b, an S-by-1 bias vector) |
P | R-by-Q input vectors (or ones(1,Q)) |
Z | S-by-Q weighted input vectors |
N | S-by-Q net input vectors |
A | S-by-Q output vectors |
T | S-by-Q layer target vectors |
E | S-by-Q layer error vectors |
gW | S-by-R gradient with respect to performance |
gA | S-by-Q output gradient with respect to performance |
D | S-by-S neuron distances |
LP | Learning parameters, none, LP = [] |
LS | Learning state, initially should be = [] |
and returns
dW | S-by-R weight (or bias) change matrix |
LS | New learning state |
Learning occurs according to the learnwh learning parameter, shown here with its default value.
LP.lr — 0.01 | Learning rate |
info = learnwh('code') returns useful information for each code character vector:
'pnames' | Names of learning parameters |
'pdefaults' | Default learning parameters |
'needg' | Returns 1 if this function uses gW or gA |
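For example, a quick sketch of these info calls (the exact display format depends on your release):
names = learnwh('pnames')      % names of the learning parameters
lp = learnwh('pdefaults')      % structure containing the default lr of 0.01
flag = learnwh('needg')        % 1 if learnwh uses gW or gA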
Examples
Here you define a random input P and error E for a layer with a two-element input and three neurons. You also define the learning rate LR learning parameter.
p = rand(2,1); e = rand(3,1); lp.lr = 0.5;
Because learnwh needs only these values to calculate a weight change (see “Algorithms” below), use them to do so.
dW = learnwh([],p,[],[],[],[],e,[],[],[],lp,[])
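The corresponding bias change can be computed the same way; a minimal sketch, assuming (per the input table above) that ones(1,Q) stands in for the input when a bias is being learned:
db = learnwh([],ones(1,1),[],[],[],[],e,[],[],[],lp,[])   % 3-by-1 bias change, equal to lp.lr*e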
Network Use
You can create a standard network that uses learnwh with linearlayer.
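For example, a minimal sketch (the second linearlayer argument is the Widrow-Hoff learning rate; the 0.01 here simply reuses the learnwh default as an illustration):
net = linearlayer(0,0.01);     % no input delays, Widrow-Hoff learning rate 0.01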
To prepare the weights and the bias of layer i of a custom network to learn with learnwh (a code sketch follows these steps),
1. Set net.trainFcn to 'trainb'. net.trainParam automatically becomes trainb's default parameters.
2. Set net.adaptFcn to 'trains'. net.adaptParam automatically becomes trains's default parameters.
3. Set each net.inputWeights{i,j}.learnFcn to 'learnwh'.
4. Set each net.layerWeights{i,j}.learnFcn to 'learnwh'.
5. Set net.biases{i}.learnFcn to 'learnwh'. Each weight and bias learning parameter property is automatically set to the learnwh default parameters.
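A minimal sketch of these steps for a hypothetical one-layer custom network (i = 1, j = 1; the network creation call is an assumption used only to give the property assignments something to act on, not part of the steps above):
net = network(1,1,1,1,0,1);                    % hypothetical: 1 input, 1 layer, bias and output connected
net.trainFcn = 'trainb';                       % step 1
net.adaptFcn = 'trains';                       % step 2
net.inputWeights{1,1}.learnFcn = 'learnwh';    % step 3
net.biases{1}.learnFcn = 'learnwh';            % step 5 (no layer weights here, so step 4 does not apply)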
To train the network (or enable it to adapt),
1. Set net.trainParam (or net.adaptParam) properties to desired values.
2. Call train (or adapt), as sketched below.
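Continuing from the linearlayer example with made-up data (the P, T, and epochs values are illustrative assumptions, not values from this page):
P = rand(2,20);               % 20 random two-element input vectors (assumed)
T = rand(1,20);               % 20 random one-element targets (assumed)
net = linearlayer(0,0.01);
net.trainFcn = 'trainb';      % set before trainParam; changing trainFcn resets trainParam to trainb's defaults
net.trainParam.epochs = 50;   % a trainb parameter set to a desired value
net = train(net,P,T);         % or: [net,Y,E] = adapt(net,P,T)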
Algorithms
learnwh calculates the weight change dW for a given neuron from the neuron’s input P and error E, and the weight (or bias) learning rate LR, according to the Widrow-Hoff learning rule:
dw = lr*e*pn'
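For example, the same change can be reproduced directly from this rule with the values defined in the Examples section (a sketch; p, e, and lp are the variables created there):
dW_check = lp.lr*e*p'         % 3-by-2 matrix matching the dW returned by learnwh above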
References
Widrow, B., and M.E. Hoff, “Adaptive switching circuits,” 1960 IRE WESCON Convention Record, New York IRE, pp. 96–104, 1960.
Widrow, B., and S.D. Stearns, Adaptive Signal Processing, New York, Prentice-Hall, 1985.
Version History
Introduced before R2006a
See Also
adapt | linearlayer | train