learnlv1
LVQ1 weight learning function
Syntax
[dW,LS] = learnlv1(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnlv1('code')
Description
learnlv1 is the LVQ1 weight learning function.
[dW,LS] = learnlv1(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W | S-by-R weight matrix (or S-by-1 bias vector) |
P | R-by-Q input vectors (or ones(1,Q)) |
Z | S-by-Q weighted input vectors |
N | S-by-Q net input vectors |
A | S-by-Q output vectors |
T | S-by-Q layer target vectors |
E | S-by-Q layer error vectors |
gW | S-by-R gradient with respect to performance |
gA | S-by-Q output gradient with respect to performance |
D | S-by-S neuron distances |
LP | Learning parameters, none, LP = [] |
LS | Learning state, initially should be = [] |
and returns
dW | S-by-R weight (or bias) change matrix |
LS | New learning state |
Learning occurs according to learnlv1’s learning parameter, shown here with its default value.
LP.lr - 0.01 | Learning rate |
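To use a value other than the default for a particular weight, you could set the corresponding learnParam property after configuring the network as described under Network Use below. A minimal sketch; the weight indices {2,1} are placeholders:
net.layerWeights{2,1}.learnParam.lr = 0.05;   % placeholder indices; adjust for your network's topology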
info = learnlv1('code') returns useful information for each code character vector:
'pnames' | Names of learning parameters |
'pdefaults' | Default learning parameters |
'needg' | Returns 1 if this function uses gW or gA |
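For example, a brief illustration of these calls (output not shown):
learnlv1('pnames')      % names of learnlv1's learning parameters
learnlv1('pdefaults')   % default learning parameters (lr = 0.01)
learnlv1('needg')       % 1 if this function uses gW or gA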
Examples
Here you define a random input P, output A, weight matrix W, and output gradient gA for a layer with a two-element input and three neurons. Also define the learning rate LR.
p = rand(2,1);
w = rand(3,2);
a = compet(negdist(w,p));
gA = [-1;1;1];
lp.lr = 0.5;
Because learnlv1 only needs these values to calculate a weight change (see “Algorithms” below), use them to do so.
dW = learnlv1(w,p,[],[],a,[],[],[],gA,[],lp,[])
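During training or adaptation, the returned change dW is applied to the weights by the training or adapt function; as a rough illustration of that update step:
w = w + dW;   % apply the weight change computed by learnlv1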
Network Use
You can create a standard network that uses learnlv1 with lvqnet.
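For example, a minimal sketch in which the hidden layer size 4 is arbitrary:
net = lvqnet(4);   % LVQ network; its LVQ learning function defaults to 'learnlv1'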
To prepare the weights of layer i of a custom network to learn with learnlv1 (a configuration sketch follows this list),
1. Set net.trainFcn to 'trainr'. (net.trainParam automatically becomes trainr’s default parameters.)
2. Set net.adaptFcn to 'trains'. (net.adaptParam automatically becomes trains’s default parameters.)
3. Set each net.inputWeights{i,j}.learnFcn to 'learnlv1'.
4. Set each net.layerWeights{i,j}.learnFcn to 'learnlv1'. (Each weight learning parameter property is automatically set to learnlv1’s default parameters.)
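The following sketch shows these settings in code, assuming a custom network object net whose inputs, layers, and connections are already defined; the weight indices used here are placeholders:
net.trainFcn = 'trainr';                       % net.trainParam becomes trainr's defaults
net.adaptFcn = 'trains';                       % net.adaptParam becomes trains's defaults
net.inputWeights{1,1}.learnFcn = 'learnlv1';   % placeholder indices {i,j}
net.layerWeights{2,1}.learnFcn = 'learnlv1';   % placeholder indices {i,j}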
To train the network (or enable it to adapt), as sketched after this list,
1. Set net.trainParam (or net.adaptParam) properties as desired.
2. Call train (or adapt).
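Continuing the sketch above, with X and T as hypothetical input and target data:
net.trainParam.epochs = 100;   % set training parameters as desired
net = train(net,X,T);          % batch training
% or, to adapt incrementally instead:
% net.adaptParam.passes = 10;
% [net,Y] = adapt(net,X,T);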
Algorithms
learnlv1 calculates the weight change dW for a given neuron from the neuron’s input P, output A, output gradient gA, and learning rate LR, according to the LVQ1 rule, given i, the index of the neuron whose output a(i) is 1:
dw(i,:) = +lr*(p-w(i,:))   if gA(i) =  0
        = -lr*(p-w(i,:))   if gA(i) = -1
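As a minimal sketch of this rule in MATLAB, reusing p, w, a, gA, and lp from the example above (p is transposed to match a row of w):
dw = zeros(size(w));
i = find(a == 1,1);                  % index of the winning neuron
if gA(i) == 0
    dw(i,:) = +lp.lr*(p' - w(i,:));  % move the winning weight vector toward the input
elseif gA(i) == -1
    dw(i,:) = -lp.lr*(p' - w(i,:));  % move it away from the input
end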
Version History
Introduced before R2006a