learnlv2
LVQ2.1 weight learning function
Syntax
[dW,LS] = learnlv2(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnlv2('code')
Description
learnlv2 is the LVQ2.1 weight learning function.
[dW,LS] = learnlv2(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W | S-by-R weight matrix (or S-by-1 bias vector) |
P | R-by-Q input vectors (or ones(1,Q)) |
Z | S-by-Q weighted input vectors |
N | S-by-Q net input vectors |
A | S-by-Q output vectors |
T | S-by-Q layer target vectors |
E | S-by-Q layer error vectors |
gW | S-by-R gradient with respect to performance |
gA | S-by-Q output gradient with respect to performance |
D | S-by-S neuron distances |
LP | Learning parameters, none, LP = [] |
LS | Learning state, initially should be = [] |
and returns
dW | S-by-R weight (or bias) change matrix |
LS | New learning state |
Learning occurs according to learnlv2's learning parameters, shown here with their default values.
LP.lr - 0.01 | Learning rate |
LP.window - 0.25 | Window size (0 to 1, typically 0.2 to 0.3) |
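For example, a parameter struct with these defaults (the struct passed as LP) can be set up as follows; this is a minimal sketch, not toolbox source:
lp.lr = 0.01;      % learning rate (default)
lp.window = 0.25;  % window size (default)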
info = learnlv2('code') returns useful information for each code character vector:
'pnames' | Names of learning parameters |
'pdefaults' | Default learning parameters |
'needg' | Returns 1 if this function uses gW or gA |
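For example, this minimal sketch queries the parameter names and their defaults:
pnames = learnlv2('pnames')       % names of learning parameters
pdefaults = learnlv2('pdefaults') % defaults, e.g., lr = 0.01, window = 0.25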
Examples
Here you define a sample input P, output A, weight matrix W, and output gradient gA for a layer with a two-element input and three neurons. Also define the learning rate LP.lr and the window size LP.window.
p = rand(2,1);
w = rand(3,2);
n = negdist(w,p);
a = compet(n);
gA = [-1;1;1];
lp.lr = 0.5;
lp.window = 0.25; % learnlv2 also reads the window parameter (default 0.25)
Because learnlv2 only needs these values to calculate a weight change (see "Algorithms" below), use them to do so.
dW = learnlv2(w,p,[],n,a,[],[],[],gA,[],lp,[])
Network Use
You can create a standard network that uses learnlv2 with lvqnet.
To prepare the weights of layer i of a custom network to learn with learnlv2 (a configuration sketch follows these steps),
1. Set net.trainFcn to 'trainr'. (net.trainParam automatically becomes trainr's default parameters.)
2. Set net.adaptFcn to 'trains'. (net.adaptParam automatically becomes trains's default parameters.)
3. Set each net.inputWeights{i,j}.learnFcn to 'learnlv2'.
4. Set each net.layerWeights{i,j}.learnFcn to 'learnlv2'. (Each weight learning parameter property is automatically set to learnlv2's default parameters.)
To train the network (or enable it to adapt),
1. Set net.trainParam (or net.adaptParam) properties as desired.
2. Call train (or adapt).
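For example, the following sketch applies these steps to an LVQ network created with lvqnet; the dataset, the number of hidden neurons, and the use of the {1,1} input weight for the competitive layer are assumptions for illustration:
[x,t] = iris_dataset;                        % example dataset shipped with the toolbox
net = lvqnet(4);                             % LVQ network with 4 hidden neurons (assumed)
net.trainFcn = 'trainr';                     % net.trainParam becomes trainr's defaults
net.adaptFcn = 'trains';                     % net.adaptParam becomes trains's defaults
net.inputWeights{1,1}.learnFcn = 'learnlv2'; % competitive layer weight (assumed {1,1})
net = train(net,x,t);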
Algorithms
learnlv2 implements Learning Vector Quantization 2.1, which works as follows:
For each presentation, if the winning neuron i should not have won, and the runner-up j should have, and the distance di between the winning neuron and the input p is roughly equal to the distance dj from the runner-up neuron to the input p according to the given window,
min(di/dj, dj/di) > (1-window)/(1+window)
then move the winning neuron i weights away from the input vector, and move the runner-up neuron j weights toward the input according to
dw(i,:) = - lp.lr*(p'-w(i,:))
dw(j,:) = + lp.lr*(p'-w(j,:))
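The following minimal sketch restates this update rule in MATLAB. It is not the toolbox source; the target vector t (marking which neuron should win) and the sample values are assumptions for illustration:
w = rand(3,2); p = rand(2,1);          % 3 neurons, two-element input
t = [0;1;0];                           % assumed targets: neuron 2 should win
lr = 0.01; window = 0.25;              % learning parameters
d = sqrt(sum((w - p').^2,2));          % Euclidean distance from each row of w to p
[~,idx] = sort(d);
i = idx(1); j = idx(2);                % winner i and runner-up j
dw = zeros(size(w));
if t(i) == 0 && t(j) == 1              % winner wrong, runner-up right
    if min(d(i)/d(j), d(j)/d(i)) > (1-window)/(1+window)  % inside the window
        dw(i,:) = -lr*(p' - w(i,:));   % push winner away from p
        dw(j,:) = +lr*(p' - w(j,:));   % pull runner-up toward p
    end
end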
Version History
Introduced before R2006a