learnpn
Normalized perceptron weight and bias learning function
Syntax
[dW,LS] = learnpn(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnpn('code')
Description
learnpn is a weight and bias learning function. It can result in faster learning than learnp when input vectors have widely varying magnitudes.
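For a quick comparison (a minimal sketch assuming the standard learnp/learnpn calling convention shown below), an input with one very large element illustrates the difference: learnp's weight change is dominated by that element, while learnpn's change is normalized.

p = [1; 1000];  % input vector with widely varying magnitudes
e = 1;          % error for a single neuron
dW_p = learnp([],p,[],[],[],[],e,[],[],[],[],[])    % roughly [1 1000]
dW_pn = learnpn([],p,[],[],[],[],e,[],[],[],[],[])  % roughly [0.001 1.0]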
[dW,LS] = learnpn(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W | S-by-R weight matrix (or S-by-1 bias vector) |
P | R-by-Q input vectors (or ones(1,Q)) |
Z | S-by-Q weighted input vectors |
N | S-by-Q net input vectors |
A | S-by-Q output vectors |
T | S-by-Q layer target vectors |
E | S-by-Q layer error vectors |
gW | S-by-R weight gradient with respect to performance |
gA | S-by-Q output gradient with respect to performance |
D | S-by-S neuron distances |
LP | Learning parameters, none, LP = [] |
LS | Learning state, initially should be = [] |
and returns
dW | S-by-R weight (or bias) change matrix |
LS | New learning state |
info = learnpn('code') returns useful information for each code character vector:
'pnames' | Names of learning parameters |
'pdefaults' | Default learning parameters |
'needg' | Returns 1 if this function uses gW or gA |
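For example, a sketch of querying this metadata (learnpn has no learning parameters and computes its update from P and E alone, so it does not use gW or gA):

learnpn('pnames')      % names of learning parameters (none for learnpn)
learnpn('pdefaults')   % default learning parameters (none for learnpn)
learnpn('needg')       % 0, since learnpn does not use gW or gA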
Examples
Here you define a random input p and error e for a layer with a two-element input and three neurons.
p = rand(2,1);
e = rand(3,1);
Because learnpn only needs these values to calculate a weight change (see "Algorithms" below), use them to do so.
dW = learnpn([],p,[],[],[],[],e,[],[],[],[],[])
Limitations
Perceptrons do have one real limitation. The set of input vectors must be linearly separable if a solution is to be found. That is, if the input vectors with targets of 1 cannot be separated by a line or hyperplane from the input vectors associated with values of 0, the perceptron will never be able to classify them correctly.
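As an illustration, the XOR problem is the classic non-separable case: no line divides its inputs with targets of 1 from those with targets of 0. A minimal sketch, assuming the toolbox's perceptron and train functions:

P = [0 0 1 1; 0 1 0 1];       % XOR inputs
T = [0 1 1 0];                % XOR targets, not linearly separable
net = perceptron;             % uses the perceptron learning rule
net.trainParam.epochs = 50;   % cap training; it cannot converge
net = train(net,P,T);
net(P)                        % at least one input remains misclassified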
Algorithms
learnpn calculates the weight change dW for a given neuron from the neuron's input P and error E according to the normalized perceptron learning rule:
pn = p / sqrt(1 + p(1)^2 + p(2)^2 + ... + p(R)^2)

dw = 0,    if e = 0
   = pn',  if e = 1
   = -pn', if e = -1
The expression for dW can be summarized as
dw = e*pn'
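As a check (a sketch reusing the p and e defined in the Examples section above), the rule can be written out directly:

pn = p / sqrt(1 + sum(p.^2));   % normalized input vector
dw = e*pn'                      % matches the dW returned by learnpn above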
Version History
Introduced before R2006a