learnp
Perceptron weight and bias learning function
Syntax
[dW,LS] = learnp(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnp('code')
Description
learnp is the perceptron weight/bias learning function.
[dW,LS] = learnp(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W | S-by-R weight matrix (or b, an S-by-1 bias vector) |
P | R-by-Q input vectors (or ones(1,Q)) |
Z | S-by-Q weighted input vectors |
N | S-by-Q net input vectors |
A | S-by-Q output vectors |
T | S-by-Q layer target vectors |
E | S-by-Q layer error vectors |
gW | S-by-R gradient with respect to performance |
gA | S-by-Q output gradient with respect to performance |
D | S-by-S neuron distances |
LP | Learning parameters, none, LP = [] |
LS | Learning state, initially should be = [] |
and returns
dW | S-by-R weight (or bias) change matrix |
LS | New learning state |
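learnp is normally not called directly but assigned as the learning function of a network's weights and biases, which are then updated by train or adapt. The lines below are a rough sketch, assuming net is an existing custom network object and the usual batch training workflow:
net.trainFcn = 'trainb';                    % batch training function that applies learning functions
net.inputWeights{1,1}.learnFcn = 'learnp';  % input weights learn with learnp
net.biases{1}.learnFcn = 'learnp';          % bias learns with learnp
% calling train(net,P,T) then updates these weights and the bias with learnp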
info = learnp('code') returns useful information for each code character vector:
'pnames' | Names of learning parameters |
'pdefaults' | Default learning parameters |
'needg' | Returns 1 if this function uses gW or gA |
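For example, the following calls (a minimal sketch; learnp has no learning parameters, so the first two return empty values) query this information:
names = learnp('pnames')         % names of learnp's learning parameters
defaults = learnp('pdefaults')   % default learning parameters
usesGrad = learnp('needg')       % nonzero only if learnp needs gW or gA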
Examples
Here you define a random input P and error E for a layer with a two-element input and three neurons.
p = rand(2,1); e = rand(3,1);
Because learnp only needs these values to calculate a weight change (see "Algorithms" below), use them to do so.
dW = learnp([],p,[],[],[],[],e,[],[],[],[],[])
Algorithms
learnp calculates the weight change dW for a given neuron from the neuron's input P and error E according to the perceptron learning rule:
dw = 0,   if e = 0
   = p',  if e = 1
   = -p', if e = -1
This can be summarized as
dw = e*p'
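As a rough check of the rule, here is a minimal sketch with made-up values that compares the hand-computed change e*p' against learnp for errors of 1, 0, and -1:
p = [2; -1];                % two-element input vector
e = [1; 0; -1];             % hard-limit errors for three neurons
dW_rule = e*p';             % rows are p', a row of zeros, and -p'
dW_fcn = learnp([],p,[],[],[],[],e,[],[],[],[],[]);
isequal(dW_rule,dW_fcn)     % expected to return 1 (true)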
References
Rosenblatt, F., Principles of Neurodynamics, Washington, D.C., Spartan Press, 1961
Version History
Introduced before R2006a