Why is the size of the input weight matrix sometimes smaller than the input length when training a neural network?
I have a question regarding the size of the input weight matrix for a neural network. My IW matrix is smaller than expected, and I don't know why. Here is what I do:
net=patternnet(1);
[net,tr]=train(net,inputs,targets);
net.IW %size of the input weight matrices
ans =
[1x14 double]
[]
net.inputs.size %size of my inputs
ans =
[15]
net.layers.size %size of my hidden and output layer
ans =
[1]
[2]
As far as I understand, the input weight matrix should be 1 (size of the hidden layer) by 15 (length of the input vectors). I tried it several times with different input sizes, but the size of IW is sometimes equal to my input size and sometimes one or two columns smaller.
I want to know why this happens and how I can match the weights to the input variables. Thanks in advance, Antje
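A likely explanation (an assumption based on `patternnet`'s default input preprocessing, which includes `removeconstantrows`): input rows with zero variance are dropped before the weights are created, so `IW{1}` only has columns for the surviving rows. A minimal sketch of how to check this after training; the `keep`/`remove` field names are those used by the `removeconstantrows` settings structure:

```matlab
net = patternnet(1);
net.input.processFcns           % default is {'removeconstantrows','mapminmax'}
[net,tr] = train(net,inputs,targets);
% The stored processing settings record which input rows survived:
ps = net.inputs{1}.processSettings{1};
ps.keep                         % indices of input rows kept
ps.remove                       % indices of constant rows removed
```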
2 Comments
Accepted Answer
Antje
2012-9-6
5 Comments
enjy fikry
2017-5-5
How can I stop that from happening? I don't want the training process to ignore these constant columns.
Greg Heath
2017-5-5
You should.
They have zero variance.
Therefore they cannot contribute to learning.
However, they can confuse those who do not understand this.
Hope this helps.
Greg
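For completeness, a hedged sketch of how to do what enjy asks, against Greg's advice above (this assumes the constant rows are being removed by the default `removeconstantrows` preprocessing function): drop it from the input processing functions before training.

```matlab
net = patternnet(1);
net.input.processFcns = {'mapminmax'};  % omit 'removeconstantrows'
[net,tr] = train(net,inputs,targets);
size(net.IW{1})                         % columns now match the full input size
```

Note that the weights attached to constant inputs still cannot learn anything useful, since those inputs carry no information.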
More Answers (0)