RNG, neural network and outputs
Hello everyone!
I am new to neural networks, so this might be a silly question, but as I change the rng of my net, the quality of the solution changes too.
For example, for one specific rng the best setup is softmax in the second layer with 11 neurons in the first one, while with a different rng the best setup is logsig in the second layer with 11 neurons in the first one. What is going on with that? Is there an optimal rng? Also, although I have formulated my output in 1-of-c form, the output I get is not binary. Why? I use patternnet, with 10 input categories, 180 responses, and 5 output classes.
Thank you all!
0 comments
Accepted Answer
Greg Heath
2016-2-27
GEH1 = 'Size of input and target matrices and Hub?'
GEH2 = ' Are target columns {0,1} unit vectors?'
GEH3 = 'I find it better to only look at ~ 100 designs at a time: Ntrials ~ numel(Hmin:dH:Hmax) ~ 10. I don''t recall ever having to make more than ~ 200 designs'
GEH4 = ' Too much space is wasted on statements that assign defaults. Accept as many as you can and delete the corresponding statements. goal and min_grad are the only ones I specify'
GEH5 = ' Use dH > 1 and do not use h as a matrix index'
GEH6 = 'defaults TRAINSCG and CROSSENTROPY are usually preferred for classification'
GEH7 = ' Error rates are what you are trying to minimize. Check out some of my patternnet posts that yield error rates.'
Hope this helps.
Thank you for formally accepting my answer
Greg
3 comments
Greg Heath
2016-2-28
% I=10, N=173, O=5, Ntrn=121, Ntrneq=605, Hub=37.5
Ntrials = 10, h = 0:4:36 or 1:4:37 yields ~ 100 values.
% Targets were constructed through dummyvar command,
WHAT?
indices = [ 5 3 1 4 2 ]
target1 = full(ind2vec(indices)) % Correct: c-by-N, columns are {0,1} unit vectors
target2 = dummyvar(indices) % Error: N-by-c, rows are unit vectors (wrong orientation)
% Maybe I could use this:
% Ndof = Ntrneq-Nw
% MSEgoal = 0.01*(Ndof/Ntrneq)*MSE00
% (MSE00 = mean(var(targets',1)))
% as a goal?
No. Only use training data
MSEtrn00 = mean(var(targets(:,trnind)',1)) % biased
MSEtrn00a = mean(var(targets(:,trnind)',0)) % unbiased
Ntrndof = Ntrneq-Nw
MSEgoal = 0.01*max(0,Ntrndof/Ntrneq)*MSEtrn00a
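The pieces above can be put together as follows. This is only a sketch under my own assumptions: x is the I-by-N input matrix, t is the O-by-N unit-vector target matrix, H = 11 is an example hidden-layer size, and the weight count Nw uses the standard one-hidden-layer formula.

```matlab
% Sketch: MSE goal computed from training-subset statistics only
[I, N] = size(x);   O = size(t, 1);   H = 11;    % H is an example choice
net = patternnet(H);
[net, tr] = train(net, x, t);               % pilot run to obtain tr.trainInd
trnind  = tr.trainInd;                      % training-subset indices
Ntrneq  = numel(trnind)*O;                  % number of training equations
Nw      = (I+1)*H + (H+1)*O;                % number of weights to estimate
Ntrndof = Ntrneq - Nw;                      % training degrees of freedom
MSEtrn00a = mean(var(t(:,trnind)', 0));     % unbiased reference MSE
MSEgoal = 0.01*max(0, Ntrndof/Ntrneq)*MSEtrn00a;
net.trainParam.goal = MSEgoal;              % use on the next (re)design
```

Note the goal has to be set before the training run it is meant to stop, so in practice you compute it once and then re-initialize and retrain.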
% Finally, using:
% input = minmax(std(input))
% could give me a better quality of input for my ordinal data?
No. That makes no sense.
(Do you mean integer data?)
% By error rate you mean the sum(target~=predicted)?
No. That is Nerr, the total number of errors.
PctErr = 100*Nerr/N
However, you also have to calculate the errors for trn/val/tst using the indices in tr
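A minimal sketch of those per-subset error rates, assuming net, x, t and the training record tr come from a completed call [net,tr] = train(net,x,t):

```matlab
% Sketch: percent error rates for the trn/val/tst splits recorded in tr
y     = net(x);                             % outputs (class "probabilities")
errs  = vec2ind(y) ~= vec2ind(t);           % logical error vector over all N
PctErrtrn = 100*mean(errs(tr.trainInd))     % training error rate
PctErrval = 100*mean(errs(tr.valInd))       % validation error rate
PctErrtst = 100*mean(errs(tr.testInd))      % test error rate
```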
% I've read that adding noisy duplicates of small classes with high error
% rates can address the problem, but I cannot get the implementation of it.
Something like
noisyduplicate = mean + a*std*randn(size(original)) % scalar
for each input variable (different means and stds).
It is good to have the class sizes Ni (i = 1:c) equal.
See my BIOID posts
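One way to implement the noisy-duplicate idea is sketched below; the class index smallclass and the noise amplitude a are my placeholders, not values from the post, and x (I-by-N) and t (O-by-N) are the usual input/target matrices:

```matlab
% Sketch: oversample one small class with noisy duplicates
truec = vec2ind(t);                         % true class of each column
smallclass = 5;  a = 0.1;                   % placeholder class and noise scale
idx = find(truec == smallclass);            % columns of the small class
xs  = x(:, idx);
n   = numel(idx);
mu  = mean(xs, 0*xs(1)+2);                  % per-variable means (along dim 2)
sd  = std(xs, 0, 2);                        % per-variable stds
noisy = repmat(mu,1,n) + a*repmat(sd,1,n).*randn(size(xs));
x = [x, noisy];                             % augmented inputs
t = [t, t(:, idx)];                         % matching duplicated targets
```

Repeat (or draw more than n samples) until the class sizes Ni are roughly equal.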
% Also I chose trainlm as it had a lower overall error (~0.21) than trainscg (~0.31)
Again, the val/tst error rates should be the criterion.
Hope this helps.
Greg
More Answers (1)
Greg Heath
2016-2-25
I typically
1. Initialize the rng once and only once before training (whenever you use the
rng it AUTOMATICALLY moves to another state).
2. Use a double for loop to design over ~ 10 candidates for H, the number of
hidden nodes, and ...
3. For each value of H, design ~ 10 candidate nets with different random
data divisions AND random initial weights.
4. for h = Hmin:dH:Hmax
       net = patternnet(h);
       for i = 1:Ntrials ...
I have posted zillions of examples on both the NEWSGROUP and ANSWERS. Search using
patternnet Ntrials
or
patternnet Hmin:dH:Hmax
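The double-loop design in steps 1-4 can be sketched as below; the search bounds, Ntrials, and the data x,t are example assumptions, and note that h is not used as a matrix index (per GEH5 above):

```matlab
% Sketch: double-loop search over hidden-layer sizes and random designs
rng(0)                                      % initialize ONCE, before the loops
Hmin = 1; dH = 4; Hmax = 37; Ntrials = 10;  % ~100 candidate designs
Hvals = Hmin:dH:Hmax;
PctErrVal = zeros(Ntrials, numel(Hvals));
j = 0;
for h = Hvals
    j = j + 1;                              % j, not h, indexes the results
    for i = 1:Ntrials
        net = patternnet(h);                % new random initial weights
        [net, tr] = train(net, x, t);       % new random data division
        errs = vec2ind(net(x)) ~= vec2ind(t);
        PctErrVal(i,j) = 100*mean(errs(tr.valInd));  % rank by val error
    end
end
[minerr, best] = min(PctErrVal(:))          % best of the ~100 candidates
```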
Hope this helps.
Thank you for formally accepting my answer
Greg