Why scale data inside [-1,1]?

What are the differences between normalizing features to [0,1], [-1,1], or [-5,5] with NN minmax?

Accepted Answer

Greg Heath on 18 Jan 2015
The purpose of normalization is to keep the inputs to the transfer functions as close as possible to the middle of the so-called 'active region' (the roughly linear part of the sigmoid, away from its saturating tails). For example, Warren Sarle posted the results of experimental examples in the FAQ of comp.ai.neural-nets indicating that, in general, you can do no better than to use bipolar inputs, outputs, and transfer functions.
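As a minimal sketch of what saturation looks like, assuming a made-up feature vector x and the toolbox functions mapminmax and tansig:

x = [0.5 12 37 90 250];     % toy raw feature on an arbitrary scale
xb = mapminmax(x, -1, 1);   % bipolar: row rescaled to [-1, 1]
xu = mapminmax(x,  0, 1);   % unipolar: row rescaled to [0, 1]
tansig(x)                   % raw inputs: mostly saturated near +/-1
tansig(xb)                  % normalized inputs: spread over the active region

With the raw values, tansig outputs are pinned near +/-1, so their gradients are nearly zero; after mapminmax the unit operates where it can still learn.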
Nevertheless, in MATLAB it is easier to use unit-sum unipolar [0,1] coding for classification targets because of the functions vec2ind and ind2vec, as sketched below.
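For instance (the label vector here is made up):

labels = [3 1 2 3 1];       % class indices for five samples
t = full(ind2vec(labels))   % 3x5 target matrix; each column sums to 1
pred = vec2ind(t)           % recover indices, e.g. from network outputs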
My interpretation of 'better' is faster and/or more accurate. Obviously, this result is machine-dependent, so, given what you know now, you can perform your own speed and accuracy tests on your own machine.
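A rough sketch of such a test, assuming inputs x and one-hot targets t are already defined and using an arbitrary hidden-layer size of 10:

ranges = {[-1 1], [0 1], [-5 5]};
for k = 1:numel(ranges)
    r = ranges{k};
    xn = mapminmax(x, r(1), r(2));       % rescale each input row to r
    net = patternnet(10);
    net.trainParam.showWindow = false;   % suppress the training GUI
    tic
    [net, tr] = train(net, xn, t);
    fprintf('range [%g,%g]: %.2f s, best val. error %.4g\n', ...
        r(1), r(2), toc, tr.best_vperf)
end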
You also have to take into account how the weights are initialized, which means understanding the functions init, initwb, and initnw.
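For example, here is where those functions show up on a network object (feedforwardnet and its defaults are just one illustration):

net = feedforwardnet(10);
net.initFcn                 % 'initlay': delegate init to each layer
net.layers{1}.initFcn       % 'initnw' by default (Nguyen-Widrow)
net.layers{2}.initFcn
% setting a layer's initFcn to 'initwb' uses per-weight init functions instead
net = init(net);            % reinitialize all weights and biases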
However, before you start, see my post "Nonsaturating Initial Weights" in comp.ai.neural-nets.
Hope this helps.
Thank you for formally accepting my answer
Greg

More Answers (0)

