Neural network - Why are the outputs not within -1 and 1 when I apply tansig as the activation function in the output layer?
I get outputs greater than 1 (they range from roughly 0.x to 11.x) when I use tansig as the activation function in the output layer. My neural network has the architecture (4, 6, 5, 1).
1 Comment
Vishnu
2023-6-16
Hi JUN HANG,
Whatever the input to the "tansig" function, the output should be in the range [-1, 1].
This is because the equation of "tansig" is:
tansig(x) = (2/(1+exp(-2*x)))-1;
I suggest you try normalizing the input values and weights of the network. If it still gives outputs beyond the expected range, you can attach your neural network here and I will look into it.
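A quick way to confirm this is to evaluate tansig over a wide range of inputs and check the extremes. A minimal sketch (assumes the Deep Learning Toolbox tansig function; the commented-out check at the end uses a hypothetical variable net standing in for your trained network):
% Evaluate tansig over a wide input range and check the output bounds
x = linspace(-50, 50, 1e4);   % inputs well into both saturation regions
y = tansig(x);                % built-in tansig, equivalent to 2./(1+exp(-2*x)) - 1
fprintf('min(y) = %.6f, max(y) = %.6f\n', min(y), max(y));   % stays within [-1, 1]
% If a trained network still returns values above 1, inspect its final layer, e.g.:
% net.layers{end}.transferFcn   % hypothetical "net"; should report 'tansig'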
Answers (1)
Krishna
2024-1-4
Hello OOI JUN HANG,
From what I gather, you're having trouble getting outputs in the interval [-1, 1] with the tansig function. The 'tansig' activation function always yields results between -1 and 1, irrespective of the network architecture it's applied to. The formula for tansig is:
tansig(x) = 2/(1+exp(-2*x)) - 1 = (1 - exp(-2*x))/(1+exp(-2*x)) ---- (1)
Multiplying the numerator and denominator by exp(2*x) gives
tansig(x) = (exp(2*x) - 1)/(exp(2*x) + 1) ---- (2)
Now, as x tends to +infinity, exp(-2*x) tends to zero, so tansig(x) tends to 1/1 = 1 (see equation (1)).
As x tends to -infinity, exp(2*x) tends to zero, so tansig(x) tends to -1/1 = -1 (see equation (2)).
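These limits are easy to verify numerically. A minimal sketch (uses an anonymous function equivalent to equation (1), so it runs even without the toolbox):
% Anonymous function implementing equation (1)
tansig_fcn = @(x) 2./(1 + exp(-2*x)) - 1;
disp(tansig_fcn([ 10  100  1000]))    % approaches  1 for large positive x
disp(tansig_fcn([-10 -100 -1000]))    % approaches -1 for large negative x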
This is why the range of tansig is [-1, 1]. For more information, please go through the tansig documentation.
Hope this helps.
0 Comments