Does the Neural Network Toolbox have a Cross Entropy Error Function?


Accepted Answer

MathWorks Support Team
Edited: MathWorks Support Team on 24 Feb 2021
Starting in R2013b, Neural Network Toolbox provides a CROSSENTROPY function:
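As a minimal sketch of evaluating a trained network with it (the iris_dataset sample data and the network size below are only assumptions for illustration):

% Requires R2013b or later
[x,t] = iris_dataset;          % sample data shipped with the toolbox (assumed here)
net = patternnet(10);          % pattern-recognition network with 10 hidden neurons
net = train(net,x,t);
y = net(x);
perf = crossentropy(net,t,y)   % cross-entropy performance of the trained network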
Prior releases of Neural Network Toolbox do not provide a built-in cross-entropy error performance function. Cross-entropy is an alternative to the mean-squared error (MSE) performance function. In general, 'd', the desired output for a given training example, and 'y', the observed output for that case, are real numbers. We can interpret 'd' as the desired probability that a binary-valued output takes the value 1 in that case, and 'y' as the observed probability of seeing a 1 in that case.
The cross-entropy error C is then expressed as:
C = - sum [all cases and outputs] (d*log(y) + (1-d)*log(1-y) )
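As a quick illustration (the numbers below are arbitrary), C can be computed directly from vectors of desired and observed outputs:

d = [1 0 1 0];            % desired outputs, interpreted as probabilities
y = [0.9 0.2 0.6 0.1];    % observed outputs, assumed strictly between 0 and 1
C = -sum( d.*log(y) + (1-d).*log(1-y) )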
The derivative of this error function for a given output and training example (this is the value we actually back-propagate) is:
dC/dy = - d/y + (1-d)/(1-y)
Note that this back-propagated derivative grows without bound as the difference between y and d approaches +1 or -1 (for example, as y approaches 0 while d = 1). This can counteract the tendency of the network to get stuck in regions where the derivative of the sigmoid function approaches zero.
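A small numerical check of the derivative formula against a central finite difference (the values of d and y are arbitrary):

d = 1;  y = 0.3;  h = 1e-6;
C = @(y) -( d*log(y) + (1-d)*log(1-y) );
analytic = -d/y + (1-d)/(1-y)           % about -3.3333 for these values
numeric = ( C(y+h) - C(y-h) ) / (2*h)   % agrees with the analytic derivative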
Refer to the Neural Network Toolbox User's Guide, specifically the chapter on custom networks, for information on how to set a custom performance function through the 'performFcn' property.
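As a rough sketch (R2013b or later), selecting the built-in cross-entropy performance through that same property looks like this (the network type is only an example):

net = patternnet(10);
net.performFcn = 'crossentropy';
% net = train(net,x,t);   % then train as usual with your own data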
Additional information may also be found in the related solutions below.
1 Comment
Greg Heath on 5 Dec 2013
Also see the following NEWSGROUP threads:
- 2 May 2009: Estimation of Multivariate Classification Posterior Probabilities
- 6 Apr 2009: Unbalanced Priors in BioID Face Image Data
- 14 Apr 2006: NN performance function
