How to avoid Inf values when writing deep learning code?

3 views (last 30 days)
Hi,
I wrote deep learning code that includes the Softmax function below. During training I start to get Inf values (and consequently NaN values) in some matrix multiplication operations or as the result of the softmax operation.
I also tried other softmax implementations that I found on the internet and in books, with no improvement.
These NaN values appear even in the first training epoch, and in the very first samples (such as the 5th sample), which derails the training of the model.
To simplify my question I didn't include information about the number of nodes in the input, output, and hidden layers, because I think this problem occurs independently of these numbers. If requested, I can provide more information.
Best Regards,
Ferda Özdemir Sönmez
function y = Softmax(x)
ex = exp(x);     % exp overflows to Inf for large x (above about 709 in double precision)
y = ex/sum(ex);  % Inf/Inf here produces the NaN values described above
end
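
A common fix for this overflow is the max-subtraction trick: softmax is shift-invariant, softmax(x) = softmax(x - c) for any constant c, so subtracting max(x) keeps every exponent at or below 0 and exp can no longer overflow to Inf. A minimal sketch, assuming x is a real vector (the name SoftmaxStable is hypothetical, not from the original code):

function y = SoftmaxStable(x)
% Shift by max(x): mathematically identical to Softmax(x), but every
% exponent is <= 0, so exp stays finite and no Inf/Inf -> NaN occurs.
ex = exp(x - max(x));
y = ex ./ sum(ex);
end

For example, SoftmaxStable([1000 1 2]) returns approximately [1 0 0], whereas the original Softmax yields [NaN 0 0]: exp overflows to Inf for any double argument above log(realmax) ≈ 709.78, and Inf/Inf evaluates to NaN.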

Answers (0)

Release

R2018b
