Deep network behavior in custom training loop on shared layers
Hello,
I am studying the Siamese network example at https://www.mathworks.com/help/deeplearning/examples/train-a-siamese-network-to-compare-images.html.
The example is clear. My question is about how it would work if a dropout layer were added to the subnetwork.
The question arises because dropout behaves differently during training (forward) and prediction (predict). During training, the layer randomly sets input elements to zero according to a dropout mask that is drawn anew each time the layer is invoked; at prediction time, the output of the layer is equal to its input (https://www.mathworks.com/help/deeplearning/ref/nnet.cnn.layer.dropoutlayer.html?s_tid=doc_ta). Since the example passes the two images of a pair through the same weight-sharing subnetwork in two separate forward calls, it stands to reason that a different mask would be drawn for each image in the input pair. But this is NOT what we want!
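To make the concern concrete, here is a small NumPy sketch (not the MATLAB API; the `dropout` helper is hypothetical) showing that two independent dropout calls draw different masks, while drawing the mask once and reusing it for both twins keeps the pattern identical across the pair:

```python
import numpy as np

def dropout(x, p, rng, mask=None):
    # Inverted dropout: zero elements with probability p and rescale
    # survivors by 1/(1-p). If a mask is supplied, reuse it instead
    # of drawing a new one.
    if mask is None:
        mask = (rng.random(x.shape) >= p) / (1 - p)
    return x * mask, mask

rng = np.random.default_rng(0)
x1 = np.ones((4, 4))
x2 = np.ones((4, 4))

# Two independent forward calls: each draws its own mask, so the
# twin subnetworks would typically see different dropout patterns.
y1, m1 = dropout(x1, 0.5, rng)
y2, m2 = dropout(x2, 0.5, rng)

# Drawing the mask once and passing it to both calls keeps the
# dropout pattern shared across the image pair.
z1, m = dropout(x1, 0.5, rng)
z2, _ = dropout(x2, 0.5, rng, mask=m)
print(np.array_equal(z1, z2))  # True: identical inputs, shared mask
```

This is only an illustration of the mask bookkeeping; whether sharing the mask across the twins is the desired behavior for a Siamese network is exactly the question above.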
Please advise,
D
0 Comments
Answers (0)