Prediction during training differs from the result after training
I have twelve weighted classes that I train on with large augmented training and validation pixelLabelImageDatastores.
The network is created with:
% DeepLab v3+ with a ResNet-18 backbone
lgraph = deeplabv3plusLayers(imageSize, numel(classes), 'resnet18');
% Weighted pixel classification layer for the 12 classes
lgraph = replaceLayer(lgraph, "classification", pixelClassificationLayer('Name','labels','Classes',tbl.Name,'ClassWeights',classWeights));
% Input layer without normalization
lgraph = replaceLayer(lgraph, "data", imageInputLayer(imageSize,"Name","data","Normalization","none"));
The training accuracy converges nicely to about 99.3% (98.5%-99.7%) and the loss to about 0.05 (for both training and validation).
When I test the resulting DAGNetwork with "jaccard", only the first ten classes have a high IoU; the last two are zero! I also tested different normalizations such as 'zscore', always with the same result. When I use "predict" or "semanticseg" to check individual images, classes 11 and 12 indeed appear poorly learned.
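The per-image check is done roughly like this (a sketch; testImds, testPxds, and the index k are assumed names):
I = readimage(testImds, k);        % test image
C = semanticseg(I, net);           % predicted categorical label image
T = readimage(testPxds, k);        % ground-truth categorical labels
iou = jaccard(C, T);               % one IoU value per class; entries 11 and 12 come out near zero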
But if I set a breakpoint in the "forwardLoss" function in "SpatialCrossEntropy.m" during training and inspect, e.g., class 11 with "imshow(Y(:,:,11))", the class looks well learned!
What happens in "trainNetwork()" when the training finishes? Under what circumstances can the scores seen in forwardLoss() differ from the final network's predictions?
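For comparison, the trained network's per-class score maps can be inspected the same way as Y inside forwardLoss (a sketch; I is a single test image):
scores = predict(net, I);          % H-by-W-by-12 softmax scores from the trained DAGNetwork
imshow(scores(:,:,11));            % score map for class 11, analogous to imshow(Y(:,:,11)) during training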
4 Comments
Abhijit Bhattacharjee
2022-5-19
There might be more specifics in your code that need to be addressed one-on-one. I'd suggest submitting a technical support request.
Answers (0)