Training and validation accuracy stuck at around 74% in ResNet-50

I'm new to the world of deep learning, and I'm attempting to classify protein sequences into two classes using scalogram images [class1 = 192, class2 = 171]. I've implemented transfer learning with ResNet-50 as my model architecture. The training and validation accuracy are stuck at 73-78%. Can anyone explain what is happening? I'm confused.
Sample data (class1): [image]
Sample data (class2): [image]
As you can see, the images in both groups look very similar.
The code:
% imds is an imageDatastore over the scalogram image folders (created earlier)
[imdsTrain,imdsVal] = splitEachLabel(imds,0.7,"randomized");

% Augment training images with horizontal and vertical flips
aug = imageDataAugmenter("RandXReflection",true,"RandYReflection",true);
augTrain = augmentedImageDatastore([224 224],imdsTrain,"DataAugmentation",aug);
augVal = augmentedImageDatastore([224 224],imdsVal);

options = trainingOptions("sgdm", ...
    "InitialLearnRate",3e-4, ...
    "MaxEpochs",100, ...
    "Plots","training-progress", ...
    "Shuffle","every-epoch", ...
    "ValidationData",augVal, ...
    "ValidationFrequency",1, ...
    "MiniBatchSize",128, ...
    "L2Regularization",1e-4);

% Replace the 1000-class head of ResNet-50 with a 2-class head
net = resnet50;
lgraph = layerGraph(net);
fcLayer = lgraph.Layers(end-2);     % the original 'fc1000' layer
newFcLayer = fullyConnectedLayer(2,"Name","NewFc", ...
    "WeightL2Factor",10,"BiasL2Factor",10);
lgraph = replaceLayer(lgraph,fcLayer.Name,newFcLayer);

classLayer = lgraph.Layers(end);    % the original classification output layer
newClassLayer = classificationLayer("Name","newClassLayer");
lgraph = replaceLayer(lgraph,classLayer.Name,newClassLayer);

net = trainNetwork(augTrain,lgraph,options);
The training process: [training-progress plot showing accuracy plateauing at 73-78%]

Answers (1)

Shantanu Dixit on 5 Aug 2024
Hi Moetez,
It seems that the model's accuracy is stuck in a particular range. There is no single definitive cause, but the following are common issues and workarounds to consider:
  1. Data similarity: Since you mentioned that the images from both classes look very similar, the model may struggle to learn discriminative patterns from them.
  2. Small data size: With only a few hundred images, try generating more variety through richer augmentation with imageDataAugmenter (see the first sketch after this list).
  3. Learning rate: Instead of using a fixed learning rate throughout training, experiment with learning-rate schedules and drop factors via trainingOptions (second sketch below).
  4. Regularization: Vary the L2 regularization strength and add dropout using dropoutLayer (third sketch below).
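For point 2, a minimal sketch of a richer augmentation pipeline; the ranges below are illustrative starting points, not tuned values:
% Richer augmentation than reflection alone (ranges are illustrative)
aug = imageDataAugmenter( ...
    "RandXReflection",true, ...
    "RandYReflection",true, ...
    "RandRotation",[-15 15], ...        % degrees
    "RandScale",[0.9 1.1], ...
    "RandXTranslation",[-10 10], ...    % pixels
    "RandYTranslation",[-10 10]);
augTrain = augmentedImageDatastore([224 224],imdsTrain,"DataAugmentation",aug);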
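For point 3, trainingOptions supports a piecewise schedule that drops the learning rate at fixed intervals; the drop factor and period here are assumptions to experiment with:
% Halve the learning rate every 10 epochs (values are starting points)
options = trainingOptions("sgdm", ...
    "InitialLearnRate",3e-4, ...
    "LearnRateSchedule","piecewise", ...
    "LearnRateDropFactor",0.5, ...
    "LearnRateDropPeriod",10, ...
    "MaxEpochs",100, ...
    "MiniBatchSize",128, ...
    "Shuffle","every-epoch", ...
    "ValidationData",augVal, ...
    "Plots","training-progress");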
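For point 4, one way to add dropout is to replace the original 'fc1000' layer with a dropout layer followed by the new 2-class fully connected layer (replaceLayer accepts a layer array); the 0.5 probability is an assumption to tune:
% Replace 'fc1000' with dropout + a 2-class fully connected layer
newHead = [
    dropoutLayer(0.5,"Name","drop")         % dropout probability is a starting point
    fullyConnectedLayer(2,"Name","NewFc")];
lgraph = replaceLayer(lgraph,"fc1000",newHead);
% The L2 strength can likewise be varied via "L2Regularization" in trainingOptions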
Refer to the MathWorks documentation for imageDataAugmenter, trainingOptions, and dropoutLayer for more information on the functions above.
