Using data with googlenet
I have recently used the following tutorial: https://uk.mathworks.com/help/deeplearning/ug/train-deep-learning-network-to-classify-new-images.html
I have used my own dataset, where each photo corresponds to an x, y and theta value instead of an actual class such as 'table'. Currently the accuracy is only 20%, which is probably because there are only two pictures per class. What I want to do is take my semantic segmentation NN, use the data it produces, such as the pixel values, and use that within GoogLeNet to help classify the image. Is this possible? Any tips would be welcome, as I am new to this. Thank you.
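For context, a rough sketch of the data setup described above, assuming the photos are sorted into one folder per (x, y, theta) combination so the folder names act as class labels (the folder path and split ratio are hypothetical):

% Hypothetical layout: poseData/<x_y_theta>/*.png, one folder per pose class
imds = imageDatastore('poseData', ...
    'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');

% With only two pictures per class, a 50/50 split leaves one image for
% training and one for validation, which is far too little to learn from
[imdsTrain, imdsValidation] = splitEachLabel(imds, 0.5, 'randomized');

% GoogLeNet expects 224-by-224-by-3 inputs, so resize on the fly
inputSize = [224 224 3];
augimdsTrain      = augmentedImageDatastore(inputSize(1:2), imdsTrain);
augimdsValidation = augmentedImageDatastore(inputSize(1:2), imdsValidation);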
1 Comment
Madhav Thakker
2021-3-15
Hi Benjamin, what do you mean by "take the data from the semantic segmentation NN"?
The semantic segmentation NN should take raw images as input and return the semantically segmented map. You can feed the raw images directly to GoogLeNet if you have a label for each image (see the sketch after this comment).
Perhaps you could elaborate on the question with an example?
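A minimal sketch of what "feed the raw image directly to GoogLeNet" looks like in the linked tutorial's transfer-learning workflow, continuing from the datastores sketched in the question (the layer names 'loss3-classifier' and 'output' are GoogLeNet's final layers; the training options are placeholder values):

net = googlenet;              % pretrained GoogLeNet (support package required)
lgraph = layerGraph(net);

numClasses = numel(categories(imdsTrain.Labels));

% Replace the final learnable layer and the classification output layer
newLearnableLayer = fullyConnectedLayer(numClasses, ...
    'Name', 'new_fc', ...
    'WeightLearnRateFactor', 10, ...
    'BiasLearnRateFactor', 10);
lgraph = replaceLayer(lgraph, 'loss3-classifier', newLearnableLayer);

newClassLayer = classificationLayer('Name', 'new_classoutput');
lgraph = replaceLayer(lgraph, 'output', newClassLayer);

% Placeholder training options; with two images per class these will not
% give a reliable accuracy estimate no matter how they are tuned
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 10, ...
    'MaxEpochs', 6, ...
    'InitialLearnRate', 1e-4, ...
    'ValidationData', augimdsValidation, ...
    'Verbose', false);

trainedNet = trainNetwork(augimdsTrain, lgraph, options);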
Answers (1)
Raynier Suresh
2021-3-19
Hi, based on my understanding you are trying to segment the image and then classify it using GoogLeNet. To do this, you can feed the image to the segmentation network and then feed the result to GoogLeNet. However, the job of the semantic segmentation network itself is to classify every pixel in an image, so using another network for classification is redundant. The link below can give you more ideas.
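If the two networks do need to be chained as described, a minimal sketch of that pipeline, assuming a trained segmentation network segNet, the fine-tuned classifier trainedNet from the comment above, and a hypothetical test image:

I = imread('testImage.png');        % hypothetical test image

% Step 1: semantic segmentation of the raw image
C = semanticseg(I, segNet);         % categorical pixel labels

% Step 2: turn the label map into an image GoogLeNet can accept
B = labeloverlay(I, C);             % RGB image with the segmentation overlaid
B = imresize(B, [224 224]);         % GoogLeNet input size

% Step 3: classify the segmented image with the fine-tuned GoogLeNet
[YPred, scores] = classify(trainedNet, B);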
0 Comments