How to create a groundTruth .mat file for YOLOv3 detection?
8 views (last 30 days)
I have my own dataset in a folder named "NEW DATASET". This folder contains 12 subfolders such as Airplane, Helicopter, Drone, Birds, etc. I designed a feature extraction model using the Deep Network Designer app and added this backbone network to YOLOv2. Then I created a .mat file using the Image Labeler app, where I drew a rectangular box around the objects shown in the images, and named it IL100.mat. This IL100.mat contains 200 images of each class. Before training my model, I opened that IL100.mat file in the Image Labeler app, clicked the "Export" button, and exported the labels to the workspace as a table. After that, I trained my model, i.e. the backbone network added to YOLOv2. After training, it shows the accuracy, precision, recall, etc. Then I tested the model with images that are not in the training dataset, and it works well.
I want to do the same with YOLOv3, but I cannot convert the IL100.mat file to the YOLOv3 format. I saw a YOLOv3 example on MathWorks that uses a "vehicleDatasetGroundTruth.mat" file. When I put my cursor on "vehicledata", it shows a table whose first column contains '/MATLAB Drive/Examples/R2023a/deeplearning_shared/ObjectDetectionUsingYOLOV3DeepLearningExample/vehicleImages/image_00001.jpg' and whose second column contains [220,136,35,28] for each image. I cannot understand how to create a table like that for YOLOv3 training and detection, or how to convert my IL100.mat file into the YOLOv3 format so that I can use it to train YOLOv3. I would be very grateful if anyone could solve this problem or guide me on how to solve it. Thank you.
0 comments
Answers (1)
Vinayak Choyyan
2023-4-11
Hi Hrishi,
As per my understanding, you are referring to this example, https://www.mathworks.com/help/vision/ug/object-detection-using-yolo-v3-deep-learning.html, and would like to know how to create a variable like ‘vehicledata’.
The input data to any model, say YOLOv3 or YOLOv2, need not be in a specific data structure. How the input data is fed to a model depends on the pre-processing code you have before the model. In the case of the above example, we can see that ‘vehicledata’ is of the data type ‘table’. You can read more about the ‘table’ type on this documentation page: https://www.mathworks.com/help/matlab/ref/table.html.
As you noticed, each row of this table is a file path followed by a 1-by-4 matrix. The file path points to each image, and the matrix describes the location of the bounding box. The 1-by-4 numeric vector is of the form [xmin, ymin, width, height]: xmin and ymin specify the coordinates of the upper-left corner of the rectangle, and width and height specify the width and height of the rectangle.
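For example, a table with the same shape as ‘vehicledata’ could be built directly. The sketch below is illustrative only; the path and box values are taken from the row you quoted, and the column names are placeholders.
imageFilename = {'/MATLAB Drive/Examples/R2023a/deeplearning_shared/ObjectDetectionUsingYOLOV3DeepLearningExample/vehicleImages/image_00001.jpg'};
vehicle = {[220 136 35 28]};                 % bounding box as [xmin ymin width height]
trainingData = table(imageFilename, vehicle) % one row: image path + box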
If you would like to see this visually, please try out the code below after filling in the blanks with any one row of the table ‘vehicledata’.
tmp = imread('path\to\image\folder\image_name.jpg');  % image path from the first column
imshow(tmp);
h = drawrectangle('Position', [220 136 35 28]);       % replace with the 1-by-4 box for that row
The images you labeled using the Image Labeler App are exported as the ‘groundTruth’ type. The ‘LabelData’ property of this object holds the 1-by-4 matrices for rectangular labels, and the ‘DataSource’ property holds the file paths to the images. With these two pieces you have everything needed to populate a table like ‘vehicledata’. Please refer to the following documentation for more information:
- ‘groundTruth’ type: https://www.mathworks.com/help/vision/ref/groundtruth.html
- Image Labeler App’s exported data: https://www.mathworks.com/help/vision/ug/get-started-with-the-image-labeler.html#mw_72146e4f-e569-457d-91b9-9ac80caffb67
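If you would like a starting point, here is a minimal sketch. It assumes the variable saved in IL100.mat is a ‘groundTruth’ object named gTruth (the actual variable name may differ; check it with whos('-file','IL100.mat')).
data = load('IL100.mat');                    % labels exported from the Image Labeler App
gTruth = data.gTruth;                        % assumed variable name
% Option 1: let the toolbox build the training table for you.
trainingData = objectDetectorTrainingData(gTruth);
% Option 2: assemble the table manually, mirroring 'vehicledata':
% first column = image file paths, remaining columns = boxes per label.
imageFilename = gTruth.DataSource.Source;    % cell array of image paths
trainingData  = [table(imageFilename) gTruth.LabelData];
Either version of trainingData has the same structure as ‘vehicledata’, so you can follow the rest of the linked YOLOv3 example from there.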
I hope this helps resolve the issue you are facing.
0 comments