How to use vision.PointTracker with ImageLabeler?
8 views (last 30 days)
There is an excellent tutorial on how to use the point tracker with the Ground Truth Labeler app. https://au.mathworks.com/videos/ground-truth-labeler-app-1529300803691.html
Unfortunately, I don't have access to the Automated Driving Toolbox, but I do have access to the image processing toolbox. The image processing toolbox includes both the ImageLabeler app and a point tracker algorithm. So I think it should be possible to implement the same functionality. I have tried comparing the vision.PointTracker class to the template that comes up when I click "create new algorithm" in the ImageLabeler app, but I am having trouble understanding how to make them work together. If there is a tutorial I have overlooked, please point me in the right direction. If not, a brief explanation would be much appreciated.
2 Comments
Florian Morsch
2018-7-4
Edited: Florian Morsch
2018-7-4
The Image Labeler is used to label an image with an ROI. You want to automate that labeling, I assume?
If not, it's pointless for you to create a new algorithm in the Image Labeler. The algorithm you write for that option detects objects in the images you want to label; you can then use the resulting labels to train a detector, for example.
Answers (1)
Florian Morsch
2018-7-5
Edited: Florian Morsch
2018-7-5
vision.PointTracker does, as the name implies, track points (using the KLT algorithm).
But to track those points you first have to find them, which is usually done with an object detector. What you are aiming for is to find a specific object in each frame and label it. If it's something simple like people or faces, you could try an already trained cascade object detector (MATLAB ships with some pre-trained models).
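As a minimal sketch of that pre-trained route (the 'FrontalFaceCART' model and the 'visionteam.jpg' example image are just illustrative choices):

detector = vision.CascadeObjectDetector('FrontalFaceCART'); % pre-trained face model
I = imread('visionteam.jpg');  % example image that ships with the toolbox
bboxes = step(detector, I);    % one [x y w h] row per detection
% Draw the boxes to sanity-check the detections.
annotated = insertShape(I, 'Rectangle', bboxes, 'LineWidth', 3);
figure, imshow(annotated)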
If you want to detect a more unusual object, you are better off if you either
a.) write a detection algorithm on your own (if you only want to detect white objects, for example, you can search for white pixels only, or if you want to detect a cube you can look for it with edge detection; see the sketch after this list), or
b.) label the images yourself. Depending on how many pictures you have for training, it might be faster to label them by hand instead of writing the algorithm and then checking every picture for correct labels anyway.
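To illustrate option a.), here is a hedged sketch of the "white pixels" idea; the gray threshold (200) and the minimum blob size (50 pixels) are arbitrary values you would tune for your own images:

I = imread('myImage.png');   % placeholder file name
bw = rgb2gray(I) > 200;      % keep only near-white pixels (assumes an RGB image)
bw = bwareaopen(bw, 50);     % discard tiny noise blobs
stats = regionprops(bw, 'BoundingBox');  % one box per connected component
bboxes = vertcat(stats.BoundingBox);     % [x y w h] rows, ready for labeling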
vision.PointTracker itself can't detect anything. It needs points that you give it, which it can then track.
Now if you are able to find your first object and get enough points, the point tracker can follow those points over multiple images. So basically yes, you can use a point tracker to follow points across an image sequence. But make sure you give it enough points to follow (I'd recommend 10 or more), and after you have processed all the images you should still check that the labeling was done correctly.
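Put together, the detect-then-track loop could look like the following sketch. The folder name and the initial bounding box are placeholders; in practice the first box would come from your detector or from a label you drew by hand:

imds = imageDatastore('myImageFolder');  % placeholder folder of ordered frames
frame = rgb2gray(readimage(imds, 1));    % assumes RGB input images
bbox = [100 100 80 60];                  % placeholder initial ROI [x y w h]
% Find good-to-track corners inside the ROI (aim for 10 or more).
points = detectMinEigenFeatures(frame, 'ROI', bbox);
tracker = vision.PointTracker('MaxBidirectionalError', 2);
initialize(tracker, points.Location, frame);
for k = 2:numel(imds.Files)
    frame = rgb2gray(readimage(imds, k));
    [pts, valid] = step(tracker, frame);  % track into the next frame
    good = pts(valid, :);
    % Crude re-labeling: take the bounding box of the surviving points.
    bbox = [min(good(:,1)), min(good(:,2)), ...
            max(good(:,1)) - min(good(:,1)), ...
            max(good(:,2)) - min(good(:,2))];
end

If most of the points drop out along the way, re-detect features inside the last box before continuing, otherwise the track degrades.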
3 Comments
Florian Morsch
2018-7-6
No, your machine learning algorithm has nothing to do with the algorithm that labeled the images. The machine learner trains itself, so if you have enough positive and negative examples (depending on which detector you want to train) you can get much better results.
Since I never wrote such an algorithm (I labeled the images myself) I can't tell for sure, but I guess the best approach would be to load all the images into the labeler app, then choose the first one and set the points for the tracker. After that you feed it every following frame and step through them with the active tracker. Something like the skeleton below.
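To wire that into the Image Labeler's "create new algorithm" template, a skeleton could look like this sketch. It follows the vision.labeler.AutomationAlgorithm interface that the template generates, but treat it as rough: the label name 'Object', the hard-coded first ROI, and the assumption of RGB images in sequence order are all placeholders, and the template details vary by release:

classdef PointTrackerAutomation < vision.labeler.AutomationAlgorithm
    % Sketch: carry a vision.PointTracker across the image sequence.
    properties (Constant)
        Name = 'Point Tracker'
        Description = 'Track an ROI across images with KLT points.'
        UserDirections = {'Run over an ordered image sequence.'}
    end
    properties
        Tracker  % vision.PointTracker, created in initialize
    end
    methods
        function isValid = checkLabelDefinition(~, labelDef)
            % Only rectangle labels make sense for box tracking.
            isValid = (labelDef.Type == labelType.Rectangle);
        end
        function initialize(algObj, I)
            % Seed the tracker from a first ROI. Hard-coded here for
            % illustration; in practice it would come from a detector or
            % from a label drawn on the first image.
            bbox = [100 100 80 60];
            gray = rgb2gray(I);
            points = detectMinEigenFeatures(gray, 'ROI', bbox);
            algObj.Tracker = vision.PointTracker('MaxBidirectionalError', 2);
            initialize(algObj.Tracker, points.Location, gray);
        end
        function autoLabels = run(algObj, I)
            % Track into the current image and emit one rectangle label.
            [pts, valid] = step(algObj.Tracker, rgb2gray(I));
            good = pts(valid, :);
            autoLabels.Name = 'Object';  % placeholder label name
            autoLabels.Type = labelType.Rectangle;
            autoLabels.Position = [min(good(:,1)), min(good(:,2)), ...
                                   max(good(:,1)) - min(good(:,1)), ...
                                   max(good(:,2)) - min(good(:,2))];
        end
        function terminate(algObj)
            release(algObj.Tracker);
        end
    end
end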
I can refer you to this video, which shows how the point tracker is used: https://de.mathworks.com/videos/computer-vision-with-matlab-for-object-detection-and-tracking-81866.html
This coded example of the point tracker might also help: https://de.mathworks.com/help/vision/examples/face-detection-and-tracking-using-the-klt-algorithm.html
And maybe this interests you as well: https://de.mathworks.com/help/vision/ug/find-corresponding-interest-points-between-pair-of-images.html Here you can find corresponding points between images, which is another possible way to achieve your goal.
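As a quick sketch of that last approach (the two file names are placeholders):

A = rgb2gray(imread('frame1.png'));  % placeholder image pair
B = rgb2gray(imread('frame2.png'));
ptsA = detectSURFFeatures(A);
ptsB = detectSURFFeatures(B);
[featA, vptsA] = extractFeatures(A, ptsA);
[featB, vptsB] = extractFeatures(B, ptsB);
indexPairs = matchFeatures(featA, featB);  % descriptor matching
matchedA = vptsA(indexPairs(:, 1));
matchedB = vptsB(indexPairs(:, 2));
figure, showMatchedFeatures(A, B, matchedA, matchedB)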