How to Optimize Kalman Filter Tracking for Specific Video
I've been following this article to track the position of a red circle in my video. The circle stands out very well from the rest of the background, but the filter is having trouble detecting it. Is there anything I can do to optimize the code for my specific video (help it to detect the red marker)?
Here is one frame from the video I am analyzing and a warning I am receiving for the showDetections function:


0 Comments
Answers (1)
Image Analyst
2022-12-15
For this kind of image I'd detect the red spot using color segmentation, something like
[r,g,b] = imsplit(rgbImage);
mask = (r > 200) & (g < 200) & (b < 200); % Change values as needed.
mask = bwareafilt(mask, 1); % Take largest blob.
props = regionprops(mask, 'Centroid');
xCentroid = props.Centroid(1);
yCentroid = props.Centroid(2);
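A quick way to sanity-check the thresholds is to overlay the detected centroid on the frame (a minimal sketch, assuming rgbImage above is one frame of your video already in the workspace):
% Visual check: plot the detected centroid on top of the frame
imshow(rgbImage); hold on
plot(xCentroid, yCentroid, 'g+', 'MarkerSize', 12, 'LineWidth', 2)
hold off
If the marker lands on the red dot, the segmentation thresholds are good; if not, adjust the r, g, b limits above.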
2 Comments
Elad Kivelevitch
2022-12-15
Hi Jeremy,
There are two separate steps that may need improvement, and I am trying to understand which step it is.
The first step is detecting the red dot on the video frame and the second step is tracking the red dot using the measurement provided by the first step.
Detector Step
To understand which of the steps does not work well in this case, let's start with the first part - detection. You may want to run the video frame-by-frame by adding a pause after displaying each frame. Use the output from your detector (it looks like you're using a blob detection algorithm) to annotate each frame with the detection bounding box. This lets you see whether the red dot is detected in each frame (encircled by a bounding box) or not (no bounding box).
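A rough sketch of that frame-by-frame check, assuming the video is read with a VideoReader object and that myDetector stands in for whatever blob detector you are using (the file name and detector name are placeholders):
v = VideoReader('myVideo.mp4'); % Placeholder file name
while hasFrame(v)
    frame = readFrame(v);
    bbox = myDetector(frame); % Placeholder for your detector; returns [x y width height]
    if ~isempty(bbox)
        frame = insertShape(frame, 'Rectangle', bbox, 'Color', 'green', 'LineWidth', 3);
    end
    imshow(frame)
    pause % Step through the video one frame at a time
end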
If the detection step is not working correctly (the dot is not detected in most of the frames), you need to improve the detection process. I am not a computer vision expert, so I will leave that to folks who know this better than I do.
Tracking Step
If all goes well in the detection step (the dot is detected in most of the frames), then the next step is to check whether the Kalman filter is working correctly. To do that, collect all the measurements of the box centroid from the detection step in an array:
pos = [x1 y1;  % Measurements from frame 1
       x2 y2;  % Measurements from frame 2
       ...
       xN yN]; % Measurements from frame N
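One possible way to build that array directly from the color-segmentation detector suggested in the answer above (a sketch; the thresholds and file name are placeholders):
v = VideoReader('myVideo.mp4'); % Placeholder file name
pos = zeros(0, 2);
while hasFrame(v)
    frame = readFrame(v);
    [r, g, b] = imsplit(frame);
    mask = bwareafilt((r > 200) & (g < 200) & (b < 200), 1); % Largest red blob
    props = regionprops(mask, 'Centroid');
    if ~isempty(props)
        pos(end+1, :) = props(1).Centroid; %#ok<AGROW> % Append [x y] for this frame
    end
end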
Configure a Kalman filter, e.g., trackingKF. The rest of the code assumes you're using trackingKF.
kf = trackingKF("MotionModel", "2D Constant Velocity");
kf.State = [pos(1,1);0;pos(1,2);0]; % Initialize the state based on the first measurement
kf.StateCovariance = diag([100 1e4 100 1e4]); % Initialize the state covariance
kf.ProcessNoise = eye(4); % A guess about how well the dot follows a constant velocity model
kf.MeasurementNoise = 100*eye(2); % A guess about how well the dot is detected
% Assuming utilities.videoReader is a VideoReader object for your video
ind = 1; % Frame index; frame 1 was used to initialize the filter above
while hasFrame(utilities.videoReader)
    ind = ind + 1;
    frame = readFrame(utilities.videoReader); % Read the next video frame
    predict(kf); % Predict the filter state to the next frame
    correct(kf, pos(ind,:)'); % Correct with the measurement at frame ind
    % Annotate the frame with the filter's position estimate, kf.State([1 3])
    pause % To observe how well the estimate follows the dot
end
Depending on the frame rate, the type of motion the dot follows, and the level of noise in the measurements, you may need to adjust the trackingKF settings to work better with your particular case.
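For example, if the centroid measurements are jittery you could increase the measurement noise, and if the dot accelerates sharply you could increase the process noise (illustrative values only, keeping the same property layout used above):
kf.MeasurementNoise = 400*eye(2); % Larger if the detected centroids are noisy
kf.ProcessNoise = 10*eye(4); % Larger if the dot maneuvers away from constant velocity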