In traditional tracking systems, the point target model is commonly used. In a point target model:
Each object is modeled as a point without any spatial extent.
Each object gives rise to at most one measurement per sensor scan.
Though the point target model simplifies tracking, these assumptions may not hold in modern tracking systems:
In modern tracking systems, the dimensions of the extended object play a significant role. For example, in autonomous vehicles, the tracker must account for target dimensions to avoid collisions with objects around the autonomous system.
Modern sensors have high resolution, and an object can occupy more than one resolution cell. As a result, the sensor may report multiple detections for that object. In this case, the point target model cannot fully exploit the sensor's ability to resolve the object's extent.
In extended object tracking, a sensor can return multiple detections per scan for an extended object. The difference between extended object tracking and point object tracking depends more on sensor properties than on object properties. For example, if a sensor's resolution is high enough, even an object with small dimensions can occupy several of its resolution cells.
Sensor Fusion and Tracking Toolbox™ offers several methods and examples for multiple extended object tracking. Depending on the assumptions made in the detection and tracker, these methods can be separated into the following categories:
One detection per object.
A point detection per object.
In this method, even though the sensor returns multiple detections per object, these detections are first fused into one representative point detection whose covariance accounts for their spatial distribution. The representative point detection is then processed by a conventional tracker, which models the object as a point target and tracks its kinematic state. Although this method is simple to use, it overlooks the sensor's ability to measure the object's dimensions.
The Point Object Tracker approach shown in the first part of the Extended Object Tracking of Highway Vehicles with Radar and Camera example adopts this method.
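The conversion step in this method can be sketched as computing the sample mean and sample covariance of the raw detections. The following Python snippet is a minimal illustration of the idea only (the toolbox itself is MATLAB-based, and the function name `to_point_detection` is hypothetical):

```python
import numpy as np

def to_point_detection(detections):
    """Collapse multiple 2-D position detections from one object into a
    single representative point detection.

    The representative measurement is the sample mean of the detections,
    and the reported covariance is their sample covariance, so the
    spatial spread of the raw detections is preserved as measurement
    uncertainty for a conventional point-target tracker.
    """
    pts = np.asarray(detections, dtype=float)   # shape (N, 2)
    mean = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)             # 2-by-2 spread of the returns
    return mean, cov

# Example: five radar returns from the same vehicle in one scan
returns = [(10.2, 4.9), (10.8, 5.1), (10.5, 5.4), (10.1, 5.2), (10.6, 4.8)]
z, R = to_point_detection(returns)
```

The pair `(z, R)` then plays the role of a single measurement with measurement noise in a conventional tracker's update step.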
An extended object detection per object.
In this method, the multiple detections of an extended object are converted into a single parameterized shape detection. The shape detection includes the kinematic states of the object, as well as its extent parameters such as length, width, and height. The shape detection is then processed by a conventional tracker, which models the object as an extended object and tracks both its kinematic state and its dimensions.
In the Track Vehicles Using Lidar: From Point Cloud to Track List example, the lidar detections of each vehicle are converted into a cuboid detection with length, width, and height. A JPDA tracker then tracks the position, velocity, and dimensions of all the vehicles using these cuboid detections.
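A simplified version of the point-cloud-to-cuboid conversion is an axis-aligned bounding box fit. The Python sketch below shows only this simplified case (the cited example uses a more elaborate, orientation-aware fit, and the function name `fit_cuboid` is hypothetical):

```python
import numpy as np

def fit_cuboid(points):
    """Fit an axis-aligned cuboid to a 3-D point cloud segment.

    Returns the cuboid center and its dimensions (length, width,
    height), taken as the extent of the points along the x, y, and z
    axes. This ignores object orientation, which a full shape-fitting
    step would also estimate.
    """
    pts = np.asarray(points, dtype=float)   # shape (N, 3)
    lo = pts.min(axis=0)
    hi = pts.max(axis=0)
    center = (lo + hi) / 2.0
    dims = hi - lo                          # (length, width, height)
    return center, dims

# Example: points sampled from a vehicle-sized segment
cloud = [(0.0, 0.0, 0.0), (4.5, 1.8, 1.5), (2.0, 0.9, 0.7)]
center, (length, width, height) = fit_cuboid(cloud)
```

The resulting center and dimensions form the shape detection that a conventional tracker can process alongside the kinematic measurement.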
Multiple detections per object.
In this category, extended object trackers (such as trackerPHD) are used, which assume multiple detections per object. The detections are fed directly to the tracker, and the tracker models the extended object using certain default geometric shapes with variable sizes.
In the Extended Object Tracking of Highway Vehicles with Radar and Camera example, the GGIW-PHD Extended Object Tracker approach represents vehicle shapes as ellipses, and the Prototype Extended Object Tracker approach represents vehicle shapes as rectangles.
In the Extended Object Tracking With Radar For Marine Surveillance example, the GGIW-PHD tracker models the ship shapes as ellipses.
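To see how an elliptical extent can be recovered from a set of raw detections, consider the eigendecomposition of the detection scatter: the eigenvalues give the (squared, scaled) semi-axis lengths and the eigenvectors give the orientation. The Python sketch below illustrates this geometric idea only; it is not the GGIW-PHD measurement model itself, and the function name `ellipse_extent` is hypothetical:

```python
import numpy as np

def ellipse_extent(detections, scale=2.0):
    """Approximate an object's elliptical extent from its 2-D detections.

    The detection scatter matrix is eigendecomposed: square roots of the
    eigenvalues (times a chosen scale factor) give the semi-axis
    lengths, and the eigenvector of the largest eigenvalue gives the
    orientation of the major axis.
    """
    pts = np.asarray(detections, dtype=float)   # shape (N, 2)
    center = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    semi_axes = scale * np.sqrt(eigvals)        # (minor, major) semi-axis lengths
    major = eigvecs[:, -1]                      # direction of largest spread
    angle = np.arctan2(major[1], major[0])      # major-axis orientation (rad)
    return center, semi_axes, angle

# Example: detections spread mostly along the x axis
hull = [(-2.0, 0.0), (-1.0, 0.1), (0.0, -0.1), (1.0, 0.1), (2.0, 0.0)]
center, semi_axes, angle = ellipse_extent(hull)
```

For this input, the major semi-axis is larger than the minor one and the orientation is close to the x axis, matching the elongated spread of the detections.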