Convert Detections to objectDetection Format

Since R2021a

These examples show how to convert actual detections, in the native format of the sensor, into objectDetection objects. objectDetection is the standard input format for most tracking filters and trackers in the toolbox. The six examples progressively show how to set up objectDetection in a variety of tracking scenarios.

  • Example 1 configures the detection in a stationary rectangular frame.

  • Example 2 configures the detection in a moving rectangular frame.

  • Example 3 configures the detection in a moving spherical frame.

  • Example 4 shows how to express detections obtained by consecutive rotations.

  • Example 5 shows how to configure 3-D detections.

  • Example 6 shows how to configure classified detections.

An objectDetection report must contain the basic detection information: Time and Measurement. It can also contain other key properties, including MeasurementNoise, SensorIndex, ObjectClassID, ObjectClassParameters, ObjectAttributes, and MeasurementParameters. Setting up MeasurementParameters correctly, so that a filter or tracker can interpret the measurement, is crucial when creating an objectDetection object. The first example shows the basic setup of an objectDetection object. Examples 2 through 5 focus on how to correctly set up MeasurementParameters. The last example shows how to set up ObjectClassParameters.

Example 1: Convert Detections in Stationary Rectangular Frame

Consider a 2-D tracking scenario with a stationary tower and a truck. The tower, located at the origin of the scenario frame, is equipped with a radar sensor. At t = 0 seconds, the truck, at the position of (10,20,0) meters, is traveling in the positive X direction at a speed of 5 m/s.

The radar sensor outputs 3-D position and velocity measurements in the scenario frame, so the measurement can be written as follows:

measurement1 = [10;20;0;5;0;0]; % [x;y;z;vx;vy;vz]

You can specify additional properties, such as MeasurementNoise, SensorIndex, ObjectClassID, and ObjectAttributes, for the objectDetection object. For example, assuming measurement noise variances of 10 m² for each position component and 1 (m/s)² for each velocity component, you can define the measurement error covariance matrix as:

measurementNoise1 = diag([10*ones(3,1);ones(3,1)]);

Create an objectDetection using these values.

time1 = 0; % detection time
detect1 = objectDetection(time1,measurement1,'MeasurementNoise',measurementNoise1)
detect1 = 
  objectDetection with properties:

                     Time: 0
              Measurement: [6x1 double]
         MeasurementNoise: [6x6 double]
              SensorIndex: 1
            ObjectClassID: 0
    ObjectClassParameters: []
    MeasurementParameters: {}
         ObjectAttributes: {}
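
Although this detection does not need MeasurementParameters, you can still verify it the same way the later examples do, by regenerating the measurement with the cvmeas (Sensor Fusion and Tracking Toolbox) measurement function. This is an optional sketch, not part of the original example; the MP1 struct is an assumed identity transform, used because the measurement is already expressed in the scenario frame.

state1 = [10; 5; 20; 0; 0; 0]; % Truck state in [x;vx;y;vy;z;vz]
% Identity transform: the sensor frame coincides with the scenario frame.
MP1 = struct('Frame','rectangular', ...
    'OriginPosition',zeros(3,1),'OriginVelocity',zeros(3,1), ...
    'Orientation',eye(3),'IsParentToChild',true, ...
    'HasAzimuth',false,'HasElevation',false, ...
    'HasRange',true,'HasVelocity',true);
cvmeas1 = cvmeas(state1,MP1) % Expected to match measurement1: [10;20;0;5;0;0]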

Example 2: Convert Detections in Moving Rectangular Frame

Consider a 2-D tracking scenario with an ego car and a truck. At t = 0 seconds, the car is located at (20,10,0) meters with respect to the scenario frame and is moving with a speed of 5 m/s in the Y direction of the scenario frame. The local (forward) frame of the ego car, {x,y}, is rotated from the scenario frame by an angle of 90 degrees. As in the previous example, the truck at the position of (10,20,0) meters is traveling in the positive X direction at a speed of 5 m/s.

Meanwhile, the ego car observes the truck in its own local frame, {x,y}. In practice, you can obtain the measurement directly from the sensor system of the ego car. From the geometry, the measurements of the truck are [10; 10; 0; -5; -5; 0] with respect to the {x,y} frame, in the order of [x;y;z;vx;vy;vz].

measurement2 = [10; 10; 0; -5; -5; 0]; % [x;y;z;vx;vy;vz]

To specify the object detection, you need to specify the coordinate transformation from the scenario rectangular frame {X,Y} to the local rectangular frame {x,y}. You can use the MeasurementParameters property of objectDetection to specify these transformation parameters. In the transformation, the scenario frame is the parent frame, and the ego car local frame is the child frame.

  • The Frame property sets the child frame type to 'rectangular' (in this example) or 'spherical'.

  • The OriginPosition property sets the position of the origin of the child frame with respect to the parent frame.

  • The OriginVelocity property sets the velocity of the origin of the child frame with respect to the parent frame.

MP2 = struct();

MP2.Frame = 'rectangular'; % The child frame is rectangular.
MP2.OriginPosition = [20; 10; 0]; % Ego car position in the scenario frame.
MP2.OriginVelocity = [0; 5; 0]; % Ego car velocity in the scenario frame.

Specify rotation using the rotation matrix converted from Euler angles. Set IsParentToChild to true to indicate rotation from the parent frame to the child frame.

rotAngle2 = [90 0 0]; % [yaw,pitch,roll]
rotQuat2 = quaternion(rotAngle2,'eulerd','ZYX','frame');
rotMatrix2 = rotmat(rotQuat2,'frame');
MP2.Orientation = rotMatrix2;
MP2.IsParentToChild = true; 
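
As an optional sanity check (not part of the original example), you can apply the transformation by hand. Rotating the truck's relative position and velocity from the scenario frame into the ego frame reproduces the measurement:

% Relative position and velocity of the truck in the scenario frame,
% rotated into the ego frame.
relPos2 = rotMatrix2*([10; 20; 0] - MP2.OriginPosition) % Expected: [10; 10; 0]
relVel2 = rotMatrix2*([5; 0; 0] - MP2.OriginVelocity)   % Expected: [-5; -5; 0]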

Specify measurements.

  • Set HasElevation and HasAzimuth both to false, since the child frame is rectangular.

  • Set HasRange to true to enable position measurement.

  • Set HasVelocity to true to enable velocity measurement.

MP2.HasElevation = false;
MP2.HasAzimuth = false;
MP2.HasRange = true;
MP2.HasVelocity = true;

Create the objectDetection object and specify the MeasurementParameters property.

time2 = 0;
detection2 = objectDetection(time2,measurement2,'MeasurementParameters',MP2)
detection2 = 
  objectDetection with properties:

                     Time: 0
              Measurement: [6x1 double]
         MeasurementNoise: [6x6 double]
              SensorIndex: 1
            ObjectClassID: 0
    ObjectClassParameters: []
    MeasurementParameters: [1x1 struct]
         ObjectAttributes: {}

To verify the object detection, you can use the cvmeas (Sensor Fusion and Tracking Toolbox) measurement function to regenerate the measurement. The cvmeas function takes the actual state of the target and the measurement parameters as input. The state input of cvmeas is in the order of [x;vx;y;vy;z;vz]. As the following output shows, the results agree with measurement2.

state2 = [10; 5; 20; 0; 0; 0]; % State of truck in [x;vx;y;vy;z;vz]
cvmeas2 = cvmeas(state2,MP2) % Measurement in [x;y;z;vx;vy;vz]
cvmeas2 = 6×1

   10.0000
   10.0000
         0
   -5.0000
   -5.0000
         0

Example 3: Convert Detections in Moving Spherical Frame

Consider the previous tracking scenario, only now the measurement is obtained by a scanning radar with a spherical output frame. The boresight direction of the radar is aligned with the Y direction of the scenario frame (the same as the x direction of the ego frame) at t = 0 seconds.

Since the relative velocity between the truck and the car is in the line-of-sight direction, the measurement, which is in the order of [azimuth; elevation; range; range-rate], can be obtained as follows:

measurement3 = [45; 0; 10/sind(45); -5/sind(45)]; % [az;el;rng;rr]. Angles in degrees, range in m, range rate in m/s.
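
These values follow from the geometry of Example 2; here is a quick derivation (not in the original example) using the truck's position and relative velocity in the ego frame:

posInEgo3 = [10; 10; 0]; % Truck position in the ego frame (from Example 2)
velInEgo3 = [-5; -5; 0]; % Truck velocity relative to the ego car
az3 = atan2d(posInEgo3(2),posInEgo3(1)) % 45 degrees
rng3 = norm(posInEgo3)                  % 14.1421 = 10/sind(45)
rr3 = dot(velInEgo3,posInEgo3)/rng3     % -7.0711 = -5/sind(45)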

Specify the measurement parameters.

MP3 = struct();

MP3.Frame = 'spherical'; % The child frame is spherical.
MP3.OriginPosition = [20; 10; 0];
MP3.OriginVelocity = [0; 5; 0];

% Specify rotation.
rotAngle3 = [90 0 0];
rotQuat3 = quaternion(rotAngle3,'eulerd','ZYX','frame');
rotMatrix3 = rotmat(rotQuat3,'frame');
MP3.Orientation = rotMatrix3;
MP3.IsParentToChild = true; 

Set HasElevation and HasAzimuth to true to output azimuth and elevation angles in the spherical child frame. Set HasRange and HasVelocity both to true to output range and range-rate, respectively.

MP3.HasElevation = true;
MP3.HasAzimuth = true;
MP3.HasRange = true;
MP3.HasVelocity = true;

Create the objectDetection object.

time3 = 0;
detection3 = objectDetection(time3,measurement3,'MeasurementParameters',MP3)
detection3 = 
  objectDetection with properties:

                     Time: 0
              Measurement: [4x1 double]
         MeasurementNoise: [4x4 double]
              SensorIndex: 1
            ObjectClassID: 0
    ObjectClassParameters: []
    MeasurementParameters: [1x1 struct]
         ObjectAttributes: {}

Verify the results using cvmeas. The results agree with measurement3.

state3 = [10;5;20;0;0;0]; % [x;vx;y;vy;z;vz]
cvmeas3 = cvmeas(state3,MP3) % [az;el;rng;rr]
cvmeas3 = 4×1

   45.0000
         0
   14.1421
   -7.0711

Example 4: Convert Detections Between Three Frames

Consider the previous tracking scenario, only now the boresight direction of the radar rotates 45 degrees from the x direction of the car's local frame.

The new measurements, expressed in the rotated spherical frame of the radar, are:

measurement4 = [0; 0; 10/sind(45); -5/sind(45)]; % [az;el;rng;rr]

For the measurement parameters, you can specify the rotation as a single 135-degree rotation from the scenario frame to the radar frame. Alternatively, you can specify it as two consecutive rotations: scenario rectangular {X,Y} to ego rectangular {x,y}, and ego rectangular {x,y} to the radar spherical frame. To illustrate the multiple-frame transformation feature supported by the MeasurementParameters property, this example uses the latter approach.

The first set of measurement parameters is exactly the same as the MP2 used in Example 2. MP2 accounts for the rotation from the scenario rectangular {X,Y} frame to the ego rectangular {x,y} frame. For the second set of measurement parameters, MP4, you need to specify only a 45-degree rotation from the ego rectangular {x,y} frame to the radar spherical frame.

MP4 = struct();

MP4.Frame = 'spherical';
MP4.OriginPosition = [0; 0; 0]; % Colocated positions.
MP4.OriginVelocity = [0; 0; 0]; % Same origin velocities.

% Specify rotation.
rotAngle4 = [45 0 0];
rotQuat4 = quaternion(rotAngle4,'eulerd','ZYX','frame');
rotMatrix4 = rotmat(rotQuat4,'frame');
MP4.Orientation = rotMatrix4;
MP4.IsParentToChild = true; 

% Specify outputs in the spherical child frame.
MP4.HasElevation = true;
MP4.HasAzimuth = true;
MP4.HasRange = true;
MP4.HasVelocity = true;

Create the combined MeasurementParameters input, MPc.

MPc = [MP4 MP2];

Note that the sequence of measurement parameters matters here. The specified MPc represents first performing the coordinate transformation corresponding to MP2 (scenario frame to ego frame) and then the transformation corresponding to MP4 (ego frame to radar frame). The reversed-order check after the verification below demonstrates this.

Next, create the objectDetection object.

time4 = 0;
detection4 = objectDetection(time4,measurement4,'MeasurementParameters',MPc)
detection4 = 
  objectDetection with properties:

                     Time: 0
              Measurement: [4x1 double]
         MeasurementNoise: [4x4 double]
              SensorIndex: 1
            ObjectClassID: 0
    ObjectClassParameters: []
    MeasurementParameters: [1x2 struct]
         ObjectAttributes: {}

Verify the results using cvmeas. The result agrees with measurement4.

state4 = [10;5;20;0;0;0]; % [x;vx;y;vy;z;vz]
cvmeas4 = cvmeas(state4,MPc) % [az;el;rng;rr]
cvmeas4 = 4×1

    0.0000
         0
   14.1421
   -7.0711
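
To see that the order matters, evaluate the reversed sequence (a quick check, not part of the original example). Because the translation in MP2 is then applied after the 45-degree rotation, the result no longer matches measurement4:

cvmeasRev = cvmeas(state4,[MP2 MP4]); % Reversed order: MP4 is applied first
isequal(cvmeasRev,cvmeas4)            % Returns false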

Example 5: Convert 3-D Detections

Consider an unmanned aerial vehicle (UAV) monitoring a region. At t = 0 seconds, the UAV is at the position of (5,5,-1) km with respect to the global north-east-down (NED) frame. The velocity of the UAV is (-50,-100,5) m/s. The orientation of the UAV body frame {x,y,z} with respect to the global NED frame is given as (-120,2,2) degrees in yaw, pitch, and roll. At the same time, a car at the position of (1,1,0) km is moving east with a speed of 30 m/s. The UAV measures the car using a radar system aligned with its own body axes.

Based on this information, specify the kinematic parameters for the measurement transformation.

Specify the frame type, origin position, and origin velocity of the UAV body frame.

MP5 = struct();

MP5.Frame = 'spherical'; 
MP5.OriginPosition = [5000; 5000; -1000]; 
MP5.OriginVelocity = [-50; -100; 5]; 

Specify the rotation from the NED frame to the UAV body frame.

rotAngle5 = [-120 2 2]; % [yaw,pitch,roll]
rotQuat5 = quaternion(rotAngle5,'eulerd','ZYX','frame');
rotMatrix5 = rotmat(rotQuat5,'frame');
MP5.Orientation = rotMatrix5;
MP5.IsParentToChild = true; 

Specify the output measurements in a spherical frame.

MP5.HasElevation = true;
MP5.HasAzimuth = true;
MP5.HasRange = true;
MP5.HasVelocity = true;

You can obtain the measurement directly from the radar system on the UAV. Here, you use the cvmeas function to obtain the measurement in the order of [azimuth;elevation;range;range-rate].

car_state5 = [1000;0;1000;30;0;0]; % [x;vx;y;vy;z;vz].
measurement5 = cvmeas(car_state5,MP5);
meas_az5 = measurement5(1)
meas_az5 = 
-14.6825
meas_el5 = measurement5(2)
meas_el5 = 
12.4704
meas_rng5 = measurement5(3)
meas_rng5 = 
5.7446e+03
meas_rr5 = measurement5(4)
meas_rr5 = 
-126.2063
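
As a quick sanity check (not part of the original example), the range measurement is simply the Euclidean distance between the UAV and the car:

rngCheck5 = norm([1000; 1000; 0] - MP5.OriginPosition) % Matches meas_rng5, 5.7446e+03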

The elevation angle is defined as the angle from the xy-plane of the sensor frame toward the positive z direction. Because the z-axis of the UAV body frame points approximately downward, the elevation angle is positive for a target on the ground below the UAV. This convention is used throughout the toolbox.

The measurement noise variances for azimuth, elevation, range, and range rate are 1 deg², 1 deg², 20 m², and 2 (m/s)², respectively. Also, the index of the radar is 2, and the radar classifies the detected object with an ObjectClassID of 1, representing a car.

index5 = 2; 
covariance5 = diag([1;1;20;2]);
classID5 = 1;

Create an objectDetection object for the detection.

time5 = 0;
detection5 = objectDetection(time5,measurement5,'SensorIndex',index5,...
            'MeasurementNoise',covariance5,'ObjectClassID',classID5,'MeasurementParameters',MP5)
detection5 = 
  objectDetection with properties:

                     Time: 0
              Measurement: [4x1 double]
         MeasurementNoise: [4x4 double]
              SensorIndex: 2
            ObjectClassID: 1
    ObjectClassParameters: []
    MeasurementParameters: [1x1 struct]
         ObjectAttributes: {}

Example 6: Classified Detections

Consider a vision tracking scenario in which camera frames feed into an object detector. In such scenarios, detections often provide a classification of the object. Consider an object detector that outputs bounding box detections and classifies objects into three classes: 'Car', 'Pedestrian', and 'Bicycle'. The statistics of the detector are captured by its confusion matrix C. Create a detection with a bounding box measurement, a 'Pedestrian' classification, and the confusion matrix C, defined below.

C = [0.9 0.05 0.05; ... 
     0.05 0.9 0.05; ...
     0.05 0.05 0.9];
ClassID = 2; % Pedestrian
ClassParams = struct('ConfusionMatrix',C);

boundingbox = [250 140 300 400]; % Bounding box in [x y width height], with (x,y) the top-left corner
detection6 = objectDetection(0,boundingbox,ObjectClassID=ClassID,ObjectClassParameters=ClassParams)
detection6 = 
  objectDetection with properties:

                     Time: 0
              Measurement: [250 140 300 400]
         MeasurementNoise: [4x4 double]
              SensorIndex: 1
            ObjectClassID: 2
    ObjectClassParameters: [1x1 struct]
    MeasurementParameters: {}
         ObjectAttributes: {}
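
Assuming the common convention that row i of the confusion matrix holds the distribution of reported classes when the true class is i (an assumption; verify the expected orientation against your tracker's documentation), you can read off the detector statistics for a true 'Pedestrian':

pReported = C(ClassID,:) % [0.05 0.9 0.05]: reports 'Pedestrian' with probability 0.9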