
Highway Vehicle Tracking with Multipath Radar Reflections

Since R2021a

This example shows the challenges associated with tracking vehicles on a highway in the presence of multipath radar reflections. It also shows a ghost filtering approach used with an extended object tracker to simultaneously filter ghost detections and track objects.

Introduction

Automotive radar sensors are robust against adverse environmental conditions encountered during driving, such as fog, snow, rain, and strong sunlight. Automotive radar sensors have this advantage because they operate at substantially longer wavelengths than visible-wavelength sensors, such as lidar and camera. As a side effect of these longer wavelengths, surfaces around the radar sensor act like mirrors and produce undesired detections due to multipath propagation. These detections are often referred to as ghost detections because they seem to originate from regions where no target exists. This example shows you the impact of these multipath reflections on designing and configuring an object tracking strategy using radar detections. For more details regarding the multipath phenomenon and simulation of ghost detections, see the Simulate Radar Ghosts Due to Multipath Return (Radar Toolbox) example.

In this example, you simulate the multipath detections from radar sensors in an urban highway driving scenario. The highway is simulated with a barrier on both sides of the road. The scenario consists of an ego vehicle and four other vehicles driving on the highway. The ego vehicle is equipped with four radar sensors providing 360-degree coverage. This image shows the configuration of the radar sensors and detections from one scan of the sensors. The red regions represent the field of view of the radar sensors and the black dots represent the detections.

The radar sensors report detections from the vehicles and from the barriers on both sides of the highway. The radars also report detections that do not seem to originate from any real object in the scenario. These are ghost detections due to multipath propagation of radar signals. Object trackers assume all detections originate from real objects or from uniformly distributed random clutter in the field of view. Contrary to this assumption, ghost detections are typically more persistent than clutter and behave like detections from real targets. For this reason, an object tracking algorithm is very likely to generate false tracks from these detections. It is important to filter out these detections before processing the radar scan with a tracker.

Generate Sensor Data

The scenario used in this example is created using the drivingScenario class. You use the radarDataGenerator (Radar Toolbox) System object™ to simulate radar returns from a direct path and from reflections in the scenario. The HasGhosts property of the sensor is specified as true to simulate multipath reflections. The creation of the scenario and the sensor models is wrapped in the helper function helperCreateMultipathDrivingScenario, which is attached to this example. The data set obtained by sensor simulation is recorded in a MAT file that contains returns from the radar and the corresponding sensor configurations. To record the data for a different scenario or sensor configuration, you can use the following command:

 helperRecordData(scenario, egoVehicle, sensors, fName);
% Create the scenario
[scenario, egoVehicle, sensors] = helperCreateMultipathDrivingScenario;

% Load the recorded data
load('MultiPathRadarScenarioRecording.mat','detectionLog','configurationLog');
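
For reference, a multipath-enabled radar model similar to those constructed inside helperCreateMultipathDrivingScenario might be configured as follows. This is a minimal sketch; the mounting location, orientation, field of view, and range limits are assumptions rather than the example's exact values.

% A multipath-enabled radar model (a sketch with assumed parameters)
radar = radarDataGenerator(SensorIndex = 1, ...
    MountingLocation = [3.7 0 0.2], ...   % assumed front-bumper mounting (m)
    MountingAngles = [0 0 0], ...         % assumed forward-facing orientation (deg)
    FieldOfView = [90 5], ...             % assumed azimuth/elevation field of view (deg)
    RangeLimits = [0 100], ...            % assumed detection range (m)
    TargetReportFormat = 'Detections', ...
    HasGhosts = true);                    % simulate multipath reflections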

Radar Processing Chain: Radar Detections to Track List

In this section, you set up an integrated algorithm to simultaneously filter radar detections and track extended objects. The block diagram illustrates the radar processing chain used in this example.

Next, you learn about each of these steps and the corresponding helper functions.

Doppler analysis

The radar sensors report the measured relative radial velocity of the reflected signals. In this step, you utilize the measured radial velocity of the detections to determine if the target is static or dynamic [1]. In the previous radar scan, a large percentage of the radar detections originate from the static environment around the ego vehicle. Therefore, classifying each detection as static or dynamic greatly helps to improve the understanding of the scene. You use the helper function helperClassifyStaticDynamic to classify each detection as static or dynamic.
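
As an illustration, a minimal static/dynamic test can compare the measured range rate against the range rate expected from a stationary reflector, given the ego motion. This is a sketch, not the shipped helperClassifyStaticDynamic; the positions, velocity, and 0.5 m/s gate are assumptions.

% Minimal static/dynamic test (a sketch, not helperClassifyStaticDynamic).
% A stationary reflector returns a range rate equal to the negated
% projection of the ego velocity onto the line of sight.
egoVel = [25; 0];                   % hypothetical ego velocity in ego frame (m/s)
detPos = [40; 3];                   % hypothetical detection position in ego frame (m)
measuredRangeRate = -24.9;          % hypothetical reported range rate (m/s)
los = detPos/norm(detPos);          % unit line-of-sight vector
expectedStatic = -dot(egoVel,los);  % range rate if the reflector were static
isDynamic = abs(measuredRangeRate - expectedStatic) > 0.5;  % 0.5 m/s gate (assumed)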

Static reflectors

The static environment is also typically responsible for a large percentage of the ghost reflections reported by the radar sensors. After segmenting the data set and finding static detections, you process them to find 2-D line segments in the coordinate frame of the ego vehicle. First, you use the DBSCAN algorithm to cluster the static detections into different clusters around the ego vehicle. Second, you fit a 2-D line segment on each cluster. These fitted line segments define possible reflection surfaces for signals propagating back to the radar. You use the helper function helperFindStaticReflectors to find these 2-D line segments from static detections.
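
The sketch below illustrates this two-step idea with the dbscan function (Statistics and Machine Learning Toolbox) and a simple polynomial line fit. The cluster radius, minimum point count, and simulated detections are assumptions; the shipped helperFindStaticReflectors may differ.

% Cluster static detections and fit a 2-D line segment per cluster
% (a sketch, not helperFindStaticReflectors)
xy = [linspace(0,50,40)', 5 + 0.02*randn(40,1)];  % hypothetical barrier detections (m)
labels = dbscan(xy, 3, 5);                        % 3 m radius, 5 points minimum (assumed)
for c = 1:max(labels)                             % dbscan labels noise points as -1
    pts = xy(labels == c,:);
    p = polyfit(pts(:,1), pts(:,2), 1);           % fit line y = p(1)*x + p(2)
    xEnds = [min(pts(:,1)) max(pts(:,1))];        % clip to the cluster extent
    reflector = [xEnds; polyval(p, xEnds)];       % segment endpoints [x1 x2; y1 y2]
end

Note that a least-squares fit of y against x degrades for segments that are near-vertical in the ego frame; an orthogonal (total least-squares) fit is more robust in that case.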

Occlusion analysis

Reflections from surfaces produce detections that seem to originate behind the reflector. After segmenting the dynamic detections from the radar, you use simple occlusion analysis to determine if the radar detection is occluded behind a possible reflector. Because signals can be reflected by static or dynamic objects, you perform occlusion analysis in two steps. First, dynamic detections are checked against occlusion with the 2-D line segments representing the static reflectors. You use the helper function helperClassifyGhostsUsingReflectors to classify if the detection is occluded by a static reflector. Second, the algorithm uses information about predicted tracks from an extended tracking algorithm to check for occlusion against dynamic objects in the scene. The algorithm uses only confirmed tracks from the tracker to prevent overfiltering of the radar detections in the presence of tentative or false tracks. You use the helper function helperClassifyGhostsUsingTracks to classify if the detection is occluded by a dynamic object.
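
For example, a standard 2-D segment intersection test can decide whether a fitted reflector crosses the sensor-to-detection line of sight. The parametric formulation below is a generic sketch under assumed positions, not the shipped helperClassifyGhostsUsingReflectors.

% Occlusion test: does a reflector segment cross the sensor-to-detection ray?
% (a parametric segment intersection sketch, not the shipped helper)
sensorPos = [0; 0];                 % hypothetical sensor position in ego frame (m)
detPos    = [30; 12];               % hypothetical dynamic detection (m)
refA = [10; 5]; refB = [25; 8];     % hypothetical reflector segment endpoints (m)
d1 = detPos - sensorPos;            % line of sight
d2 = refB - refA;                   % reflector direction
w  = refA - sensorPos;
den = d1(1)*d2(2) - d1(2)*d2(1);    % 2-D cross product; zero means parallel
isOccluded = false;
if abs(den) > eps
    t = (w(1)*d2(2) - w(2)*d2(1))/den;   % position along the line of sight
    u = (w(1)*d1(2) - w(2)*d1(1))/den;   % position along the reflector
    isOccluded = t > 0 && t < 1 && u >= 0 && u <= 1;
end

When isOccluded is true, the detection lies beyond the reflector along the line of sight and becomes a ghost candidate.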

This entire algorithm for processing radar detections and classifying them is then wrapped into a larger helper function, helperClassifyRadarDetections, which classifies and segments the detection list into four main categories:

  1. Target detections – Detections classified as originating from real dynamic targets in the scene.

  2. Environment detections – Detections classified as originating from the static environment.

  3. Ghost (Static) – Detections classified as originating from dynamic targets but reflected via the static environment.

  4. Ghost (Dynamic) – Detections classified as originating from dynamic targets but reflected via other dynamic objects.

Set Up GGIW-PHD Extended Object Tracker

The target detections are processed with an extended object tracker. In this example, you use a gamma Gaussian inverse Wishart probability hypothesis density (GGIW-PHD) extended object tracker. The GGIW-PHD tracker models the target with an elliptical shape and the measurement model assumes that the detections are uniformly distributed inside the extent of the target. This model allows a target to accept two-bounce ghost detections, which have a higher probability of being misclassified as a real target. These two-bounce ghost detections also report a Doppler measurement that is inconsistent with the actual motion of the target. When these ghost detections and real target detections from the same object are estimated to belong to the same partition of detections, the incorrect Doppler information can potentially cause the track estimate to diverge.

To reduce this problem, the tracker processes the range-rate measurements with higher measurement noise variance to account for this imperfection in the target measurement model. The tracker also uses a combination of a high assignment threshold and a low merging threshold. A high assignment threshold reduces the generation of new components from ghost detections misclassified as target detections. A low merging threshold enables the tracker to discard corrected components (hypotheses) of a track that might have diverged due to correction with ghost detections.
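
One way to apply higher range-rate noise, sketched below, is to inflate the corresponding entry of a detection's MeasurementNoise before the tracker update. The [azimuth; range; range rate] measurement layout, the variances, and the factor of 100 are assumptions for illustration; the example's helpers may realize this differently.

% Inflate range-rate measurement noise to de-weight Doppler (a sketch).
% Assumes a hypothetical [azimuth; range; range rate] measurement vector.
det = objectDetection(0, [10; 50; -5], ...
    MeasurementNoise = diag([1 0.25 0.09]));  % hypothetical variances
R = det.MeasurementNoise;
R(3,3) = 100*R(3,3);                          % assumed inflation factor
det.MeasurementNoise = R;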

You set up the tracker using the trackerPHD System object™. For more details about extended object trackers, refer to the Extended Object Tracking of Highway Vehicles with Radar and Camera (Automated Driving Toolbox) example.

% Configuration of the sensors from the recording to set up the tracker
[~, sensorConfigurations] = helperAssembleData(detectionLog{1},configurationLog{1});

% Configure the tracker to use the GGIW-PHD filter with constant turn-rate motion model
for i = 1:numel(sensorConfigurations)
    sensorConfigurations{i}.FilterInitializationFcn = @helperInitGGIWFilter;    
    sensorConfigurations{i}.SensorTransformFcn = @ctmeas;
end

% Create the tracker using trackerPHD with Name-value pairs
tracker = trackerPHD(SensorConfigurations = sensorConfigurations,...
     PartitioningFcn = @(x)helperMultipathExamplePartitionFcn(x,2,5),...
     AssignmentThreshold = 450,...
     ExtractionThreshold = 0.8,...
     ConfirmationThreshold = 0.85,...
     MergingThreshold = 25,...
     DeletionThreshold = 1e-3,...
     BirthRate = 1e-4,...
     HasSensorConfigurationsInput = true... 
    );

Run Scenario and Track Objects

Next, you advance the scenario, use the recorded measurements from the sensors, and process them using the previously described algorithm. You analyze the performance of the tracking algorithm by using the generalized optimal subpattern assignment (GOSPA) metric. You also analyze the performance of the classification filtering algorithm by estimating a confusion matrix between true and estimated classification of radar detections. You obtain the true classification information about the detections using the helperTrueClassificationInfo helper function.

% Create trackGOSPAMetric object to calculate GOSPA metric
gospaMetric = trackGOSPAMetric(Distance = 'custom', ...
    DistanceFcn = @helperGOSPADistance, ...
    CutoffDistance = 35);

% Create display for visualization of results
display = helperMultiPathTrackingDisplay;

% Predicted track list for ghost filtering
predictedTracks = objectTrack.empty(0,1);

% Confusion matrix
confMat = zeros(5,5,numel(detectionLog));

% GOSPA metric
gospa = zeros(4,numel(detectionLog));

% Ground truth 
groundTruth = scenario.Actors(2:end);

for i = 1:numel(detectionLog)
    % Advance scene for visualization of ground truth
    advance(scenario);
    
    % Current time
    time = scenario.SimulationTime;
    
    % Detections and sensor configurations
    [detections, configurations] = helperAssembleData(detectionLog{i},configurationLog{i});
    
    % Predict confirmed tracks to current time for classifying ghosts
    if isLocked(tracker)
        predictedTracks = predictTracksToTime(tracker,'confirmed',time);
    end
    
    % Classify radar detections as targets, ghosts, or static environment
    [targets, ghostStatic, ghostDynamic, static, reflectors, classificationInfo] = helperClassifyRadarDetections(detections, egoVehicle, predictedTracks);
    
    % Pass detections from target and sensor configurations to the tracker
    confirmedTracks = tracker(targets, configurations, time);

    % Visualize the results
    display(egoVehicle, sensors, targets, confirmedTracks, ghostStatic, ghostDynamic, static, reflectors);
    
    % Calculate GOSPA metric
    [gospa(1, i),~,~,gospa(2,i),gospa(3,i),gospa(4,i)] = gospaMetric(confirmedTracks, groundTruth);
    
    % Get true classification information and generate confusion matrix
    trueClassificationInfo = helperTrueClassificationInfo(detections);
    confMat(:,:,i) = helperConfusionMatrix(trueClassificationInfo, classificationInfo);
end

Results

Animation and snapshot analysis

The animation that follows shows the result of the radar data processing chain. The black ellipses around vehicles represent estimated tracks. The radar detections are visualized with four different colors depending on their predicted classification from the algorithm. The black dots in the visualization represent static radar target detections. Notice that these detections are overlapped by black lines, which represent the static reflectors found using the DBSCAN algorithm. The maroon markers represent the detections processed by the extended object tracker, while the green and blue markers represent radar detections classified as reflections via static and dynamic objects, respectively. Notice that the tracker is able to maintain a track on all four vehicles during the scenario.

[Animation: MultipathTrackingExample.gif]

Next, you analyze the performance of the algorithm using different snapshots captured during the simulation. The snapshot below is captured at 3 seconds and shows the situation in front of the ego vehicle. At this time, the ego vehicle is approaching the slow-moving truck, and the left radar sensor observes reflections of these objects via the left barrier. These detections appear as mirrored detections of these objects in the barrier. Notice that the black line estimated as a 2-D reflector is in the line of sight of these detections. Therefore, the algorithm is able to correctly classify these detections as ghost targets reflected off static objects.

f = showSnaps(display,1:2,1);
if ~isempty(f)
    ax = findall(f,'Type','Axes','Tag','birdsEyePlotAxes');
    ax.XLim = [-10 30];
    ax.YLim = [-10 20];
end

[Figure: two bird's-eye-plot panels with axes X (m) and Y (m), showing the lane, tracks with history, reflectors, and detections classified as Targets, Ghost (S), Ghost (D), and Static]

Next, analyze the performance of the algorithm using the snapshot captured at 4.3 seconds. At this time, the ego vehicle is even closer to the truck and the truck is approximately halfway between the green vehicle and the ego vehicle. During these situations, the left side of the truck acts as a strong reflector and generates ghost detections. The detections on the right half of the green vehicle are from two-bounce detections off of the green vehicle as the signal travels back to the sensor after reflecting off the truck. The algorithm is able to classify these detections as ghost detections generated from dynamic object reflections because the estimated extent of the truck is in the direct line of sight of these detections.

f = showSnaps(display,1:2,2);
if ~isempty(f)
    ax = findall(f,'Type','Axes','Tag','birdsEyePlotAxes');
    ax.XLim = [-10 30];
    ax.YLim = [-10 20];
end

[Figure: two bird's-eye-plot panels with axes X (m) and Y (m), showing the lane, tracks with history, reflectors, and detections classified as Targets, Ghost (S), Ghost (D), and Static]

Also notice the passing vehicle, denoted by the yellow car on the left of the ego vehicle. The detections that seem to originate from the nonvisible surface of the yellow vehicle are two-bounce detections of the barriers, reflected via the front face of the passing vehicle. These ghost detections are misclassified as target detections because they seem to originate from inside the estimated extent of the vehicle. At the same location, the detections that lie beyond the barrier are also two-bounce detections of the front face, produced when the signal reflects off the barrier on its way back to the sensor. Because these detections lie beyond the extent of the track, and the track is in the direct line of sight, they are classified as ghost detections from reflections off dynamic objects.

Performance analysis

Quantitatively assess the performance of the tracking algorithm by using the GOSPA metric and its associated components. A lower value of the metric denotes better tracking performance. In the figure below, the Missed-target component of the metric drops to zero after a few initial steps, which account for the establishment delay of the tracker as well as an occluded target. The zero value of the component thereafter shows that the tracker missed no targets. The False-tracks component of the metric increases for about 1 second around the 85th time step. This short rise denotes a false track that the tracker confirmed from ghost detections incorrectly classified as a real target.

figure;
plot(gospa','LineWidth',2);
legend('GOSPA','Localization GOSPA','Missed-target GOSPA','False-tracks GOSPA');

[Figure: GOSPA, Localization GOSPA, Missed-target GOSPA, and False-tracks GOSPA plotted over time steps]

Similar to the tracking algorithm, you also quantitatively analyze the performance of the radar detection classification algorithm by using a confusion matrix [2]. The rows shown in the table denote the true classification information of the radar detections and the columns represent the predicted classification information. For example, the second element of the first row defines the percentage of target detections predicted as ghosts from static object reflections.
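
As an illustration, a confusion matrix can be accumulated from integer class labels with accumarray. The label values below are hypothetical; the example uses the helperConfusionMatrix function instead.

% Accumulate a 5-by-5 confusion matrix from class labels (a sketch).
% Classes (assumed order): Targets, Ghost (S), Ghost (D), Environment, Clutter.
trueLabels = [1 1 2 3 4 5];   % hypothetical true classes
predLabels = [1 3 2 1 4 2];   % hypothetical predicted classes
C = accumarray([trueLabels(:) predLabels(:)], 1, [5 5]);
rowPercent = 100*C./sum(C,2); % row-normalized percentages, as in the table below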

More than 90% of the target detections are classified correctly. However, a small percentage of the target detections are misclassified as ghosts from dynamic reflections. Also, approximately 3% of ghosts from static object reflections and 20% of ghosts from dynamic object reflections are misclassified as targets and sent to the tracker for processing. In this example, this commonly occurs when detections from two-bounce reflections lie inside the estimated extent of the vehicle. Further, the classification algorithm used in this example is not designed to find false alarms or clutter in the scene, so the fifth column of the confusion matrix is zero. Due to the spatial distribution of the false alarms inside the field of view, the majority of false alarm detections are classified as reflections from either static or dynamic objects.

% Accumulate confusion matrix over all steps
confusionMatrix = sum(confMat,3);
numElements = sum(confusionMatrix,2);
numElemsTable = array2table(numElements,'VariableNames',{'Number of Detections'},'RowNames',{'Targets','Ghost (S)','Ghost (D)','Environment','Clutter'});
disp('True Information');disp(numElemsTable);
True Information
                   Number of Detections
                   ____________________

    Targets                1969        
    Ghost (S)              3155        
    Ghost (D)               849        
    Environment           27083        
    Clutter                 138        
% Calculate percentages
percentMatrix = confusionMatrix./numElements*100;

percentMatrixTable = array2table(round(percentMatrix,2),'RowNames',{'Targets','Ghost (S)','Ghost (D)','Environment','Clutter'},...
    "VariableNames",{'Targets','Ghost (S)','Ghost (D)', 'Environment','Clutter'});

disp('True vs Predicted Confusion Matrix (%)');disp(percentMatrixTable);
True vs Predicted Confusion Matrix (%)
                   Targets    Ghost (S)    Ghost (D)    Environment    Clutter
                   _______    _________    _________    ___________    _______

    Targets          90.6        0.56         8.43          0.41          0   
    Ghost (S)        3.14       84.03        12.49          0.35          0   
    Ghost (D)       17.79        0.24        81.98             0          0   
    Environment      1.06        2.72         4.04         92.17          0   
    Clutter         21.74       62.32        15.22          0.72          0   

Summary

In this example, you simulated radar detections due to multipath propagation in an urban highway driving scenario. You configured a data processing algorithm to simultaneously filter ghost detections and track vehicles on the highway. You also analyzed the performance of the tracking algorithm and the classification algorithm using the GOSPA metric and confusion matrix.

References

[1] Prophet, Robert, et al. "Instantaneous Ghost Detection Identification in Automotive Scenarios." 2019 IEEE Radar Conference (RadarConf). IEEE, 2019.

[2] Kraus, Florian, et al. "Using machine learning to detect ghost images in automotive radar." 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2020.
