Extract Vehicle Track List from Recorded Lidar Data for Scenario Generation

This example shows how to extract the vehicle track-list information required for generating a scenario from raw lidar data.

Track lists are crucial for interpreting the positions and trajectories of the vehicles in a scenario. You can use track lists to extract dynamic actor properties.

In this example, you:

  • Detect the vehicles from recorded lidar data using a pretrained pointPillarsObjectDetector (Lidar Toolbox) object.

  • Generate a vehicle track list from the detected vehicles using a joint probabilistic data association (JPDA) tracker.

  • Format the vehicle track list for use in the scenario generation workflow.

You can use the extracted vehicle track list to generate a scenario from recorded sensor data. For more information about creating scenarios from vehicle track lists, see the Generate Scenario from Actor Track List and GPS Data example. For more information on generating scenarios for RoadRunner Scenario, see the Generate RoadRunner Scenario from Recorded Sensor Data example.

Load Lidar Data

Download a ZIP file containing a subset of sensor data from the PandaSet data set and then unzip the file. This data set contains a sequence of 400 lidar point clouds and lidar sensor information. The size of the downloaded data set is 394 MB.

dataFolder = tempdir;
lidarDataFileName = "PandasetLidarData.zip";
url = "https://ssd.mathworks.com/supportfiles/driving/data/" + lidarDataFileName;
filePath = fullfile(dataFolder,lidarDataFileName);
if ~isfile(filePath)
    websave(filePath,url)
end
unzip(filePath,dataFolder)
dataset = fullfile(dataFolder,"PandasetLidarData");
data = load(fullfile(dataset,"LidarSensorData.mat"));
lidarData = data.LidarData;

lidarData is a table with two columns:

  • timeStamp — Time, in seconds, at which the point cloud data was captured.

  • fileName — Filenames of the point clouds in the data set.

The point clouds are located in the Lidar folder in the dataset directory. Create a table that contains the file paths of these point clouds with their relative timestamps by using the helperUpdateTable function.

lidarFolder = "Lidar";
lidarData = helperUpdateTable(lidarData,dataset,lidarFolder);

Display the first five entries of lidarData.

lidarData(1:5,:)

Read and display the first point cloud from lidarData.

ptCld = pcread(lidarData.filePath{1});
ax = pcshow(ptCld);
zoom(ax,4)

Download a ZIP file containing the camera images for the same subset of the PandaSet data set, and then unzip the file. In this example, you use the camera data for visual validation of the tracked actors. The size of the downloaded data set is 104 MB.

sensorDataFilename = "PandasetSensorData.zip";
url = "https://ssd.mathworks.com/supportfiles/driving/data/" + sensorDataFilename;
filePath = fullfile(dataFolder,sensorDataFilename);
if ~isfile(filePath)
    websave(filePath,url)
end
unzip(filePath,dataFolder)
dataset = fullfile(dataFolder,"PandasetSensorData");
data = load(fullfile(dataset,"sensorData.mat"));
monocamData = data.CameraData;

monocamData is a table with two columns:

  • timeStamp — Time, in seconds, at which the image data was captured.

  • fileName — Filenames of the images in the data set.

The images are located in the Camera folder in the dataset directory. Create a table that contains the file paths of these images with their relative timestamps by using the helperUpdateTable function.

imageFolder = "Camera";
monocamData = helperUpdateTable(monocamData,dataset,imageFolder);

Display the first five entries of monocamData.

monocamData(1:5,:)

Read and display the first image from monocamData.

img = imread(monocamData.filePath{1});
imshow(img)

Detect Vehicles from Lidar Data

In this example, you use a pretrained pointPillarsObjectDetector (Lidar Toolbox) deep neural network model to detect vehicles from lidar data. Download a ZIP file containing the pretrained model, and then unzip it. The size of the downloaded model is 9 MB.

modelFilename = "LidarVehicleDetectorPointPillars.zip";
modelUrl = "https://ssd.mathworks.com/supportfiles/driving/data/" + modelFilename;
filePath = fullfile(dataFolder,modelFilename);
if ~isfile(filePath)
    websave(filePath,modelUrl)
end
unzip(filePath,dataFolder)
modelFolder = fullfile(dataFolder,"LidarVehicleDetectorPointPillars");
model = load(fullfile(modelFolder,"lidarVehicleDetectorPointPillars.mat"));
lidarVehicleDetector = model.detector;

Specify these parameters for the lidar vehicle detection model.

  • Threshold — Confidence score threshold. The model ignores detections with a confidence score less than this value. If you observe false detections, try increasing this value.

  • xLimits — Range of coordinates along the X-axis.

  • yLimits — Range of coordinates along the Y-axis.

params.threshold = 0.4;
params.xLimits = [-12 12];
params.yLimits = [0 60];

Detect vehicles for each point cloud in lidarData by using the helperDetectVehicles function. Note that, depending on your hardware configuration, this function can take a significant amount of time to run.

vehicleDetections = helperDetectVehicles(lidarVehicleDetector,lidarData,params);
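For reference, a minimal per-frame sketch of what helperDetectVehicles is assumed to do is shown below. It uses the documented detect method of the pointPillarsObjectDetector object; how the helper applies the region limits in params is an assumption, and the helper may perform additional preprocessing.

```matlab
% Hedged sketch of the per-frame detection step, assuming
% helperDetectVehicles wraps the detect method of the
% pointPillarsObjectDetector object. detect returns cuboid bounding
% boxes, confidence scores, and class labels for one point cloud.
ptCloud = pcread(lidarData.filePath{1});
[bboxes,scores,labels] = detect(lidarVehicleDetector,ptCloud, ...
    Threshold=params.threshold);
```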

vehicleDetections is an M-by-3 table containing the detected vehicle bounding boxes for each point cloud, their detection scores, and their labels. M is the total number of point clouds.

Read a point cloud, and extract the detected vehicle bounding boxes.

fileIdx = 1;
ptCloud = pcread(lidarData.filePath{fileIdx});
bboxes = vehicleDetections.Bboxes{fileIdx};

Overlay the vehicle detections on the point cloud, and display the overlaid point cloud.

posViewCrop = find(ptCloud.Location(:,:,1) < params.xLimits(2) ...
                    & ptCloud.Location(:,:,1) > params.xLimits(1) ...
                    & ptCloud.Location(:,:,2) < params.yLimits(2) ...
                    & ptCloud.Location(:,:,2) > params.yLimits(1));
ptCloud = select(ptCloud,posViewCrop,OutputSize="full");

ax = pcshow(ptCloud.Location);
showShape(cuboid=bboxes,Parent=ax,Opacity=0.1,Color="green",LineWidth=1)

Generate Tracks from Detections

You must assign detections to distinct tracks to determine the entry and exit times of vehicles. If your detections contain noise, you can smooth the noisy detections using a JPDA tracker. To track vehicle detections in this example, you initialize the JPDA tracker using an interacting multiple model (IMM) filter with constant velocity and constant turn motion models.

Define a JPDA tracker using a trackerJPDA (Sensor Fusion and Tracking Toolbox) System object™ with these properties:

  • ConfirmationThreshold — Specified as [4 7], to confirm a track if the vehicle is detected in at least 4 of the past 7 updates.

  • DeletionThreshold — Specified as [3 3], to delete a track if it is missing from the 3 most recent consecutive updates.

  • AssignmentThreshold — Specified as [50 200], to set the normalized maximum distance for associating a detection with a track.

vehicleTracker = trackerJPDA(FilterInitializationFcn=@helperMultiClassInitIMMFilter,...
    TrackLogic="History", ...
    AssignmentThreshold=[50 200], ...
    ClutterDensity=1e-12, ...
    ConfirmationThreshold=[4 7], ...
    DeletionThreshold=[3 3], ...
    InitializationThreshold=0.1, ...
    HitMissThreshold=0.2);

Specify the measurement noise measNoise. The measurement noise describes the uncertainty in the measurements.

measNoise = blkdiag(0.25*eye(3),25,eye(3));
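The structure of this covariance matrix follows the measurement vector assembled later in helperAssembleDetections, which orders each measurement as [x; y; z; yaw; length; width; height]. The block-diagonal layout can be sketched as follows; the specific variance values match the code above, and their physical interpretation is an assumption.

```matlab
% The 7-by-7 block-diagonal measurement covariance matches the
% [x; y; z; yaw; length; width; height] measurement vector:
% a variance of 0.25 on each position component, 25 on yaw, and
% 1 on each cuboid dimension.
measNoise = blkdiag(0.25*eye(3),25,eye(3));
disp(size(measNoise))  % 7 7
```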

Create an object to display the tracks overlaid on the point clouds, and the respective camera frames, by using the helperLidarVisualizer class.

displayObj = helperLidarVisualizer;
initializeDisplay(displayObj)

Process the vehicle detections for each frame, and assign tracks if the detection meets the assignment criteria.

% Get the list of all bounding boxes for each frame from the detector output. 
Bboxes = vehicleDetections.Bboxes;

% Create a placeholder to save the track list data.
tracklist = struct;

for i = 1:size(lidarData,1)
    % Read the current frame.
    ptCloud = pcread(lidarData.filePath{i});
    
    % Get the bounding boxes for current frame.
    bboxes = Bboxes{i};
    
    % Get the timestamp.
    curTimeStamp = lidarData.timeStamp(i);
    tracklist(i).Time = curTimeStamp;
    
    % Convert the bounding boxes to object detections.
    measurements = helperAssembleDetections(bboxes,measNoise,curTimeStamp);

    % Return the updated tracks for the current timestamp.
    confirmedTracks = vehicleTracker(measurements,curTimeStamp);
    
    % Extract the bounding boxes and TrackID from the confirmed tracks.
    vehicleBboxes = helperExtractBboxes(confirmedTracks);
    
    % Create a track list from the state information of the confirmed tracks.
    trackerInfo = struct;
    if ~isempty(vehicleBboxes)
        for j = 1:numel(confirmedTracks)
            id = confirmedTracks(j).TrackID;
            vehiclePosition = vehicleBboxes(j,:);
            
            % Create a structure to hold the track information.
            trackerInfo(j,1).TrackID = id;

            % Set the class ID to 1, representing the 'vehicle' class.
            trackerInfo(j,1).ClassID = 1;
            
             % Get the vehicle position in the ego frame.
            trackerInfo(j,1).Position = double(vehiclePosition(:,1:3));
        end

        % Store the track information in the track list.
        tracklist(i).TrackerInfo = trackerInfo;
    else
        tracklist(i).TrackerInfo = [];
    end

    % Update the display with the current point cloud.
    ptCloudUnOrg = removeInvalidPoints(ptCloud);
    updatePointCloud(displayObj,ptCloudUnOrg)
    
    % Update the display with the current front-view image.
    frontViewImage = imread(monocamData.filePath{i});
    updateImage(displayObj,frontViewImage)

    % Update the display, if the tracks are not empty.
    if ~isempty(confirmedTracks)
        updateTracks(displayObj,confirmedTracks)
    end

    drawnow("limitrate")
end

The generated track list stores the position, track ID, and class ID of each vehicle detected in each point cloud frame in a structure array. Use the struct2table function to convert the structure array into the two-column table required by the actorprops function, which extracts actor properties from a vehicle track list for use in generating a driving scenario. For more information on how to generate a scenario from a track list, see the Generate Scenario from Actor Track List and GPS Data example. For information on how to generate a scenario for RoadRunner Scenario, see the Generate RoadRunner Scenario from Recorded Sensor Data example.

tracklist = struct2table(tracklist);

Display the first five entries of the table. Note that the first few entries of the TrackerInfo column are empty because the tracker requires several frames to confirm tracks.

tracklist(1:5,:)
ans=5×2 table
       Time       TrackerInfo 
    __________    ____________

    1.5576e+09    {0×0 double}
    1.5576e+09    {0×0 double}
    1.5576e+09    {0×0 double}
    1.5576e+09    {6×1 struct}
    1.5576e+09    {6×1 struct}
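As a hedged sketch of the downstream step, the formatted table can be passed to the actorprops function along with an ego trajectory. The egoTrajectory input here is an assumption, not computed in this example; it is typically obtained from GPS data as described in the referenced examples.

```matlab
% Hypothetical downstream usage (not run in this example): extract
% non-ego actor properties from the formatted track list. egoTrajectory
% is an assumed input, typically built from recorded GPS data.
% actorData = actorprops(tracklist,egoTrajectory);
```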

Helper Functions

helperExtractBboxes — Extract the vehicle bounding boxes from the tracks.

function bboxes = helperExtractBboxes(tracks)
positionSelector = [1 0 0 0 0 0 0 0 0 0 0;
                    0 0 1 0 0 0 0 0 0 0 0;
                    0 0 0 0 0 1 0 0 0 0 0];
yawSelector = [0 0 0 0 0 0 0 1 0 0 0];

dimensionSelector = [0 0 0 0 0 0 0 0 0 1 0;
                     0 0 0 0 0 0 0 0 1 0 0;
                     0 0 0 0 0 0 0 0 0 0 1];

% Retrieve bounding boxes from the tracks.
positions = getTrackPositions(tracks,positionSelector);
yaws = getTrackPositions(tracks,yawSelector);
dimensions = getTrackPositions(tracks,dimensionSelector);

bboxes = [positions dimensions zeros(size(positions,1),2) yaws];

end

helperAssembleDetections — Convert the detected vehicle bounding boxes to object detections, for use in the tracker.

function mydetections = helperAssembleDetections(bboxes,measNoise,timestamp)
% Assemble bounding boxes as a cell array of objectDetection objects. 

mydetections = cell(size(bboxes,1),1);
for i = 1:size(bboxes,1)
    
    % This example assumes that all vehicles are cars.
    classid = 1;
    lidarModel = [bboxes(i,1:3) bboxes(i,end) bboxes(i,4:6)];

    mydetections{i} = objectDetection(timestamp, ...
        lidarModel',MeasurementNoise=measNoise, ...
        ObjectAttributes=struct("ClassID",classid));
end
end
