Label ground truth data in lidar point clouds
The Lidar Labeler app enables you to label objects in a point cloud or a point cloud sequence. The app reads point cloud data from PLY, PCAP, LAS, LAZ, ROS, and PCD files. Using the app, you can:
Define cuboid region of interest (ROI) labels and scene labels. Use them to interactively label your ground truth data.
Define attributes for the labels and use them to provide further detail about the labels.
Use built-in algorithms for clustering, ground plane segmentation, automated labeling, and tracking.
Save label definitions, point cloud data, and ground truth data to a session file for future use.
Use the Projected View option to view the labels in top, front and side views simultaneously.
Use the Camera View option to create and reuse custom views of the point cloud data.
Use the Auto Align option to rotate the cuboid and best fit it to the selected cluster of points.
Use a custom display class, derived from the lidar.syncImageViewer.SyncImageViewer class, to sync the app to an external visualization or analysis tool.
Write, import, and use a custom automation algorithm for automated labeling.
Evaluate the performance of your label automation algorithms with a visual summary.
Export the labeled ground truth as a groundTruthLidar object. You can use this object for system verification or for training an object detector.
To learn more about this app, see Get Started with the Lidar Labeler.
MATLAB® Toolstrip: On the Apps tab, under Image Processing and Computer Vision, click the app icon.
MATLAB command prompt: Enter lidarLabeler.

lidarLabeler opens a new session of the app, enabling you to label ground truth data in point clouds.
lidarLabeler(ptCloudSeqFolder) opens the app and loads the point cloud sequence from the folder ptCloudSeqFolder. ptCloudSeqFolder is a string scalar or character vector specifying a folder that contains point cloud files. The point cloud files must have extensions supported by pcformats, and are loaded in the order returned by the dir function.
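For example, this sketch opens the app on the sample point cloud sequence shipped with Lidar Toolbox (the same sample folder used in the synchronization syntax on this page):

```matlab
% Folder of sample point cloud files shipped with Lidar Toolbox
ptCloudSeqFolder = fullfile(toolboxdir('lidar'),'lidardata','lcc', ...
    'HDL64','pointCloud');
lidarLabeler(ptCloudSeqFolder)
```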
lidarLabeler(lasSeqFolder) opens the app and loads the LAS sequence from the folder lasSeqFolder. lasSeqFolder is a string scalar or character vector specifying a folder that contains LAS files. The LAS files must have extensions supported by lasformats, and are loaded in the order returned by the dir function.
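For example, this sketch opens the app on a folder of LAS files (the folder path here is hypothetical):

```matlab
% Hypothetical folder containing a sequence of LAS files
lasSeqFolder = fullfile('data','lidar','lasSequence');
lidarLabeler(lasSeqFolder)
```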
lidarLabeler(___,'SyncImageViewerTargetHandle',syncImageViewer) opens the app and loads both of these components:
A point cloud signal, specified using any of the input argument combinations from previous syntaxes.
An external video or image sequence display tool that is time-synchronized with the specified point cloud signal.
The syncImageViewer input is a handle to a class that inherits from the lidar.syncImageViewer.SyncImageViewer class and implements the external display tool.
For example, this code opens the app with a point cloud signal and a synchronized video visualization tool.

sourceName = fullfile(toolboxdir('lidar'),'lidardata','lcc', ...
    'HDL64','pointCloud');
lidarLabeler(sourceName,'SyncImageViewerTargetHandle',@SyncImageDisplay)
lidarLabeler(sessionFile) opens the app and loads a saved app session. sessionFile is a string scalar or character vector that contains the path and file name of a MAT-file. The MAT-file that sessionFile points to contains the saved session.
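For example, this sketch reopens a previously saved session (the file name is hypothetical):

```matlab
% Hypothetical MAT-file saved from an earlier labeling session
sessionFile = fullfile('sessions','lidarLabelingSession.mat');
lidarLabeler(sessionFile)
```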
The labels do not support sublabels.
The Label Summary window does not support sublabels.
On the left side of the app, the ROI Labels pane contains the ROI
label definitions that you can mark on the point cloud frames. You can create label
definitions directly from this pane. Alternatively, you can create label definitions
programmatically by using a
labelDefinitionCreatorLidar object and then import these label definitions into
an app session.
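For example, this sketch defines a cuboid label and an attribute programmatically. The exact addLabel and addAttribute signatures shown here are assumptions based on the analogous labelDefinitionCreator interface; check them against your toolbox version before use.

```matlab
% Sketch: build label definitions programmatically, then import the
% resulting table into a Lidar Labeler app session.
ldc = labelDefinitionCreatorLidar;
addLabel(ldc,'Car','Cuboid');      % cuboid ROI label named Car (assumed signature)
addAttribute(ldc,'Car','Color','List',{'Red','Blue','White'});  % assumed signature
labelDefs = create(ldc);           % table of label definitions
```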
The app supports the definition of ROI labels and attributes.
An ROI label is a label that corresponds to an ROI in a signal frame. The app supports cuboid ROI labels, which you draw around objects.
An ROI attribute specifies additional information about an ROI label. For example, in a driving scene, attributes might include the type or color of a vehicle. This table describes the supported attribute types.
| Attribute Type | Sample Attribute Definition | Sample Default Values |
Use the lidar.syncImageViewer.SyncImageViewer class to create a tool for viewing the image corresponding to the point cloud data.
Remove the ground plane to clearly view the created object labels.
Use the rotate, translate, expand, and shrink options to edit the cuboids after drawing them.
Use the Camera View option to save a view of the data from the current angle and direction.
To avoid having to relabel ground truth with new labels, organize the labeling scheme you want to use before you begin marking your ground truth.
You can copy and paste the labels between signals that are of the same type.
You can use label automation algorithms to speed up labeling within the app. To create your own label automation algorithm to use within the app, see Create Automation Algorithm for Labeling. You can also use one of the built-in algorithms by following these steps:
Import the data you want to label, and create at least one label definition.
On the app toolstrip, click Select Algorithm and select one of the built-in automation algorithms.
Click Automate, and then follow the automation instructions in the right pane of the automation window.
Track an object across the point cloud frames. To use this algorithm, you must draw a cuboid ROI on an object you want to track. You can also draw multiple cuboid ROIs to track more than one object. Running the algorithm produces tracking data for the labels, which you can accept or reject. You can also undo the run and perform it again.
A step-by-step procedure is displayed in the app when you select the Lidar Object Tracker algorithm.
Estimate cuboid ROIs between point cloud frames by interpolating the ROI locations across the time interval. To use this algorithm, you must draw a cuboid ROI on a minimum of two frames: one at the beginning of the interval and one at the end of the interval. The interpolation algorithm estimates and draws ROIs in the intermediate frames.
Consider a point cloud sequence with 11 frames. The first frame has a cuboid ROI centered at [5, 5, 0]. The 11th frame has a cuboid ROI centered at [25, 25, 0]. At each frame, the algorithm moves the ROI 2 points in the x-direction, 2 points in the y-direction, and 0 points in the z-direction. Therefore, the algorithm centers the ROI at [7, 7, 0] in the second frame, [9, 9, 0] in the third frame, and so on, up to [23, 23, 0] in the second-to-last frame.
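The interpolation amounts to linearly spacing the ROI centers between the two labeled frames. Note that a uniform step of 2 between centers [5, 5, 0] and [25, 25, 0] corresponds to 11 frames; a minimal sketch of the arithmetic:

```matlab
% Linearly interpolate cuboid centers between two labeled frames
startCenter = [5 5 0];    % center in the first labeled frame
endCenter   = [25 25 0];  % center in the last labeled frame
numFrames   = 11;         % total frames in the interval

centers = zeros(numFrames,3);
for k = 1:3
    centers(:,k) = linspace(startCenter(k),endCenter(k),numFrames);
end
disp(centers(2,:))   % second frame: 7 7 0
```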