Lidar (an acronym for light detection and ranging) is a remote sensing technology that uses pulsed laser light to measure ranges to objects in the surroundings. A lidar sensor emits laser pulses that reflect off objects, records the reflected light energy, and uses the round-trip time of each pulse to determine the distance to the reflecting object, building a 2D or 3D representation of the surroundings.
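The range computation behind every lidar return follows directly from the round-trip time of the pulse, and combining each range with the beam's pointing angles yields a 3D point. The following is a minimal, sensor-agnostic sketch; the function names and the example pulse time are illustrative, not taken from any real device API.

```python
# Time-of-flight range estimation and polar-to-Cartesian conversion:
# a minimal sketch with illustrative values, not a real sensor interface.
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_time_of_flight(round_trip_seconds):
    """Distance to the reflecting object; the pulse travels out and back."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def point_from_return(range_m, azimuth_deg, elevation_deg):
    """Convert one polar lidar return into a Cartesian (x, y, z) point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A pulse returning after roughly 66.7 nanoseconds corresponds to an
# object about 10 meters away.
r = range_from_time_of_flight(66.71e-9)
print(round(r, 2))  # 10.0
```

Repeating this conversion for every return in a sweep is what produces the point clouds discussed throughout this article.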
Lidar sensors are among the primary sensors for autonomous driving and robotics applications. They enable 3D perception workflows such as object detection and semantic segmentation, as well as navigation workflows such as mapping, simultaneous localization and mapping (SLAM), and path planning.
Autonomous systems use multiple sensors, such as cameras, IMUs, and radar, in their sensor suites for environmental perception. Lidars can overcome some of the drawbacks of these sensors by providing highly accurate, structured 3D information about the surroundings. This advantage has contributed to the introduction of lidar sensors into the mainstream perception market.
The market adoption of lidars is driven by three key factors:
- Low-cost lidars
The introduction of low-cost lidars with improved range, size, and robustness has increased the availability of the technology for comparatively low-revenue industrial applications.
- Accurate 3D data
Lidars gather high-density 3D information about the surroundings as point clouds, with higher accuracy than other range sensors such as radar and sonar. This, in turn, improves the accuracy of 3D reconstruction.
- Lidar processing algorithms
Recent developments in lidar processing workflows such as semantic segmentation, object detection and tracking, lidar-camera data fusion, and lidar SLAM have enabled engineering teams to add lidars to their development workflows. You can use tools such as MATLAB® to develop and apply lidar processing algorithms.
Based on the platform and environment in which they operate, lidars fall into three categories:
- Aerial lidars
- Ground lidars
- Indoor lidars
Aerial lidars are lidar sensors mounted on unmanned aerial vehicles (UAVs) or aircraft. They capture 3D point cloud data of large terrains for use in lidar mapping, feature extraction, terrain classification, and other use cases.
Examples of aerial lidar applications include:
- Agriculture: Lidar technology is used extensively in agriculture for mapping vegetation areas, identifying the exact terrain of a farm, and delineating water catchment areas.
- Urban planning: Lidars are used to create digital surface models (DSMs) or even digital city models (DCMs) of an area, which are used to design a city or to build new infrastructure in an existing city.
- Geological mapping: Lidars can be used to create 3D maps of the Earth’s surface, which can be further used in applications like mining, precision forestry, and oil and gas exploration.
- Aerial navigation and path planning: Lidars are now being used on UAVs to gather live 3D data, enabling them to navigate autonomously through their surroundings.
Ground lidars fall into two categories: stationary terrestrial lidars and mobile lidars.
- Stationary terrestrial lidars are lidars mounted on a stationary platform. They are commonly used for land surveys, road surveys, topological mapping, creating digital elevation models (DEMs), agriculture, and other applications. Stationary terrestrial lidars are suited for applications that require detailed, close-range data capture.
- Mobile lidars are ground lidars mounted on a mobile platform such as a car or a truck. The most prevalent mobile lidar application is autonomous driving. Lidars mounted on vehicles capture 3D point cloud data of the surroundings, which is then used in perception and navigation workflows.
Lidars are widely used in indoor robotics applications by mounting them on mobile robots. Apart from 3D lidars, 2D lidars (laser scanners) are also used in indoor robotics applications such as scanning and mapping. These sensors collect depth information about the surroundings, which is then processed according to the use case.
Common uses of indoor lidars include:
- Lidar mapping and SLAM: You can use 2D or 3D lidars to perform 2D or 3D mapping and SLAM, respectively.
- Obstacle detection, collision warning, and avoidance: 2D lidars are widely used to detect obstacles. This data can then be used to issue collision warnings or to avoid obstacles.
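The obstacle detection idea above can be reduced to a simple rule on a 2D scan: flag any return that lands inside a forward-facing sector and closer than a safety distance. The sketch below uses made-up scan data and hypothetical parameter names; production systems add clustering, filtering, and sensor-specific handling on top of this.

```python
# Obstacle detection on a 2D lidar scan: a minimal sketch with synthetic data.
# A 2D scan is modeled as a list of (angle_deg, range_m) pairs, with 0 degrees
# pointing straight ahead of the robot.
def detect_obstacles(scan, sector_deg=30.0, safety_m=0.5):
    """Return the (angle, range) pairs that should trigger a collision warning."""
    half = sector_deg / 2.0
    return [(a, r) for a, r in scan if abs(a) <= half and r < safety_m]

# Synthetic scan: mostly clear, one close return straight ahead.
scan = [(-90.0, 4.0), (-10.0, 2.5), (0.0, 0.3), (10.0, 1.8), (90.0, 5.0)]
print(detect_obstacles(scan))  # [(0.0, 0.3)]
```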
MATLAB and Lidar Toolbox™ simplify lidar processing tasks. With dedicated tools and functions, MATLAB helps you overcome common challenges in processing lidar data, such as handling 3D data types, data sparsity, invalid points, and high noise levels.
You can import live and recorded lidar data into MATLAB, implement lidar processing workflows, and generate C/C++ and CUDA code to deploy into production.
Some of the important capabilities MATLAB provides in processing lidar point clouds include:
Streaming, Reading, and Writing Lidar Data
The first step in processing any sensor data in MATLAB is to get the data into the MATLAB workspace. You can:
- Stream live data from Velodyne sensors using the Velodyne Lidar Hardware Support Package.
- Read stored point clouds in different file formats like PCD, PLY, PCAP, Ibeo data container, LAS, and LAZ.
- Synthesize lidar data in simulation environments for testing your processing algorithms. UAV Toolbox and Automated Driving Toolbox™ provide lidar sensor models to simulate lidar point clouds.
Lidar Data Processing
You can preprocess lidar data to improve the quality of data and extract basic information from it. Lidar Toolbox provides functionality for downsampling, median filtering, aligning, transforming, and extracting features from point clouds.
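Of the preprocessing steps listed above, downsampling is commonly done with a voxel grid: the cloud is partitioned into cubes of a fixed size and the points inside each cube are replaced by their centroid. The sketch below is a language-agnostic illustration of that idea in pure Python, not the Lidar Toolbox implementation.

```python
# Voxel-grid downsampling: a minimal sketch, not the Lidar Toolbox implementation.
# Points falling into the same voxel are replaced by their centroid, which
# thins dense regions while preserving the overall shape of the cloud.
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """points: iterable of (x, y, z) tuples; returns one centroid per occupied voxel."""
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels[key].append((x, y, z))
    centroids = []
    for pts in voxels.values():
        n = len(pts)
        centroids.append(tuple(sum(c) / n for c in zip(*pts)))
    return centroids

pts = [(0.1, 0.1, 0.0), (0.2, 0.15, 0.0), (5.0, 5.0, 5.0)]
print(voxel_downsample(pts, voxel_size=1.0))
# Two centroids: one for the pair near the origin, one for the far point.
```

The voxel size is the key tuning parameter: larger voxels reduce the cloud more aggressively at the cost of fine detail.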
Lidar Camera Calibration
MATLAB enables lidar camera calibration to estimate lidar-camera transforms for fusing camera and lidar data. You can further fuse color information in lidar point clouds and estimate 3D bounding boxes in lidar using 2D bounding boxes from a co-located camera.
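Once the lidar-camera transform is known, fusing the two modalities amounts to mapping each lidar point into the camera frame and projecting it through a pinhole camera model. The sketch below shows that projection step only; the rotation, translation, and intrinsic values are illustrative placeholders, since in a real workflow they come from calibration.

```python
# Projecting a lidar point into a camera image: a minimal pinhole-camera sketch.
# R, t, and the intrinsics (fx, fy, cx, cy) are illustrative values, not from
# any calibrated rig; a real workflow estimates them via lidar-camera calibration.
def project_to_image(point_lidar, R, t, fx, fy, cx, cy):
    """Apply the lidar-to-camera rigid transform, then the pinhole projection.
    R is a 3x3 rotation (list of rows), t a 3-vector; returns pixel (u, v),
    or None when the point lies behind the camera."""
    # Rigid transform into the camera frame.
    pc = [sum(R[i][j] * point_lidar[j] for j in range(3)) + t[i] for i in range(3)]
    if pc[2] <= 0:
        return None
    # Pinhole projection onto the image plane.
    u = fx * pc[0] / pc[2] + cx
    v = fy * pc[1] / pc[2] + cy
    return (u, v)

# With identity extrinsics, a point 10 m straight ahead lands on the
# principal point of the image.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(project_to_image((0.0, 0.0, 10.0), I, [0, 0, 0], 700, 700, 320, 240))
# (320.0, 240.0)
```

Coloring a point cloud is then a matter of sampling the image at each projected pixel, and transferring 2D bounding boxes to 3D works by the inverse reasoning along the same rays.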
Deep Learning for Lidar
With MATLAB, you can apply deep learning algorithms for object detection and semantic segmentation on lidar data.
- With just a few lines of code in MATLAB, you can import pretrained semantic segmentation models, such as PointSeg and SqueezeSegV2, to segment lidar data. You can also train, evaluate, and deploy your own deep learning models.
- MATLAB enables designing, training, and evaluating robust detectors such as PointPillars networks. You can detect and fit oriented bounding boxes around objects in lidar point clouds.
- The Lidar Labeler app in Lidar Toolbox simplifies point cloud labeling. You can manually add bounding boxes around the objects and apply built-in or custom algorithms to automate lidar point cloud labeling and evaluate automation algorithm performance.
Object Tracking on Point Clouds
MATLAB unifies the multiple domains that feed into an end-to-end object tracking workflow. This enables you to read lidar data, preprocess it, apply deep learning to detect objects, track those objects using a predefined tracker, and deploy the workflow on target hardware.
Point Cloud Registration and SLAM
MATLAB provides functions to register lidar point clouds and build 3D maps using SLAM algorithms. You can extract and match fast point feature histogram (FPFH) descriptors from lidar point clouds and then register point clouds based on the matched features.
You can also implement 3D SLAM algorithms by stitching together lidar point cloud sequences from ground and aerial lidar data.
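After features such as FPFH descriptors have been matched between two clouds, registration reduces to estimating the rigid transform that best aligns the matched pairs, which has a closed-form least-squares solution. The sketch below shows the 2D version of that estimation (pure Python, with synthetic correspondences); real pipelines solve the 3D case, typically via SVD, and wrap the solver in a robust estimator to reject bad matches.

```python
# Rigid registration from matched point pairs: a minimal 2D sketch.
# Given correspondences (e.g., from matched FPFH descriptors), the best-fit
# rotation angle and translation have a closed form; shown here in 2D.
import math

def estimate_rigid_2d(src, dst):
    """Least-squares rotation angle and translation mapping src onto dst."""
    n = len(src)
    cs = [sum(p[i] for p in src) / n for i in (0, 1)]  # source centroid
    cd = [sum(p[i] for p in dst) / n for i in (0, 1)]  # destination centroid
    # Accumulate cross- and dot-products of the centered point pairs.
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - cs[0], sy - cs[1]
        bx, by = dx - cd[0], dy - cd[1]
        num += ax * by - ay * bx
        den += ax * bx + ay * by
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cd[0] - (c * cs[0] - s * cs[1])
    ty = cd[1] - (s * cs[0] + c * cs[1])
    return theta, (tx, ty)

# Rotate a small cloud by 90 degrees, translate it by (2, 3), then recover
# that transform from the correspondences alone.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(2.0, 3.0), (2.0, 4.0), (1.0, 3.0)]
theta, t = estimate_rigid_2d(src, dst)
print(round(math.degrees(theta)), tuple(round(v, 6) for v in t))  # 90 (2.0, 3.0)
```

Chaining such pairwise registrations over a sequence of scans is what stitches point clouds into the 3D maps used by SLAM.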