What Is Lidar-Camera Calibration?
Lidar-camera calibration establishes correspondences between 3-D lidar points and 2-D camera data, enabling you to fuse the outputs of the two sensors.
Lidar sensors and cameras are widely used together for 3-D scene reconstruction in applications such as autonomous driving, robotics, and navigation. While a lidar sensor captures the 3-D structural information of an environment, a camera captures the color, texture, and appearance information. The lidar sensor and camera each capture data with respect to their own coordinate system.
Lidar-camera calibration consists of converting the data from a lidar sensor and a camera into the same coordinate system. This enables you to fuse the data from both sensors and accurately identify objects in a scene. This figure shows the fused data.
Lidar-camera calibration consists of intrinsic calibration and extrinsic calibration.
Intrinsic calibration — Estimate the internal parameters of the lidar sensor and camera.
Manufacturers calibrate the intrinsic parameters of their lidar sensors in advance.
You can use the estimateCameraParameters function to estimate the intrinsic parameters of the camera, such as focal length, lens distortion, and skew. For more information, see the Single Camera Calibration example. You can also interactively estimate camera parameters using the Camera Calibrator app.
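For example, this minimal sketch estimates the camera intrinsics from a folder of checkerboard images. The folder name and square size are placeholder values, not part of any shipped data set.

imds = imageDatastore("calibrationImages"); % hypothetical folder of checkerboard images
[imagePoints,boardSize] = detectCheckerboardPoints(imds.Files);
squareSize = 25; % checkerboard square size in millimeters (assumed)
worldPoints = generateCheckerboardPoints(boardSize,squareSize);
I = imread(imds.Files{1});
cameraParams = estimateCameraParameters(imagePoints,worldPoints, ...
    "ImageSize",[size(I,1) size(I,2)]);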
Extrinsic calibration — Estimate the external parameters of the lidar sensor and camera, such as location and orientation, to establish the relative rotation and translation between the sensors.
Extrinsic Calibration of Lidar and Camera
The extrinsic calibration of a lidar sensor and camera estimates a rigid transformation between them that establishes a geometric relationship between their coordinate systems. This process uses standard calibration objects, such as planar boards with checkerboard patterns.
This diagram shows the extrinsic calibration process for a lidar sensor and camera using a checkerboard.
The programmatic workflow for extrinsic calibration consists of these steps. Alternatively, you can use the Lidar Camera Calibrator app to interactively perform lidar-camera calibration.
Extract the 3-D information of the checkerboard from both the camera and lidar sensor.
To extract the 3-D checkerboard corners, in world coordinates, from the camera data, use the estimateCheckerboardCorners3d function. To extract the checkerboard plane from the lidar point cloud data, use the detectRectangularPlanePoints function.
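As a rough sketch, and assuming you have the calibration images, the matching lidar point clouds, and the cameraParams object from the intrinsic calibration step, the extraction might look like this. The file names and square size are hypothetical placeholders.

squareSize = 81; % checkerboard square size in millimeters (assumed)
imageFiles = {'frame01.png','frame02.png'}; % hypothetical image files
[imageCorners3d,boardDimension,dataUsed] = estimateCheckerboardCorners3d( ...
    imageFiles,cameraParams,squareSize);

ptCloudFiles = {'frame01.pcd','frame02.pcd'}; % hypothetical point cloud files
lidarCheckerboardPlanes = detectRectangularPlanePoints( ...
    ptCloudFiles,boardDimension,"RemoveGround",true);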
Use the checkerboard corners and planes to obtain the rigid transformation matrix, which consists of the rotation R and translation t. You can estimate the rigid transformation matrix by using the estimateLidarCameraTransform function. The function returns the transformation as a rigidtform3d object.
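Continuing the sketch, you can estimate and inspect the transformation like this. The second output, which contains the calibration errors, is optional.

[tform,errors] = estimateLidarCameraTransform( ...
    lidarCheckerboardPlanes,imageCorners3d);

% tform is a rigidtform3d object: tform.R is the rotation R and
% tform.Translation is the translation t, so a lidar point xL maps into
% camera coordinates as xC = (tform.R*xL' + tform.Translation')'.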
You can use the transformation matrix to:
Evaluate the accuracy of your calibration by calculating the error. You can do so either programmatically, using the estimateLidarCameraTransform function, or interactively, using the Lidar Camera Calibrator app.
Project lidar points onto an image by using the projectLidarPointsOnImage function, as shown in this figure and in the sketch after this list.
Fuse the lidar and camera outputs by using the fuseCameraToLidar function.
Estimate the 3-D bounding boxes in a point cloud based on the 2-D bounding boxes in the corresponding image by using the bboxCameraToLidar function. For more information, see Detect Vehicles in Lidar Using Image Labels.
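As a minimal sketch, and assuming an image I, a point cloud ptCloud, the camera parameters, 2-D detections bboxesCamera in [x y w h] form, and the lidar-to-camera transform tform from the previous step, these uses might look like the following. The inversion reflects the assumption that fuseCameraToLidar and bboxCameraToLidar take a camera-to-lidar transform; check the direction against the function documentation for your release.

intrinsics = cameraParams.Intrinsics;

% Project the lidar points onto the image plane.
imPts = projectLidarPointsOnImage(ptCloud,intrinsics,tform);

% Colorize the point cloud with the camera data, inverting the
% lidar-to-camera transform to get the camera-to-lidar transform.
camToLidar = invert(tform);
ptCloudColored = fuseCameraToLidar(I,ptCloud,intrinsics,camToLidar);

% Lift the 2-D image bounding boxes into 3-D cuboids in the point cloud.
bboxesLidar = bboxCameraToLidar(bboxesCamera,ptCloud,intrinsics,camToLidar);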
See Also
estimateLidarCameraTransform | estimateCheckerboardCorners3d | detectRectangularPlanePoints | projectLidarPointsOnImage | fuseCameraToLidar | bboxCameraToLidar