What Is Multi-Camera Calibration?

Multi-camera calibration estimates the intrinsic and extrinsic parameters of a multi-camera system. It ensures that all the cameras capturing the same scene or object agree on the position and orientation of what they are capturing. This agreement is important for applications like 3-D motion capture, 3-D reconstruction, multi-sensor calibration, and other photogrammetry applications.

This series of frames, from a study done in partnership with a rehabilitation center that cares for cheetahs, illustrates a calibrated multi-camera system designed to reconstruct the 3-D motion of a cheetah. Researchers use the system to analyze how the cheetah stabilizes itself while inside a moving truck. Each frame captures different angles and aspects of the movements of the cheetah, enabling a comprehensive analysis of its balance mechanisms in a dynamic environment.

To ensure the accuracy of calibration, you must model the cameras mathematically. This involves understanding how each camera captures images and how the cameras relate to each other in a common space. By modeling the cameras, you can correct distortions and align their views accurately. Calibration involves modeling the intrinsic and extrinsic parameters of the cameras.

  • Intrinsic Parameters — Specific to each camera, these include factors such as the focal length, lens distortion, and the optical center. Intrinsic calibration ensures that you accurately estimate these parameters for each camera. To learn more about intrinsic calibration, see What Is Camera Calibration?

  • Extrinsic Parameters — Describe the position and orientation of each camera in relation to a common coordinate system. Extrinsic calibration ensures that you accurately estimate the spatial relationship between the cameras, enabling you to accurately determine the position and orientation of objects in the scene.

To calculate these parameters, you can capture images of one or more known patterns (such as a ChArUco board) from different angles. The known patterns provide reference points that the calibration algorithm uses to determine the necessary adjustments for each camera. By analyzing how the patterns appear in the images, the algorithm can calculate both the intrinsic and extrinsic parameters. Once calibrated, the cameras can produce a unified, accurate representation of the scene.

Cameras with Overlapping and Non-Overlapping Fields of View

Computer Vision Toolbox™ contains multi-camera calibration algorithms for cameras that have overlapping fields of view, non-overlapping fields of view, and a mix of both. Each scenario requires a different approach to ensure accurate calibration.

When multiple cameras have overlapping fields of view, they can see the same parts of the scene. This overlap enables easier calibration because the cameras can use common reference points in the overlapping area. This calibration process typically involves capturing images of a known pattern from different angles, identifying reference points in the overlapping images, and then estimating the intrinsic and extrinsic parameters to align the cameras accurately.

For cameras without overlapping fields of view, calibration is more challenging because the cameras do not share common pattern points. In this situation, the calibration algorithm relies on the relative motion between the cameras and the patterns. For these scenarios, the calibration algorithm uses robot hand-eye calibration [1] to estimate the relative poses of the non-overlapping cameras. The algorithm analyzes the relative motion between the cameras and the observed calibration patterns along the X, Y, and Z axes to determine how the cameras move with respect to each other and the patterns. This motion information enables the system to estimate the extrinsic parameters even though the cameras do not observe the same part of the scene.

Perform Multi-Camera Calibration

To accurately calibrate multiple cameras, whether they have overlapping fields of view, non-overlapping fields of view, or a mix of both, you must capture calibration data for both intrinsic and extrinsic camera calibration. Then, you can load the images, detect calibration patterns, and estimate the camera parameters. Finally, you can evaluate and refine these results.

  1. Specify Calibration Patterns — Use only ChArUco boards or AprilGrid patterns. These patterns provide a higher density of detectable points and unique markers than the checkerboard and circle grid patterns, which improves the precision of the calibration process. ChArUco boards combine checkerboard patterns with ArUco markers, providing both precise corner detection and robust marker identification. AprilGrids also provide a dense array of unique markers that enhance calibration accuracy. Checkerboard and circle grid patterns, while useful for single-camera calibration, lack the unique markers necessary for reliable multi-camera calibration. You can use the generateCharucoBoard function to generate multiple ChArUco boards. For more details, see Calibration Patterns.

    Ideally, use twice the number of patterns as there are cameras in your setup to ensure precise calibration results. Ensure that all marker IDs are unique across all the patterns used in calibration, to guarantee that each marker is distinct and easily identifiable during the calibration process.
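    The pattern-generation step can be sketched as follows. This is a minimal sketch: it assumes the generateCharucoBoard function accepts the pattern dimensions, checker size, marker family, and marker IDs shown here, but the exact argument names and order may differ, so consult the generateCharucoBoard reference page before using it.

    ```matlab
    % Sketch: generate two ChArUco boards whose marker IDs do not overlap,
    % so that every marker in the calibration setup is unique.
    % The argument names below are assumptions, not the confirmed syntax.
    patternDims = [6 8];      % checkerboard squares (rows x columns)
    checkerSize = 100;        % checker side length, in pixels or mm
    markerSize  = 70;         % ArUco marker side length

    board1 = generateCharucoBoard(patternDims,checkerSize,markerSize, ...
        MarkerFamily="DICT_4X4_250",Ids=1:24);
    board2 = generateCharucoBoard(patternDims,checkerSize,markerSize, ...
        MarkerFamily="DICT_4X4_250",Ids=25:48);   % IDs disjoint from board1

    imwrite(board1,"charucoBoard1.png")
    imwrite(board2,"charucoBoard2.png")
    ```

    Keeping the ID ranges disjoint across boards is what allows the detector to tell the boards apart when several appear in the same view.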

  2. Capture Calibration Images — To effectively perform multi-camera calibration, you must capture calibration images for intrinsic and extrinsic calibration separately, each with its own specific data and procedures. For each camera in a multi-camera system, you can use the procedures described in these topics to capture calibration images:

  3. Perform Multi-Camera Calibration — Images collected for intrinsic and extrinsic camera calibration serve different purposes, and you cannot use them interchangeably. Intrinsic calibration focuses on individual camera properties, and extrinsic calibration focuses on the spatial relationship between cameras.

    • Intrinsic Calibration — Use the intrinsic calibration data to estimate the internal parameters of each camera, such as focal length and lens distortion. For each camera in the multi-camera system, load its intrinsic calibration images into the Camera Calibrator app to estimate its intrinsic parameters.

    • Extrinsic Calibration — Use the extrinsic calibration data to determine the relative positions and orientations of multiple cameras with respect to each other. This process involves establishing correspondences between the cameras by capturing images in which the calibration pattern is visible in multiple camera views simultaneously. A view can also be interpreted as the complete set of images captured by the multi-camera system at a single timestamp.

      Organize extrinsic calibration data as synchronized images in a specific folder hierarchy and use detection methods for single or multiple patterns.

      1. Organize the calibration data — Populate a folder structure with images from all cameras, grouping together the images taken at the same timestamp. For example, given N cameras and M images per camera, create folders Cam1, Cam2, through CamN, each of which contains Image1, Image2, through ImageM.

        Each folder corresponds to a camera, and contains images captured by that specific camera. Each image with a specific index corresponds to that same timestamp across all cameras. This structure ensures that the calibration process can accurately match images taken simultaneously from different cameras.
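        A minimal sketch of loading data organized this way, using the built-in imageDatastore function, might look like the following. The root folder name and camera count are assumptions for illustration.

        ```matlab
        % Sketch: load synchronized extrinsic calibration images from a
        % Cam1...CamN folder hierarchy, where the i-th image in every
        % folder was captured at the same timestamp.
        dataDir = "extrinsicData";   % assumed root folder name
        numCameras = 4;              % assumed number of cameras

        imds = cell(1,numCameras);
        for k = 1:numCameras
            imds{k} = imageDatastore(fullfile(dataDir,"Cam"+k));
        end

        % Every camera must contribute the same number of images, or the
        % timestamp-based pairing across cameras breaks down.
        numImages = numel(imds{1}.Files);
        assert(all(cellfun(@(d) numel(d.Files)==numImages, imds)), ...
            "All cameras must have the same number of synchronized images.")
        ```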

      2. Detect Pattern Points in the Images — If the calibration data contains a single pattern, use the detectPatternPoints function to detect the pattern points in the images. If the calibration data contains multiple patterns, use the detectMultiPatternPoints function instead. If using a video sequence, do not use every frame, as this can significantly increase the detection time.
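        The detection step can be sketched as shown below. The input and output arguments are assumptions: the exact syntax of detectPatternPoints (and of detectMultiPatternPoints for multi-pattern data) may differ, so check the corresponding reference pages.

        ```matlab
        % Sketch: detect ChArUco pattern points across the camera folders.
        % Argument names and outputs below are assumptions, not the
        % confirmed detectPatternPoints syntax.
        imageFolders = ["Cam1" "Cam2" "Cam3" "Cam4"];  % assumed folder names
        patternDims  = [6 8];    % checkerboard squares on the board
        checkerSize  = 100;      % checker side length, in mm (assumed units)

        [imagePoints,patternPoints] = detectPatternPoints( ...
            imageFolders,patternDims,checkerSize);
        ```

        If your data comes from a video sequence, subsample the frames before detection, because a unique marker only needs to be seen from enough distinct viewpoints, not in every frame.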

      3. Perform Extrinsic Calibration — Estimate the extrinsic parameters of the cameras by using the estimateMultiCameraParameters function, which determines their relative positions and orientations within a shared coordinate system.

        In multi-camera calibration, you must choose a reference camera as the baseline for the calibration process. The calibration process estimates the relative positions and orientations of all other cameras with respect to the reference camera, establishing a common coordinate system.
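        Putting the estimation step together might look like this sketch. The argument list, including the name-value argument selecting the reference camera, is an assumption; see the estimateMultiCameraParameters reference page for the exact syntax.

        ```matlab
        % Sketch: estimate extrinsics of all cameras relative to a chosen
        % reference camera. The intrinsics variable is assumed to hold the
        % per-camera intrinsic parameters obtained from the Camera
        % Calibrator app; argument names are assumptions.
        params = estimateMultiCameraParameters( ...
            imagePoints,patternPoints,intrinsics, ...
            ReferenceCamera=1);   % camera 1 defines the common coordinate system
        ```

        Choosing a stable, well-observed camera as the reference tends to give the most reliable common coordinate system, because every other pose is expressed relative to it.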

  4. Evaluate Calibration — Visualize and assess the accuracy of the calibration by using the showReprojectionErrors function to display the reprojection errors. Evaluate the spatial relationships and orientations of the cameras by using the showExtrinsics function. Compute the root mean square error (RMSE) of the camera poses by using the compareTrajectories function, comparing the estimated poses against the ground truth to ensure precision.
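The evaluation step can be sketched as follows, assuming params holds the output of the extrinsic calibration. Whether these visualization functions accept that object directly is an assumption; consult the showReprojectionErrors and showExtrinsics reference pages for the supported inputs.

```matlab
% Sketch: visualize calibration quality. Passing the multi-camera
% calibration result directly to these functions is an assumption.
showReprojectionErrors(params)   % per-camera reprojection error summary

figure
showExtrinsics(params)           % camera poses in the common coordinate system
```

Large reprojection errors concentrated in one camera usually point to a problem with that camera's intrinsic calibration or image synchronization rather than with the extrinsic solve as a whole.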

Tips

  • Ensure Proper Camera Setup — Set the focus of each camera for your application, and synchronize all cameras correctly.

  • Capture Images from Different Angles — Capture images of calibration patterns from different angles to improve calibration accuracy.

  • Maintain Consistent Lighting — Avoid shadows and reflections that can affect calibration.

References

[1] Tsai, R.Y., and R.K. Lenz. "A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration." IEEE Transactions on Robotics and Automation 5, no. 3 (June 1989): 345–58. https://doi.org/10.1109/70.34770.
