estimateCameraIMUTransform
Estimate transformation from camera to IMU sensor using calibration data
Since R2024a
Syntax
tform = estimateCameraIMUTransform(imagePoints,patternPoints,imuMeasurements,cameraIntrinsics,imuParams)
[___] = estimateCameraIMUTransform(___,options)
Description
tform = estimateCameraIMUTransform(imagePoints,patternPoints,imuMeasurements,cameraIntrinsics,imuParams) estimates the fixed SE(3) transformation from the camera to the IMU sensor frame using the distorted image point tracks of a calibration target board captured by the camera, the pattern points of the calibration target board in the world frame, the intrinsics of the camera, the IMU measurements corresponding to the calibration images, and the IMU noise model parameters. The estimateCameraIMUTransform function assumes that the camera and the IMU are rigidly attached to each other. For an example, see Estimate Camera-to-IMU Transformation Using Extrinsic Calibration.
By default, this function plots the progress of pose estimation and undistortion. To disable visualization, set the ShowProgress property of options to "none".
[___] = estimateCameraIMUTransform(___,options) additionally specifies calibration options.
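The sketch below shows how a call to this function might look. The intrinsics and IMU parameter objects (cameraIntrinsics, factorIMUParameters), their numeric values, and the commented-out estimateCameraIMUTransformOptions line are illustrative assumptions rather than a verified end-to-end example; imagePoints, patternPoints, and imuMeasurements are placeholders for data collected from your own calibration sequence.

% Minimal calling sketch; the objects and values below are illustrative
% assumptions, and the calibration data are placeholders to be replaced
% with your own recordings.
intrinsics = cameraIntrinsics([1109 1109],[640 360],[720 1280]); % focal length, principal point, image size (assumed values)
imuParams = factorIMUParameters(SampleRate=200, ... % assumed IMU noise model container and values
    GyroscopeNoise=1e-4,AccelerometerNoise=1e-3);

% imagePoints      - distorted image point tracks of the target board (placeholder)
% patternPoints    - pattern points of the target board in the world frame (placeholder)
% imuMeasurements  - IMU readings corresponding to the calibration images (placeholder)

% Estimate the fixed SE(3) camera-to-IMU transformation.
tform = estimateCameraIMUTransform(imagePoints,patternPoints, ...
    imuMeasurements,intrinsics,imuParams);

% To disable the progress visualization, pass an options object with
% ShowProgress set to "none" (options type name assumed here):
% opts = estimateCameraIMUTransformOptions(ShowProgress="none");
% tform = estimateCameraIMUTransform(imagePoints,patternPoints, ...
%     imuMeasurements,intrinsics,imuParams,opts);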
Input Arguments
Output Arguments
References
[1] Qin, Tong, and Shaojie Shen. “Online Temporal Calibration for Monocular Visual-Inertial Systems.” In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 3662–69. Madrid, Spain, 2018. https://doi.org/10.1109/IROS.2018.8593603.
[2] Furgale, Paul, Joern Rehder, and Roland Siegwart. “Unified Temporal and Spatial Calibration for Multi-Sensor Systems.” In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1280–86. Tokyo, Japan, 2013. https://doi.org/10.1109/IROS.2013.6696514.
[3] Qin, Tong, Peiliang Li, and Shaojie Shen. “VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator.” IEEE Transactions on Robotics 34, no. 4 (August 2018): 1004–20. https://doi.org/10.1109/TRO.2018.2853729.
Version History
Introduced in R2024a