estimateGravityRotationAndPoseScale
Estimate gravity rotation and pose scale using IMU measurements and factor graph optimization
Since R2023a
Description
The estimateGravityRotationAndPoseScale function estimates the gravity rotation and pose scale that help transform input poses to the local navigation reference frame of the IMU, using IMU measurements and factor graph optimization. The gravity rotation transforms the gravity vector from the local navigation reference frame of the IMU to the pose reference frame. The pose scale brings the input poses to metric scale, consistent with the IMU measurements.
The accelerometer measurements contain a constant gravity acceleration that does not contribute to motion. You must remove this gravity component from the measurements for accurate fusion with other sensor data. The input pose reference frame may not match the local navigation reference frame of the IMU, North-East-Down (NED) or East-North-Up (ENU), in which the gravity direction is known. So, it is necessary to transform the input poses to the local navigation frame to remove the known gravity effect. The estimated rotation helps transform the input pose reference frame to the local navigation reference frame of the IMU.
Monocular camera-based structure from motion (SfM) estimates poses at an unknown scale, different from the metric measurements obtained by an IMU. The accelerometer readings help estimate the scale factor that brings the input poses to metric scale, consistent with the IMU measurements.
[gRot,scale,info] = estimateGravityRotationAndPoseScale(poses,gyroscopeReadings,accelerometerReadings,Name=Value)
estimates the rotation required to transform the gravity vector from the local navigation reference frame of the IMU (NED or ENU) to the input pose reference frame. The function also estimates a scale factor, so that input poses at an unknown scale can be converted to metric units consistent with the IMU measurements.
Note
Input poses must be in the initial IMU reference frame unless you specify the SensorTransform name-value argument, in which case the poses can be in a different frame.
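The following sketch shows a typical call, assuming camera poses from monocular SfM and raw IMU readings collected between consecutive camera frames. The variable names, array shapes, and the final scaling step are illustrative assumptions, not taken from a specific shipped example.

```matlab
% poses: N-by-7 matrix [x y z qw qx qy qz] of camera poses at unknown scale
% (assumed layout for this sketch).
% gyroscopeReadings, accelerometerReadings: (N-1)-element cell arrays, where
% element k holds the M-by-3 IMU readings collected between poses k and k+1.

[gRot,scale,info] = estimateGravityRotationAndPoseScale(poses, ...
    gyroscopeReadings,accelerometerReadings);

% Apply the estimated scale factor to bring the pose translations to
% metric units, consistent with the IMU measurements.
metricPositions = scale*poses(:,1:3);

% Use gRot to rotate the known gravity vector from the local navigation
% reference frame of the IMU into the pose reference frame before
% removing it from the accelerometer measurements.
```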
Examples
Input Arguments
Output Arguments
References
[1] Campos, Carlos, Richard Elvira, Juan J. Gomez Rodriguez, Jose M. M. Montiel, and Juan D. Tardos. “ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM.” IEEE Transactions on Robotics 37, no. 6 (December 2021): 1874–90. https://doi.org/10.1109/TRO.2021.3075644.
Extended Capabilities
Version History
Introduced in R2023a