Does the T obtained from PoseCamera2 in stereo calibration translate Camera 2 in the coordinate system where Camera 1's optical center is the origin, or in the one where Camera 2's is?

I want to determine the poses and positions of two cameras through stereo calibration. I prepared Camera 1 and Camera 2, ran the Stereo Camera Calibrator app, and exported the result with Camera 1's optical center as the origin; this result is shown in the first image and in values ① and ②. Using the same images, I then exported the result with Camera 2's optical center as the origin, shown in the second image and in values ③ and ④.
From the first image:
R=[0.745, -0.192, 0.637; 0.188, 0.979, 0.0761; -0.639, 0.0633, 0.766]...①
T=[-1828, -46.8, 57.3]...②
From the second image:
R=[0.745, 0.188, -0.639; -0.192, 0.979, 0.0633; 0.637, 0.0761, 0.766]...③
T=[1406, -311, 1129]...④
Here is the relationship a MATLAB staff member explained to me previously. If the pose of Camera 1 is [R1, t1; 0, 1], the pose of Camera 2 is [R2, t2; 0, 1], and PoseCamera2 is [R, T; 0, 1], then the following relationships hold.
R1 = R * R2...⑤
t1 = R * t2 + T...⑥
I verified whether ⑥ holds using the first image and ②, but Camera 2 is not moved onto Camera 1, so the equation is not satisfied. Verifying with the second image and ④ likewise does not give a reasonable result. However, in the first image, where Camera 1's optical center is the origin, if Camera 2 is moved by ② expressed with Camera 2's optical center as the origin, the positions of Camera 1 and Camera 2 coincide. Similarly, in the second image, moving Camera 2 by ④ with Camera 2's optical center as the origin aligns it with Camera 1.
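As a quick numeric cross-check (my own sketch, using only the values quoted above): if the two outputs simply swap which camera is the origin, then ③ and ④ should be the inverse of the transform built from ① and ②, i.e. R③ = R①' and T④ = -R①' * T②.
R12 = [0.745, -0.192, 0.637; 0.188, 0.979, 0.0761; -0.639, 0.0633, 0.766]; % value ①
T12 = [-1828; -46.8; 57.3];                                                % value ②
Rinv = R12';        % inverse rotation (transpose of ①) -- matches ③
Tinv = -R12' * T12; % inverse translation, approx [1407; -309; 1124] -- close to ④
The small differences from ④ are consistent with the rounding of the displayed values, so the two outputs are indeed inverses of each other.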
Additionally, 30 image pairs were used during stereo calibration, and the reprojection error was 0.23 pixels for both cameras.

Answers (1)

Aastha 2024-9-23
As I understand, you want to clarify the convention used in the output of PoseCamera2, and you would also like an explanation for your observations. The observations can be explained as follows:
Let "E1" and "E2" denote the poses of Camera 1 and Camera 2.
E1 = [R1, t1; 0, 1]; % 4-by-4 pose matrix of Camera 1
E2 = [R2, t2; 0, 1]; % 4-by-4 pose matrix of Camera 2
According to the equations you mentioned, the following equality should hold; however, based on your figures, you observed that it does not.
E1 = [R, T; 0, 1] * [R2, t2; 0, 1];
E1 = [R * R2, R * t2 + T; 0, 1];
From the observation, the positions of both cameras align when "t2" is shifted by "T". This occurs when "R" and "T" are used to transform the pose of Camera 2 to match the pose of Camera 1, as follows:
% Inverse of the transformation matrix [R, T; 0, 1]  
E1 = inv([R, T; 0, 1]) * E2;
E1 = [R', -R' * T; 0, 1] * [R2, t2; 0, 1];
[R1, t1; 0, 1] = [R' * R2, R' * (t2 - T); 0, 1];
As shown in the code snippet above, when comparing "t1" on the LHS with its corresponding value on the RHS, observe that t1 equals "R' * (t2 - T)". This confirms your observation in Figure 1 that "t2" needs to be shifted by "T" for the positions of both cameras to match; the additional rotation aligns the orientations of both cameras. The same argument applies to Figure 2.
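To make this concrete, here is a short numeric check (a sketch that assumes Camera 1 sits at the origin in Figure 1, i.e. E1 = eye(4); under the relation above, Camera 2's pose is then E2 = [R, T; 0, 1] itself, so applying the inverse transform to it should recover the identity):
R = [0.745, -0.192, 0.637; 0.188, 0.979, 0.0761; -0.639, 0.0633, 0.766]; % value ①
T = [-1828; -46.8; 57.3];                                                % value ②
E2 = [R, T; 0, 0, 0, 1];             % pose of Camera 2 when Camera 1 is the origin
E1 = [R', -R' * T; 0, 0, 0, 1] * E2; % inv([R, T; 0, 1]) * E2
disp(E1)                             % numerically close to eye(4)
Recovering (approximately) the identity for E1 is exactly the behavior described above: the inverse of [R, T; 0, 1] maps Camera 2's pose onto Camera 1's.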
I hope this helps!
