Determination of stereo camera world coordinates with respect to calibration target
I'm having some issues getting a precise and accurate determination of the location of my cameras. I am defining my world coordinate system with the origin on the calibration target, and want to know the world coordinates of the camera centers relative to this. I take a set of ~20 images, calibrate the cameras using the stereo calibration functions, and then compute the camera centers as:
wocoIdx= 1; % index for the image with the calibration target located where I would like to define my world coordinate system
C1= -stereoParams.CameraParameters1.TranslationVectors(wocoIdx,:);
C2= -stereoParams.CameraParameters2.TranslationVectors(wocoIdx,:);
The issue is that this typically gives me cameras spaced ~111mm apart horizontally, versus the 140mm I have measured manually.
(1) Is camera height vs. camera spacing a limiting factor here? My cameras are about 1960mm from the world coordinate target and spaced ~140mm apart. When calibrating, I translate the calibration target in a range of 200mm towards/away from the camera (this is limited by the depth of field of the setup).
(2) Am I missing another step (rotation?) to obtain the camera centers? I've also tried using the triangulate function but had little success getting an accurate estimate. The approach I used with triangulate is:
undistortedImagePts1= undistortPoints(imagePoints(:,:,wocoIdx,1),stereoParams.CameraParameters1);
undistortedImagePts2= undistortPoints(imagePoints(:,:,wocoIdx,2),stereoParams.CameraParameters2);
C1= triangulate(undistortedImagePts1(1,:),undistortedImagePts2(1,:),stereoParams);
C2= C1 + stereoParams.TranslationOfCamera2';
0 comments
Accepted Answer
Dima Lisin
2015-7-21
Matt's answer is almost correct. The extrinsics R and t represent the transformation from world coordinates into the camera's coordinates, so t by itself is not the camera center; you have to apply the rotation as well. The only problem with Matt's solution is that it does not follow the vector-matrix multiplication convention used by the Computer Vision System Toolbox. The camera center in world coordinates is
c = -t * R';
because t is a row vector.
Also, you do not have to use one of your calibration images. You can calibrate your cameras and then take a new picture of a checkerboard, and use the extrinsics function to compute the R and t relative to that board. See this example.
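A minimal sketch of that workflow, assuming the toolbox's row-vector convention (camPts = worldPts * R + t); the image file name and checkerboard square size are illustrative assumptions:

```matlab
% Define the world frame from a new checkerboard image taken after calibration.
I1 = imread('newBoard_cam1.png');                       % assumed file name
[imagePoints1, boardSize] = detectCheckerboardPoints(I1);
worldPoints = generateCheckerboardPoints(boardSize, 25); % 25 mm squares (assumed)

% extrinsics expects undistorted image points.
imagePoints1 = undistortPoints(imagePoints1, stereoParams.CameraParameters1);
[R1, t1] = extrinsics(imagePoints1, worldPoints, stereoParams.CameraParameters1);

% Camera 1 center in world coordinates (t1 is a 1x3 row vector).
C1 = -t1 * R1';

% Chain the camera-1-to-camera-2 transform to get camera 2's center.
R2 = R1 * stereoParams.RotationOfCamera2;
t2 = t1 * stereoParams.RotationOfCamera2 + stereoParams.TranslationOfCamera2;
C2 = -t2 * R2';

baseline = norm(C2 - C1); % compare against the manually measured camera spacing
```

The computed baseline gives a quick consistency check against the physically measured spacing.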
3 comments
Dima Lisin
2015-7-22
The baseline (distance between cameras) should be proportional to the distance to the object. If the object is too far away, then stereo disparity becomes too small to be measured. This is when you may want to increase the baseline. On the other hand, if the baseline is too wide, then you cannot measure the objects that are close. It is like moving your finger too close to your eyes. This also depends on the resolution of your cameras.
You can easily check whether your setup is reasonable. Calibrate your cameras, take a pair of images, rectify them, create a red-cyan anaglyph from the rectified images using the stereoAnaglyph function, and display it using imtool. Then use the ruler widget in imtool to measure the disparity between a few pairs of corresponding points. If the disparity is less than a pixel, then you want to increase the baseline or move the cameras closer to the object. If the disparity is too large (more than a couple of hundred pixels), then reduce the baseline or move the object farther away.
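The suggested sanity check might look like this (the image file names are assumptions):

```matlab
% Sketch of the disparity sanity check described above.
I1 = imread('left.png');   % assumed file names
I2 = imread('right.png');

% Rectify the pair using the existing stereo calibration.
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);

% Build a red-cyan anaglyph and inspect it interactively.
A = stereoAnaglyph(J1, J2);
imtool(A); % use the ruler widget to measure disparity between matching points
```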
More Answers (1)
Matt J
2015-7-17
Edited: Matt J
2015-7-17
Am I missing another step (rotation?) to obtain the camera centers?
I think so. The camera center is the null vector of the 3x4 camera matrix or, in non-homogeneous coordinates, the center C is given by
C=-R.'*t
where R is the camera rotation matrix and t is the translation vector from the camera extrinsic parameters. It looks like you are interpreting t to be the camera center.
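Since the toolbox stores t as a 1x3 row vector, this formula needs a transpose when used with the toolbox's row-vector convention (cameraPts = worldPts * R + t). A quick sketch, assuming the calibration workspace from the question, with a built-in sanity check that the computed center maps back to the camera-frame origin:

```matlab
% Extrinsics for the image that defines the world frame.
R = stereoParams.CameraParameters1.RotationMatrices(:,:,wocoIdx);
t = stereoParams.CameraParameters1.TranslationVectors(wocoIdx,:);

C1 = -t * R';          % camera 1 center in world coordinates (row vector)
residual = C1 * R + t; % should be ~[0 0 0]: the center maps to the camera origin
```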
7 comments
Matt J
2015-7-20
Edited: Matt J
2015-7-20
Another suggestion would be to repeat the calibration with the cameras at a smaller depth from the calibration target. I've heard speculation that the decomposition of a camera into extrinsic and intrinsic parameters becomes more ill-posed at large depths. If a smaller depth gives you better accuracy, that would at least tell you whether the estimated camera centers fall within the physical bodies of the cameras.