Loss of resolution when transforming an image with camera parameters

Dear forum,
I would like to transform a captured image of a checkerboard pattern into a top view using cameraParameters and the pinhole camera model.
My code works so far, but the resolution of the transformed image is very low, even though I realize that transformations always cause some loss of resolution.
My example code:
% Load dummy pictures
images = imageDatastore(fullfile(toolboxdir('vision'),'visiondata', ...
'calibration','slr'));
% Recognize the checkerboard pattern
[imagePoints,boardSize] = detectCheckerboardPoints(images.Files);
% Create the world coordinates
squareSize = 29; % [mm]
worldPoints = generateCheckerboardPoints(boardSize,squareSize);
% Calibrate and create the camera parameters
cameraParams = estimateCameraParameters(imagePoints,worldPoints);
% Load the image to be transformed
imOrig = imread(fullfile(matlabroot,'toolbox','vision','visiondata', ...
'calibration','slr','image9.jpg'));
size(imOrig)
1880 2816 3
imshow(imOrig);
% Undistort image
imUndistorted = undistortImage(imOrig,cameraParams);
% Recognize the checkerboard pattern
[imagePoints,boardSize] = detectCheckerboardPoints(imUndistorted);
% Calculate extrinsic parameters
[R,t] = extrinsics(imagePoints,worldPoints,cameraParams);
% Combine rotation and translation into one matrix (points lie on the Z = 0 plane)
R(3, :) = t;
% Calculate the homography matrix:
H = R * cameraParams.IntrinsicMatrix;
% Transform with the inverse homography matrix
[J,Jref] = imwarp(imUndistorted, projective2d(inv(H)));
figure
imshow(J);
So far, the transformation works as intended. When I look at the imref2d properties:
disp(Jref)
imref2d with properties:
XWorldLimits: [-106.33 413.67]
YWorldLimits: [-382.54 257.46]
ImageSize: [640 520]
PixelExtentInWorldX: 1
PixelExtentInWorldY: 1
ImageExtentInWorldX: 520
ImageExtentInWorldY: 640
XIntrinsicLimits: [0.5 520.5]
YIntrinsicLimits: [0.5 640.5]
So the resolution has dropped from 1880x2816 (input) to 640x520 (transformed output).
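For reference, the output size seems to follow directly from the world limits above, with one output pixel per world unit (millimetre):
% The reported ImageSize matches the world extent at 1 pixel per world unit
diff(Jref.XWorldLimits)   % 520 -> output columns
diff(Jref.YWorldLimits)   % 640 -> output rows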
Is such a big loss normal?
Am I using imwarp the wrong way?
Does it make sense to resize the image?
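What I mean by resizing is roughly the sketch below. This is only a guess on my part, and the pixelsPerMM value is just an assumed sampling density for illustration: since PixelExtentInWorldX/Y is 1, the top view appears to be sampled at one pixel per world millimetre, so requesting a finer output grid via imwarp's 'OutputView' option might keep more of the input resolution.
pixelsPerMM = 4;                             % assumed sampling density, not a calibrated value
xLimits = Jref.XWorldLimits;                 % reuse the world limits reported above
yLimits = Jref.YWorldLimits;
outputSize = round([diff(yLimits) diff(xLimits)] * pixelsPerMM);   % [rows cols]
outRef = imref2d(outputSize, xLimits, yLimits);
Jfine = imwarp(imUndistorted, projective2d(inv(H)), 'OutputView', outRef);
figure
imshow(Jfine)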
Thank you so much and greetings
Philipp

Answers (2)

Ryan Comeau, 2020-05-09
Hello, I am going to have a stab at this for you. When dealing with images of 3D objects, we need to deal with perspective. Remember that when you take an image, you are mapping a 3D space onto a 2D plane; that is how we can tell that things in an image are far away. It is the same trickery they use in Lord of the Rings to make the Hobbits look small.
What I am getting at is that the shape of the squares in an image indicates their distance from the viewpoint. We know they are squares, but if you measure them in the image they are actually not square; they are stretched or shrunk so that they appear closer or further away. So when transforming from a non-90-degree viewing angle, you need to map each one from a general quadrilateral back into a square. I hope this makes sense; I have uploaded an image of 3D street art to help illustrate the effect.
That image has two different apparent resolutions as well, due to the way we map 3D objects onto a 2D plane.
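To make the quadrilateral-to-square idea concrete, here is a rough sketch; the pixel coordinates are made up for illustration and are not measured from the actual image:
% Map the four image corners of one checkerboard square (a general
% quadrilateral under perspective) back onto a true square.
quadCorners   = [1010 830; 1630 845; 1700 1260; 950 1240];   % illustrative image coordinates only
squareCorners = [0 0; 290 0; 290 290; 0 290];                 % 29 mm square drawn at 10 pixels per mm
tform = fitgeotrans(quadCorners, squareCorners, 'projective');
topView = imwarp(imUndistorted, tform);    % imUndistorted from the question above
figure, imshow(topView)
Choosing more pixels per millimetre in squareCorners is also one way to keep more resolution in the warped result.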

Johann Matthias Martin
Hello Philipp, I'm from the dbta team. You worked with Mr. Gerke, and your data is very interesting. I have some questions regarding your master's thesis. Unfortunately I don't have any contact information for you, which is why I'm trying to reach you here. I would appreciate it if you could contact me at j.weigelt@tu-berlin.de
