About the rotation and translation vectors obtained from camera calibration with a checkerboard

  • Thread starter: Alan Lee (Guest)
Code:
ret
3.5233523642607523
camera_matrix
[[976.95273119   0.         660.46033674]
 [  0.         983.93214293 265.09898857]
 [  0.           0.           1.        ]]
dist
[[ 0.00786213 -1.08703858 -0.03323499 -0.00704253  3.32886628]]
rvecs
[[[-0.15258765]
  [ 0.05568271]
  [-0.00297466]]

 [[-0.1975074 ]
  [ 0.18257839]
  [ 0.01421636]]

 [[-0.63995505]
  [ 0.01228971]
  [ 0.01489291]]

 [[ 0.17243489]
  [ 0.10097482]
  [-0.02850808]]

 [[-0.16032189]
  [-0.4157807 ]
  [ 0.07556575]]

 [[-0.0815965 ]
  [ 0.57730837]
  [-0.01252756]]]
tvecs
[[[-0.1267968 ]
  [-0.04263508]
  [ 0.31507263]]

 [[-0.11701824]
  [ 0.00296928]
  [ 0.5963867 ]]

 [[-0.1100471 ]
  [-0.03971764]
  [ 0.3491404 ]]

 [[-0.12381456]
  [-0.02769804]
  [ 0.28588741]]

 [[-0.10678357]
  [-0.04163386]
  [ 0.30308073]]

 [[-0.09151737]
  [-0.03837504]
  [ 0.44047188]]]

I got 6 vectors in rvecs and tvecs because I used 6 images of the checkerboard at different tilt angles. I also understand that I need to convert each 3x1 rvec into a 3x3 rotation matrix with:

Code:
import cv2

# Convert a 3x1 rotation vector (one entry of rvecs) to a 3x3 rotation matrix
R, _ = cv2.Rodrigues(rvec)
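
To make the role of these pose parameters concrete: each (rvec, tvec) pair describes where the checkerboard sat relative to the camera in that particular image, and together they map a board point into the camera frame. Below is a minimal, self-contained sketch of that transform; it copies the first pose from the output above and uses a hypothetical board point X_world (a name I made up for illustration):

Code:
import cv2
import numpy as np

# First pose from the calibration output above (rvecs[0], tvecs[0])
rvec = np.array([[-0.15258765], [0.05568271], [-0.00297466]])
tvec = np.array([[-0.1267968], [-0.04263508], [0.31507263]])

R, _ = cv2.Rodrigues(rvec)   # 3x1 rotation vector -> 3x3 rotation matrix

# Hypothetical point on the checkerboard, in the board's own (world) frame
# (same units as the object points used for calibration)
X_world = np.array([[0.05], [0.02], [0.0]])

# Rigid world -> camera transform: X_cam = R @ X_world + t
X_cam = R @ X_world + tvec
print(X_cam)   # the same point expressed in the camera coordinate frame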

My current project detects an object (using YOLO and a single camera) and calculates & displays the distance between the camera and the object by comparing the actual length of the object with its length in the pixel image.
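
To show what I mean by comparing actual length and pixel length: under the pinhole model this reduces to Z ≈ f · L_real / L_pixel. Here is a minimal sketch of that idea; fx is taken from the camera matrix above, while the object length and pixel length are made-up placeholder values:

Code:
# Depth from a known object size under the pinhole model (sketch only)
fx = 976.95273119        # focal length in pixels, camera_matrix[0][0] above
real_length_m = 0.30     # hypothetical real-world length of the object (m)
pixel_length = 125.0     # hypothetical length of the object in the image (px)

# Similar triangles: pixel_length / fx = real_length_m / Z
distance_m = fx * real_length_m / pixel_length
print(f"Estimated distance: {distance_m:.2f} m")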

Here are my questions:

  1. One point could be located further from the camera than another, which makes the distance between the two points appear shorter in pixels. To estimate the distance more precisely and accurately, do I need to convert the two points from image coordinates (in pixels) into world/camera coordinates (in metres)?

  2. If world coordinates are required, which 'rotation vector' and 'translation vector' from the camera calibration should I use (or should I use their average)?

  3. When undistorting points, do I pass the same camera matrix as P=, or do I have to get a new camera matrix using "cv2.getOptimalNewCameraMatrix"? (See the sketch after the code below.)

Code:
import cv2
import numpy as np

# Undistort the pixel coordinates (K, dist from the calibration above)
undistorted_point = cv2.undistortPoints(np.expand_dims(pixel_coords, axis=0), K, dist, P=K)
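
For completeness, here is roughly what the alternative in question 3 would look like: a sketch of feeding cv2.getOptimalNewCameraMatrix into the P= argument. The image size and the test pixel below are placeholder values, not from my actual setup, and K/dist are copied from the calibration output above:

Code:
import cv2
import numpy as np

# Camera matrix and distortion coefficients from the calibration output above
K = np.array([[976.95273119, 0.0, 660.46033674],
              [0.0, 983.93214293, 265.09898857],
              [0.0, 0.0, 1.0]])
dist = np.array([[0.00786213, -1.08703858, -0.03323499, -0.00704253, 3.32886628]])

# Placeholder pixel coordinate and image size for illustration
pixel_coords = np.array([[800.0, 300.0]])
w, h = 1280, 720

# Refine the camera matrix first, then pass it as P= when undistorting
new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 0)
undistorted_point = cv2.undistortPoints(np.expand_dims(pixel_coords, axis=0), K, dist, P=new_K)
print(undistorted_point)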