Correct Camera Position from Vectors from SolvePnP in Blender

Fieldie101 (Guest)
I'm just learning some OpenCV and was playing inside of Blender.

As a test I'm trying to recreate something I saw in another post, but inside Blender: I want to reconstruct a camera that I created looking at a plane. The image I want to reconstruct the camera from is: [screenshot of the camera view]

This is what I have for the code in Blender so far:

Code:
import bpy
import cv2
import numpy as np
from mathutils import Matrix, Vector
from scipy.spatial.transform import Rotation

def focalMM_to_focalPixel(focalMM, sensorWidth, imageWidth):
    pixelPitch = sensorWidth / imageWidth
    return focalMM / pixelPitch

# Read Image
im = cv2.imread("assets/cameraView.jpg")

imageWidth = 1920
imageHeight = 1080
imageSize = [imageWidth, imageHeight]

points_2D = np.array([
    (949, 49),
    (1415, 45),
    (962, 913),
    (1398, 977)
], dtype="double")

points_3D = np.array([
    (-.30208, -1.8218, 8.3037),
    (-.30208, -1.8303, 8.3037),
    (-.30208, -1.8218, 1.381),
    (-.30208, -1.8303, 1.381)
])

focalLengthMM = 50
sensorWidth = 36

fLength = focalMM_to_focalPixel(focalLengthMM, sensorWidth, imageWidth)
print("focalLengthPixel", fLength)

K = np.array([
    [fLength, 0, imageWidth/2],
    [0, fLength, imageHeight/2],
    [0, 0, 1]
])
distCoeffs = np.zeros((5, 1))

success, rvecs, tvecs = cv2.solvePnP(points_3D, points_2D, K, distCoeffs, flags=cv2.SOLVEPNP_ITERATIVE)

np_rodrigues = np.asarray(rvecs[:, :], np.float64)
rmat = cv2.Rodrigues(np_rodrigues)[0]
camera_position = -rmat.T @ tvecs  # camera center in world coordinates

# Test the solvePnP by projecting the 3D Points to camera
projPoints = cv2.projectPoints(points_3D, rvecs, tvecs, K, distCoeffs)[0]

for p in points_2D:
    cv2.circle(im, (int(p[0]), int(p[1])), 3, (0, 255, 0), -1)

for p in projPoints:
    cv2.circle(im, (int(p[0][0]), int(p[0][1])), 3, (255, 0, 0), -1)

# cv2.imshow("image", im)
# cv2.waitKey(0)

r = Rotation.from_rotvec(rvecs.flatten())
rot = r.as_euler('xyz', degrees=True)

tx = float(camera_position[0, 0])
ty = float(camera_position[1, 0])
tz = float(camera_position[2, 0])

rx = round(180 - rot[0], 5)
ry = round(rot[1], 5)
rz = round(rot[2], 5)

# Creating the camera in Blender
bpy.ops.object.camera_add()
camera = bpy.context.object
camera.location = (tx, ty, tz)

# Convert rotation from degrees to radians for Blender
camera.rotation_euler = (np.radians(rx), np.radians(ry), np.radians(rz))

# Setting the camera parameters
camera.data.lens = focalLengthMM
camera.data.sensor_width = sensorWidth 
camera.data.sensor_height = sensorWidth * (imageHeight / imageWidth)
camera.data.shift_x = (imageWidth / 2 - K[0, 2]) / imageWidth
camera.data.shift_y = (imageHeight / 2 - K[1, 2]) / imageHeight

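Before debugging the Blender side, a quick sanity check on the solvePnP result is the mean reprojection error; a minimal sketch (the helper name is mine), assuming `projPoints` (shape `(N, 1, 2)` from `cv2.projectPoints`) and `points_2D` as above:

```python
import numpy as np

def mean_reprojection_error(projPoints, points_2D):
    """Mean pixel distance between projected and measured 2D points."""
    diff = projPoints.reshape(-1, 2) - np.asarray(points_2D, dtype=float)
    return float(np.linalg.norm(diff, axis=1).mean())
```

An error of more than a pixel or two usually points at the 2D/3D correspondences or the intrinsics `K`, not at the Blender conversion.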
I'm confused about how to achieve the proper rotation and translation, and I'm sure I'm doing something wrong.
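For what it's worth, one commonly used mapping from solvePnP extrinsics to a Blender camera avoids Euler angles entirely: transpose the rotation to get a world-from-camera matrix, then flip the camera's Y and Z axes, since an OpenCV camera looks down +Z with +Y down while a Blender camera looks down −Z with +Y up. A minimal sketch (the helper name is mine), assuming `rmat` and `tvecs` as computed above:

```python
import numpy as np

def cv_extrinsics_to_blender_matrix(rmat, tvec):
    """Build a 4x4 camera-to-world matrix for Blender from OpenCV's
    world-to-camera rotation matrix and translation vector."""
    C = (-rmat.T @ np.asarray(tvec, dtype=float).reshape(3, 1)).reshape(3)
    flip = np.diag([1.0, -1.0, -1.0])   # OpenCV camera axes -> Blender camera axes
    M = np.eye(4)
    M[:3, :3] = rmat.T @ flip           # world-from-(Blender)camera rotation
    M[:3, 3] = C                        # camera center in world coordinates
    return M
```

In Blender this would be assigned in one step, e.g. `camera.matrix_world = Matrix(cv_extrinsics_to_blender_matrix(rmat, tvecs).tolist())`, instead of setting `location` and `rotation_euler` separately.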

The values I get send my camera to a weird spot. In the image below, the highlighted camera is the generated one, compared with the original.

[screenshot comparing the generated camera with the original]

I appreciate any help.
 
