I have a problem setting up a camera in Unity so that it has the same extrinsic parameters, relative to the origin, as my real-world camera. Let me describe my problem step by step.
I have a camera in the real world, and its extrinsics relative to an ArUco plane were calculated. I use the ArUco plane (think of it as a reference plane) as the world origin (0, 0, 0).
Then, in Unity, I want to set my camera to have the same extrinsics as my real-world camera. So I take the translation and set it as the Position in the Unity Transform inspector, and I set the rotation by converting the rotation matrix from the extrinsics into Euler angles and entering them as the x, y, z rotation.
I tested this using the Unity property camera.cameraToWorldMatrix. You can treat this as the extrinsics of the Unity camera transform relative to the world coordinates (0, 0, 0). This value should therefore equal my extrinsics, since it uses the same Euler-angle rotation. But unfortunately it does not.
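One likely source of the mismatch is the direction of the transform. solvePnP's [R|t] maps ArUco-plane (world) points *into* camera coordinates, while camera.cameraToWorldMatrix goes the other way (camera to world), so the matrix to compare against is the inverse of [R|t], not [R|t] itself. A minimal NumPy sketch (the rvec/tvec values here are made up, and rodrigues() is a hand-rolled stand-in for cv2.Rodrigues):

```python
import numpy as np

def rodrigues(rvec):
    # Stand-in for cv2.Rodrigues: axis-angle vector -> rotation matrix.
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float) / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

rvec = np.array([0.1, -0.2, 0.3])   # example values, not from the post
tvec = np.array([0.5, 0.0, 2.0])

# solvePnP's [R|t]: world (ArUco plane) -> camera.
R = rodrigues(rvec)
world_to_camera = np.eye(4)
world_to_camera[:3, :3] = R
world_to_camera[:3, 3] = tvec

# cameraToWorldMatrix corresponds to the rigid inverse [R^T | -R^T t]:
camera_to_world = np.linalg.inv(world_to_camera)
assert np.allclose(camera_to_world[:3, :3], R.T)
assert np.allclose(camera_to_world[:3, 3], -R.T @ tvec)
```

So if you set the Unity transform from [R|t] directly, cameraToWorldMatrix will come out as its inverse, which looks like a mismatch even when everything else is right.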
So how can I solve this problem? Or is there a simpler way to set cameraToWorldMatrix equal to my extrinsic parameters?
For more information: all of my coordinate systems follow the OpenGL convention (the OpenCV coordinate system is negated in the y and z axes relative to OpenGL).
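For reference, that OpenCV-to-OpenGL change of convention can be written as a single axis flip. This is a minimal sketch (the helper name is mine, not from any library): negating the camera's y and z axes converts an OpenCV-convention [R|t] into the OpenGL convention.

```python
import numpy as np

# Flip the camera's y and z axes: OpenCV (x right, y down, z forward)
# <-> OpenGL (x right, y up, z backward).
FLIP_YZ = np.diag([1.0, -1.0, -1.0])

def cv_to_gl_extrinsics(R_cv, t_cv):
    # Hypothetical helper: re-express an OpenCV extrinsic in OpenGL axes.
    return FLIP_YZ @ R_cv, FLIP_YZ @ t_cv
```

Since FLIP_YZ is its own inverse, applying the flip twice returns the original extrinsics, which makes it easy to test in which convention a given matrix lives.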
1. Here are the extrinsics I got from the solvePnP() function. This is the extrinsic relating the camera and my ArUco plane. I then use this [R|T] to get the Euler-angle rotation and translation; my image also shows the Euler angles and translation. So these extrinsics are also my expected extrinsics.
- I use these Euler angles and this translation to set my camera in Unity.
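If the matrix-to-Euler step is in doubt, here is a sketch of a decomposition matching Unity's documented rotation order (Z, then X, then Y, i.e. R = Ry @ Rx @ Rz). It assumes the rotation matrix is already expressed in Unity's coordinate frame and that x stays within (-90°, 90°); the function name is mine.

```python
import numpy as np

def matrix_to_unity_euler(R):
    # Decompose R = Ry(y) @ Rx(x) @ Rz(z) (Unity's Euler order)
    # into inspector angles in degrees. Assumes x is in (-90, 90).
    x = np.degrees(np.arcsin(np.clip(-R[1, 2], -1.0, 1.0)))
    y = np.degrees(np.arctan2(R[0, 2], R[2, 2]))
    z = np.degrees(np.arctan2(R[1, 0], R[1, 1]))
    return x, y, z
```

Using a decomposition that matches the engine's own Euler order matters: the same rotation matrix yields different (x, y, z) triples under different orders, which is a common way for an otherwise correct extrinsic to come out wrong in the inspector.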
Here's my code to map from camera coordinates to ArUco plane coordinates:
    import cv2
    import numpy as np

    def mapBallToArucoPlane(camera_coordinates, rvec, tvec):
        # Map the ball from the left camera's coordinates to ArUco plane
        # coordinates using the inverse of [rvec|tvec], since [rvec|tvec]
        # is the transformation from Aruco -> camera but we need
        # camera -> Aruco.
        R, _ = cv2.Rodrigues(rvec)  # cv2.Rodrigues returns (R, jacobian)
        T = np.asarray(tvec).reshape(3, 1)
        arucoPlaneToCameraMatrix = np.vstack((np.hstack((R, T)),
                                              np.array([0, 0, 0, 1])))
        cameraToArucoPlaneMatrix = np.linalg.inv(arucoPlaneToCameraMatrix)
        plane_coordinates = cameraToArucoPlaneMatrix @ camera_coordinates
        plane_coordinates = plane_coordinates / plane_coordinates[-1]
        return plane_coordinates, arucoPlaneToCameraMatrix
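As a quick sanity check of the inversion in that function, here is a pure-NumPy sketch with an explicit 90° rotation about z standing in for cv2.Rodrigues(rvec); the numbers are made up.

```python
import numpy as np

# Explicit 90-degree rotation about z, standing in for cv2.Rodrigues.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([[1.0], [2.0], [3.0]])

aruco_to_camera = np.vstack((np.hstack((R, T)), [[0, 0, 0, 1]]))
camera_to_aruco = np.linalg.inv(aruco_to_camera)

# The plane origin appears at t = (1, 2, 3) in camera coordinates,
# so mapping that point back should land on the plane origin.
point_camera = np.array([1.0, 2.0, 3.0, 1.0])
point_plane = camera_to_aruco @ point_camera
assert np.allclose(point_plane[:3], 0.0)
```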
My output extrinsics (these should be the same as above).
Thank you very much.