I have an ASUS Xtion Pro Live that I want to calibrate and use to guide a robot through a process that involves touching surfaces. I need to calibrate this camera and use its intrinsic parameters (e.g., focal lengths).
I have noticed that running the calibration process and producing a calibration file in `.ros/camera_info` changes the published depth `camera_info` to match the IR image, but the depth data itself is unchanged. This means that downstream ROS nodes use a different focal length to project the depth data than the one used to compute the depth from what I presume is a disparity map on the device.
My question can be posed several ways:
How do I find the camera intrinsics used to compute the depth data?
How do I adjust the openni2 nodes so that the same calibrated focal length is used both for the depth computation and for projection into a point cloud?
How have others dealt with similar situations?
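To illustrate why the mismatch matters, here is a minimal sketch of back-projecting the same depth pixel with two different focal lengths. The intrinsic values below are made up for illustration; they are not the actual factory or calibrated intrinsics of an Xtion Pro Live:

```python
import numpy as np

# Hypothetical intrinsics: the focal length baked into the device's depth
# computation vs. the calibrated focal length published in camera_info.
# All values here are illustrative, not real Xtion parameters.
fx_device, fy_device = 570.3, 570.3   # assumed factory intrinsics
fx_calib,  fy_calib  = 585.0, 585.0   # assumed calibrated intrinsics
cx, cy = 319.5, 239.5                 # principal point (VGA center)

def back_project(u, v, z, fx, fy):
    """Back-project a depth pixel (u, v) with depth z [m] to a 3D point
    in the camera frame, using the standard pinhole model."""
    return np.array([(u - cx) * z / fx,
                     (v - cy) * z / fy,
                     z])

# The same depth pixel back-projected with each focal length:
p_device = back_project(600, 400, 1.0, fx_device, fy_device)
p_calib  = back_project(600, 400, 1.0, fx_calib,  fy_calib)

# Lateral error (metres) introduced by the focal-length mismatch:
err = np.linalg.norm(p_device - p_calib)
print(f"lateral error at 1 m depth: {err * 1000:.1f} mm")
```

At the image edge, even a ~2.5% focal-length difference shifts the reconstructed point by more than a centimetre at 1 m range, which is significant for a robot that has to touch surfaces.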
Thanks