Description
Hi, the rotation is consistently off across objects when I execute the predicted grasp pose on the robot. My eye-in-hand calibration matrix seems to be correct, because I can reconstruct a coherent scene from different camera positions, and I can also move the robot to known targets (on a ChArUco board). When I visualize the grasp pose it looks correct, with only a small tilt here and there, but when I convert the pose from the camera frame to the robot base frame I get rotations that don't make sense.
If I manually set the rotation to, say, [0, -pi, 0], then the robot is able to reach the target object. Do I need to do some other alignment? Am I correct that AnyGrasp's predicted poses are in the camera frame and that I just need to apply a camera-to-world transform?
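For context, here is a minimal sketch of the chain I'm applying (the function names are mine, and I'm assuming the grasp exposes graspnetAPI-style `rotation_matrix` / `translation` fields in the camera frame):

```python
import numpy as np

def pose_to_matrix(R, t):
    """Pack a 3x3 rotation matrix and a 3-vector translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = np.asarray(R)
    T[:3, 3] = np.asarray(t)
    return T

def grasp_in_base(T_base_ee, T_ee_cam, R_cam_grasp, t_cam_grasp):
    """Eye-in-hand chain: base <- end-effector <- camera <- grasp.

    T_base_ee:   current end-effector pose reported by the robot (4x4)
    T_ee_cam:    hand-eye calibration result, i.e. camera pose in the EE frame (4x4)
    R_cam_grasp: grasp.rotation_matrix from AnyGrasp (3x3, camera frame)
    t_cam_grasp: grasp.translation from AnyGrasp (3-vector, camera frame)
    """
    T_cam_grasp = pose_to_matrix(R_cam_grasp, t_cam_grasp)
    return T_base_ee @ T_ee_cam @ T_cam_grasp
```

Part of what I'm unsure about is whether I also need a fixed gripper-convention rotation on the right (something like `T_base_grasp @ T_grasp_tcp`) to map AnyGrasp's gripper axes onto my robot's TCP axes; that would at least be consistent with the hard-coded [0, -pi, 0] reaching the object.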
I saw this related issue about translation, but we have the opposite case: the translation is correct and the rotation is off. I double-checked that we're using the correct camera intrinsics.