Rotation is off #107
Hi Daniel, I guess there's a transformation between your end effector and the gripper coordinate frame defined in graspnet. You could refer to https://graspnetapi.readthedocs.io/en/latest/grasp_format.html#d-grasp to transform the TCP.
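For what it's worth, a minimal NumPy sketch of that composition (function and argument names are illustrative, not from the graspnetAPI): the predicted grasp pose in the camera frame is right-multiplied by a fixed gripper-frame-to-TCP transform.

```python
import numpy as np

def grasp_to_tcp_in_camera(R_cam_grasp, t_cam_grasp, T_grasp_tcp):
    """Return the TCP pose in the camera frame as a 4x4 homogeneous matrix.

    R_cam_grasp : (3, 3) rotation of the predicted grasp, camera frame
    t_cam_grasp : (3,)   translation of the predicted grasp, camera frame
    T_grasp_tcp : (4, 4) fixed transform from the graspnet gripper frame to your TCP
    """
    T_cam_grasp = np.eye(4)
    T_cam_grasp[:3, :3] = R_cam_grasp
    T_cam_grasp[:3, 3] = t_cam_grasp
    # Right-multiplying chains the fixed gripper -> TCP offset onto the prediction.
    return T_cam_grasp @ T_grasp_tcp
```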
Thanks for your reply @chenxi-wang! I actually didn't know there was a readthedocs site for Anygrasp; that's super helpful. Just to confirm, Anygrasp assumes that the gripper's x-axis points up, its y-axis points right, and its z-axis points into the scene, as shown in the image from the docs. So correcting the coordinates to align with my end effector should fix the problem. My second question concerns the code block right above the coordinate image. It seems to be isolated but important. What is that code block about?
Yes, your understanding is right.
It does not matter. I think it is just a typo lol.
I tried using a simple transformation to swap the x and z axes, but that failed: the translation, which was accurate before, stopped being accurate. We're using the same setup as the experiments in the Anygrasp paper, with the camera mounted on the robot arm and the image taken from the top, facing down. I'm wondering why the translation was accurate before if the frame needed to be reoriented.
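One hedged guess (an assumption about how the poses compose, not something stated in the thread): if the axis swap was applied to the full 4x4 pose, it also remaps the translation vector, whereas a pure re-labelling of gripper axes that share the same origin should only post-multiply the rotation and leave the translation alone. A sketch, with a placeholder correction matrix:

```python
import numpy as np

def reorient_grasp(R_cam_grasp, t_cam_grasp, R_fix):
    """Apply a fixed axis correction to the predicted orientation only.

    R_fix is the constant rotation between the graspnet gripper convention and
    your end-effector convention. The translation is returned unchanged: if the
    two frames share an origin, relabelling axes does not move the grasp point.
    """
    return R_cam_grasp @ R_fix, t_cam_grasp

# Placeholder value (90 degrees about y). This is NOT the correction for any
# particular robot; derive R_fix from your own TCP definition.
R_fix = np.array([[0.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0],
                  [-1.0, 0.0, 0.0]])
```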
Hi, the rotation is consistently off across objects when I execute the grasp pose on the robot. My eye-in-hand calibration matrix seems to be right, because I can reconstruct a coherent scene from different camera positions and I can move the robot to known targets (on a ChArUco board). When I visualize the grasp pose it looks straight, with just a small tilt here and there, but when I convert the pose from the camera frame to the robot base frame I get rotations that don't make sense.
If I manually set the rotation to, say, [0, -pi, 0], then the robot is able to reach the target object. Do I need to do some other alignment? Am I correct that Anygrasp's predicted poses are in the camera frame and I just need to do a camera-to-world transform?
I saw this related issue about translation, but we have the opposite case where the translation is correct but the rotation is off. I double-checked that we're using the correct intrinsics.
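Roughly, the chain I'm computing looks like the sketch below (names are illustrative; T_ee_cam is the eye-in-hand calibration result and T_grasp_tcp would be the optional gripper-convention correction mentioned above):

```python
import numpy as np

def grasp_in_base(T_base_ee, T_ee_cam, R_cam_grasp, t_cam_grasp, T_grasp_tcp=np.eye(4)):
    """Express the predicted grasp in the robot base frame (eye-in-hand setup).

    T_base_ee      : end-effector pose from forward kinematics at image-capture time
    T_ee_cam       : eye-in-hand calibration result (camera in the end-effector frame)
    R/t_cam_grasp  : Anygrasp prediction, expressed in the camera frame
    T_grasp_tcp    : fixed gripper-frame -> TCP correction (identity by default)
    """
    T_cam_grasp = np.eye(4)
    T_cam_grasp[:3, :3] = R_cam_grasp
    T_cam_grasp[:3, 3] = t_cam_grasp
    return T_base_ee @ T_ee_cam @ T_cam_grasp @ T_grasp_tcp
```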