
Rotation is off #107

Open
daniekpo opened this issue Mar 20, 2025 · 4 comments

Comments

@daniekpo

Hi, the rotation is consistently off across objects when I execute the grasp pose on the robot. My eye-in-hand calibration matrix seems to be right because I can reconstruct a coherent scene from different camera positions. I can also move the robot to known targets (on a ChArUco board). When I visualize the grasp pose it looks straight with just a small tilt here and there but when I convert the pose from the camera frame to the robot base I get rotations that don't make sense.

If I manually set the rotation to, say, [0, -pi, 0], then the robot is able to reach the target object. Do I need to do some other alignment? Am I correct that AnyGrasp's predicted poses are in the camera frame and I just need to do a camera-to-world transform?
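For context, the composition I'm using is the standard eye-in-hand chain; here's a minimal sketch (names like `T_base_ee` and `T_ee_cam` are mine, coming from forward kinematics and my hand-eye calibration, not from AnyGrasp):

```python
import numpy as np

def grasp_in_base(T_base_ee, T_ee_cam, R_cam_grasp, t_cam_grasp):
    """Compose base <- ee <- cam <- grasp for an eye-in-hand camera.

    T_base_ee : 4x4 end-effector pose from forward kinematics at capture time
    T_ee_cam  : 4x4 camera pose in the end-effector frame (hand-eye result)
    R_cam_grasp, t_cam_grasp : grasp rotation/translation in the camera frame
    """
    T_cam_grasp = np.eye(4)
    T_cam_grasp[:3, :3] = R_cam_grasp
    T_cam_grasp[:3, 3] = t_cam_grasp
    return T_base_ee @ T_ee_cam @ T_cam_grasp
```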

I saw this related issue about translation, but we have the opposite case where the translation is correct but the rotation is off. I double-checked that we're using the correct intrinsics.

@chenxi-wang
Collaborator

Hi Daniel, I guess there's a transformation between your end effector and the gripper coordinate frame defined in GraspNet. You could refer to https://graspnetapi.readthedocs.io/en/latest/grasp_format.html#d-grasp to transform the TCP.
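As a rough sketch of what that can look like (the exact correction depends on how your gripper is mounted, so `R_fix` below is only a placeholder, not the value from the docs):

```python
import numpy as np

# Placeholder correction between the GraspNet gripper frame and your robot's
# TCP convention -- here simply a +90 degree rotation about y as an example.
R_fix = np.array([[0., 0., 1.],
                  [0., 1., 0.],
                  [-1., 0., 0.]])

def grasp_to_tcp(T_base_grasp, R_fix=R_fix):
    """Right-multiply by a fixed rotation so the grasp pose is expressed
    in the robot's end-effector (TCP) convention."""
    T_base_tcp = T_base_grasp.copy()
    T_base_tcp[:3, :3] = T_base_grasp[:3, :3] @ R_fix
    return T_base_tcp
```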

@daniekpo
Author


Thanks for your reply @chenxi-wang! I actually didn't know there was a readthedocs site for AnyGrasp; that's super helpful. Just to confirm: AnyGrasp assumes that the gripper's x-axis points up, y points right, and z points into the scene, as shown in this image from the docs. So correcting the coordinates to align with my end effector should fix the problem.
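Before applying any correction I'll sanity-check which axis is which by drawing the grasp frame on the camera-frame point cloud; a quick Open3D sketch (`cloud`, `R`, `t` are placeholders for my point cloud and the predicted grasp rotation/translation):

```python
import open3d as o3d

def show_grasp_frame(cloud, R, t, size=0.05):
    """Draw the grasp frame (x = red, y = green, z = blue) at the predicted
    pose to see how the axes sit relative to the object before remapping."""
    frame = o3d.geometry.TriangleMesh.create_coordinate_frame(size=size)
    frame.rotate(R, center=(0, 0, 0))
    frame.translate(t)
    o3d.visualization.draw_geometries([cloud, frame])
```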

My second question concerns the code block right above the coordinate image. It seems to be isolated but important. What is the code block about?

[Image: gripper coordinate frame from the graspnetAPI docs]

@chenxi-wang
Collaborator

Just to confirm, Anygrasp assumes that the gripper's x is pointing up, y is pointing right and z points into the scene as shown in this image from the docs.

Yes, your understanding is right.

My second question concerns the code block right above the coordinate image. It seems to be isolated but important. What is the code block about?

It does not matter. I think it is just a typo lol.

@daniekpo
Author

I tried using a simple transformation to swap the x and z axes, but that failed: the translation that was accurate before isn't accurate anymore. We're using the same setup as the experiments in the AnyGrasp paper, with a camera mounted on the robot arm and the image taken from the top facing down. I'm wondering why the translation was accurate before if the frame needed to be reoriented.
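One thing I'm now double-checking is where the remap is applied: right-multiplying only the rotation block re-orients the tool about the grasp point and leaves the translation alone, while remapping the whole pose (left-multiplying) also moves the translation, which might explain what I'm seeing. A rough sketch of the difference (`R_swap` is just my placeholder remap):

```python
import numpy as np

R_swap = np.array([[0., 0., 1.],
                   [0., 1., 0.],
                   [-1., 0., 0.]])   # placeholder x<->z remap (proper rotation)

def correct_rotation_only(T_base_grasp, R_swap=R_swap):
    """Re-orient the tool about the grasp point; translation is untouched."""
    T = T_base_grasp.copy()
    T[:3, :3] = T_base_grasp[:3, :3] @ R_swap
    return T

def remap_whole_pose(T_base_grasp, R_swap=R_swap):
    """Left-multiplying remaps the translation too -- the grasp point moves."""
    T_fix = np.eye(4)
    T_fix[:3, :3] = R_swap
    return T_fix @ T_base_grasp
```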
