Task Space Mapping for Franka Kitchen environments #141
Comments
Using the qvel directly without normalizing gives close to expected results. Even then, it doesn't exactly follow the same trajectory as the predicted actions, but it comes quite close. Related: #142
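The workaround described above (feeding raw `qvel` as the action instead of rescaling it to the normalized action range) can be sketched as follows. Note that `qvel_to_action`, `act_low`, and `act_high` are hypothetical names used only for illustration, not part of the RoboHive API:

```python
import numpy as np

def qvel_to_action(qvel, act_low, act_high, normalize=True):
    """Map desired joint velocities to environment actions.

    With normalize=False this is the workaround above: pass qvel through
    directly, only clipped to the action-space bounds. With normalize=True
    it applies the usual affine rescaling to [-1, 1].
    act_low / act_high stand in for the env's action-space bounds.
    """
    qvel = np.asarray(qvel, dtype=np.float64)
    if not normalize:
        # Raw velocities, clipped to the actuator limits.
        return np.clip(qvel, act_low, act_high)
    # Standard affine map from [act_low, act_high] to [-1, 1].
    return 2.0 * (qvel - act_low) / (act_high - act_low) - 1.0
```

Comparing the trajectories produced by the two modes against the predicted actions is one way to confirm which convention the environment actually expects.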
I haven't tested the cases you are using it in now. This might be a good chance for me to test those. I'm traveling this week; I'll be able to take a look next week.
Hello,

I am trying to set up a task-space training pipeline for the Franka Kitchen environments.

My understanding is that the input to RoboHive environments is in joint space. In the teleop script, the input to the `rpFrankaRobotiqData-v0` environment is simply the normalized `qpos`. This does not work for the Franka Kitchen tasks.

I have a trained torchRL agent which predicts the `actions`, `qpos`, and `qvel`. I found that `env.robot._act_mode` is `"vel"` for the Franka Kitchen environments, so I expected the `action` to be the normalized `qvel`, i.e. `action = env.robot.normalize_actions(qvel)`. This does not work and the robot does not move as expected. What am I doing wrong?
PS: This also seems like a bug in the `normalize_actions` function. Shouldn't it be:

instead of