
Target Image creation is not working #2

Open
user24255 opened this issue May 18, 2022 · 3 comments

@user24255

Hello,

I have created a robot and imported it into the environment.
Screenshot from 2022-05-18 16-10-20
I changed all the actuated joints and the monitored joints in the pipeline. Collect mode is running, but the results are not what they should be: the robot does not complete the task, and for some reason the rendered environment gets corrupted.
replay_buffer_0005
What can be done to correct that?

Thank you

@ogroth
Owner

ogroth commented May 23, 2022

Hi Kastellos,
I assume you're trying to collect pick-and-place data? The sub-routines for the individual steps in the manipulation sequence are defined here. Some constants in that script were tuned specifically to work well with the Fetch arm, so you may need to adjust them. It also looks like your target pads are spawned off the table. Please check that the constants for task generation, i.e. where the objects are spawned at the beginning of each episode, are set correctly in the notebook you're using for generation. Regarding the rendering problems, I admittedly don't know any solutions off the top of my head. Did you include any custom textures or materials which might not be compatible with the mujoco_py renderer?
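One quick way to rule out the off-the-table spawns is a bounds check before each episode. This is only a minimal sketch: `TABLE_BOUNDS`, the coordinate values, and the function name are illustrative assumptions, not constants from the repository.

```python
# Hypothetical sanity check: verify that a sampled object/target spawn
# position lies over the table surface before an episode starts.
# The bounds below are made-up example values in metres.
TABLE_BOUNDS = {"x": (1.05, 1.55), "y": (0.40, 1.10)}

def spawn_on_table(pos, bounds=TABLE_BOUNDS):
    """Return True if an (x, y, z) spawn position lies within the table extent."""
    x, y = pos[0], pos[1]
    return (bounds["x"][0] <= x <= bounds["x"][1]
            and bounds["y"][0] <= y <= bounds["y"][1])

# A target pad sampled outside the y-range would be flagged:
print(spawn_on_table((1.30, 0.75, 0.42)))  # True: on the table
print(spawn_on_table((1.30, 1.60, 0.42)))  # False: off the table in y
```

Asserting this per episode during collection makes mis-tuned spawn constants fail loudly instead of silently producing bad data.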

@user24255
Author

I have not used any custom materials. I made the Kuka arm using the Robosuite environment. In the simulation environment I just swapped in the robot (that file) and made the changes needed for the environment to render; other than that I did not change anything. In gym_pickplace_.py I changed the names of the operated joints and the actuated joints. Regarding the spawn positions of the objects and the goals: they spawn correctly. I only provided an image from the environment, not from gym_pickplace.py. I assumed the robot would not work properly because, as you said, the environment was made for the Fetch arm, but I cannot understand why the rendering would have a problem. Not all the frames look like the one I uploaded:
replay_buffer_0001
About 2/3 of the frames render correctly, but the other 1/3 look like the one I uploaded previously. Is the Gym.env from OpenAI Gym specifically made for the Fetch arm? In your opinion, could the problem be with the actuated joints, or with the fact that I changed their names?

@ogroth
Owner

ogroth commented May 25, 2022

Hi Kastellos,
Regarding the rendering, I admittedly have no better idea beyond what I've already told you. Maybe there's a problem applying textures to your new meshes? Have you tried removing all textures from the robot and rendering those meshes with the default material?
Regarding the joints: the environment is actually a little special in that its operation relies on a special robot0:mocap joint being present in the model at the base of the arm's end-effector. The EEF controls are applied directly to this mocap joint, and the rest of the arm is moved accordingly. Can you double-check that this mocap joint sits at the correct position in your imported robot model, i.e. at the same relative position w.r.t. the EEF base as in the Fetch, as defined here?
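That relative-position check can be done programmatically at environment reset. The sketch below is an assumption-laden illustration: the body names (`robot0:mocap`, `robot0:gripper_link`), the reference offset, and the helper name are hypothetical, not taken from the repository; only the idea of comparing the mocap body's offset against the Fetch reference comes from the comment above.

```python
# Hypothetical check that the robot0:mocap body sits at the expected
# offset from the end-effector base, mirroring the Fetch layout.
def mocap_offset_ok(mocap_pos, eef_pos, ref_offset=(0.0, 0.0, 0.0), tol=1e-3):
    """True if (mocap_pos - eef_pos) matches ref_offset within tolerance."""
    return all(abs((m - e) - r) <= tol
               for m, e, r in zip(mocap_pos, eef_pos, ref_offset))

# With mujoco_py these positions could be read at reset, e.g.
# (body names are illustrative and depend on your MJCF):
#   mocap_pos = sim.data.get_body_xpos("robot0:mocap")
#   eef_pos   = sim.data.get_body_xpos("robot0:gripper_link")
print(mocap_offset_ok((1.34, 0.75, 0.55), (1.34, 0.75, 0.55)))  # True: coincident
print(mocap_offset_ok((1.34, 0.75, 0.65), (1.34, 0.75, 0.55)))  # False: 10 cm off in z
```

If the mocap body is misplaced, the weld between mocap and gripper pulls the arm into unintended configurations, which would explain the failed manipulation even when the joints themselves are named correctly.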
