Using and registering RoboHive environments to the Ray API #103

Open
ChristosPeridis opened this issue Jul 6, 2023 · 11 comments
Labels
enhancement New feature or request

Comments

@ChristosPeridis

Hello dear Dr. Vikash,

I hope you and everyone in your family are doing well! For conducting Reinforcement Learning experiments I have been using the Ray API, more specifically the algorithms implemented in RLlib and the Tune library for hyperparameter tuning. In the past I integrated and registered the CT-graph benchmark with the Ray API as a custom gym environment. You can see how I performed this in the example code here.

I have developed the following code, which works on the same principles as the code developed for the integration of the CT-graph benchmark:

import robohive  # registers the RoboHive environments with gym on import
import gym
import ray
ray.init()

resources = ray.cluster_resources()
print(resources)

def env_creator(env_config={}):
    env = gym.make('FrankaPickPlaceFixed-v0')
    env.reset()
    return env

from ray.tune.registry import register_env
register_env("RoboHive_Pick_Place_0", env_creator)

sac_config = {
    "env": "FrankaPickPlaceFixed-v0",  # Specify your environment class here
    "framework": "torch",
    "num_workers": 4,
    "num_gpus": 1,
    "monitor": True,
    # Add more SAC-specific config here
}

from ray import tune

analysis = tune.run(
    "SAC",
    config=sac_config,
    stop={"training_iteration": 100},  # Specify stopping criteria
    checkpoint_at_end=True,
)

However, the above code throws the following error:

(RolloutWorker pid=318903) ray::RolloutWorker.__init__() (pid=318903, ip=158.125.234.46, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7f878b6db6a0>)
(RolloutWorker pid=318903) File "/home/lunet/cocp5/roboHive200/lib/python3.10/site-packages/ray/rllib/env/utils.py", line 54, in gym_env_creator
(RolloutWorker pid=318903) return gym.make(env_descriptor, **env_context)
(RolloutWorker pid=318903) File "/home/lunet/cocp5/roboHive200/lib/python3.10/site-packages/gym/envs/registration.py", line 156, in make
(RolloutWorker pid=318903) return registry.make(id, **kwargs)
(RolloutWorker pid=318903) File "/home/lunet/cocp5/roboHive200/lib/python3.10/site-packages/gym/envs/registration.py", line 100, in make
(RolloutWorker pid=318903) spec = self.spec(path)
(RolloutWorker pid=318903) File "/home/lunet/cocp5/roboHive200/lib/python3.10/site-packages/gym/envs/registration.py", line 142, in spec
(RolloutWorker pid=318903) raise error.UnregisteredEnv('No registered env with id: {}'.format(id))
(RolloutWorker pid=318903) gym.error.UnregisteredEnv: No registered env with id: FrankaPickPlaceFixed-v0

Why is the system unable to detect the registered RoboHive environment? Is something hidden in the layered complexity of the API that I have not yet understood? Which tutorials or other material would you suggest I study to better understand the structure of the RoboHive API, how it works, and how I could modify it?

Thank you very much in advance for the valuable help!!!

Kind regards,

Christos Peridis

@vikashplus
Owner

Hi Christos,
Thanks for your interest in RoboHive.
All RoboHive environments are, at their core, vanilla gym environments, so I don't think you need anything more than import robohive to start making and interacting with RoboHive's environments.

I haven't used Ray, so it's a bit hard for me to pinpoint. If I have to guess, the ray worker isn't able to find RoboHive. Two things I'd suggest --

  1. Try loading a default gym env to ensure that gym registration is working well. (I'm sure you tried this already, and this is not the issue.)
  2. Before making the env with env = gym.make('FrankaPickPlaceFixed-v0'), try import robohive on the line before (see the sketch below). My guess is that this will
    • either fix the issue, as the RoboHive import gets forced onto the ray worker before it makes the env,
    • or throw a "can't find robohive" error, at which point you need to check why RoboHive is not available to the workers.
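
Something like this untested sketch (I haven't used Ray, so treat the registered-name detail as an educated guess rather than a confirmed recipe):

from ray.tune.registry import register_env

def env_creator(env_config=None):
    # Force the RoboHive import inside the Ray worker process, so the gym
    # registration happens in whichever process actually builds the env.
    import robohive  # noqa: F401
    import gym
    return gym.make('FrankaPickPlaceFixed-v0')

register_env("RoboHive_Pick_Place_0", env_creator)

# If I read the RLlib docs correctly, the config should then reference the
# registered name ("env": "RoboHive_Pick_Place_0") rather than the raw gym id;
# otherwise RLlib calls gym.make() in workers that never imported robohive.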

@ChristosPeridis
Author

ChristosPeridis commented Jul 13, 2023

Hello dear Dr. Vikash,

How are you? I hope you and everyone in your family are well and healthy!!! I am writing to let you know that I managed to overcome the above issue with the Ray API. The problem was resolved after creating the conda environment with Python 3.9 (I have given more details regarding this environment in the open issue #102). In this conda environment I followed your second piece of advice (I had indeed already tested the first) and inserted the import robohive command inside the env_creator() function, on the line before the gym.make() command. The FrankaPickPlaceFixed-v0 environment was then successfully created inside the Ray worker. Unfortunately, another error was thrown later, regarding the building of the mujoco-py library:

(PPO pid=27623) INFO:root:running build_ext
(PPO pid=27623) INFO:root:building 'mujoco_py.cymj' extension
(PPO pid=27623) INFO:root:gcc -pthread -B /home/cocp5/anaconda3/envs/rhRL94/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/cocp5/anaconda3/envs/rhRL94/include -I/home/cocp5/anaconda3/envs/rhRL94/include -fPIC -O2 -isystem /home/cocp5/anaconda3/envs/rhRL94/include -fPIC -I/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py -I/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/binaries/linux/mujoco210/include -I/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/numpy/core/include -I/home/cocp5/anaconda3/envs/rhRL94/include/python3.9 -c /home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/cymj.c -o /home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/generated/_pyxbld_2.0.2.13_39_linuxcpuextensionbuilder/temp.linux-x86_64-cpython-39/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/cymj.o -fopenmp -w
(PPO pid=27623) INFO:root:gcc -pthread -B /home/cocp5/anaconda3/envs/rhRL94/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/cocp5/anaconda3/envs/rhRL94/include -I/home/cocp5/anaconda3/envs/rhRL94/include -fPIC -O2 -isystem /home/cocp5/anaconda3/envs/rhRL94/include -fPIC -I/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py -I/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/binaries/linux/mujoco210/include -I/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/numpy/core/include -I/home/cocp5/anaconda3/envs/rhRL94/include/python3.9 -c /home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/gl/osmesashim.c -o /home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/generated/_pyxbld_2.0.2.13_39_linuxcpuextensionbuilder/temp.linux-x86_64-cpython-39/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/gl/osmesashim.o -fopenmp -w
(PPO pid=27623) INFO:root:gcc -pthread -B /home/cocp5/anaconda3/envs/rhRL94/compiler_compat -shared -Wl,-rpath,/home/cocp5/anaconda3/envs/rhRL94/lib -Wl,-rpath-link,/home/cocp5/anaconda3/envs/rhRL94/lib -L/home/cocp5/anaconda3/envs/rhRL94/lib -L/home/cocp5/anaconda3/envs/rhRL94/lib -Wl,-rpath,/home/cocp5/anaconda3/envs/rhRL94/lib -Wl,-rpath-link,/home/cocp5/anaconda3/envs/rhRL94/lib -L/home/cocp5/anaconda3/envs/rhRL94/lib /home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/generated/_pyxbld_2.0.2.13_39_linuxcpuextensionbuilder/temp.linux-x86_64-cpython-39/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/cymj.o /home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/generated/_pyxbld_2.0.2.13_39_linuxcpuextensionbuilder/temp.linux-x86_64-cpython-39/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/gl/osmesashim.o -L/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/binaries/linux/mujoco210/bin -Wl,--enable-new-dtags,-R/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/binaries/linux/mujoco210/bin -lmujoco210 -lglewosmesa -lOSMesa -lGL -o /home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/generated/_pyxbld_2.0.2.13_39_linuxcpuextensionbuilder/lib.linux-x86_64-cpython-39/mujoco_py/cymj.cpython-39-x86_64-linux-gnu.so -fopenmp
2023-07-12 15:33:05,362 ERROR trial_runner.py:1088 -- Trial PPO_RoboHive_Pick_Place_0_f04a6_00000: Error processing event.
ray.tune.error._TuneNoNextExecutorEventError: Traceback (most recent call last):
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/ray/tune/execution/ray_trial_executor.py", line 1070, in get_next_executor_event
future_result = ray.get(ready_future)
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
return func(*args, **kwargs)
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/ray/_private/worker.py", line 2311, in get
raise value
ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::PPO.__init__() (pid=27623, ip=172.21.2.236, repr=PPO)
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/__init__.py", line 15, in <module>
from mujoco_py.builder import cymj, ignore_mujoco_warnings, functions, MujocoException
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/builder.py", line 499, in <module>
cymj = load_cython_ext(mujoco_path)
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/builder.py", line 106, in load_cython_ext
mod = load_dynamic_ext('cymj', cext_so_path)
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/mujoco_py/builder.py", line 125, in load_dynamic_ext
return loader.load_module()
ImportError: /lib/x86_64-linux-gnu/libLLVM-12.so.1: undefined symbol: ffi_type_sint32, version LIBFFI_BASE_7.0

During handling of the above exception, another exception occurred:

ray::PPO.__init__() (pid=27623, ip=172.21.2.236, repr=PPO)
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/ray/rllib/algorithms/algorithm.py", line 441, in __init__
super().__init__(
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/ray/tune/trainable/trainable.py", line 169, in __init__
self.setup(copy.deepcopy(self.config))
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/ray/rllib/algorithms/algorithm.py", line 566, in setup
self.workers = WorkerSet(
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/ray/rllib/evaluation/worker_set.py", line 169, in __init__
self._setup(
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/ray/rllib/evaluation/worker_set.py", line 259, in _setup
self._local_worker = self._make_worker(
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/ray/rllib/evaluation/worker_set.py", line 941, in _make_worker
worker = cls(
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/ray/rllib/evaluation/rollout_worker.py", line 585, in __init__
self.env = env_creator(copy.deepcopy(self.env_context))
File "", line 3, in env_creator
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/gym/envs/registration.py", line 156, in make
return registry.make(id, **kwargs)
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/gym/envs/registration.py", line 101, in make
env = spec.make(**kwargs)
File "/home/cocp5/anaconda3/envs/rhRL94/lib/python3.9/site-packages/gym/envs/registration.py", line 73, in make
env = cls(**_kwargs)
File "/home/cocp5/robohive/robohive/envs/arms/pick_place_v0.py", line 41, in __init__
super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
File "/home/cocp5/robohive/robohive/envs/env_base.py", line 57, in __init__
self.sim = SimScene.get_sim(model_path)
File "/home/cocp5/robohive/robohive/physics/sim_scene.py", line 56, in get_sim
return SimScene.create(model_handle=model_handle, backend=SimBackend.MUJOCO_PY)
File "/home/cocp5/robohive/robohive/physics/sim_scene.py", line 42, in create
from robohive.physics import mjpy_sim_scene  # type: ignore
File "/home/cocp5/robohive/robohive/physics/mjpy_sim_scene.py", line 15, in <module>
import_utils.mujoco_py_isavailable()
File "/home/cocp5/robohive/robohive/utils/import_utils.py", line 11, in mujoco_py_isavailable
raise ModuleNotFoundError(f"{e}. {help}")
ModuleNotFoundError: /lib/x86_64-linux-gnu/libLLVM-12.so.1: undefined symbol: ffi_type_sint32, version LIBFFI_BASE_7.0.
Options:
(1) follow setup instructions here: https://github.com/openai/mujoco-py/
(2) install mujoco_py via pip (pip install mujoco_py)
(3) install free_mujoco_py via pip (pip install free-mujoco-py)

(PPO pid=27623) 2023-07-12 15:33:05,354 ERROR worker.py:763 -- Exception raised in creation task: The actor died because of an error raised in its creation task, ray::PPO.__init__() (pid=27623, ip=172.21.2.236, repr=PPO)
(PPO pid=27623) [... the worker log repeats the same traceback and mujoco_py options as above ...]

2023-07-12 15:33:05,383 ERROR ray_trial_executor.py:118 -- An exception occurred when trying to stop the Ray actor:
[... the same traceback is printed a third time while the actor is torn down ...]

I have attempted uninstalling and reinstalling mujoco-py from the cloned GitHub repository, but it did not work. I also uninstalled and reinstalled the libffi library on my system, which did not work either. Then I specified the path to the library by setting LD_LIBRARY_PATH via the conda env config vars set command, again without success.

I cannot understand why this issue occurs. mujoco-py builds fine when I simply import it, whether from a plain Python .py file or from a Jupyter notebook. The RoboHive demo commands also work fine (python -m robohive.utils.examine_env -e FrankaReachRandom-v0, python -m robohive.utils.examine_env -e FrankaPickPlaceFixed-v0). Could it be an issue with the multiprocessing that the Ray API uses? Do you have any suggestions on how I might overcome this issue?
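
One thing I am considering trying, in case the workers simply never see the library path, is forwarding it explicitly to the Ray worker processes via Ray's runtime_env mechanism (a sketch only; the value below is illustrative, and I have not yet verified that it reaches the mujoco-py compile step):

import os
import ray

ray.init(runtime_env={
    "env_vars": {
        # Illustrative: forward the driver's LD_LIBRARY_PATH to every worker.
        "LD_LIBRARY_PATH": os.environ.get("LD_LIBRARY_PATH", ""),
    }
})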

Thank you very much in advance for your valuable help and support!!!

Kind regards,

Christos Peridis

@vikashplus
Owner

Try the dev branch. It uses the official mujoco bindings by default. Hopefully that will fix this issue.

@ChristosPeridis
Author

Dear Dr. Vikash,

Thank you very much for your immediate response!
I will clone the repository from the dev branch and start working on it. I will report back to you as soon as I have any updates!

Thank you very much for your valuable help!!!

Kind regards,

Christos Peridis

@ChristosPeridis
Author

ChristosPeridis commented Jul 14, 2023

Hello dear Dr. Vikash,

I hope you are doing well! I am sending you this message to update you on my progress with the dev branch of the RoboHive API. I managed to successfully clone the dev branch in my WSL2 setup. I then proceeded to create a new conda environment for working with the dev branch, following steps similar to those for the Python 3.9 conda environment I provided you with yesterday. However, I faced an issue with the rendering of the environments. I managed to fix it by changing the MUJOCO_GL and PYOPENGL_PLATFORM environment variables from egl, which the PyTorch website suggested, to osmesa. With these changes in place I managed to run the example commands successfully:

python -m robohive.utils.examine_env -e FrankaPickPlaceFixed-v0 &

python -m robohive.utils.examine_env -e FrankaReachRandom-v0
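
For reference, the equivalent in-process setting, which must run before anything imports MuJoCo (a minimal sketch):

import os

os.environ["MUJOCO_GL"] = "osmesa"
os.environ["PYOPENGL_PLATFORM"] = "osmesa"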

I then proceeded to install the Ray API. Initially I installed version 2.2.0, as in yesterday's Python 3.9 conda environment. Ray managed to build the RoboHive environment successfully and progressed further. Unfortunately, dependency errors did not allow the Ray Tune trials to complete. After conducting further research I installed Ray version 1.6.0, which did not hit any dependency errors while running the code. I have made a new environment.yml file for the above-mentioned conda environment, with the RoboHive API built from the dev branch (version 0.6.0) and Ray API version 1.6.0. I have tested the .yml file, and it successfully creates the desired conda environment with no pip or other errors. The recreated environment has also been tested with the example commands, and everything worked fine. I am attaching the .yml file to this message for your reference.

rhRL3906_RoboHive_dev_Ray160_environment.zip

Unfortunately, however, the code did not get much further, because the following error occurred:

(pid=296669) 2023-07-14 16:37:53,591 ERROR worker.py:428 -- Exception raised in creation task: The actor died because of an error raised in its creation task, ray::SAC.__init__() (pid=296669, ip=172.21.2.236)
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/rllib/agents/trainer_template.py", line 136, in __init__
(pid=296669) Trainer.__init__(self, config, env, logger_creator)
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/rllib/agents/trainer.py", line 592, in __init__
(pid=296669) super().__init__(config, logger_creator)
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/tune/trainable.py", line 103, in __init__
(pid=296669) self.setup(copy.deepcopy(self.config))
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/rllib/agents/trainer_template.py", line 146, in setup
(pid=296669) super().setup(config)
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/rllib/agents/trainer.py", line 739, in setup
(pid=296669) self._init(self.config, self.env_creator)
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/rllib/agents/trainer_template.py", line 170, in _init
(pid=296669) self.workers = self._make_workers(
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/rllib/agents/trainer.py", line 821, in _make_workers
(pid=296669) return WorkerSet(
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/rllib/evaluation/worker_set.py", line 103, in init
(pid=296669) self._local_worker = self._make_worker(
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/rllib/evaluation/worker_set.py", line 399, in _make_worker
(pid=296669) worker = cls(
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/rllib/evaluation/rollout_worker.py", line 580, in init
(pid=296669) self._build_policy_map(
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/rllib/evaluation/rollout_worker.py", line 1375, in build_policy_map
(pid=296669) self.policy_map.create_policy(name, orig_cls, obs_space, act_space,
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/rllib/policy/policy_map.py", line 136, in create_policy
(pid=296669) self[policy_id] = class_(observation_space, action_space,
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/rllib/policy/policy_template.py", line 279, in __init__
(pid=296669) self._initialize_loss_from_dummy_batch(
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/rllib/policy/policy.py", line 746, in _initialize_loss_from_dummy_batch
(pid=296669) self._dummy_batch = self._get_dummy_batch_from_view_requirements(
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/rllib/policy/policy.py", line 875, in _get_dummy_batch_from_view_requirements
(pid=296669) ret[view_col] = np.zeros_like([
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/ray/rllib/policy/policy.py", line 876, in
(pid=296669) view_req.space.sample() for _ in range(batch_size)
(pid=296669) File "/home/cocp5/anaconda3/envs/rhRL3906/lib/python3.9/site-packages/gym/spaces/box.py", line 42, in sample
(pid=296669) return self.np_random.uniform(low=self.low, high=high, size=self.shape).astype(self.dtype)
(pid=296669) File "mtrand.pyx", line 1155, in numpy.random.mtrand.RandomState.uniform
(pid=296669) OverflowError: Range exceeds valid bounds

Do you have any suggestions on how I might be able to resolve it ?

Thank you very much for all your valuable help and support!!!

Kind regards,

Christos Peridis

@vikashplus
Copy link
Owner

This seems like an error coming from policy initialization on the Ray and SAC side.
I haven't used them, so it's a bit hard for me to pinpoint. If I have to guess, ensure that the action limits are properly parsed by Ray and SAC. Also, pay attention to the gym version.

@ChristosPeridis
Copy link
Author

ChristosPeridis commented Jul 17, 2023

Hello dear Dr. Vikash,

I hope you are doing well! I further investigated the error, and it seems to occur when the agent tries to create a "dummy batch" from the view requirements of the policy in order to initialize the loss. This procedure takes place in the _initialize_loss_from_dummy_batch function. The dummy-batch creation involves generating random samples from the observation and action spaces of the environment; the purpose is to initialize the policy's loss function with some initial data. The batch creation itself happens in _get_dummy_batch_from_view_requirements, which is called by _initialize_loss_from_dummy_batch. During this procedure, an attempt to generate a random sample from a uniform distribution with bounds that are too large for numpy's uniform function to handle raises the error.
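
The underlying numpy behaviour can be reproduced in isolation (an illustrative snippet only; in the real run the offending bounds come from the policy's view-requirement spaces):

import numpy as np

rng = np.random.RandomState(0)
# uniform() needs high - low to be representable in float64; for bounds near
# the float64 maximum the range overflows and numpy raises the same error:
rng.uniform(low=-1e308, high=1e308, size=(3,))  # OverflowError: Range exceeds valid bounds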

I have checked the observation that FrankaPickPlaceFixed-v0 returns, and it is a Box(63,) of float64 values. Based on research I have conducted, RLlib expects float32 inputs for its algorithms. Is there any hyperparameter in the configuration of FrankaPickPlaceFixed-v0, or any utility function in the RoboHive API, that would normalize the observation and convert it to float32? I would also like to ask what the physical meaning is behind the Box(63,) observation returned by the environment. Is it possible to use an image from the simulator as the observation returned by the environment? And does the action space, a Box(9,) of float32, represent the degrees of freedom of the robot (seven for the arm and two for the gripper)?
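
In the meantime, I am experimenting with casting the observations myself via a plain gym.ObservationWrapper (a sketch under the assumption that RLlib accepts a wrapped env from the env_creator; Float32Observation is my own hypothetical helper, not a RoboHive or RLlib API):

import gym
import numpy as np
import robohive  # noqa: F401 -- registers the RoboHive envs with gym

class Float32Observation(gym.ObservationWrapper):
    """Cast observations, and the declared space, from float64 to float32."""
    def __init__(self, env):
        super().__init__(env)
        box = env.observation_space
        self.observation_space = gym.spaces.Box(
            low=box.low.astype(np.float32),
            high=box.high.astype(np.float32),
            dtype=np.float32,
        )

    def observation(self, observation):
        return observation.astype(np.float32)

def env_creator(env_config=None):
    return Float32Observation(gym.make('FrankaPickPlaceFixed-v0'))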

Regarding the Gym version, I am using OpenAI Gym version 0.13.0, which is the one installed with the RoboHive API and the one specified in the env.yaml file in the setup folder of the RoboHive repository. The requirements.txt for Ray API version 1.6.0 does not pin any specific version of the OpenAI Gym library. Is there anything more I should check regarding the gym version, based on your experience?

Thank you very much for all the valuable and continued help and support!!!

I am always at your disposal for any further clarification or queries that might occur.

Kind regards,

Christos Peridis

@vikashplus
Owner

RoboHive currently supports action normalization (which can be configured during registration) but doesn't support any observation normalization or observation precision. The rationale is that observations are usually quite heterogeneous (linear, angular, pixel, etc.), and it's not obvious that a default choice would work for all observations.

@vmoens: I'm curious, what do we do in TorchRL? Are there any configurations to expose such choices to the user?

@vikashplus
Owner

The env-API/MDP definitions are designed to abstract out the physical meaning to promote generalization in data-driven methods. However, it's still possible to peel back the layers and understand these variables (see also the sketch after this list).

  1. Observations: RoboHive has an informative tutorial on how to query the env for different entities (state observations, proprioception, exteroception). Please take a look.
  2. Box(63,): You can look at the entire observation_dict here, and the keys that make up the observation vector here.
  3. Visual observations: Visual observations are considered exteroception and can be added by simply passing visual keys as an argument. See here for specs on visual keys, and here for an example.
  4. Action space is represented by the actuated degrees of freedom as defined by the simulation. It's populated automatically from the actuator definitions of the env's MuJoCo XML model.
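
For instance, a rough sketch of peeling that layer from Python (attribute names such as obs_keys and obs_dict follow RoboHive's env_base, but treat them as assumptions, since they may differ across versions):

import robohive  # registers the envs with gym
import gym

env = gym.make('FrankaPickPlaceFixed-v0')
env.reset()
print(env.observation_space)   # Box(63,) -- the flattened observation vector
print(env.action_space)        # Box(9,)  -- actuated DoFs from the MJCF model
raw = env.unwrapped
print(getattr(raw, 'obs_keys', None))        # keys concatenated into the vector
print(getattr(raw, 'obs_dict', {}).keys())   # the full observation dictionary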

@ChristosPeridis
Author

Hello dear Dr. Vikash,

Thank you very much for your immediate response!!! Your instructions and the material you provided are very helpful, and I have already started studying and working with them. Based on them I will progress further with the Ray API integration. I will keep you updated on my progress.

Thank you very much for your valuable and continued help and support!!!

Kind regards,

Christos Peridis

@vikashplus
Owner

Thank you for your effort with the Ray Integration. It's going to be a big win. Keep me in the loop as you make progress.

@vikashplus vikashplus added the enhancement New feature or request label Jul 18, 2023