Robust Robot Walker: Learning Agile Locomotion over Challenging Terrains
Shaoting Zhu, Runhan Huang, Linzhan Mou, Hang Zhao
ICRA 2025
- Create a new Python virtual environment with Python 3.6, 3.7, or 3.8 (Python 3.8 is recommended).
- Install PyTorch with CUDA 11.3 support:
  `pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html`
- Download and install Isaac Gym Preview 4 from NVIDIA Isaac Gym.
- Install the required Python bindings: `cd isaacgym/python && pip install -e .`
- Test the installation by running an example: `cd examples && python 1080_balls_of_solitude.py` (a quick Python import check is also sketched after this list).
- For troubleshooting, check the Isaac Gym documentation at `isaacgym/docs/index.html`.
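Before continuing, it can help to confirm that both Isaac Gym and PyTorch import cleanly in the new environment. The snippet below is a minimal sketch assuming the versions installed above; the file name `env_check.py` is only an example and is not part of the repository.

```python
# env_check.py -- minimal sanity check for the environment set up above (illustrative only).
# Note: Isaac Gym must be imported before torch, otherwise it raises an ImportError.
import isaacgym  # noqa: F401
import torch

print(f"PyTorch version: {torch.__version__}")          # expected: 1.10.0+cu113
print(f"CUDA available:  {torch.cuda.is_available()}")  # should be True for GPU training
```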
Clone the repository and install the `rsl_rl` and `legged_gym` packages it contains:

- `git clone https://github.com/zst1406217/robust_robot_walker.git`
- `cd robust_robot_walker`
- `cd rsl_rl && pip install -e .`
- `cd ../legged_gym && pip install -e .`

Install the remaining dependencies:

- `pip install debugpy tqdm numpy==1.19.5 tensorboard==2.0.0 setuptools==58.5.0 protobuf==3.20.0 matplotlib==3.4.0`

Download `mpc_data_no_command.npy` and place it in `./legged_gym` (a quick loading check follows).
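The internal layout of `mpc_data_no_command.npy` is not documented here, so the following is only an illustrative sketch for confirming that the file is in the expected place and loads with NumPy when run from the repository root; it prints whatever shape and dtype NumPy reports without assuming anything about the contents.

```python
# inspect_mpc_data.py -- illustrative check that the MPC reference data is in place.
# Run from the repository root; no assumptions are made about the array's layout.
from pathlib import Path
import numpy as np

path = Path("legged_gym/mpc_data_no_command.npy")
assert path.exists(), f"expected MPC reference data at {path}"

data = np.load(path, allow_pickle=True)
print(type(data), getattr(data, "shape", None), getattr(data, "dtype", None))
```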
- Navigate to the `legged_gym` folder: `cd legged_gym`
- The trained policy is stored in `checkpoint.zip`. Unzip it and place the folder in `legged_gym/logs/rrw_a1`.
- Make sure you are in the `robust_robot_walker/legged_gym` directory, then run the benchmark:
  `python legged_gym/scripts/track.py --task a1_bartrack --load_run checkpoint --headless`
This outputs the success rate, average pass time, and average travel distance over 1,000 robots. Because of random seeding and differences between GPU devices, the results may differ slightly from those reported in the paper.
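The logging format of `track.py` is not shown here; conceptually, the three reported numbers are simple aggregates over the evaluated robots. The sketch below only illustrates that aggregation, and its argument names and the masking of pass time by success are assumptions rather than the script's actual implementation.

```python
# Illustrative aggregation of the three benchmark metrics over N robots.
# Not the actual implementation in legged_gym/scripts/track.py.
import numpy as np

def summarize(success, pass_time, travel_dist):
    """success: bool array (N,); pass_time, travel_dist: float arrays (N,)."""
    success = np.asarray(success, dtype=bool)
    pass_time = np.asarray(pass_time, dtype=float)
    travel_dist = np.asarray(travel_dist, dtype=float)
    return {
        "success_rate": success.mean(),
        # average pass time is only meaningful for robots that actually passed
        "avg_pass_time": pass_time[success].mean() if success.any() else float("nan"),
        "avg_travel_distance": travel_dist.mean(),
    }
```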
To train the walking policy on a flat plane, make sure you are in the `robust_robot_walker/legged_gym` directory (`cd legged_gym`), then run:
`python legged_gym/scripts/train.py --task a1_remotegoal --headless`
To train stage 1 of the robust-robot-walker policy:

- Update the `load_run` parameter in `legged_gym/envs/a1/a1_mix_goal_stage1_config.py` (line 182) with the log directory produced by the plane-walking run above, for example `load_run = "Mar14_12-59-47_WalkByRemoteGoal_noResume"` (a config sketch follows this list).
- Make sure you are in the `robust_robot_walker/legged_gym` folder and run:
  `python legged_gym/scripts/train.py --task a1_mixgoalstage1 --headless`
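In legged_gym-style configs, `load_run` typically sits in the runner section of the PPO config class. The sketch below shows roughly what the edit looks like; apart from `load_run` itself, the class and field names follow common legged_gym conventions and are assumptions about this repository's config layout, not verified against the actual file.

```python
# Illustrative sketch of the edit in a1_mix_goal_stage1_config.py (around line 182).
# Only `load_run` comes from the instructions above; the surrounding names are assumptions.
class A1MixGoalStage1CfgPPO:      # hypothetical class name
    class runner:
        resume = True
        # point this at the log directory produced by the plane-walking run:
        load_run = "Mar14_12-59-47_WalkByRemoteGoal_noResume"
```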
To train stage 2:

- Update the `load_run` parameter in `legged_gym/envs/a1/a1_mix_goal_stage2_config.py` (line 182) with the log directory produced by the stage-1 run above.
- Run:
  `python legged_gym/scripts/train.py --task a1_mixgoalstage2 --headless`
To test and visualize the walking policy, from the `legged_gym` directory (`cd legged_gym`) run:
`python legged_gym/scripts/play.py --task a1_mixgoalstage2 --load_run <your_log_dir>`
Please refer to Deploy.md for instructions on deploying the policy to a real robot.
This project builds on the following open-source repositories:

- https://github.com/leggedrobotics/legged_gym
- https://github.com/ZiwenZhuang/parkour
You can find our paper on arXiv. If you find this code or the paper useful for your research, please consider citing:
@article{zhu2024robust,
  title={Robust Robot Walker: Learning Agile Locomotion over Tiny Traps},
  author={Zhu, Shaoting and Huang, Runhan and Mou, Linzhan and Zhao, Hang},
  journal={arXiv preprint arXiv:2409.07409},
  year={2024}
}