Object placement in robotic tasks is inherently challenging due to the diversity of object geometries and placement configurations. To address this, we propose AnyPlace, a two-stage method trained entirely on synthetic data, capable of predicting a wide range of feasible placement poses for real-world tasks. Our key insight is that by leveraging a Vision-Language Model (VLM) to identify rough placement locations, we focus only on the relevant regions for local placement, which enables us to train the low-level placement-pose-prediction model to capture diverse placements efficiently.
Yuchi Zhao, Miroslav Bogdanovic, Chengyuan Luo, Steven Tohme, Kourosh Darvish, Alán Aspuru-Guzik, Florian Shkurti, Animesh Garg
We provide the official implementation of AnyPlace, including:
- Training AnyPlace low-level pose prediction models
- Evaluation pipeline in IsaacLab for executing pick and place
- Interaction with AnyPlace high-level placement location prediction
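To make the two-stage design concrete, below is a minimal, self-contained sketch of the inference flow. The function names and dummy data are illustrative placeholders, not the actual repo API:

```python
# Illustrative sketch of the two-stage AnyPlace inference flow.
# All function names and data below are placeholders, not the actual repo API.
import numpy as np

def query_vlm_for_rough_location(rgb_image, prompt):
    # High-level stage (hypothetical): a VLM proposes a rough 3D placement
    # location; here we simply return a fixed point.
    return np.array([0.4, 0.0, 0.1])

def crop_around(points, center, radius=0.10):
    # Keep only the scene points near the proposed location, so the low-level
    # model only reasons about the locally relevant geometry.
    return points[np.linalg.norm(points - center, axis=1) < radius]

def predict_placement_poses(object_points, region_points, num_samples=8):
    # Low-level stage (hypothetical): a diffusion model samples diverse 6-DoF
    # placement poses; identity transforms stand in for real predictions here.
    return [np.eye(4) for _ in range(num_samples)]

scene = np.random.rand(10000, 3)                 # full scene point cloud
obj = np.random.rand(2048, 3) * 0.05             # object point cloud
loc = query_vlm_for_rough_location(None, "insert the vial into the rack")
region = crop_around(scene, loc)
poses = predict_placement_poses(obj, region)
print(f"{len(poses)} candidate poses from {region.shape[0]} cropped scene points")
```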
Setup for AnyPlace Low-level Pose Prediction Models
- Clone the repo and follow the instructions below:
conda create -n anyplace python=3.8
conda activate anyplace
pip install -r base_requirements.txt
pip install -e .
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
- Install the torch-scatter, torch-cluster, and knn_cuda packages:
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.13.0+cu117.html --no-index
pip install torch-cluster -f https://data.pyg.org/whl/torch-1.13.0+cu117.html --no-index
pip install https://github.com/unlimblue/KNN_CUDA/releases/download/0.2/KNN_CUDA-0.2-py3-none-any.whl
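After installation, a quick sanity check (not part of the repo) can confirm that the CUDA-dependent packages import and run:

```python
# Quick sanity check (not part of the repo) that the CUDA-dependent packages
# installed correctly and can run a small GPU query.
import torch
import torch_scatter   # noqa: F401
import torch_cluster   # noqa: F401
from knn_cuda import KNN

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # Tiny k-nearest-neighbor query on the GPU to exercise the knn_cuda kernel.
    ref = torch.rand(1, 100, 3).cuda()     # (batch, num_ref, dim)
    query = torch.rand(1, 10, 3).cuda()    # (batch, num_query, dim)
    dist, idx = KNN(k=4, transpose_mode=True)(ref, query)
    print("knn_cuda OK, neighbor index shape:", tuple(idx.shape))
```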
- Update and source the setup script to set the required environment variables:
source anyplace_env.sh
- Follow the instructions in README.md to set up AnyPlace high-level placement location prediction.
- Follow the instructions in README.md to set up the AnyPlace IsaacLab Pick and Place evaluation pipeline.
- Download the AnyPlace synthetic dataset from Hugging Face.
- Configure wandb on your machine.
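If you prefer to configure wandb from Python rather than the CLI, the standard wandb API works; the project name below is arbitrary:

```python
# Optional: configure and smoke-test wandb from Python instead of the CLI.
import wandb

wandb.login()  # prompts for (or reuses) your API key
run = wandb.init(project="anyplace-debug", job_type="sanity-check")  # arbitrary project name
run.log({"setup_ok": 1})
run.finish()
```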
- Run the following commands to launch the single-task and multi-task training:
# for single-task training
cd training/
python train_full.py -c anyplace_cfgs/vial_inserting/anyplace_diffusion_molmocrop.yaml # config files for different tasks can be found under config/train_cfgs/anyplace_cfgs
# for multi-task training
cd training/
python train_full.py -c anyplace_cfgs/multitask/anyplace_diffusion_molmocrop_mt.yaml
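To run single-task training for several tasks back to back, a small launcher along these lines can help. Run it from training/; it assumes the -c paths follow the same layout as the commands above, with per-task folders under config/train_cfgs/anyplace_cfgs:

```python
# Convenience launcher (not part of the repo): run single-task training for
# several tasks sequentially, using the same -c path convention as above.
import subprocess

tasks = ["vial_inserting"]  # add the other task folder names you want to train
for task in tasks:
    cfg = f"anyplace_cfgs/{task}/anyplace_diffusion_molmocrop.yaml"
    print(f"Training {task} with config {cfg}")
    subprocess.run(["python", "train_full.py", "-c", cfg], check=True)
```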
For evaluation, first obtain the predicted placement poses by running AnyPlace models, then execute the predicted placements using our IsaacLab Pick and Place pipeline.
- Set up the meshcat visualizer to visualize the object pointclouds at each diffusion step:
meshcat-server # use port 7000 by default
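To confirm the visualizer is reachable before running evaluation, you can push a dummy point cloud through meshcat-python (standard meshcat API; the data here is random):

```python
# Push a dummy point cloud to a running meshcat-server to verify the viewer.
# The default ZMQ URL below is what meshcat-server prints on startup.
import numpy as np
import meshcat
import meshcat.geometry as g

vis = meshcat.Visualizer(zmq_url="tcp://127.0.0.1:6000")
points = np.random.rand(3, 5000).astype(np.float32)  # 3 x N positions
colors = np.random.rand(3, 5000).astype(np.float32)  # 3 x N RGB values in [0, 1]
vis["debug/pointcloud"].set_object(g.PointCloud(points, colors, size=0.01))
print("Open the viewer URL printed by meshcat-server (port 7000 by default).")
```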
- Download the AnyPlace evaluation dataset from Hugging Face; it contains object USD files, RGBD images, and object pointclouds.
- Update the file paths in the config files, then run the AnyPlace model:
cd eval/
python evaluate_official.py -c anyplace_eval/vial_inserting/anyplace_diffusion_molmocrop_multitask.yaml # config files for different tasks can be found under config/full_eval_cfgs/anyplace_eval
- To visualize pointclouds at their final predicted placement poses, first update the data folder path in 'visualize_placement.py', then run:
cd eval/
python visualize_placement.py
Follow the instructions here to run the AnyPlace IsaacLab Pick and Place evaluation pipeline.
Model checkpoints can also be downloaded from Hugging Face.
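If you prefer to fetch the Hugging Face assets programmatically, huggingface_hub's snapshot_download works; the repo id below is a placeholder, so substitute the actual dataset or checkpoint id referenced by this README:

```python
# Sketch for fetching AnyPlace assets from Hugging Face programmatically.
# The repo_id below is a placeholder; substitute the actual dataset or
# checkpoint id referenced by this README.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<org>/<anyplace-checkpoints>",  # placeholder id
    repo_type="model",                       # use "dataset" for the datasets
)
print("Downloaded to:", local_dir)
```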
Coming soon...
This repository is released under the MIT license. See LICENSE for additional details.
We thank the authors of the following repositories for open-sourcing their work, which this project builds on:
@misc{zhao2025anyplacelearninggeneralizedobject,
title={AnyPlace: Learning Generalized Object Placement for Robot Manipulation},
author={Yuchi Zhao and Miroslav Bogdanovic and Chengyuan Luo and Steven Tohme and Kourosh Darvish and Alán Aspuru-Guzik and Florian Shkurti and Animesh Garg},
year={2025},
eprint={2502.04531},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2502.04531},
}