Liren Jin1, Xingguang Zhong1, Yue Pan1, Jens Behley1, Cyrill Stachniss1, Marija Popovic2
1 University of Bonn, 2 TU Delft
We tested the following setup on Ubuntu 20 with CUDA 11.8.
Clone the ActiveGS repo:
git clone [email protected]:dmar-bonn/active-gs.git
cd active-gs
(optional) If your machine has a different CUDA version, you might need to change the corresponding PyTorch version and index URL in envs/build.sh:
# for example, for CUDA 12.1:
pip install torch==2.1.2 torchvision==2.1.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu121
# you can find more compatible versions at https://pytorch.org/get-started/previous-versions/
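If you are unsure which CUDA version your machine provides, you can check it, for example, with:
nvcc --version
# or query the driver via:
nvidia-smi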
Create and activate the environment:
bash envs/build.sh
conda activate active-gs
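(optional) As a quick sanity check that the environment was built with GPU support, you can verify that PyTorch sees CUDA:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# should print the installed PyTorch version and "True" on a CUDA-capable machine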
Download the full Replica dataset:
bash data/replica_download.sh
(optional) If you only want to quickly try one example rather than downloading the whole dataset, use:
bash data/replica_example.sh
This will only download the office0 scene.
Run an online mission:
python main.py planner=PLANNER_TYPE scene=SCENE_NAME
# example:
# python main.py planner=confidence scene=replica/office0 use_gui=true
If use_gui is set to true, you should be able to see a GUI running.
To visualize the built GS map:
python visualize.py -G PATH_TO_GS_MAP
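For example (the map path below is hypothetical; point -G at a GS map saved during your mission):
# example (hypothetical path):
# python visualize.py -G outputs/confidence/replica/office0/gs_map.ply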
- Resume/Pause: click to pause or resume the online mission.
- Stop/Record: click to enter camera path recording mode. Any movement of the camera will be recorded and saved in outputs_gui/saved_paths. You can set the ID of the camera path to be recorded by choosing a number under "Camera Path", and click "Reset" to delete it. Click "Fly" in "Camera Follow Options" to control the camera via the keyboard (WASD and arrow keys).
- Camera Pose: you can save an individual camera pose by selecting an ID for it and clicking "Save". Similarly, click "Load" to move the camera to a saved pose.
- History Views: move the camera to previously planned viewpoints.
- 3D Objects: click to visualize 3D objects. Different submaps are shown under "Voxel Map"; "Mesh" is only available if a corresponding mesh has also been loaded.
- Rendering Options: click to show rendering results from the Gaussian Splatting map. Only one rendering type among "Depth", "Confidence", "Opacity", "Normal", and "D2N" can be visualized at a time.
For rendering evaluation, you first need to generate test views for each scene:
python data_generation.py scene=SCENE_NAME
# example:
# python data_generation.py scene=replica/office0
This will create a folder containing the intrinsics and extrinsics of the test views for the selected scene.
For mesh evaluation, you first need to extract meshes from the saved GS maps:
python mesh_generation.py planner=PLANNER_TYPE scene=SCENE_NAME
# example:
# python mesh_generation.py planner=confidence scene=replica/office0
This will generate corresponding mesh files for each GS map saved during the mission.
To compute the evaluation metrics:
python eval.py planner=PLANNER_TYPE scene=SCENE_NAME test_folder=TEST_FOLDER
# example:
# python eval.py planner=confidence scene=replica/office0 test_folder=dataset/replica/office0
We also provide a shell script to run a complete experiment:
bash run.sh
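The exact content of run.sh may differ, but a complete experiment essentially chains the steps above. Below is a minimal sketch under that assumption, using the planner and scene names from the examples; it is illustrative, not the provided script:
#!/bin/bash
# minimal sketch of a complete experiment (illustrative, not the provided run.sh)
for PLANNER in confidence; do
  for SCENE in replica/office0; do
    python main.py planner=$PLANNER scene=$SCENE             # run the online mission
    python data_generation.py scene=$SCENE                   # generate test views
    python mesh_generation.py planner=$PLANNER scene=$SCENE  # extract meshes from saved GS maps
    python eval.py planner=$PLANNER scene=$SCENE test_folder=dataset/$SCENE
  done
done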
office0 | office2 | office3 | office4 |
---|---|---|---|
office0.mp4 | office2.mp4 | office3.mp4 | office4.mp4 |

room0 | room1 | room2 | hotel0 |
---|---|---|---|
room0.mp4 | room1.mp4 | room2.mp4 | hotel0.mp4 |
Parts of the code are based on MonoGS and GaussianSurfels. We thank the authors for open-sourcing their code.
Liren Jin, [email protected]
This work has been fully funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy, EXC-2070 – 390732324 (PhenoRob).