Splatfinity

Authors (in alphabetical order)

Chloe Xin DAI
Pablo von Baum Garcia
Etienne GILLE

Reference
KAIST-CS479-Assignment3-Gaussian Splatting: Point-Based Radiance Fields

Project Demo

demo animation

Code Structure

This codebase is organized as the following directory tree. Important: the tree will only look like this after all unnecessary folders have been removed (by running preprocess.py).

Splatfinity
│
├── camera_input_images (Your Camera input files)
├── data
│   ├── nubuzuki_only_v2
│   │   └── nubuzuki_only_v2.json
│   └── nubuzuki_only_v2.ply 
├── rendering_outputs/nubuzuki_only_v2
├── simple-knn
├── src
│   ├── camera.py
│   ├── constants.py
│   ├── renderer.py
│   ├── rgb_metrics.py
│   ├── scene.py
│   └── sh.py
├── convertor.py
├── path_creator.py
├── preprocess.py
├── render.py
└── README.md
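Before running the rendering scripts, it can be useful to verify that the cleaned-up tree actually matches the layout above. The following is a minimal sketch of such a preflight check; `layout_ok` and the `REQUIRED` list are hypothetical helpers (file paths copied from the tree), not part of this repo:

```python
from pathlib import Path

# Hypothetical preflight helper: file names are taken from the tree above.
REQUIRED = [
    "data/nubuzuki_only_v2/nubuzuki_only_v2.json",
    "data/nubuzuki_only_v2.ply",
    "src/renderer.py",
    "render.py",
]

def layout_ok(root: str) -> bool:
    """Return True only if every required file from the tree exists under root."""
    return all((Path(root) / rel).is_file() for rel in REQUIRED)
```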

Preprocessing

  • If you only want to render the final output, skip ahead to the Mirror Rendering section.

1. Environment Setup

conda create -n nerfstudio_env -c conda-forge python=3.10 -y
conda activate nerfstudio_env
pip install nerfstudio
pip install pillow-heif
pip install tqdm
pip install git+https://github.com/nerfstudio-project/[email protected]
conda install -c conda-forge colmap -y
conda install -c conda-forge ffmpeg -y
conda install \
  pytorch==2.5.1 \
  torchvision==0.20.1 \
  torchaudio==2.5.1 \
  pytorch-cuda=11.8 \
  -c pytorch -c nvidia \
  -y

2. Preprocessing Pipeline Options

Option A: Automated Preprocessing (steps 2.1–2.5)

The script only supports the following GPUs: RTX 4090, A100, and A6000. The scene name for our scene is nubuzuki_only_v2. For input_dir, you can download the input pictures (converted or unconverted) from: https://drive.google.com/drive/folders/1zehi2jmguVz13y1qFWzGgW9K9I2LFjAj

  python preprocess.py  --remove_all --convert --colmap --train --ply --scene_name "<YOUR_SCENE_NAME>" --input_dir "<PATH_TO_YOUR_FOLDER>" --GPU "<GPU_NAME>"
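The command uses one boolean flag per pipeline stage (convert, COLMAP, training, ply export). A minimal sketch of how such a flag-driven CLI can be parsed — flag names are copied from the command above, but the real preprocess.py may parse them differently:

```python
import argparse

# Sketch of the flag-per-stage interface shown above (assumed, not the
# actual preprocess.py implementation).
def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(description="Automated preprocessing pipeline")
    for flag in ("--remove_all", "--convert", "--colmap", "--train", "--ply"):
        p.add_argument(flag, action="store_true")  # each switch enables one stage
    p.add_argument("--scene_name", required=True)
    p.add_argument("--input_dir", required=True)
    p.add_argument("--GPU", default=None)  # e.g. an RTX 4090 / A100 / A6000 label
    return p
```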

Option B: Manual Preprocessing (Step-by-Step)

2.1 Convert Camera Input Images (Optional: HEIC to PNG)

python convertor.py

2.2 Generate Camera Poses with COLMAP

ns-process-data images --data ./camera_input_pics_converted --output-dir ./processed_images_colmap

2.3 GPU Build Configuration

Option 1: for RTX 4090
export MAX_JOBS=1
export TORCH_CUDA_ARCH_LIST="8.9"
Option 2: for A100
export MAX_JOBS=1
export TORCH_CUDA_ARCH_LIST="8.0"
Option 3: for A6000
export MAX_JOBS=1
export TORCH_CUDA_ARCH_LIST="8.6"
export PATH=/usr/local/cuda-12.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
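The three options differ only in the CUDA compute-capability string each GPU requires. The same selection can be sketched in Python; the values are copied from the options above, and `configure_build_env` is a hypothetical helper:

```python
import os

# GPU model -> compute-capability string, as used in TORCH_CUDA_ARCH_LIST above
# (values copied from the three options; extend the table for other GPUs).
GPU_ARCH = {
    "RTX 4090": "8.9",  # Ada Lovelace
    "A100": "8.0",      # Ampere (data center)
    "A6000": "8.6",     # Ampere (workstation)
}

def configure_build_env(gpu: str) -> None:
    """Set the build-time env vars from section 2.3 for the given GPU."""
    os.environ["MAX_JOBS"] = "1"  # serialize compilation to limit memory use
    os.environ["TORCH_CUDA_ARCH_LIST"] = GPU_ARCH[gpu]
```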

2.4 Train with Splatfacto

ns-train splatfacto --data ./processed_images_colmap

2.5 Export the Gaussian Splat

  • Export the ply file:
ns-export gaussian-splat \
  --load-config outputs/processed_images_colmap/splatfacto/{timestamp}/config.yml \
  --output-dir ./export/splat
  • Rename the ply file and copy it into data/:
mv export/splat/splat.ply export/splat/{rename}.ply
cp export/splat/{rename}.ply data/
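The mv + cp step above can also be done in Python, which avoids shell quoting issues when the scene name contains spaces. A sketch, assuming the export directory contains splat.ply; `stage_ply` is a hypothetical helper, not part of this repo:

```python
import shutil
from pathlib import Path

def stage_ply(export_dir: str, new_name: str, data_dir: str = "data") -> Path:
    """Rename the exported splat.ply and copy it into data/ (mirrors mv + cp)."""
    src = Path(export_dir) / "splat.ply"
    renamed = src.with_name(f"{new_name}.ply")
    src.rename(renamed)                         # mv export/splat/splat.ply ...
    dest_dir = Path(data_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(renamed, dest_dir / renamed.name))  # cp ... data/
```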

Mirror Rendering

1. Activate the Conda Environment (Same as CS479 Assignment 3)

conda deactivate
conda activate cs479-gs

cs479-gs Environment Setup (Optional)

conda create --name cs479-gs python=3.10
conda activate cs479-gs
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
pip install torchmetrics[image]
pip install imageio[ffmpeg]
pip install plyfile tyro==0.6.0 jaxtyping==0.2.36 typeguard==2.13.3
pip install simple-knn/.

2. Create the Scene Path

python path_creator.py

3. Render the Scene

python render.py
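Before rendering, it can help to sanity-check that the exported ply actually declares Gaussians. A stdlib-only sketch that reads the PLY header and returns the declared vertex (splat) count; `ply_vertex_count` is a hypothetical helper, not part of this repo:

```python
def ply_vertex_count(path: str) -> int:
    """Return the vertex count declared in a PLY header (ASCII or binary body)."""
    count = 0
    with open(path, "rb") as f:
        for raw in f:                      # header lines are ASCII in both formats
            line = raw.decode("ascii", errors="replace").strip()
            if line.startswith("element vertex"):
                count = int(line.split()[-1])
            if line == "end_header":       # stop before the (possibly binary) body
                break
    return count
```

A count of zero (or a parse failure) usually means the export step produced an empty or truncated file.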
