
MEnsA: Mix-up Ensemble Average for Unsupervised Multi Target Domain Adaptation on 3D Point Clouds

Ashish Sinha | Jonghyun Choi | Paper | Arxiv | ZENODO

📣 Accepted at the Workshop on Learning with Limited Labelled Data (CVPR 2023)

Abstract

Unsupervised domain adaptation (UDA) addresses the problem of distribution shift between an unlabelled target domain and a labelled source domain. While single target domain adaptation (STDA) is well studied in the literature for both 2D and 3D vision tasks, multi-target domain adaptation (MTDA) is barely explored for 3D data, despite wide real-world applications such as autonomous driving systems that must operate under varied geographical and climatic conditions. We establish an MTDA baseline for 3D point cloud data by mixing the feature representations from all domains together and combining them with an ensemble average, a scheme we call Mixup Ensemble Average, or MEnsA. With the mixed representation, a domain classifier learns to better distinguish the feature representations of the source domain from those of the target domains in a shared latent space. In empirical validations on the challenging PointDA-10 dataset, we show a clear benefit of our simple method over previous unsupervised STDA and MTDA methods by large margins (up to 17.10% and 4.76%, respectively, averaged over all domain shifts).
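
To make the idea concrete, here is a minimal PyTorch sketch of the mixup ensemble average (our illustration only, not the repository's code; the function name, the fixed mixup coefficient, and the feature shapes are placeholders, and the actual implementation lives in models/ and src/):

    import torch

    def mensa_mixup(f_src, f_tgts, lam=0.5):
        """Sketch of the Mixup Ensemble Average (MEnsA) idea: mix the
        source feature with each target-domain feature, then take the
        ensemble average of the mixed features.

        f_src  : (B, D) batch of source-domain features
        f_tgts : list of (B, D) feature batches, one per target domain
        lam    : mixup coefficient (fixed here for brevity; in practice
                 it would typically be sampled, e.g. from a Beta prior)
        """
        mixed = [lam * f_src + (1.0 - lam) * f_t for f_t in f_tgts]
        # ensemble average over the per-domain mixups
        return torch.stack(mixed, dim=0).mean(dim=0)

    # e.g. ModelNet10 as source, ScanNet and ShapeNet as the two targets
    f_src = torch.randn(32, 1024)
    f_mix = mensa_mixup(f_src, [torch.randn(32, 1024), torch.randn(32, 1024)])

The resulting mixed representation is what the domain classifier consumes alongside the per-domain features in the shared latent space.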

Pipeline Overview

[Figure: overview of the proposed pipeline]

Proposed Mixup Schema

[Figure: the proposed mixup schema]

3D Point Cloud MTDA Results

[Figure: 3D point cloud MTDA results on PointDA-10]

Ablation w.r.t. $\mathcal{L}$

[Figure: ablation of the loss terms in $\mathcal{L}$]
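
For reference, the loss weights exposed by main.py (-lambda_mix, -lambda_adv, -lambda_mmd; see "Running the code" below) suggest an overall objective of the form below. This is our hedged reading of the flags, not a statement from the paper; see the paper for the exact definitions:

$$\mathcal{L} = \mathcal{L}_{\mathrm{cls}} + \lambda_{\mathrm{mix}}\,\mathcal{L}_{\mathrm{mix}} + \lambda_{\mathrm{adv}}\,\mathcal{L}_{\mathrm{adv}} + \lambda_{\mathrm{mmd}}\,\mathcal{L}_{\mathrm{mmd}}$$

where $\mathcal{L}_{\mathrm{cls}}$ is the supervised classification loss on the labelled source domain and the remaining terms are the mixup, adversarial, and MMD losses weighted by their respective flags.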

Repo Structure

.
├── assets                  # paper figures
├── data                    # root dir for data
│   └── PointDA10_data
│       ├── ModelNet10
│       ├── ScanNet
│       └── ShapeNet
├── Dockerfile              # dockerfile for building the container
├── main.py                 # training script
├── saved
│   ├── ckpt                # model checkpoints
│   └── logs                # tensorboard logs
├── models                  # the models
│   ├── ...
├── prepare_dataset.sh      # fetch dataset
├── README.md
├── requirements.txt        
└── src                     # trainer and utility functions
    └── ...

Dependencies

  • CUDA 10.2
  • cuDNN 7.0
  • Python 3
  • PyTorch 1.7.1
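
Python packages are pinned in requirements.txt; presumably they can be installed with:

pip install -r requirements.txt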

Preparing the Dataset

We use the benchmark point cloud dataset for domain adaptation, PointDA-10, for our experiments. To download the dataset and prepare the folder structure, simply run:

bash prepare_dataset.sh

Running the code

To train with the default parameters, simply run:

python3 main.py

To train a new model with different hyperparameters, use the options below (an example invocation follows the list):

python3 main.py -s <SOURCE DATASET>\
                -e <EPOCHS>\
                -b <BATCH SIZE>\
                -g <GPU IDS>\
                -lr <LEARNING RATE>\
                -mixup <SWITCH MIXING>\
                -mix_sep <TO USE BASELINE MIXUP: SEP>\
                -mix_type <MIXUP VARIANTS: {
                    -1 : MEnsA,
                    0 : Mixup A,
                    1 : Mixup B,
                    2 : Mixup C
                }>\
                -seed <SEED VALUE>\
                -r <CHECKPOINT PATH FOR RESUME TRAINING>\
                -log_interval <LOGGING INTERVAL>\
                -save_interval <SAVE INTERVAL>\
                -datadir <PATH TO DATA>\
                -lambda_mix <LOSS WEIGHT FOR MIXING>\
                -lambda_adv <LOSS WEIGHT FOR ADVERSARIAL LOSS>\
                -lambda_mmd <LOSS WEIGHT FOR MMD LOSS>\
                -gamma <GAMMA WEIGHT>
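
For example, a full training run might look like the following (the dataset name and hyperparameter values here are illustrative, not recommended settings; check main.py for the accepted choices and defaults):

python3 main.py -s modelnet \
                -e 100 \
                -b 32 \
                -g 0 \
                -lr 1e-3 \
                -mixup \
                -mix_type -1 \
                -seed 42 \
                -datadir data/PointDA10_data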

Running the code in docker

  • Build the docker image
    docker build -f Dockerfile -t mtda:pc .
  • Enter the container and mount the dataset
    docker run -it --gpus all -v </path/to/dataset/>:/data mtda:pc
  • Run the code inside the container using
    CUDA_VISIBLE_DEVICES=<GPU ID> python3 main.py -g <GPU ID> -s <SOURCE DATASET> -mixup 
  • Pre-built docker image
    # pull the container from docker hub
    docker pull sinashish/mtda
    # execute the container as usual
    docker run -it --gpus all -v </path/to/dataset/>:/data sinashish/mtda:latest

Cite

If you use any part of the code or find this work useful, please consider citing our work:

@InProceedings{Sinha_2023_CVPR,
    author    = {Sinha, Ashish and Choi, Jonghyun},
    title     = {MEnsA: Mix-Up Ensemble Average for Unsupervised Multi Target Domain Adaptation on 3D Point Clouds},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {4766-4776}
}

Acknowledgements

Some of the code is borrowed from PointDAN.
