
VSL - Variational Shape Learner

This repository contains the source code accompanying the paper Learning a Hierarchical Latent-Variable Model of 3D Shapes by Shikun Liu, C. Lee Giles, and Alexander G. Ororbia II.

For more visual results, please visit our project page here.

Requirements

VSL was written in Python 3.6. To run the code, please make sure the following packages are installed.

  • h5py 2.7
  • matplotlib 1.5
  • mayavi 4.5
  • numpy 1.12
  • scikit-learn 0.18
  • tensorflow 1.0

Most of the above can be installed directly with pip. However, we recommend installing mayavi, which is used for 3D voxel visualization, through a conda environment for simplicity.
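To sanity-check the mayavi installation, the following minimal sketch (not part of this repository) renders a random [30x30x30] occupancy grid as cubes; the data is a placeholder.

# Minimal sketch, not from this repository: render a 30x30x30 binary voxel
# grid with mayavi. `voxels` is placeholder random data.
import numpy as np
from mayavi import mlab

voxels = np.random.rand(30, 30, 30) > 0.95        # placeholder occupancy grid
x, y, z = np.nonzero(voxels)                       # coordinates of occupied cells
mlab.points3d(x, y, z, mode='cube', scale_factor=1.0, color=(0.4, 0.6, 0.9))
mlab.show()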

Dataset

We train the proposed VSL on 3D shapes from ModelNet and from PASCAL 3D+ v1.0 aligned with images in PASCAL VOC 2012. ModelNet is used for general 3D shape learning, including shape generation, interpolation, and classification; PASCAL 3D+ is used only for image reconstruction.

Please download the dataset here: [link].

The dataset contains ModelNet10_res30_raw.mat and ModelNet40_res30_raw.mat, the voxelized versions of ModelNet10/40, and PASCAL3D.mat, the voxelized PASCAL 3D+ models aligned with their images.

Each ModelNet file contains a train and a test split; each entry has 27,001 dimensions (= 1 + 30x30x30), representing [id|voxel] at [30x30x30] resolution.

PASCAL3D.mat contains image_train, model_train, image_test, and model_test, following the splits defined in Kar et al. Each model entry again has 27,001 dimensions in the same [id|voxel] format as ModelNet, and each image entry has dimensions [100, 100, 3], i.e. a 100x100 RGB image.
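As a rough illustration of the layout described above (not code from this repository; the HDF5 key names are assumptions), the ModelNet data could be loaded along these lines:

# Rough sketch, assuming the .mat files are stored in MATLAB v7.3 (HDF5)
# format and use 'train'/'test' as key names; both assumptions should be
# checked against the downloaded files.
import numpy as np
import h5py

with h5py.File('ModelNet40_res30_raw.mat', 'r') as f:
    train = np.array(f['train'])               # h5py may return the transpose of
    if train.shape[0] == 27001:                # the MATLAB array; undo it if so
        train = train.T

labels = train[:, 0].astype(np.int64)              # first value: shape id
voxels = train[:, 1:].reshape(-1, 30, 30, 30)      # remaining 27,000 values: voxel grid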

Parameters

We have also included the pre-trained model parameters, which can be downloaded here.

Training VSL

Please download the dataset, and the parameters if using the pre-trained model, from the links in the previous sections and extract them into the root folder of this repository.

Please use vsl_main.py for the general 3D shape learning experiments and vsl_imrec.py for the image reconstruction experiment. To correctly use the hyper-parameters of the pre-trained model and to be consistent with the experiment settings in the paper, please define the hyper-parameters as follows:

                     ModelNet40   ModelNet10   PASCAL3D (jointly)   PASCAL3D (separately)
global_latent_dim        20           10               10                     5
local_latent_dim         10            5                5                     2
local_latent_num          5            5                5                     3
batch_size              200          100               40                     5
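For illustration only, the ModelNet40 column of the table corresponds to a setting like the following; the dictionary and its key names are ours, and the actual variable names in vsl_main.py may differ.

# Illustration only: the ModelNet40 column of the table above, written as a
# Python dictionary. The scripts may name these hyper-parameters differently.
hyper_params = {
    'global_latent_dim': 20,   # dimension of the global latent code
    'local_latent_dim': 10,    # dimension of each local latent code
    'local_latent_num': 5,     # number of local latent codes
    'batch_size': 200,
}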

The implementation is fully commented. For further details, please consult the paper and the source code.

Training VSL from scratch normally takes about 40 hours on ModelNet on a fast machine, and 20-40 minutes for the separately-trained image reconstruction experiment.

Citation

If you find this code or work useful in your own research, please consider citing the following:

@inproceedings{liu2018learning,
  title={Learning a hierarchical latent-variable model of 3d shapes},
  author={Liu, Shikun and Giles, Lee and Ororbia, Alexander},
  booktitle={2018 International Conference on 3D Vision (3DV)},
  pages={542--551},
  year={2018},
  organization={IEEE}
}

Contact

If you have any questions, please contact [email protected].
