update framework
yanx27 committed Nov 26, 2019
1 parent 31deedb commit 5e31f21
Showing 27 changed files with 2,420 additions and 1,162 deletions.
107 changes: 58 additions & 49 deletions README.md
@@ -2,72 +2,81 @@

This repo is an implementation of [PointNet](http://openaccess.thecvf.com/content_cvpr_2017/papers/Qi_PointNet_Deep_Learning_CVPR_2017_paper.pdf) and [PointNet++](http://papers.nips.cc/paper/7095-pointnet-deep-hierarchical-feature-learning-on-point-sets-in-a-metric-space.pdf) in PyTorch.

## Data Preparation
* Download **ModelNet** [here](http://modelnet.cs.princeton.edu/ModelNet40.zip) for classification and **ShapeNet** [here](https://shapenet.cs.stanford.edu/media/shapenetcore_partanno_segmentation_benchmark_v0_normal.zip) for part segmentation, then uncompress them into `./data/ModelNet` and `./data/ShapeNet` in this directory.
* Run `download_data.sh` to download the prepared **S3DIS** dataset for semantic segmentation and save it in `./data/indoor3d_sem_seg_hdf5_data/`.
## Update
* 2019/11/26: (1) Fixed several errors in the previous code and added data augmentation tricks; classification now reaches 92.5%. (2) Added test code for classification, part segmentation, and semantic segmentation (with visualization). (3) Organized all models under `./models` for easy use. A sketch of the kind of augmentation used is shown below.
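The commit message does not spell out the augmentation tricks; a minimal sketch of the kind of point-cloud augmentation commonly used in this setting (random per-cloud scaling and shifting of the xyz coordinates), with illustrative function names, is:

```
import numpy as np

def random_scale_point_cloud(points, scale_low=0.8, scale_high=1.25):
    # Scale the whole cloud by one random factor (shape preserved).
    scale = np.random.uniform(scale_low, scale_high)
    points[:, 0:3] *= scale
    return points

def random_shift_point_cloud(points, shift_range=0.1):
    # Translate the whole cloud by a small random offset.
    shift = np.random.uniform(-shift_range, shift_range, size=(3,))
    points[:, 0:3] += shift
    return points
```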


## Classification
### PointNet
* `python train_clf.py --model_name pointnet`
### PointNet++
* `python train_clf.py --model_name pointnet2`
### Data Preparation
Download the aligned **ModelNet** [here](https://shapenet.cs.stanford.edu/media/modelnet40_normal_resampled.zip) and save it in `data/modelnet40_normal_resampled/`.
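Each shape in this dataset is a comma-separated `.txt` file; judging from the data loader changed in this commit (it calls `np.loadtxt(..., delimiter=',')` and keeps either 3 or 6 columns), each row holds x, y, z plus the normal components. A minimal sketch of reading one such file (the filename below is a placeholder):

```
import numpy as np

# Placeholder path; real files live under data/modelnet40_normal_resampled/<class>/<class>_XXXX.txt
points = np.loadtxt('airplane_0001.txt', delimiter=',').astype(np.float32)
xyz = points[:, 0:3]      # coordinates
normals = points[:, 3:6]  # per-point normals, used when --normal is passed
print(xyz.shape, normals.shape)
```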

### Run
```
## Check model in ./models folder
## e.g., pointnet2_cls_msg
python train_cls.py --model pointnet2_cls_msg --normal --log_dir pointnet2_cls_msg
python test_cls.py --normal --log_dir pointnet2_cls_msg
```

### Performance
| Model | Accuracy |
|--|--|
| PointNet (Official) | 89.2|
| PointNet (Pytorch) | **89.4**|
| PointNet++ (Official) | **91.9** |
| PointNet++ (Pytorch) | 91.8 |

* PointNet was trained with SGD, learning rate 0.001, batch size 24, for 141 epochs.
* PointNet++ was trained with SGD, learning rate 0.001, batch size 12, for 45 epochs.
| Model | Accuracy |
|--|--|
| PointNet++ (Official) | 91.9 |
| PointNet (Pytorch without normal) | 90.6 |
| PointNet (Pytorch with normal) | 91.4 |
| PointNet2_ssg (Pytorch without normal) | 92.2 |
| PointNet2_ssg (Pytorch with normal) | **92.4** |

## Part Segmentation
### PointNet
* `python train_partseg.py --model_name pointnet`
### PointNet++
* `python train_partseg.py --model_name pointnet2`
### Data Preparation
Download the aligned **ShapeNet** [here](https://shapenet.cs.stanford.edu/media/shapenetcore_partanno_segmentation_benchmark_v0_normal.zip) and save it in `data/shapenetcore_partanno_segmentation_benchmark_v0_normal/`.
### Run
```
## Check model in ./models folder
## e.g., pointnet2_part_seg_msg
python train_partseg.py --model pointnet2_part_seg_msg --normal
python test_partseg.py --normal --log_dir pointnet2_part_seg_msg
```
### Performance
| Model | Instance avg | Class avg | aero | bag | cap | car | chair | earphone | guitar | knife | lamp | laptop | motor | mug | pistol | rocket | skateboard | table |
|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|
|PointNet (Official) |**83.7**|**80.4** |83.4| 78.7| 82.5| 74.9| 89.6 |73| 91.5| 85.9 |80.8| 95.3| 65.2 |93| 81.2| 57.9| 72.8| 80.6|
|PointNet (Pytorch)| 82.4 |78.4| 81.1 |77.8 |83.7 |74.3 |83.3| 65.7| 90.5 |85.1| 78.1 |94.5 |63.7 |91.7 |80.5|56.2 |73.7 |67.5|
|PointNet++ (Official)|**85.1** |**81.9** |82.4|79 |87.7 |77.3| 90.8| 71.8| 91| 85.9| 83.7| 95.3 |71.6| 94.1 |81.3| 58.7| 76.4| 82.6|
|PointNet++ (Pytorch)| 84.1| 81.6 |82.6| 85.7| 89.3 |78.1|86.8| 68.9 |91.6| 88.9| 83.9 |96.8 |70.1 |95.7 |82.8| 59.8 |76.3 |71.1|

* Both PointNet and PointNet++ were trained with Adam, learning rate 0.001, batch size 16, for about 130 epochs, with the learning rate decayed by 0.5 every 20/30 epochs.
* **Class avg** is the mean IoU averaged across object categories, and **instance avg** is the mean IoU averaged across all objects; a sketch of both computations is shown below.
* The official PointNet uses 2048 points for training and 3000 points with normals for testing. The official PointNet++ uses 2048 points with their normals (B x 2048 x 6) for both training and testing.
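To make the averaging above concrete, here is a hedged sketch of how class-average and instance-average mIoU could be computed from per-shape IoUs; the data structure and names are illustrative, not taken from this repo:

```
import numpy as np

def average_ious(shape_ious):
    # shape_ious: dict mapping category name -> list of per-shape IoUs.
    # Instance average: mean over every evaluated shape, regardless of category.
    all_ious = [iou for ious in shape_ious.values() for iou in ious]
    instance_avg = np.mean(all_ious)
    # Class average: average within each category first, then across categories.
    class_avg = np.mean([np.mean(ious) for ious in shape_ious.values()])
    return instance_avg, class_avg
```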

| Model | Instance avg | Class avg |
|--|--|--|
| PointNet (Official) | **83.7** | **80.4** |
| PointNet (Pytorch) | - | - |
| PointNet++ (Official) | **85.1** | **81.9** |
| PointNet++ (Pytorch) | - | - |


## Semantic Segmentation
### PointNet
* `python train_semseg.py --model_name pointnet`
### PointNet++
* `python train_semseg.py --model_name pointnet2`
### Run
```
## Check model in ./models folder
## e.g., pointnet2_sem_seg
python train_semseg.py --model pointnet2_sem_seg --normal
python test_semseg.py --normal --log_dir pointnet2_sem_seg
```
### Performance (test on Area_5)
|Model | Mean IOU | ceiling | floor | wall | beam | column | window | door | chair | table | bookcase | sofa | board | clutter |
|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|
| PointNet (Official) | 41.09|88.8|**97.33**|69.8|0.05|3.92|**46.26**|10.76|**52.61**|**58.93**|**40.28**|5.85|26.38|33.22|
| PointNet (Pytorch) | **44.43**|**91.1**|96.8|**72.1**|**5.82**|**14.7**|36.03|**37.1**|49.36|50.17|35.99|**14.26**|**33.9**|**40.23**|
| PointNet++ (Official) | N/A | | | | | | | | | | | | | |
| PointNet++ (Pytorch) | **52.28**|91.7|95.9|74.6|0.1|18.9|43.3|31.1|73.1|65.8|51.1|27.5|43.8|53.8|
* PointNet was trained with Adam, learning rate 0.001, batch size 24, for 84 epochs.
* PointNet++ was trained with Adam, learning rate 0.001, batch size 12, for 67 epochs. A sketch of this optimizer setup follows below.
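As a rough illustration of the training setup described in these notes, here is a minimal PyTorch sketch assuming Adam with learning rate 0.001; the step-decay schedule mirrors the part-segmentation note above and the stand-in model is purely illustrative:

```
import torch
import torch.nn as nn

model = nn.Linear(9, 13)  # stand-in for a PointNet/PointNet++ semantic-segmentation model
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# Assumed schedule: halve the learning rate every 20 epochs (illustrative step size).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

for epoch in range(84):
    # ... one epoch of training: forward pass, loss, backward, optimizer.step() per batch ...
    scheduler.step()
```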
|Model | Mean IOU |
|--|--|
| PointNet (Official) | 41.09|
| PointNet (Pytorch) | -|
| PointNet++ (Official) |N/A |
| PointNet++ (Pytorch) | -|


## Visualization
### Using show3d_balls.py
```
cd visualizer
bash build.sh #build C++ code for visualization
## run one example
python show3d_balls.py
```
![](/visualizer/pic.png)
### Using pc_utils.py
![](/visualizer/example.jpg)

### Using MeshLab
![](/pic/pic2.png)

## TODO
- [x] PointNet and PointNet++
- [x] Experiment
- [x] Visualization Tool

## Reference By
[halimacc/pointnet3](https://github.com/halimacc/pointnet3)<br>
146 changes: 91 additions & 55 deletions data_utils/ModelNetDataLoader.py
@@ -1,67 +1,103 @@
import numpy as np
import warnings
import h5py
import os
from torch.utils.data import Dataset
warnings.filterwarnings('ignore')

def load_h5(h5_filename):
    f = h5py.File(h5_filename)
    data = f['data'][:]
    label = f['label'][:]
    seg = []
    return (data, label, seg)

def load_data(dir, classification=False):
    data_train0, label_train0, Seglabel_train0 = load_h5(dir + 'ply_data_train0.h5')
    data_train1, label_train1, Seglabel_train1 = load_h5(dir + 'ply_data_train1.h5')
    data_train2, label_train2, Seglabel_train2 = load_h5(dir + 'ply_data_train2.h5')
    data_train3, label_train3, Seglabel_train3 = load_h5(dir + 'ply_data_train3.h5')
    data_train4, label_train4, Seglabel_train4 = load_h5(dir + 'ply_data_train4.h5')
    data_test0, label_test0, Seglabel_test0 = load_h5(dir + 'ply_data_test0.h5')
    data_test1, label_test1, Seglabel_test1 = load_h5(dir + 'ply_data_test1.h5')
    train_data = np.concatenate([data_train0, data_train1, data_train2, data_train3, data_train4])
    train_label = np.concatenate([label_train0, label_train1, label_train2, label_train3, label_train4])
    train_Seglabel = np.concatenate([Seglabel_train0, Seglabel_train1, Seglabel_train2, Seglabel_train3, Seglabel_train4])
    test_data = np.concatenate([data_test0, data_test1])
    test_label = np.concatenate([label_test0, label_test1])
    test_Seglabel = np.concatenate([Seglabel_test0, Seglabel_test1])

    if classification:
        return train_data, train_label, test_data, test_label
    else:
        return train_data, train_Seglabel, test_data, test_Seglabel


def pc_normalize(pc):
    centroid = np.mean(pc, axis=0)
    pc = pc - centroid
    m = np.max(np.sqrt(np.sum(pc**2, axis=1)))
    pc = pc / m
    return pc

def farthest_point_sample(point, npoint):
    """
    Input:
        point: point cloud data, [N, D]
        npoint: number of samples
    Return:
        point: sampled point cloud, [npoint, D]
    """
    N, D = point.shape
    xyz = point[:, :3]
    centroids = np.zeros((npoint,))
    distance = np.ones((N,)) * 1e10
    farthest = np.random.randint(0, N)
    for i in range(npoint):
        centroids[i] = farthest
        centroid = xyz[farthest, :]
        dist = np.sum((xyz - centroid) ** 2, -1)
        # Keep, for every point, its distance to the nearest chosen centroid,
        # then pick the point farthest from the current set as the next centroid.
        mask = dist < distance
        distance[mask] = dist[mask]
        farthest = np.argmax(distance, -1)
    point = point[centroids.astype(np.int32)]
    return point

class ModelNetDataLoader(Dataset):
    def __init__(self, root, npoint=1024, split='train', uniform=False, normal_channel=True, cache_size=15000):
        self.root = root
        self.npoints = npoint
        self.uniform = uniform
        self.catfile = os.path.join(self.root, 'modelnet40_shape_names.txt')

        self.cat = [line.rstrip() for line in open(self.catfile)]
        self.classes = dict(zip(self.cat, range(len(self.cat))))
        self.normal_channel = normal_channel

        shape_ids = {}
        shape_ids['train'] = [line.rstrip() for line in open(os.path.join(self.root, 'modelnet40_train.txt'))]
        shape_ids['test'] = [line.rstrip() for line in open(os.path.join(self.root, 'modelnet40_test.txt'))]

        assert (split == 'train' or split == 'test')
        shape_names = ['_'.join(x.split('_')[0:-1]) for x in shape_ids[split]]
        # list of (shape_name, shape_txt_file_path) tuples
        self.datapath = [(shape_names[i], os.path.join(self.root, shape_names[i], shape_ids[split][i]) + '.txt')
                         for i in range(len(shape_ids[split]))]
        print('The size of %s data is %d' % (split, len(self.datapath)))

        self.cache_size = cache_size  # how many data points to cache in memory
        self.cache = {}  # from index to (point_set, cls) tuple

    def __len__(self):
        return len(self.datapath)

    def rotate_point_cloud_by_angle(self, data, rotation_angle):
        """
        Rotate the point cloud around the up (y) axis by a given angle.
        :param data: Nx3 array, original point cloud
        :param rotation_angle: rotation angle in radians
        :return: Nx3 array, rotated point cloud
        """
        cosval = np.cos(rotation_angle)
        sinval = np.sin(rotation_angle)
        rotation_matrix = np.array([[cosval, 0, sinval],
                                    [0, 1, 0],
                                    [-sinval, 0, cosval]])
        rotated_data = np.dot(data, rotation_matrix)
        return rotated_data

    def _get_item(self, index):
        if index in self.cache:
            point_set, cls = self.cache[index]
        else:
            fn = self.datapath[index]
            cls = self.classes[self.datapath[index][0]]
            cls = np.array([cls]).astype(np.int32)
            point_set = np.loadtxt(fn[1], delimiter=',').astype(np.float32)
            if self.uniform:
                point_set = farthest_point_sample(point_set, self.npoints)
            else:
                point_set = point_set[0:self.npoints, :]

            point_set[:, 0:3] = pc_normalize(point_set[:, 0:3])

            if not self.normal_channel:
                point_set = point_set[:, 0:3]

            if len(self.cache) < self.cache_size:
                self.cache[index] = (point_set, cls)

        return point_set, cls

    def __getitem__(self, index):
        return self._get_item(index)



if __name__ == '__main__':
    import torch

    data = ModelNetDataLoader('/data/modelnet40_normal_resampled/', split='train', uniform=False, normal_channel=True)
    DataLoader = torch.utils.data.DataLoader(data, batch_size=12, shuffle=True)
    for point, label in DataLoader:
        print(point.shape)
        print(label.shape)
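        # Illustrative note (not part of the original file): the models in ./models
        # typically expect channels-first input, so a training step would usually
        # transpose each batch first, e.g.
        #   points = point.transpose(2, 1)   # (B, N, C) -> (B, C, N)
        # before the forward pass.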