
Questions about partition on a large colored point cloud #15

Open
sycmio opened this issue Mar 21, 2018 · 7 comments


sycmio commented Mar 21, 2018

Hi Loic,

When I tried to run the partition step on a large-scale colored point cloud (containing 5,000,000 points), I got the following error:

/home/ubuntu/capstone/superpoint_graph/partition/provider.py:357: UserWarning: genfromtxt: Empty input file: "/home/ubuntu/capstone/semantic3d/data/test_full/colored.txt"
  , skip_header=i_rows)
Traceback (most recent call last):
  File "partition/partition_Semantic3D.py", line 93, in <module>
    xyz, rgb = prune(data_file, args.ver_batch, args.voxel_width)
  File "/home/ubuntu/capstone/superpoint_graph/partition/provider.py", line 361, in prune
    xyz_full = np.array(vertices[:, 0:3], dtype='float32')
IndexError: too many indices for array

The command I ran is: partition/partition_Semantic3D.py --SEMA3D_PATH $SEMA3D_DIR. You can find my file at:

I also tried to reduce the file size (I created another txt file and copied the first 691892 lines of my original colored point cloud into it; you can find it at:) and re-ran partition with the same command. This time the error disappeared, but the number of points didn't decrease at all (the log is Reduced from 691892 to 691892 points (37.92%)). I remember that when I ran partition on a point cloud without color (you can find it at ), the point count dropped drastically (the log is Reduced from 635137 to 289 points (0.04%)). Could you please tell me the reason for this?

Many Thanks,
Yongchi


loicland commented Mar 21, 2018

Right, an error happens if the size of the file is a multiple of the batch size. Good catch! It is fixed in the latest commit.
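The failure mode can be sketched as follows: when the row count is an exact multiple of the batch size, the final np.genfromtxt call reads zero rows and returns an empty 1-D array, so 2-D indexing like vertices[:, 0:3] raises IndexError. A minimal guarded read loop (a hypothetical helper for illustration, not the repository's exact code):

```python
import numpy as np

def read_batches(path, ver_batch=5_000_000):
    """Read a large whitespace-delimited point file in batches,
    stopping cleanly on the empty final batch that occurs when the
    row count is an exact multiple of ver_batch."""
    i_rows = 0
    while True:
        vertices = np.genfromtxt(path, max_rows=ver_batch,
                                 skip_header=i_rows)
        # genfromtxt returns an empty 1-D array when no rows remain;
        # indexing it with vertices[:, 0:3] would raise IndexError
        if vertices.size == 0:
            break
        vertices = np.atleast_2d(vertices)  # guard a single-row batch too
        yield vertices[:, 0:3].astype('float32')
        if vertices.shape[0] < ver_batch:
            break  # short batch: end of file reached
        i_rows += ver_batch
```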

Also, please use the newest commit and use partition.py instead of partition_X.py, as stated in the updated README.

I also see that there is a problem displaying the correct pruning %; it is now fixed as well.

Now to your point clouds. This has nothing to do with color as far as I can tell. Your first file (691892 points) is already subsampled with at least a 5 cm grid (actually about 12 cm), so the pruning does nothing.

The second one (635137 points) is very small, about 50 cm across, with very high precision. Hence the pruning decimates the cloud almost completely.
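For context, the pruning is essentially a voxel-grid subsampling: points falling into the same voxel_width cube are merged into one. A cloud already sampled on a coarser grid is therefore untouched, while a tiny, dense cloud collapses to a handful of voxels. A rough numpy sketch (the repository's libply_c pruning additionally averages colors and labels):

```python
import numpy as np

def voxel_prune(xyz, voxel_width):
    """Keep one representative point per occupied voxel.
    Illustrative sketch only, not the repository's implementation."""
    # map each point to an integer voxel coordinate
    voxels = np.floor(xyz / voxel_width).astype(np.int64)
    # keep the first point encountered in each occupied voxel
    _, keep = np.unique(voxels, axis=0, return_index=True)
    return xyz[np.sort(keep)]
```

A 50 cm cube pruned at 5 cm can keep at most ~10^3 voxels regardless of how many input points it contains, while a cloud already on a 12 cm grid passes through unchanged.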


sycmio commented Mar 21, 2018

Thanks for your answer! Now I can run your partition code successfully. But there are still some small problems:

1. The pruning % is still not correct.
2. I still get a UserWarning: genfromtxt: Empty input file:, although the code keeps running.
3. It seems that the partition on our point cloud ended because the max iteration count was reached. Does that mean the SPG is hard to compute for our point cloud?


loicland commented Mar 22, 2018

  1. You need to recompile libply_c.so.

  2. Yes, that's expected. I'll look at how to silence this warning later.

  3. I wouldn't worry about it; 5 iterations of cut pursuit is more than enough in most cases. Is the partition satisfactory? If not, please post the cut pursuit steps.

It is usually beneficial, both for processing speed and precision, to subsample the input point cloud with --voxel_width in partition.py. You can upsample the results back onto the original point cloud with the upsample 1 parameter of /partition/visualizer.py.
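The subsample/upsample round trip can be illustrated with a brute-force nearest-neighbor lookup (a sketch only; visualizer.py implements this more efficiently): predict labels on the pruned cloud, then give each original point the label of its nearest pruned point.

```python
import numpy as np

def upsample_labels(xyz_full, xyz_pruned, labels_pruned):
    """Assign each original point the label of its nearest pruned
    point (brute-force search; illustrative, not the repo's code)."""
    out = np.empty(len(xyz_full), dtype=labels_pruned.dtype)
    for i, p in enumerate(xyz_full):
        # squared distance from point p to every pruned point
        d = np.sum((xyz_pruned - p) ** 2, axis=1)
        out[i] = labels_pruned[np.argmin(d)]
    return out
```

For real cloud sizes a k-d tree (e.g. scipy.spatial.cKDTree) would replace the inner linear scan.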

@zeroAska

Hi loicland,

I have a similar issue there when running partition.py on semantic3d test_full:

=================
test_full/

1 / 16---> marketplacefeldkirch_station4_intensity_rgb
reading the existing feature file...
reading the existing superpoint graph file...
Timer : 0.0 / 0.0 / 0.0
2 / 16---> stgallencathedral_station6_intensity_rgb
reading the existing feature file...
reading the existing superpoint graph file...
Timer : 0.0 / 0.0 / 0.0
3 / 16---> sg27_station6_intensity_rgb
reading the existing feature file...
reading the existing superpoint graph file...
Timer : 0.0 / 0.0 / 0.0
4 / 16---> marketplacefeldkirch_station7_intensity_rgb
reading the existing feature file...
reading the existing superpoint graph file...
Timer : 0.0 / 0.0 / 0.0
5 / 16---> sg28_station5_xyz_intensity_rgb
reading the existing feature file...
reading the existing superpoint graph file...
Timer : 0.0 / 0.0 / 0.0
6 / 16---> stgallencathedral_station3_intensity_rgb
reading the existing feature file...
reading the existing superpoint graph file...
Timer : 0.0 / 0.0 / 0.0
7 / 16---> test_full
creating the feature file...
Traceback (most recent call last):
  File "partition/partition.py", line 132, in <module>
    xyz, rgb = read_semantic3d_format(data_file, 0, '', args.voxel_width, args.ver_batch)
  File "/home/v9999/perl_code/rvsm/superpoint_graph/partition/provider.py", line 228, in read_semantic3d_format
    xyz_full = np.array(vertices[:, 0:3], dtype='float32')
IndexError: too many indices for array


loicland commented Apr 27, 2019

Hi,

Can you print the size of vertices just before the bug?

Also, print the batch number:

print("%d" % (i_batch))

with i_batch a counter incremented at the beginning of the while True: loop.
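The requested instrumentation might look like the following sketch, with read_batch standing in for the np.genfromtxt call in provider.py (all names here are illustrative, not the repository's):

```python
def read_all_batches(read_batch):
    """Drive the batched read loop, printing the batch counter that
    helps localize which batch triggers the IndexError."""
    i_batch = 0
    rows = []
    while True:
        print("%d" % (i_batch))  # batch number, as requested above
        batch = read_batch(i_batch)
        if batch is None or len(batch) == 0:
            break                # empty final batch: stop cleanly
        rows.extend(batch)
        i_batch += 1
    return rows
```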


zeroAska commented Apr 27, 2019

Hi loicland,

Here are the prints before the error in provider.py:

7 / 16---> test_full
    creating the feature file...
[provider.py] length of vertices is 15
[provider.py] i_rows is 0, ver_batch is 5000000
Traceback (most recent call last):
  File "partition/partition.py", line 132, in <module>
    xyz, rgb = read_semantic3d_format(data_file, 0, '', args.voxel_width, args.ver_batch)
  File "/home/v9999/perl_code/rvsm/superpoint_graph/partition/provider.py", line 231, in read_semantic3d_format
    xyz_full = np.array(vertices[:, 0:3], dtype='float32')
IndexError: too many indices for array

@loicland loicland reopened this Apr 29, 2019
@loicland

Hi,

can you add the following at line 238 of /partition/provider.py, just before xyz_full = ...

print(vertices.shape)

and report the log after:

7 / 16---> test_full
    creating the feature file...

I am trying to reproduce your bug but I need this information.

Did you use the default values for ver_batch?
