
Extension of algorithm #14

Open
lcxiha opened this issue Apr 15, 2023 · 24 comments

Comments

@lcxiha

lcxiha commented Apr 15, 2023

Hello, is this algorithm suitable for underwater point cloud map registration? If so, how should the dataset be processed? And is the dataset unlabeled (i.e. for unsupervised learning)?

@yewzijian
Owner

yewzijian commented Apr 15, 2023 via email

@lcxiha
Author

lcxiha commented Apr 17, 2023

Thanks a lot! There are still some questions I would like to ask: 1. The ground truth (label) is the true transformation R and t between the two point clouds, right? 2. Is it the same dataset as the one used in the Predator paper?
Looking forward to your reply.

@yewzijian
Owner

yewzijian commented Apr 17, 2023 via email

@lcxiha
Author

lcxiha commented Apr 17, 2023

OK, thanks. If I want to use this algorithm on unlabeled datasets, how can I obtain the ground-truth labels? Do you have any good suggestions? Looking forward to your reply.

@lcxiha
Author

lcxiha commented Apr 19, 2023

Hi, does "Although internally it converts them into the ground truth corresponding locations for training, since that's what the network outputs." refer to positional encodings?

@yewzijian
Owner

> OK, thanks. If I want to use this algorithm on unlabeled datasets, how can I obtain the ground-truth labels? Do you have any good suggestions? Looking forward to your reply.

This is tricky and depends on your problem. Some possible solutions are to rely on external sensors (e.g. mocap systems) or to use semi-manual/manual registration.
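To make the semi-manual route concrete: one could coarsely align a pair by hand and then refine with a classic point-to-point ICP to obtain a pseudo ground-truth transform. The following is a minimal numpy-only sketch (my own code, not part of this repo), assuming a small initial misalignment and using brute-force nearest neighbours, so it is only practical for small clouds:

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, tgt, iters=30):
    """Point-to-point ICP refining the transform that maps src onto tgt."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d2 = ((cur[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
        corr = tgt[d2.argmin(axis=1)]
        R, t = best_fit_transform(cur, corr)
        cur = cur @ R.T + t
        # accumulate: new_total = (R, t) composed with old_total
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The recovered (R_total, t_total) can then serve as a pseudo-label for training, subject to manual inspection of the alignment quality.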

> Hi, does "Although internally it converts them into the ground truth corresponding locations for training, since that's what the network outputs." refer to positional encodings?

What I meant was that the training loss is based on the point coordinates, and not the rotation/translation.
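To illustrate that point (my own sketch, not the repo's actual loss code): the ground-truth R, t are only used to generate the corresponding target coordinates for each source point, and the loss is then computed on those point positions:

```python
import numpy as np

def coord_loss(pred_coords, src_points, R_gt, t_gt):
    """L1 loss on predicted corresponding point locations.

    The ground-truth transform is used only to produce the target
    coordinates; the loss itself compares point positions, not R/t.
    """
    gt_coords = src_points @ R_gt.T + t_gt   # ground-truth corresponding locations
    return np.abs(pred_coords - gt_coords).mean()
```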

@lcxiha
Author

lcxiha commented Apr 19, 2023

Thank you very much for your patient answer!
Is this algorithm suitable for odometry datasets such as KITTI, or only for point cloud datasets for SLAM loop-closure detection?
Looking forward to your reply.

@yewzijian
Owner

yewzijian commented Apr 19, 2023 via email

@lcxiha
Author

lcxiha commented Apr 19, 2023

What I mean is that I want to use this algorithm to register two overlapping point cloud maps. If we already know the rotation and translation matrices for our dataset and the overlap ratio of the two point clouds, we would like to use this algorithm to perform point cloud map registration. Do you think this is feasible?
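As an aside, when the ground-truth transform is known, the overlap ratio mentioned above can be estimated as the fraction of source points that land within a distance threshold of some target point after applying the transform. A rough sketch (function name and threshold are mine; distances are brute-force, so only for small clouds):

```python
import numpy as np

def overlap_ratio(src, tgt, R, t, thresh=0.02):
    """Fraction of src points with a tgt neighbour within `thresh`
    after applying the known rigid transform (R, t) to src."""
    aligned = src @ R.T + t
    d2 = ((aligned[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
    return float((d2.min(axis=1) < thresh ** 2).mean())
```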

@yewzijian
Owner

yewzijian commented Apr 21, 2023 via email

@lcxiha
Author

lcxiha commented Apr 24, 2023

Thank you very much for your patient answer. I have another question: is the dataset used in the paper in the global coordinate system or in the carrier (body) coordinate system? Looking forward to your reply.

@yewzijian
Owner

yewzijian commented Apr 28, 2023

Not 100% sure, but I suspect the coordinates of the points are w.r.t. the camera, i.e. the origin is where the camera is. Possibly you can consult the dataset's paper for a definite answer.

@lcxiha
Author

lcxiha commented May 5, 2023

Thanks a lot! Given the rotation and translation matrices, I want to visualize the registration of two point cloud frames with a certain overlap ratio. How should I write this code?

@yewzijian
Owner

yewzijian commented May 8, 2023 via email

@lcxiha
Author

lcxiha commented May 9, 2023

Sorry, I may not have expressed my question clearly. I want to visualize two overlapping point cloud frames, but without doing the registration with the trained model. Can two point clouds (with a certain overlap ratio) be aligned when their relative transformation matrix is already known?
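Yes, alignment with a known transform is just a matter of applying it to one cloud, no model needed. A minimal sketch (my own code, assuming a 4x4 homogeneous transform; the visualization uses Open3D if it is installed):

```python
import numpy as np

def apply_se3(points, T):
    """Apply a 4x4 rigid transform T to an (N, 3) point array."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]

def show_alignment(src, tgt, T):
    """Render src transformed by T next to tgt (requires open3d)."""
    import open3d as o3d  # optional dependency, imported lazily
    pcd_src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(apply_se3(src, T)))
    pcd_tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(np.asarray(tgt)))
    pcd_src.paint_uniform_color([1.0, 0.55, 0.0])   # orange = transformed source
    pcd_tgt.paint_uniform_color([0.0, 0.4, 1.0])    # blue = target
    o3d.visualization.draw_geometries([pcd_src, pcd_tgt])
```

If the overlap region shows the two colors interleaved after calling `show_alignment`, the provided transform aligns the pair correctly.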

@yewzijian
Owner

yewzijian commented May 9, 2023 via email

@lcxiha
Author

lcxiha commented May 15, 2023

I'm sorry, I meant that I want to verify whether the dataset can be aligned using the provided transformation matrices. Now I understand! However, I have a question: why do the test datasets (test-3DMatch_info.pkl and test-3DLoMatch_info.pkl) still have transformation matrices? Shouldn't only the training dataset (train_info.pkl) and the validation dataset (val_info.pkl) have labels (transformation matrices)?

@yewzijian
Owner

yewzijian commented May 16, 2023 via email

@lcxiha
Author

lcxiha commented May 16, 2023

I don't know if my understanding is correct: the transformation matrices in the training and validation datasets are used for training and evaluation, while the transformation matrices in the test dataset are used only for evaluation, i.e. to assess the quality of the registration result for the two point clouds.
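For reference, evaluation against the test-set ground truth is typically done with relative rotation and translation errors between the estimated and ground-truth transforms. A sketch of the standard formulas (generic, not this repo's exact evaluation code):

```python
import numpy as np

def registration_errors(T_est, T_gt):
    """Relative rotation error (degrees) and translation error
    between two 4x4 rigid transforms."""
    R_est, t_est = T_est[:3, :3], T_est[:3, 3]
    R_gt, t_gt = T_gt[:3, :3], T_gt[:3, 3]
    # angle of the residual rotation R_est^T R_gt
    cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    rre = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    rte = np.linalg.norm(t_est - t_gt)
    return rre, rte
```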

@lcxiha
Author

lcxiha commented May 16, 2023

And the evaluation of the registration results is reflected in the loss function.

@aniket-gupta1

@yewzijian Can you share what parameter file you used for training the model on KITTI?

@yewzijian
Owner

yewzijian commented Jul 6, 2023 via email

@yewzijian
Owner

Hi @aniket-gupta1, see the following for my KITTI parameters. I didn't spend much time tuning them, though, so you might be able to find better ones.
kitti.yaml.zip

@aniket-gupta1

@yewzijian Thank you!
