
I want to test my own data; how many points can be entered at most each time? #25

Open
wang89280 opened this issue Dec 6, 2023 · 2 comments

Comments

@wang89280

I am currently using the pre-trained model to predict the transformation.

My point clouds have 8000 and 7000 points respectively, but when I run the model for inference it reports an out-of-memory error. How should I solve this problem, in the code or in the data?

My graphics card is an RTX 3090; upgrading it is too expensive for me.

@yewzijian
Owner

24GB is sufficient for inference.

The memory-heavy portion of the network is the attention layers, which operate only at the coarsest downsampled level. We designed the network to downsample to a few hundred points (~500) at this coarse level, and I suspect that your coarse point clouds have many more points than this. This is because your point clouds have a different scale from the training data.

If that is the case, you need to configure the KPConv layers to suit your dataset. In particular, you may have to adjust the *_radius fields in the configuration file. Nevertheless, since your point clouds are very different from the training dataset, you might not get good results without retraining the network.
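For illustration only (this is not code from the repository), a minimal sketch of how one might estimate a scale factor for those *_radius fields, assuming the point clouds are plain N x 3 text files and you know the approximate point spacing the model was trained with (the value below is a placeholder):

```python
import numpy as np
from scipy.spatial import cKDTree

def median_point_spacing(points: np.ndarray) -> float:
    """Median distance from each point to its nearest neighbour."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=2)  # k=2: the first hit is the point itself
    return float(np.median(dists[:, 1]))

# Assumed to be plain N x 3 text files, as attached in this thread.
src = np.loadtxt('src.txt')
des = np.loadtxt('des.txt')

# Placeholder: the point spacing the model was trained on
# (check the dataset/config used for the pre-trained weights).
trained_spacing = 0.025

scale = median_point_spacing(np.vstack([src, des])) / trained_spacing

# One could then multiply the *_radius fields (and the base subsampling
# size) in the configuration file by this factor, so that the coarsest
# level again ends up with only a few hundred points.
print(f'Suggested scale factor for the radius fields: {scale:.3f}')
```

This only rescales the downsampling behaviour to roughly match training; as noted above, retraining may still be needed for good results.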

@wang89280
Author

Could you tell me how to modify the code in order to get a correct result? The data is in the attachment.
des.txt
src.txt
