I am using the pre-trained model to predict the transformation.
My two point clouds have about 8000 and 7000 points respectively, but when I run inference the model reports an out-of-memory error. How can I solve this in the code or by changing the data?
My graphics card is an RTX 3090, and upgrading it is too expensive for me.
The memory-heavy portion of the network is the attention layers, which operate only at the coarsest downsampled level. We designed the network to downsample to a few hundred points (~500) at this coarse level, and I suspect that your coarse point cloud has many more points than this. This is due to the point clouds having a different scale from the training data.
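A quick way to sanity-check this is to voxel-downsample your cloud with the coarse-level voxel size and count the surviving points. The snippet below is only a rough sketch outside the repo's actual pipeline; the file path and voxel size are hypothetical (in typical KPConv setups the coarse voxel size is roughly the base subsampling size times 2^(num_layers - 1), so take the values from your own config):

```python
import numpy as np
import open3d as o3d

# Hypothetical input path and voxel sizes -- replace with your own values.
points = np.loadtxt("my_cloud.txt")      # (N, 3) point cloud
base_voxel = 0.025                       # assumed base subsampling size from the config
coarse_voxel = base_voxel * 2 ** 3       # assumed 4 downsampling levels

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points[:, :3])
coarse = pcd.voxel_down_sample(voxel_size=coarse_voxel)

# The attention layers run on roughly this many points; target is a few hundred (~500).
print(f"~{len(coarse.points)} points at the coarse level")
```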
If that is the case, you need to configure the KPConv layers to suit your dataset. In particular, you may have to adjust the *_radius fields in the configuration file. Nevertheless, since your point clouds are very different from the training dataset, you might not get good results without retraining the network.
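An alternative to editing every *_radius field is to rescale your point clouds so their metric extent roughly matches the training data, and then undo the scaling on the predicted transform. This is only a sketch with a hypothetical scale factor and file paths; tune the factor until the coarse level ends up around ~500 points:

```python
import numpy as np

# Hypothetical paths -- replace with your own data loading.
src = np.loadtxt("src_cloud.txt")   # (~8000, 3)
tgt = np.loadtxt("tgt_cloud.txt")   # (~7000, 3)

scale = 0.1                         # assumed factor; adjust for your data's scale
src_scaled = src * scale
tgt_scaled = tgt * scale

# Run inference on (src_scaled, tgt_scaled). With a uniform scaling about the
# origin, the predicted rotation is unchanged, but the predicted translation
# must be divided by `scale` to map back to the original coordinates.
```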