Why is there a big gap between the reproducing results and the paper results? #13
Comments
Thanks for your feedback. There are some other users who also reported the results from …
The results I showed are based on the normal filter. I have also tried gipuma, and I found the point clouds are much smaller than those you provided, so the ACC and COMP metrics are poor.
Did you use our default code and hyper-parameters? Some other users told us that using our default code with normal fusion gives good results.
Yeah, I just ran test_dtu.sh with the pre-trained model you provided, without any changes. I think there is something wrong with my installation of gipuma.
How about your conda environment (e.g., Python version, torch version)? If the torch and Python versions are too old, you will get poor results.
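Since version mismatches come up repeatedly in this thread, here is a minimal sketch of a script for collecting the versions the maintainer asks about. The helper name `env_report` is my own; the `torch` import is guarded because it may not be installed in every environment:

```python
import sys

def env_report() -> dict:
    """Collect the version info that the maintainers asked about."""
    info = {"python": sys.version.split()[0]}
    try:
        import torch  # may be absent; guard the import
        info["torch"] = torch.__version__
        info["cuda"] = torch.version.cuda
    except ImportError:
        info["torch"] = "not installed"
    return info

print(env_report())
```

Pasting this output into an issue report makes it easier to compare environments than describing them in prose.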
My environment is: …
We haven't encountered this problem before. Even simply using the …
Hi, we have updated the fusion parameters of the gipuma fusion method (in test.py); you can try it again. In my experiments, it gets less than 0.305 mm overall error.
Hi Yikang, thanks for your work. I have tried your released DTU checkpoint and the updated gipuma fusion parameters on the DTU evaluation set; the results are 0.3603 (accuracy), 0.2714 (completeness), and 0.316 (mean). I did not change any hyper-parameters. My hardware is a 3090 Ti, and my environment is Python 3.6, PyTorch 1.10.2, CUDA 11.3, OpenCV 4.6.0. During the installation of gipuma, the arch and code I used are … Where might the difference come from? Thank you sincerely for your help. Looking forward to your response.
Hi Guidong, as we have gotten in touch via WeChat, I will leave a simple message here as a response :)
Dear Yikang, thanks very much for your prompt response. Together with Yikang, I have run several experiments to verify that when using … When using the … Thanks again for Yikang's nice work, his contribution, and his warm and responsible reply. Wish you all a happy New Year! Best regards,
Dear Guidong: I am glad that my experience was of help to you, and thanks a lot for your detailed sharing regarding the fusion method. I appreciate the sparks of ideas derived from our conversation; after all, science advances by sharing. Yours sincerely,
Thanks a lot for your kind sharing! Since I have the same CUDA and GPU versions as yours, I would like to ask: what is the best setting for arch and code?
Hi, yes, …
Okkkk. Thanks a lot!
Thanks again for your sharing! I met the same problem as you. My hardware is a 3090. I referred to the website you offered and set compute_86 and sm_86, but I got the results 0.3603 (accuracy) and 0.2729 (completeness); they are very similar to your previous results. So could you please tell me how you adjusted it, what compute_XX and sm_XX you set finally, and maybe some other details that could help.
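For readers with different GPUs, the arch/code values discussed above (e.g. compute_86/sm_86 for a 3090) come from the card's CUDA compute capability. Here is a minimal sketch; the helper name `gencode_flag` and the small lookup table are my own, but the capability values are NVIDIA's published ones for these cards:

```python
# Published CUDA compute capabilities for a few common cards.
KNOWN_CAPABILITY = {
    "RTX 3090": (8, 6),     # Ampere GA102
    "RTX 2080 Ti": (7, 5),  # Turing TU102
    "V100": (7, 0),         # Volta GV100
}

def gencode_flag(gpu_name: str) -> str:
    """Build the nvcc -gencode flag used when compiling gipuma."""
    major, minor = KNOWN_CAPABILITY[gpu_name]
    cc = f"{major}{minor}"
    return f"-gencode arch=compute_{cc},code=sm_{cc}"

print(gencode_flag("RTX 3090"))  # -gencode arch=compute_86,code=sm_86
```

If torch is installed, `torch.cuda.get_device_capability()` reports the same (major, minor) pair for the local GPU, which avoids hard-coding a table.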
#38 Edit: I just tried normal fusion
I have tried the pre-trained model you offered on the DTU dataset, but the results I got are mean_acc=0.299, mean_comp=0.385, overall=0.342, while the results you presented in the paper are mean_acc=0.321, mean_comp=0.289, overall=0.305.
I do not know where the problem is.
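As a side note, the "overall" number quoted throughout this thread is simply the arithmetic mean of accuracy and completeness, which is easy to verify against the figures above:

```python
def overall(acc: float, comp: float) -> float:
    """DTU 'overall' score: the mean of accuracy and completeness."""
    return (acc + comp) / 2

print(round(overall(0.299, 0.385), 3))  # reproduced result: 0.342
print(round(overall(0.321, 0.289), 3))  # paper result: 0.305
```

This shows the reproduced gap is driven almost entirely by completeness (0.385 vs. 0.289), which points at the fusion step rather than the network itself.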