
Why is there a big gap between the reproduced results and the paper results? #13

cainsmile opened this issue Aug 13, 2022 · 17 comments

@cainsmile

I have tried the pre-trained model you offered on the DTU dataset, but the results I got are mean_acc=0.299, mean_comp=0.385, overall=0.342, while the results you presented in the paper are mean_acc=0.321, mean_comp=0.289, overall=0.305.

I do not know where the problem is.

@DingYikang (Collaborator) commented Aug 14, 2022

Thanks for your feedback. Some other users have also reported that the results from gipuma fusion are worse than those from normal fusion. Maybe you can try normal fusion; the default configuration should produce good results.
Hope this helps.

@cainsmile (Author)

The results I showed are based on the normal filter. I have also tried gipuma, and I found that the point clouds are much smaller than those you provided, so the ACC and COMP metrics are poor.

@DingYikang (Collaborator)

Did you use our default code and hyper-parameters? Some other users told us that using our default code with normal fusion gives good results.

@cainsmile (Author)

Yeah, I just ran test_dtu.sh with the pre-trained model you provided, without any changes. I think there is something wrong with my installation of gipuma.
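For reference, what I ran boils down to something like the sketch below (the paths and flag names are my paraphrase of this thread, not the script verbatim):

```bash
# Minimal sketch of a test_dtu.sh-style invocation; paths and flag names
# here are placeholders/assumptions, not the repository's exact interface.
TESTPATH="path/to/dtu_test"          # DTU evaluation images + cameras
CKPT="path/to/released_model.ckpt"   # the released pre-trained checkpoint

python test.py \
  --testpath "$TESTPATH" \
  --loadckpt "$CKPT" \
  --outdir ./outputs \
  --filter_method normal             # or gipuma, as discussed in this thread
```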

@DingYikang (Collaborator) commented Aug 15, 2022

What about your conda environment (e.g., Python version, torch version)? If the torch and Python versions are too low, you will get poor results.
Normally, using the default code with normal fusion shouldn't give worse results.

@cainsmile (Author)

My environment is:
python 3.8
pytorch 1.9.1
CUDA 10.1
opencv 3.3.1
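For completeness, the installed torch build and the CUDA version it was built against can be confirmed with a one-liner:

```bash
# Print the installed PyTorch version and the CUDA toolkit it was built with.
python -c "import torch; print(torch.__version__, torch.version.cuda)"
```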

@DingYikang (Collaborator)

We haven't encountered this problem before. Even simply using normal fusion gets around 0.31 mean error.
Maybe you can re-install the environment and try again with the latest code, dataset, and checkpoint; a minimal sketch follows.
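A minimal sketch of such a re-install, assuming a fresh conda environment (the version pins below come from this thread, not from an official environment file):

```bash
# Fresh environment; versions taken from this discussion, not an official spec.
conda create -n mvs_repro python=3.8 -y
conda activate mvs_repro
pip install torch==1.9.1 opencv-python
```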

@DingYikang (Collaborator) commented Sep 19, 2022

> @cainsmile (quoted above)

Hi, we have updated the fusion parameters of the gipuma fusion method (in test.py); you can try again. In my experiment, it gets less than 0.305 mm overall error.
If you still run into problems, please send me an e-mail ([email protected]) and I'll help you solve them.
Hope this helps.

@YANG-SOBER

Hi Yikang, thanks for your work. I have tried your released DTU checkpoint and the updated gipuma fusion parameters on the DTU evaluation set; the results are 0.3603 (accuracy), 0.2714 (completeness), and 0.316 (mean).

I did not change any hyperparameters. My hardware is a 3090 Ti; my environment is Python 3.6, PyTorch 1.10.2, CUDA 11.3, OpenCV 4.6.0.

During the install of gipuma, the arch and code I used were compute_86 and sm_86, respectively.

Where might the difference come from?

Thank you sincerely for your help.

Looking forward to your response.

@DingYikang (Collaborator)

> @YANG-SOBER (quoted above)

Hi Guidong, since we have gotten in touch via WeChat, I'll just leave a short message here as a response :)

@YANG-SOBER

> @YANG-SOBER and @DingYikang (quoted above)

Dear Yikang, thanks very much for your prompt response.

Together with Yikang, I have run several experiments which verify that, with the normal fusion method, the influence of the CUDA and torch versions is negligible on the same GPU device; the result changes slightly across different GPU devices even with the same environment.

When using the gipuma fusion method, the compilation of gipuma (fusibile) may significantly influence the final result, and that compilation depends on the compute capability of the particular CUDA and GPU version; please refer to https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/ and the sketch below.
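For example, a 3090 / 3090 Ti is an Ampere card with compute capability 8.6, so fusibile should be compiled with a matching nvcc -gencode flag. A minimal sketch, assuming fusibile's usual CMake build (the exact variable name depends on its CMakeLists.txt):

```bash
# Build fusibile targeting compute capability 8.6 (RTX 30-series).
# The -gencode syntax is standard nvcc; where it gets injected depends on
# the project's CMake setup (often a CUDA_NVCC_FLAGS entry in CMakeLists.txt).
mkdir -p build && cd build
cmake -DCUDA_NVCC_FLAGS="-gencode arch=compute_86,code=sm_86" ..
make
```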

Thanks again for Yikang's nice work, his contribution, and his warm and responsible reply.

Wish you all a happy New Year!

Best regards,
Guidong

@DingYikang (Collaborator)

> @YANG-SOBER (quoted above)

Dear Guidong,

I am glad that my experience was of help to you, and thanks a lot for your detailed sharing regarding the fusion method. I appreciate the sparks of ideas that came out of our conversation; after all, science advances by sharing.
May you all have a happy New Year!

Yours sincerely,
Yikang

@qtz980805

> (quoting the exchange between @YANG-SOBER and @DingYikang above)

Thanks a lot for your kind sharing! Since I have the same CUDA and GPU version as you, may I ask what the best setting of arch and code is? compute_86 and sm_86?
Thanks again, and I look forward to your reply.

@YANG-SOBER

> @qtz980805 (quoted above)

Hi, yes, compute_86 and sm_86 will work, but I don't know whether it is the best; you can refer to this post: https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/.
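If you want to confirm your card's compute capability rather than guessing, PyTorch can report it directly:

```bash
# Returns a tuple such as (8, 6), i.e. compute_86 / sm_86.
python -c "import torch; print(torch.cuda.get_device_capability())"
```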

@qtz980805

> @YANG-SOBER (quoted above)

Okkkk. Thanks a lot!

@ZWEQHLWY commented Apr 7, 2023

> @YANG-SOBER (quoted above)

Thanks again for your sharing! I met the same problem as you. My hardware is a 3090; I referred to the website you offered and set compute_86 and sm_86, but I get the results 0.3603 (accuracy) and 0.2729 (completeness), which are very similar to your previous results. So could you please tell me how you adjusted it, what compute_xx and sm_xx you set in the end, and any other details that might help?
Looking forward to your reply.

@AsDeadAsADodo commented Oct 9, 2023

> @YANG-SOBER and @ZWEQHLWY (quoted above)

Same here (see #38), have you figured it out? My results with gipuma (fusibile) are similar to yours. Have you tried normal fusion?

Edit: I just tried normal fusion:

|                    | Acc.   | Comp.  | Overall |
| ------------------ | ------ | ------ | ------- |
| paper              | 0.321  | 0.289  | 0.305   |
| reproduced, gipuma | 0.364  | 0.275  | 0.3195  |
| reproduced, normal | 0.3474 | 0.3206 | 0.334   |
