Scripts for Interactive Validation on COCO #127

Open
tjqual opened this issue Feb 24, 2024 · 4 comments

tjqual commented Feb 24, 2024

Hi there,
Thanks for such great and interesting work. I have been working on reproducing the results in the paper and noticed that the released code does not contain scripts for interactive evaluation on the COCO dataset. I tried using the SimpleClickSampler.py strategy as for the Pascal dataset, but the results I obtained on COCO are far lower than the numbers reported in the paper. Could you please provide more details about the COCO interactive segmentation evaluation, or kindly release such code? Thanks

@HansPolo113

@tjqual
Hello, I was wondering if you have found a solution? I am facing the same issue: I tried to replicate the results from the paper by following the 'Stroke' input in the SEEM demo, but I found a significant difference in the results.

Moreover, when I feed the GT boxes from the COCO dataset into SAM and compute the mIoU, the result is around 70, which is much higher than the ~50 reported in the paper. This is quite puzzling to me.🤨
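For concreteness, this is roughly how I ran the box experiment. The checkpoint path and the `iterate_coco()` loader are placeholders for my own setup, but the `SamPredictor` calls follow the official segment_anything API:

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # placeholder path
predictor = SamPredictor(sam)

ious = []
for image, gt_box, gt_mask in iterate_coco():  # placeholder loader: RGB array, xyxy box, bool mask
    predictor.set_image(image)
    masks, _, _ = predictor.predict(
        box=np.asarray(gt_box),   # length-4 GT box in XYXY format
        multimask_output=False,   # one mask per box prompt
    )
    pred = masks[0].astype(bool)
    inter = np.logical_and(pred, gt_mask).sum()
    union = np.logical_or(pred, gt_mask).sum()
    ious.append(inter / max(union, 1))

print("mIoU:", 100.0 * float(np.mean(ious)))
```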


tjqual commented Feb 26, 2024

@1132741589 I just found that there is supplementary material on the NeurIPS website for this paper. In the supplementary material they mention that the "COCO" interactive results are based only on a 600-image COCO-mini subset, not the whole original COCO dataset. I guess that's why the numbers differ from my results obtained on all COCO images. I don't know whether you are in the same situation.
Besides, I noticed that for the Pascal evaluation (SimpleClick_Sampler()), they apply an F.conv2d() with a 3x3 kernel after sampling the point. I applied it to the COCO dataset and it improved the SEEM model's performance, but it didn't help the SAM baseline results. Please let me know if you find any more tricks here. Thanks
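If it helps, my understanding of that trick is that the 3x3 conv simply dilates the one-hot click map so the prompt covers a small neighborhood rather than a single pixel. A minimal sketch, assuming a (1, 1, H, W) binary click map:

```python
import torch
import torch.nn.functional as F

def dilate_click_map(click_map: torch.Tensor) -> torch.Tensor:
    """Dilate a (1, 1, H, W) binary click map with a 3x3 all-ones kernel."""
    kernel = torch.ones(1, 1, 3, 3, device=click_map.device)
    return (F.conv2d(click_map, kernel, padding=1) > 0).float()
```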

@HansPolo113

@tjqual
Hi. Thank you for the information. I had indeed not noticed this before, and I will look more closely into its test setting.

Additionally, there is a project called OMG-Seg. In their paper they also compare COCO mIoU with interactive inputs, reported as COCO-SAM in Tab. 2. Their experimental setup is essentially consistent with mine, and the SAM results they measured are also very close to mine. You can refer to the SAM, Semantic-SAM, and other results obtained from their tests.


tjqual commented Feb 27, 2024

@1132741589 I also noticed these works. Thanks for sharing!
