'evaluate_demo_content_openset_multi_with_content_features' and 'evaluate_visual_prompt_refer_multi_with_content_features' #29

Open · Sun-Jing-Kang opened this issue Jul 8, 2024 · 0 comments

Thank you for your great work!

I have a few questions I'd like to ask you:

Recently, while reproducing your work on my own dataset, I noticed two functions for mask inference, named 'evaluate_demo_content_openset_multi_with_content_features' and 'evaluate_visual_prompt_refer_multi_with_content_features'.

In the provided demo, the default function is 'evaluate_demo_content_openset_multi_with_content_features'; when I switch it to 'evaluate_visual_prompt_refer_multi_with_content_features', the results are poor.

I found that in 'evaluate_demo_content_openset_multi_with_content_features', the tgt comes from pretrained weights such as 'self.query_feat.weight' and 'self.query_embed.weight', while in 'evaluate_visual_prompt_refer_multi_with_content_features', it comes from query positions, as in SAM. Is my understanding correct?
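
To make my understanding concrete, here is a minimal sketch of the two initialization paths as I read them. This is NOT the actual repo code; all tensor shapes and the prompt-side names are illustrative guesses, assuming a Mask2Former-style transformer decoder:

```python
# A minimal sketch of my understanding -- not the actual repo code.
# All shapes and prompt-side names here are illustrative guesses.
import torch
import torch.nn as nn

num_queries, hidden_dim, bs = 100, 256, 1

# Path 1 (openset): tgt and query_pos come from *learned* embeddings
# optimized during training, i.e. self.query_feat.weight and
# self.query_embed.weight.
query_feat = nn.Embedding(num_queries, hidden_dim)   # stand-in for self.query_feat
query_embed = nn.Embedding(num_queries, hidden_dim)  # stand-in for self.query_embed
tgt_openset = query_feat.weight.unsqueeze(1).repeat(1, bs, 1)   # (Q, B, C)
pos_openset = query_embed.weight.unsqueeze(1).repeat(1, bs, 1)  # (Q, B, C)

# Path 2 (visual prompt / refer): tgt is built from the prompt itself,
# e.g. features pooled under the reference mask, with a SAM-style
# positional encoding of the prompted points/boxes as query_pos.
prompt_feats = torch.randn(5, bs, hidden_dim)  # hypothetical pooled prompt features
prompt_pos = torch.randn(5, bs, hidden_dim)    # hypothetical prompt positional encoding
tgt_refer, pos_refer = prompt_feats, prompt_pos
```

If this reading is correct, it would also mean the refer path depends mainly on the quality of the visual prompt rather than on the pretrained queries.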

What is the difference between these two methods, and how should I choose the appropriate one for mask retrieval? Also, do the provided pretrained weights tend to perform well only on objects already present in the training dataset?

Finally, how can I make the algorithm perform well on new objects without retraining the model?

Thank you for your patience; I look forward to your reply!
