
A question about clustering #42

Open
CaptainPrice12 opened this issue Mar 30, 2021 · 4 comments

Comments

@CaptainPrice12

Thank you for sharing this work!

Actually, I have a question about the clustering process in mmt_train_kmeans.py file.

```python
dict_f, _ = extract_features(model_1_ema, cluster_loader, print_freq=50)
cf_1 = torch.stack(list(dict_f.values())).numpy()
dict_f, _ = extract_features(model_2_ema, cluster_loader, print_freq=50)
cf_2 = torch.stack(list(dict_f.values())).numpy()
cf = (cf_1 + cf_2) / 2
```

Here, I see that the mean-nets of model 1 and model 2 are used to generate features for clustering and to initialize the classifiers. But in the MMT paper, it seems the current models (model 1 and model 2) at each epoch, rather than the mean-nets, are used to compute features for clustering. Does using the mean-nets here give better performance? Could you explain this choice? Thanks!
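For context, the step being asked about can be sketched roughly as follows. This is a hypothetical, self-contained stand-in (not the repo's code): synthetic arrays replace the features from the two mean-nets, and a tiny hand-rolled k-means replaces the repo's actual clustering call; only the averaging of `cf_1` and `cf_2` mirrors the snippet above.

```python
import numpy as np

def kmeans(features, k, iters=20):
    """Minimal k-means with deterministic farthest-point init (demo only)."""
    # init: first point, then the point farthest from all chosen centroids
    centroids = [features[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centroids], axis=0)
        centroids.append(features[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        # assign each feature to its nearest centroid
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centroids; keep the old one if a cluster emptied
        for j in range(k):
            if (labels == j).any():
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids, labels

# Synthetic stand-ins for cf_1 and cf_2 (features from the two mean-nets):
rng = np.random.default_rng(1)
cf_1 = np.concatenate([rng.normal(0.0, 0.1, (50, 8)),
                       rng.normal(5.0, 0.1, (50, 8))])
cf_2 = cf_1 + rng.normal(0.0, 0.05, cf_1.shape)

cf = (cf_1 + cf_2) / 2            # average the two feature sets, as in the snippet
centroids, pseudo_labels = kmeans(cf, k=2)
```

The averaged features are then clustered, and the resulting `pseudo_labels` serve as training targets for the next epoch.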

@yxgeee
Owner

yxgeee commented Mar 30, 2021

Yes, we adopted the mean-nets to extract features for clustering in each epoch, since the features extracted by the mean-nets are more robust. I don't remember the exact performance gap; you could try replacing them with the current nets.
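The intuition behind "more robust" can be illustrated with a toy sketch (not the repo's code): a mean-net's parameters are an exponential moving average (EMA) of the current net's, so step-to-step training noise is smoothed out before features are extracted. The scalar "parameter" and `alpha` value below are illustrative assumptions.

```python
import numpy as np

def ema_update(mean_params, current_params, alpha=0.99):
    """mean = alpha * mean + (1 - alpha) * current, applied parameter-wise."""
    return {k: alpha * mean_params[k] + (1 - alpha) * current_params[k]
            for k in mean_params}

# Toy example: one scalar "parameter" that oscillates noisily around 1.0.
rng = np.random.default_rng(0)
mean = {"w": 0.0}
for step in range(5000):
    current = {"w": 1.0 + rng.normal(0.0, 0.5)}   # noisy current net
    mean = ema_update(mean, current, alpha=0.99)

# The mean-net parameter settles near the underlying value 1.0, with far less
# variance than any single noisy snapshot of the current net.
```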

@CaptainPrice12
Author

Got it. Thank you so much for the reply! May I ask one more question?

In the OpenUnReID repo, the implementation of MMT should be MMT+, right? In that repo, source_pretrain/main.py uses only source-domain data for pre-training, unlike the MMT repo, which uses both source and target data (target forward-only).

Meanwhile, for target-domain training, OpenUnReID uses both source (labeled) and target (pseudo-labeled) data to conduct MMT training, with a MoCo-based loss and DSBN, right? Please correct me if I have misunderstood. Thanks!

@yxgeee
Owner

yxgeee commented Mar 31, 2021

MMT+ in OpenUnReID does not adopt source-domain pre-training; it conducts MMT training from scratch (ImageNet-pretrained) using images from both domains. DSBN is used, but the MoCo loss is not adopted — it only appears in the repo for the VisDA challenge.
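The DSBN idea mentioned here can be sketched in a few lines. This is a hedged, simplified illustration of domain-specific batch normalization (not OpenUnReID's implementation): the feature-extracting weights are shared, but each domain keeps its own running normalization statistics. The `DSBN` class name and `momentum` value are assumptions for the demo.

```python
import numpy as np

class DSBN:
    """Toy domain-specific batch norm: separate running stats per domain."""
    def __init__(self, num_features, num_domains=2, eps=1e-5):
        self.eps = eps
        # one set of running statistics per domain
        self.mean = np.zeros((num_domains, num_features))
        self.var = np.ones((num_domains, num_features))

    def __call__(self, x, domain, momentum=0.1):
        # update the running stats of *this* domain only
        batch_mean = x.mean(axis=0)
        batch_var = x.var(axis=0)
        self.mean[domain] = (1 - momentum) * self.mean[domain] + momentum * batch_mean
        self.var[domain] = (1 - momentum) * self.var[domain] + momentum * batch_var
        # normalize with the current batch statistics (training-mode behavior)
        return (x - batch_mean) / np.sqrt(batch_var + self.eps)

rng = np.random.default_rng(0)
bn = DSBN(num_features=4)
src = rng.normal(0.0, 1.0, (32, 4))   # "source" batch
tgt = rng.normal(3.0, 2.0, (32, 4))   # "target" batch with shifted statistics
out_src = bn(src, domain=0)
out_tgt = bn(tgt, domain=1)
```

Each domain's batch is normalized with its own statistics, so the domain shift in means and variances does not leak across domains.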

@CaptainPrice12
Author

Thanks for the help!
