An interesting experiment on how the thres_map affects results #162

Open
leidahhh opened this issue Sep 28, 2022 · 0 comments

Comments

@leidahhh

Hello author, I'm particularly interested in the adaptive threshold part. In my experiments I noticed an interesting phenomenon: when I removed both the threshold-map loss and the binary-map loss, keeping only the loss on the model's predicted probability map, the overall performance of the model improved substantially. I'd like to discuss this phenomenon with you, as well as your original intent when designing this module. Looking forward to your reply.

Results before removing the losses:
2022-09-14 07:58:42,295 DBNet.pytorch INFO: [287/1200], train_loss: 0.4967, time: 133.9059, lr: 0.0007836829637320193
2022-09-14 07:58:45,779 DBNet.pytorch INFO: FPS:30.785972625664495
2022-09-14 07:58:45,780 DBNet.pytorch INFO: test: recall: 0.458333, precision: 0.964912, f1: 0.621469
Results after removing the losses:
2022-09-28 09:09:11,810 DBNet.pytorch INFO: [287/1200], train_loss: 0.1195, time: 145.5552, lr: 0.0007836829637320193
2022-09-28 09:09:33,585 DBNet.pytorch INFO: FPS:34.50438997451759
2022-09-28 09:09:33,589 DBNet.pytorch INFO: test: recall: 0.762254, precision: 0.931540, f1: 0.838438
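For reference, here is a minimal sketch of what "keeping only the probability-map loss" might look like. It follows the three loss terms from the DB paper (shrink-map BCE L_s, threshold-map L1 L_t, binary-map Dice L_b); the class name, argument names, and weights are illustrative assumptions, not the exact DBNet.pytorch implementation.

```python
import torch
import torch.nn as nn

class DBLossSketch(nn.Module):
    """Illustrative DB-style combined loss: L = L_s + alpha * L_b + beta * L_t.
    Setting use_thresh_and_binary=False reproduces the experiment described
    above, where only the probability-map loss is kept."""

    def __init__(self, alpha=1.0, beta=10.0, use_thresh_and_binary=True):
        super().__init__()
        self.alpha = alpha
        self.beta = beta
        self.use_thresh_and_binary = use_thresh_and_binary
        self.bce = nn.BCELoss()
        self.l1 = nn.L1Loss()

    def dice_loss(self, pred, gt, eps=1e-6):
        # Simple global Dice loss over the binary map.
        inter = (pred * gt).sum()
        union = pred.sum() + gt.sum() + eps
        return 1.0 - 2.0 * inter / union

    def forward(self, preds, shrink_gt, thresh_gt, thresh_mask):
        # preds: (N, 3, H, W) -> probability map, threshold map, binary map,
        # all assumed to be in [0, 1].
        prob_map, thresh_map, binary_map = preds[:, 0], preds[:, 1], preds[:, 2]

        loss_shrink = self.bce(prob_map, shrink_gt)
        if not self.use_thresh_and_binary:
            # The experiment in this issue: drop L_t and L_b, keep only L_s.
            return loss_shrink

        loss_thresh = self.l1(thresh_map * thresh_mask, thresh_gt * thresh_mask)
        loss_binary = self.dice_loss(binary_map, shrink_gt)
        return loss_shrink + self.alpha * loss_binary + self.beta * loss_thresh
```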
