Weird evaluation code in evaluate.py #22

Open
jackyjsy opened this issue Feb 14, 2019 · 1 comment

Comments


jackyjsy commented Feb 14, 2019

Starting from Line #140 in evaluation.py, it seems to me that you are using the ground-truth keypoints to obtain your keypoint estimates, which should not happen when you evaluate your PRN network. This makes the evaluation improper.

In brief: first, you generate indexes from old_weights_bbox (ground truth). Then you seem to use those indexes to place a window around each ground-truth position and compute your estimated scores. Finally, the output keypoints are derived from those scores. A rough sketch of this pattern is shown below.
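
To make the concern concrete, here is a minimal sketch of the pattern I'm describing (illustrative only; `window_scores_around_gt`, `pred_heatmaps`, and `win` are not the repo's actual identifiers):

```python
import numpy as np

def window_scores_around_gt(pred_heatmaps, gt_keypoints, win=5):
    """Illustrative sketch (not the repo's code): for each ground-truth
    keypoint, crop a window of the predicted heatmap centred on the GT
    location and take the argmax inside it as the 'estimate'.
    pred_heatmaps: (H, W, K) array; gt_keypoints: list of (x, y) per channel."""
    H, W, K = pred_heatmaps.shape
    estimates = []
    for k, (gx, gy) in enumerate(gt_keypoints):
        gx, gy = int(gx), int(gy)
        # The window is centred on the ground-truth position, so GT
        # information leaks directly into the reported estimate.
        x0, x1 = max(0, gx - win), min(W, gx + win + 1)
        y0, y1 = max(0, gy - win), min(H, gy + win + 1)
        patch = pred_heatmaps[y0:y1, x0:x1, k]
        dy, dx = np.unravel_index(np.argmax(patch), patch.shape)
        estimates.append((x0 + dx, y0 + dy))
    return estimates
```

Because the search window never strays from the annotated location, the reported accuracy reflects the annotations as much as the network's predictions.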

I first found this in the PyTorch version, where another user reported the same problem (Issue #17). I then came here and found the same issue in this Keras version. @mkocabas, please respond to our concerns. Thanks!


abster95 commented Mar 8, 2019

@jackyjsy It seems they're presenting the "Both GT" part of Table 3 in their paper. There they show how well the PRN network would assign keypoints to person instances if the keypoint and person segmentation subnets gave perfect outputs. Something like the sketch below would match that intent.
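
A rough sketch of that "Both GT" protocol, under my reading of the paper (`gt_heatmaps`, `crop_to_box`, and `prn_model` are hypothetical names, not from this repo):

```python
import numpy as np

def gt_heatmaps(gt_keypoints, hw, sigma=2.0):
    """Render 'perfect' Gaussian heatmaps from annotated keypoints,
    standing in for an ideal keypoint subnet (hypothetical helper)."""
    H, W = hw
    ys, xs = np.mgrid[0:H, 0:W]
    maps = np.zeros((H, W, len(gt_keypoints)), dtype=np.float32)
    for k, (gx, gy) in enumerate(gt_keypoints):
        maps[..., k] = np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2)
                              / (2.0 * sigma ** 2))
    return maps

# "Both GT" idea: feed PRN ideal inputs built from the annotations and
# measure only how well it assigns keypoints to person instances.
# heatmaps = gt_heatmaps(annotated_keypoints, (H, W))
# for box in annotated_boxes:
#     crop = crop_to_box(heatmaps, box)        # hypothetical helper
#     assignment = prn_model.predict(crop[None])
```

Under that reading, using ground truth to build the inputs is deliberate: it isolates PRN's assignment quality from the other subnets' errors.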

That said, I feel that @mkocabas and the others should have been more explicit about the intent of this evaluation in the repository.
