I noticed that the training set and the validation set contain nearly identical samples, so a high mean Average Precision (mAP) does not necessarily indicate better detection ability; it may just reflect overfitting.
To check this, I re-split the data chronologically, using the earlier data as the training set and the remaining later data as the validation set, so the two sets no longer overlap, and retrained the model. The mAP dropped significantly. This makes me question whether the original split method was reasonable. Looking forward to your reply!
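The chronological re-split described above can be sketched as follows. This is a minimal illustration, not the project's actual data pipeline: the `chronological_split` function, the `(timestamp, annotation)` record format, and the 80/20 ratio are all hypothetical assumptions.

```python
def chronological_split(samples, train_fraction=0.8):
    """Sort samples by timestamp, then put the earlier portion in the
    training set and the later portion in the validation set, so that
    near-duplicate consecutive frames cannot land in both sets.

    samples: iterable of (timestamp, annotation) tuples (hypothetical format).
    """
    ordered = sorted(samples, key=lambda s: s[0])
    cut = int(len(ordered) * train_fraction)
    return ordered[:cut], ordered[cut:]


# Toy usage: 10 frames with timestamps 0..9 split 80/20 by time.
samples = [(t, f"frame_{t}") for t in range(10)]
train, val = chronological_split(samples)
print(len(train), len(val))  # -> 8 2
```

A random shuffle-based split would scatter consecutive, near-identical frames across both sets, which is the likely cause of the inflated mAP; splitting on time avoids that leakage, and the lower mAP after retraining is usually the more honest estimate of generalization.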