Second prediction result score #12685
👋 Hello @flarota, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package:

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
Status

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
Hello! It's quite normal for object detection models like YOLOv8 to occasionally confuse similar classes, especially if the differences are subtle and the model encounters slightly different data during inference than during training. The scores you're seeing, where the model is confident about a wrong class and gives a much lower score to the correct class, could be influenced by a few factors:
You could consider reviewing your training dataset to ensure it's as diverse and balanced as possible, and look into methods of model calibration for better probability estimates. Hope this gives you a good starting point for troubleshooting!
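One common calibration method is temperature scaling, where logits are divided by a temperature T > 1 before the softmax to soften overconfident probabilities. A minimal sketch in plain Python (the logit values and temperature here are made up for illustration, not taken from any YOLO model):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with optional temperature scaling (T > 1 flattens the distribution)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical, overconfident logits for a 3-class problem
logits = [4.0, 1.0, 0.5]

p_raw = softmax(logits, temperature=1.0)       # sharp, overconfident
p_calibrated = softmax(logits, temperature=2.0)  # softer probabilities

print([round(p, 3) for p in p_raw])
print([round(p, 3) for p in p_calibrated])
```

A suitable temperature is normally fitted on a held-out validation set rather than chosen by hand; this sketch only shows the mechanical effect on the probabilities.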
Hi @glenn-jocher, thank you for the reply. I agree with you, and I think the cause is a dataset problem. A colleague of mine found another significant example: in one picture, the same objects are found twice by YOLO, the first time with the correct class (I call it "class_A") and a second time with the wrong class ("class_B").
Object 1 is correctly predicted in "1-1" and incorrectly in "1-2". The same object! In your opinion, how can this situation happen? Another question: in your point 3 you talked about the final softmax layer. Does YOLO have this layer? The sum of the scores isn't 1, so I thought it ends with a sigmoid. Thanks
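The observation about the sums is the key distinction between the two activations: per-class sigmoid scores are independent and need not sum to 1, while a softmax always does. A minimal sketch in plain Python with hypothetical logits:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical per-class logits for one detection (4 classes)
logits = [1.5, -4.0, -3.5, -5.0]

sig_scores = [sigmoid(x) for x in logits]  # independent per-class scores
soft_scores = softmax(logits)              # mutually exclusive probabilities

print(round(sum(sig_scores), 3))   # not 1.0
print(round(sum(soft_scores), 3))  # 1.0
```

So a score vector whose entries don't sum to 1, as described in the question, is consistent with a sigmoid head.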
Hi there! It's indeed intriguing to see the model behave differently for the same object within the same image. This could be due to variations in the object's context within the image, slight differences in appearance or angle between the two instances, or overlapping detections where non-maximum suppression (NMS) hasn't fully resolved which detections to keep.

Regarding your question about the softmax layer: YOLO models typically use a sigmoid function in the final layer for object detection tasks, which explains why the scores don't sum to 1. Each class score is treated independently, allowing for multi-label classification.

To access the tensor before the final sigmoid activation, you can modify the model's forward function or use hooks in PyTorch. Here's a quick example of how you might use a forward hook to access the output of a layer:

```python
import torch
from ultralytics import YOLO

def get_activation(name):
    def hook(module, inputs, output):
        # The detection head may return a tuple, so detach only plain tensors
        value = output.detach() if isinstance(output, torch.Tensor) else output
        print(f"{name} activation: {value}")
    return hook

model = YOLO('yolov8n.pt')  # load your model

# The underlying nn.Module is model.model; its layers live in model.model.model.
# Here we hook the last module (the detection head); adjust the index as needed.
target_layer = model.model.model[-1]
target_layer.register_forward_hook(get_activation('detect_head'))

# Run your model to see the activations
model.predict('path/to/image.jpg')
```

This code sets up a hook that prints the output of a specified layer. You'll need to pick the submodule that matters for your investigation; the hook must be registered on an actual layer of the underlying PyTorch module. Hope this helps! 😊
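The hook mechanics themselves can be checked without downloading any YOLO weights. Here is a self-contained sketch using a tiny stand-in torch model (nothing here is Ultralytics-specific; the same pattern applies to any nn.Module):

```python
import torch
import torch.nn as nn

# Tiny stand-in model: a linear layer followed by a sigmoid, mimicking
# "logits -> sigmoid scores" so we can inspect the pre-activation tensor.
net = nn.Sequential(nn.Linear(4, 3), nn.Sigmoid())

captured = {}

def get_activation(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

# Register the hook on the layer *before* the sigmoid to see the raw logits.
net[0].register_forward_hook(get_activation('pre_sigmoid'))

x = torch.randn(1, 4)
out = net(x)

print(captured['pre_sigmoid'].shape)  # same shape as the final output
```

Applying the sigmoid to the captured tensor reproduces the final scores, which confirms the hook captured the pre-activation values.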
Search before asking
Question
Hello, everyone.
I would like to ask a question about the YOLO predicted score tensor.
I trained a YOLOv8x-seg model and, after some tests, I found that in some cases it predicts the wrong class for some objects.
This class is actually very similar to another one in my dataset. On the test set YOLO usually classifies them correctly, but on slightly different images (used to check how well YOLO generalizes) it gets confused and swaps the two classes.
Since the two classes are similar, I expected YOLO to be confused, but I thought I would find the real class with a fairly high score. My idea was to check the whole score tensor rather than just the top score.
Instead, this is an example of a score tensor I got:
The network is confident, with a score of about 0.6 (not too high), in the wrong class, but the real class has essentially zero score, like all the other classes.
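The idea of checking the whole score vector rather than only the argmax can be sketched in plain Python. The class names and values below are hypothetical, chosen only to mirror the pattern described above (the wrong class at ~0.6, the true class near zero):

```python
# Hypothetical per-class scores for one detection
scores = {'class_A': 0.002, 'class_B': 0.61, 'class_C': 0.004, 'class_D': 0.001}

# Rank classes by score, highest first
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

best_cls, best_score = ranked[0]
second_cls, second_score = ranked[1]

print(best_cls, best_score)      # the confidently wrong class
print(second_cls, second_score)  # the runner-up, indistinguishable from noise
```

This illustrates why the "check the second-best score" strategy fails here: when the true class's score is as low as every other wrong class, the ranking below the top entry carries no usable signal.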
Thanks
Additional
No response