Second prediction result score #12685

Open
flarota opened this issue May 14, 2024 · 4 comments
Labels
question Further information is requested

Comments

@flarota

flarota commented May 14, 2024

Search before asking

Question

Hello everyone,
I would like to ask a question about the YOLO predicted score tensor.

I trained a YOLOv8x-seg model and, after some tests, I found that in some cases it predicts the wrong class for certain objects.
This class is actually very similar to another class in my dataset. On the test set YOLO usually classifies them correctly, but on slightly different images (used to check how well YOLO generalizes) it gets confused and swaps the two classes.
Since the two classes are similar, I expected YOLO to be confused, but I thought the real class would still appear with a fairly high score. My idea was therefore to look not only at the top score, but at the whole score tensor.
Instead, this is an example of a score tensor I got:

Wrong predicted class   Real class   Other classes
0.6147                  0.0047       0.0002   0.0000   0.0000   0.0004

The network is fairly confident (around 0.6, which is not even that high) in the wrong class, while the real class gets an essentially zero score, just like all the other classes.

  • How can this be possible?
  • Is it caused by something inside the network that "stretches" the score tensor to maximize the top result?

Thanks

Additional

No response

flarota added the question (Further information is requested) label on May 14, 2024

👋 Hello @flarota, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

If the Ultralytics CI badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

Hello!

It's quite normal for object detection models like YOLOv8 to occasionally confuse similar classes, especially if the differences are subtle and the model encounters slightly different data during inference than during training.

The scores you're seeing, where the model is confident about a wrong class and gives a much lower score to the correct class, could be influenced by a few factors:

  1. Dataset Imbalance: If one class is more prevalent in the training data, the model might lean towards predicting it more often.
  2. Inter-Class Variability: Similar classes might need more discriminative features, which could be enhanced either by tweaking the architecture or by providing more varied training samples.
  3. Model Confidence Calibration: Sometimes, the softmax layer used in final score prediction could lead to over-confident predictions. This can be addressed by techniques like temperature scaling in the softmax function to refine the confidence estimates (see the rough sketch right after this list).
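
A rough illustration of the temperature-scaling idea from point 3 (a generic sketch with made-up logit values, not YOLOv8 internals):

import torch

# Hypothetical raw class logits for a single detection
logits = torch.tensor([4.0, -1.0, -6.0, -7.0])

# Plain softmax: sharply peaked on the first class
probs = torch.softmax(logits, dim=0)

# Temperature scaling: dividing the logits by T > 1 before the softmax
# softens the distribution and reduces over-confidence
T = 2.0
calibrated = torch.softmax(logits / T, dim=0)

print(probs)       # heavily concentrated on class 0
print(calibrated)  # flatter, better-calibrated probabilities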

You could consider reviewing your training dataset to ensure it's as diverse and balanced as possible, and look into model calibration methods for better probability estimates. Also, experimenting with training-time augmentation settings and adjusting model hyperparameters could potentially improve class discrimination.

Hope this gives you a good starting point for troubleshooting!

@flarota
Author

flarota commented May 17, 2024

Hi @glenn-jocher, thank you for the reply.

I agree with you and I think the cause is a dataset problem.
But I think it's still interesting to understand why it happens and how to deal with it when you can't fix the dataset.

A colleague of mine found another telling example: in one picture, the same objects are detected twice by YOLO, the first time with the correct class (I'll call it "class_A") and the second time with the wrong class ("class_B").
In both cases YOLO was "sure" about the predicted class, giving the other one a very low score.

#     Class_A   Class_B     Other classes
1-1   0.70725   0.0091712   5.4535E-05   1.1915E-09
1-2   0.15818   0.52027     1.1003E-05   2.4418E-08
2-1   0.77648   0.0081475   1.5678E-05   3.4846E-10
2-2   0.42299   0.12249     2.3135E-06   1.1411E-08

Object 1 is predicted correctly in "1-1" and incorrectly in "1-2". The same object!
The same happens with "2-1" and "2-2".

In your opinion, how can this situation happen?

Another question: in your point 3 you mentioned a final softmax layer. Does YOLO actually have one? Since the scores don't sum to 1, I assumed it ends with a sigmoid instead.
Is it possible to access the tensor before this final layer? I tried to identify this layer in the YOLOv8x-seg model but couldn't find it because of the network's complexity. Could you suggest a way to do it, maybe with a piece of code?

Thanks

@glenn-jocher
Member

Hi there!

It's indeed intriguing to see how the model behaves differently for the same object within the same image. This could be due to variations in the object's context within the image, slight differences in appearance or angle in different instances, or even overlapping detections where the non-maximum suppression (NMS) hasn't fully resolved which detections to keep.
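
For the duplicate detections with conflicting classes specifically, one option worth trying is class-agnostic NMS at prediction time, so overlapping boxes are merged regardless of class. A minimal sketch (the weights path, image path, and confidence threshold below are placeholders):

from ultralytics import YOLO

model = YOLO('yolov8x-seg.pt')  # placeholder: your trained segmentation weights

# agnostic_nms=True runs non-maximum suppression across all classes, so two
# overlapping detections of the same object with different classes collapse
# into the single highest-scoring one
results = model.predict('path/to/image.jpg', agnostic_nms=True, conf=0.25)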

Regarding your question about the softmax layer, YOLO models typically use a sigmoid function in the final layer for object detection tasks, which explains why the scores don't sum up to 1. This is because each class score is treated independently, allowing for multi-label classification.
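
A tiny sketch of this (illustrative values only, not actual YOLOv8 outputs), showing why independent sigmoid scores need not sum to 1 while softmax probabilities always do:

import torch

# Hypothetical raw class logits for one detection
logits = torch.tensor([2.0, -3.5, -8.0, -9.0])

sigmoid_scores = torch.sigmoid(logits)         # each class scored independently
softmax_scores = torch.softmax(logits, dim=0)  # classes compete for probability mass

print(sigmoid_scores.sum())  # generally not equal to 1
print(softmax_scores.sum())  # always 1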

To access the tensor before the final sigmoid activation, you can modify the model's forward function or use hooks in PyTorch. Here’s a quick example of how you might use a forward hook to access the output before the activation:

from ultralytics import YOLO

def get_activation(name):
    def hook(module, inputs, output):
        # 'output' holds the raw outputs of this module before any post-processing
        # (it may be a tensor or a tuple of tensors depending on the task)
        print(f"{name} activation: {output}")
    return hook

model = YOLO('yolov8n.pt')  # Load your model
head = model.model.model[-1]  # the last module of the underlying network is the Detect/Segment head
head.register_forward_hook(get_activation('head'))

# Run your model to trigger the hook and print the activations
model.predict('path/to/image.jpg')

This code registers a forward hook that prints the raw output of the detection/segmentation head before post-processing. You can hook any other module in the same way if you're interested in an earlier layer.
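
To see which modules are available to hook, one simple option (a sketch, assuming the same model object as above) is to list the submodules of the underlying network:

# model.model is the underlying nn.Module; model.model.model is an nn.Sequential
# holding the backbone, neck, and head modules in order
for i, m in enumerate(model.model.model):
    print(i, m.__class__.__name__)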

Hope this helps! 😊
