Search before asking
I have searched the YOLOv5 issues and discussions and found no similar questions.
Question
I want to keep track of which transformations are applied to an image, and how. For example, after mosaic augmentation stitches 4 images together, I want to know which images were combined, the pixel coordinates of each part (the start and end pixels of each sub-image within the mosaic, so I can map each region of the mosaic sample back to its original image), and which bounding boxes belong to which sub-image. How can I keep track of these?
Also, in YOLOWorld, is tracking only the mosaic augmentation enough, or are the bounding boxes transformed further downstream as well?
Additional
No response
👋 Hello @toobatehreem, thank you for your interest in YOLOv5 🚀! This is an automated response to help guide you while an Ultralytics engineer will assist you shortly.
To answer your query, detailed tracking of transformations like mosaic augmentation can require custom logging or debugging in the code. If you'd like us to assist further, please provide more specifics about your setup and code modifications, or a minimum reproducible example to help us better understand your request.
Requirements
Verify that you are using Python>=3.8.0 and that all required dependencies are installed. You can install them from the requirements.txt file in the repository (`pip install -r requirements.txt`).
Environments
YOLOv5 can be run seamlessly in numerous verified environments:
Notebooks with free GPU.
Cloud platforms like Google Cloud or Amazon AWS.
Local environments configured for Python and PyTorch.
If you are modifying the dataset or training pipeline, please ensure transformations like mosaic are being applied using the components provided in the official repository to keep compatibility.
Next Steps
Please clarify your use case or share an example that includes:
A description of the data being used.
Code snippets showcasing where and how transformations are applied.
Any relevant logs or observations during training.
An Ultralytics team member will review your request shortly. 🚀
toobatehreem changed the title from "Keeping transformations track" to "Keeping track of transformations in YOLOWorld" on Dec 6, 2024.
@toobatehreem to track transformations like mosaic augmentation in YOLOv5, you can modify the mosaic function in the datasets.py file. Inside the function, log or store details such as the source image indices, their placement within the mosaic, and their corresponding bounding boxes. You can also add debug statements or save intermediate mosaic outputs for verification.
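A minimal sketch of this kind of provenance logging (a standalone illustration, not YOLOv5's actual mosaic implementation; it assumes each of the 4 images has already been resized to s×s so each fills exactly one quadrant, and that labels are xyxy boxes in pixel coordinates relative to their source image):

```python
import numpy as np

def mosaic_with_provenance(images, labels, s=320):
    """Stitch 4 s-by-s images into a 2s-by-2s mosaic and record which
    source image fills each quadrant. Simplified sketch: YOLOv5's real
    mosaic uses a random center point, padding, and label clipping."""
    canvas = np.full((2 * s, 2 * s, 3), 114, dtype=np.uint8)  # gray fill
    provenance = []
    for i, (img, boxes) in enumerate(zip(images, labels)):
        ox, oy = (i % 2) * s, (i // 2) * s  # top-left offset of quadrant i
        canvas[oy:oy + s, ox:ox + s] = img
        # shift each box from source-image coords into mosaic coords
        shifted = [(x1 + ox, y1 + oy, x2 + ox, y2 + oy)
                   for x1, y1, x2, y2 in boxes]
        provenance.append({
            "source_index": i,                        # which image
            "region_xyxy": (ox, oy, ox + s, oy + s),  # where it sits
            "boxes": shifted,                         # its boxes, shifted
        })
    return canvas, provenance
```

Each `provenance` entry answers the three questions in the issue: which image was stitched in, the pixel extents of its region, and which (shifted) boxes belong to it. In the real pipeline you would record the dataset indices chosen for the mosaic rather than the loop counter.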
Regarding further transformations, bounding boxes may be adjusted again by augmentations applied after the mosaic step, such as scaling, flipping, or random rotations. To trace those changes, inspect the other augmentation functions in datasets.py, such as random_perspective.
For a deeper understanding, explore and modify the dataset creation pipeline in the source code. Let us know if you encounter specific issues while implementing this!