index #8012
Replies: 46 comments 90 replies
-
Hey everyone, Glenn here! 🚀 Dive into our comprehensive guide on YOLOv8, the pinnacle of real-time object detection and image segmentation technology. Whether you're just starting out or you're deep into the machine learning world, this page is your go-to resource for installing, predicting, and training with YOLOv8. Got questions or insights? This is the perfect spot to share your thoughts and learn from others in the community. Let's make the most of YOLOv8 together! 💡👥
-
Hello, I am super new to computer vision, and I want to know if there is a way to isolate the detected texts (like you would in Roboflow, where all the detected text areas are split) so I can use them better in my text extraction model. Thank you.
-
If I pass the model an image, how can I extract the class id and class name from the result?
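In the YOLOv8 Python API, each `Results` object carries a `boxes` attribute whose `cls` tensor holds the class ids, and `model.names` maps ids to names. A minimal sketch, assuming the `ultralytics` package is installed; the weights file and image path below are placeholders:

```python
def extract_classes(class_ids, names):
    """Map numeric class ids (e.g. from result.boxes.cls) to (id, name) pairs."""
    return [(int(i), names[int(i)]) for i in class_ids]

if __name__ == "__main__":
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    result = model("image.jpg")[0]  # "image.jpg" is a placeholder path
    # result.boxes.cls is a tensor of class ids, one per detection
    detections = extract_classes(result.boxes.cls.tolist(), model.names)
    print(detections)
```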
-
Hello,
-
I tried using result.show() but it says it has no object show, and using your code, it says list has no object pred.
…On Sun, Mar 10, 2024, 3:34 AM Glenn Jocher ***@***.***> wrote:
@Zulkazeem <https://github.com/Zulkazeem> hey there! 👋 It looks like your code is almost there, but if you're not getting any detections, there might be a few things to check:
1. *Model Confidence:* Ensure your model's confidence threshold isn't set too high, which might prevent detections. Try lowering the conf argument in your predict call.
2. *Image Path:* Double-check the image path to ensure it's correct and the image is accessible.
3. *Model Compatibility:* Make sure the model you're using is appropriate for the task. If it's trained on a very different dataset or for a different task, it might not perform well on your images.
4. *Looping Through Results:* The way you're iterating through pred and then r seems a bit off. After calling predict, you should directly access the detections, like so:
   results = pred_model.predict(source=img_path)
   for result in results:
       for *xyxy, conf, cls in result.pred[0]:
           # Process each detection here
5. *Visualization:* Before trying to save or further process detections, simply try visualizing them with result.show() to ensure detections are being made.
If you've checked these and still face issues, it might be helpful to share more details or error messages you're encountering. Keep experimenting, and don't hesitate to reach out for more help! 🚀
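Note that the quoted snippet iterates `result.pred[0]`, which is YOLOv5-style; YOLOv8 `Results` objects expose detections through `result.boxes` instead, which is likely why the "no object pred" error appears. A hedged sketch of the YOLOv8-style loop, assuming the `ultralytics` package; weights and image path are placeholders:

```python
def as_detection(xyxy, conf, cls):
    """Flatten one detection into a plain (x1, y1, x2, y2, conf, cls_id) tuple."""
    return tuple(float(v) for v in xyxy) + (float(conf), int(cls))

if __name__ == "__main__":
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    for result in model.predict(source="image.jpg", conf=0.25):
        for box in result.boxes:  # one Boxes row per detection
            print(as_detection(box.xyxy[0], box.conf[0], box.cls[0]))
        result.show()  # visualize to confirm detections are being made
```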
-
Hi, I'm new to running models myself, and the last time I did any image training was about 15 years ago, though I am a Python veteran. I'd like to try running the building footprint models. Do you have a video series that can take me through setting up YOLOv8 and then running the model to extract footprints?
-
Hi, can someone please help? Thanks in advance
-
Hi,
-
hi, I'm a student and I'm doing this for my undergraduate thesis. I'm implementing a YOLOv8 model in an Android gallery's search mechanism. The purpose of the YOLOv8 model is to scan media files and return images that have a bounding-box label matching the search query. I can make it work with yolov5s.torchscript.ptl using org.pytorch:pytorch_android_lite:1.10.0 and org.pytorch:pytorch_android_torchvision_lite:1.10.0, but it won't work with yolov8s.torchscript. The yolov5s.torchscript.ptl has a function to load the model and the classes.txt; does the yolov8 model not need that?
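For the TorchScript question above: YOLOv8 models can be exported to TorchScript from Python, and the class names live in `model.names`, so a `classes.txt` can be generated rather than shipped separately. A sketch assuming the `ultralytics` package; whether the exported file loads under the 1.10.0 Lite runtime is a separate compatibility question, since newer exports may need a newer org.pytorch runtime:

```python
def classes_txt(names):
    """Render a model's id -> name mapping as classes.txt content (one name per line, in id order)."""
    return "\n".join(names[i] for i in sorted(names))

if __name__ == "__main__":
    from ultralytics import YOLO

    model = YOLO("yolov8s.pt")
    model.export(format="torchscript")  # writes yolov8s.torchscript next to the weights
    with open("classes.txt", "w") as f:
        f.write(classes_txt(model.names))
```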
-
When I am training a YOLOv8 model, how can I store the current epoch number in a variable that I can use wherever I want?
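One way to do this is a training callback: Ultralytics fires `on_train_epoch_end` with the trainer object, whose `epoch` attribute is the zero-based index of the epoch that just ended. A sketch assuming the `ultralytics` package and its small `coco8.yaml` demo dataset:

```python
# Module-level holder so the value stays readable after training finishes
current_epoch = {"value": -1}

def on_train_epoch_end(trainer):
    """Record the zero-based index of the epoch that just ended."""
    current_epoch["value"] = trainer.epoch

if __name__ == "__main__":
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    model.add_callback("on_train_epoch_end", on_train_epoch_end)
    model.train(data="coco8.yaml", epochs=3)
    print(current_epoch["value"])
```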
-
Hey Glenn, I just want to obtain the metrics for the lower half of the images. I tried modifying the label and annotation files of the validation split to contain only those bounding boxes which are in the lower half, but this doesn't seem to work. Any suggestions?
-
Hi, I have a question about the YOLOv8 model. In the pre-trained model, there are labels like "person" and others, but if I create a new model with only the "person" label, will there be a performance difference on my computer between the pre-trained model and the model I create?
-
In YOLO v8.1 I can't find the confusion matrix and results.png. Where are they stored? This is how I started my training:
%cd /kaggle/working/HOME/YOLO_V8_OUTPUT
!yolo train model=yolov8l.pt data=/kaggle/working/HOME/FRUITS_AND_VEGITABLES_NITHIN-6/data.yaml epochs=100 imgsz=640 patience=10 device=0,1 project=/kaggle/working/HOME/YOLO_V8_OUTPUT
-
I am tasked with developing a shelf management system tailored for a specific brand. This system aims to automate the process of sales personnel visiting stores to assess product stock levels and required replenishments. Utilizing object detection, I intend to accurately count the products on the shelves and inform the salesperson of the quantities needed to refill. One major challenge to address is product occlusion, where items may partially or fully obscure others, complicating accurate counting. I'm particularly interested in exploring how YOLOv8, a popular object detection model, can be employed to tackle this problem effectively. Any guidance or insights on implementing such a solution would be greatly appreciated.
-
Hi @glenn-jocher, can you please have a look at this Google Doc? I have tried to explain, through screenshots, the problem I am facing after fine-tuning the model. I would really appreciate your kind guidance. https://docs.google.com/document/d/1WJ5SBdunWSqyd3FjgYgrZn2KjeYxezlel2LeXspmWAQ/edit?usp=sharing
-
Hello, is it meaningful to use Bayesian optimization to search YOLOv8's hyperparameters in order to achieve good results? Would optimizing just the three hyperparameters lr0, momentum, and weight_decay be enough, or should more hyperparameters be included?
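Ultralytics ships a built-in `model.tune()` hyperparameter search (evolutionary mutation rather than Bayesian, though `use_ray=True` hands the search to Ray Tune). Restricting the search to lr0, momentum, and weight_decay can be done via the `space` argument; the bounds below are illustrative guesses, not recommendations. Sampled values are kept inside their bounds, as the small helper mirrors:

```python
def clipped(params, space):
    """Clamp sampled hyperparameter values into their (low, high) bounds."""
    return {k: min(max(v, space[k][0]), space[k][1]) for k, v in params.items()}

if __name__ == "__main__":
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    # Hypothetical bounds for the three hyperparameters in question
    space = {"lr0": (1e-5, 1e-1), "momentum": (0.6, 0.98), "weight_decay": (0.0, 0.001)}
    model.tune(data="coco8.yaml", epochs=10, iterations=50, space=space)
```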
-
I have been trying to use DeepSORT with older YOLOv5 weight files, but after installing and attempting to run from the command prompt I get this issue:
(boxmot-py3.11) (base) C:\Users\flagu\Documents\yolo_tracking>python tracking/track.py --yolo-model best.pt
Is there a way that I could specify and use my custom trained weights to track?
-
hi everyone, I'm trying to train YOLOv8 with multiple GPUs using the device=0,1 hyperparameter... but I found that YOLOv5 has a 'Multi-GPU DistributedDataParallel Mode' using the -m torch.distributed.run --nproc_per_node 2 arguments... Is there a way I can use these options in YOLOv8? I could not find any similar hyperparameters in https://docs.ultralytics.com/usage/cfg/#train-settings
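As far as I can tell, `device=0,1` in YOLOv8 already launches DistributedDataParallel internally, so there is no separate `torch.distributed.run` entry point to invoke. A sketch of the equivalent Python call, with a small helper mirroring how a `device=0,1`-style string maps to a device list:

```python
def parse_devices(spec):
    """Turn a 'device=0,1'-style value into a list of integer GPU ids."""
    return [int(d) for d in str(spec).split(",") if d.strip() != ""]

if __name__ == "__main__":
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    # Passing multiple device ids makes Ultralytics spawn DDP worker processes itself
    model.train(data="coco8.yaml", epochs=100, device=parse_devices("0,1"))
```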
-
Hello, I would like to know whether YOLOv8 can measure the length of objects in a video or in a photo.
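Measuring real-world length needs a calibration step on top of detection: YOLOv8 only gives pixel-space boxes, so you must know the pixels-per-unit scale, e.g. from a reference object of known size or from camera geometry. A sketch with an assumed calibration factor; weights and image path are placeholders:

```python
def box_length_cm(x1, y1, x2, y2, pixels_per_cm):
    """Longest side of a detection box converted to centimetres,
    given a calibration factor measured from a reference object."""
    if pixels_per_cm <= 0:
        raise ValueError("pixels_per_cm must be positive")
    return max(x2 - x1, y2 - y1) / pixels_per_cm

if __name__ == "__main__":
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    result = model("photo.jpg")[0]  # placeholder path
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        # 12.5 px/cm is an assumed calibration, not a real value
        print(box_length_cm(x1, y1, x2, y2, pixels_per_cm=12.5))
```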
-
Hey, I want to detect only a specific class from the video using a pretrained YOLO model. Is there any way to do that by specifying the class name directly rather than specifying the class id, like: model.predict(source="ultralytics/assets/", save=True, classes=[0,1])
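The `classes=` argument takes integer ids, but `model.names` lets you translate names to ids first, so you can effectively filter by name. A sketch assuming the `ultralytics` package:

```python
def class_ids_for(names, wanted):
    """Map class names to the integer ids YOLO's `classes=` argument expects."""
    lookup = {name: i for i, name in names.items()}
    missing = [w for w in wanted if w not in lookup]
    if missing:
        raise KeyError(f"unknown class names: {missing}")
    return [lookup[w] for w in wanted]

if __name__ == "__main__":
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    ids = class_ids_for(model.names, ["person", "car"])
    model.predict(source="ultralytics/assets/", save=True, classes=ids)
```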
-
Hi guys,
-
I'm confused about why my confusion matrix shows the predicted classes as background. Am I doing something wrong with labeling the data? I'm using Roboflow to export my custom dataset.
-
I have trained YOLOv8 to detect shelves, a particular product inside a shelf, and empty space in a shelf.
-
Would YOLOv8 be a suitable model for detecting objects within LIDAR birds-eye-view images? If so, are there any recommendations for training a model on such a dataset?
-
Hiya again, are there any comparisons between the performance of YOLOv8 and two-stage object detectors?
-
Hello, may I ask how YOLOv8 uses ByteTrack to track targets and determine their movement direction?
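ByteTrack is selected via the `tracker` argument of `model.track()`, and with `persist=True` each box keeps a stable `id` across frames, so a coarse direction can be estimated by comparing successive box centers per track id. A sketch (video path and weights are placeholders, and the pixel threshold is an assumption); note that in image coordinates y grows downward:

```python
def movement_direction(prev_center, curr_center, min_shift=1.0):
    """Coarse left/right/up/down direction between two box centers (image coords, y down)."""
    dx = curr_center[0] - prev_center[0]
    dy = curr_center[1] - prev_center[1]
    if max(abs(dx), abs(dy)) < min_shift:
        return "stationary"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

if __name__ == "__main__":
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    last_centers = {}
    # persist=True keeps track ids stable across the frames of the stream
    for result in model.track("video.mp4", tracker="bytetrack.yaml", persist=True, stream=True):
        for box in result.boxes:
            if box.id is None:  # box not yet assigned to a track
                continue
            tid = int(box.id[0])
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            center = ((x1 + x2) / 2, (y1 + y2) / 2)
            if tid in last_centers:
                print(tid, movement_direction(last_centers[tid], center))
            last_centers[tid] = center
```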
-
Hello, can anyone explain to me the difference between modes and tasks in YOLOv8, especially predict as a mode versus detect as a task? I don't understand this.
-
Excuse me, could you help me understand why I get a different image size? I use this to predict with the model: results = model.track(path0, show=False, imgsz=[640, 640]), but my result comes out like this: image 1/1 C:\MI-CARRERA\Semestre7\SIS330\imagenes\72\72_01_inicio..jpg: 640x384 5 persons, 1002.7ms
-
Hello, I want to train my custom face dataset on YOLOv8. The annotation is in the YOLO format (0 0.06582725 0.36396989604685215 0.039814999999999996 0.1125). The code provided in the notebook is not very helpful to me; I would be glad if somebody could help me through this. THANK YOU!
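Those five numbers are class id plus normalised center-x, center-y, width, and height, which is exactly the label format YOLOv8 trains on; training then only needs a dataset YAML pointing at the image and label folders. A sketch with a hypothetical `faces.yaml`, plus a parser that sanity-checks one label line:

```python
def parse_yolo_label(line):
    """Parse one YOLO-format label line: class cx cy w h, all coords normalised to [0, 1]."""
    parts = line.split()
    cls, cx, cy, w, h = int(parts[0]), *map(float, parts[1:5])
    for v in (cx, cy, w, h):
        if not 0.0 <= v <= 1.0:
            raise ValueError(f"coordinate {v} outside [0, 1]")
    return cls, cx, cy, w, h

if __name__ == "__main__":
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    # faces.yaml is a hypothetical dataset config pointing at train/val image dirs
    model.train(data="faces.yaml", epochs=100, imgsz=640)
```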
-
hey, as a beginner it's hard to understand. Can you say where I should start?
-
Explore a complete guide to Ultralytics YOLOv8, a high-speed, high-accuracy object detection & image segmentation model. Installation, prediction, training tutorials and more.
https://docs.ultralytics.com/