Hello everyone, I just used YOLO-NAS to do object detection on a video, but I got this error:
```
[2024-04-22 17:37:21] INFO - crash_tips_setup.py - Crash tips is enabled. You can set your environment variable to CRASH_HANDLER=FALSE to disable it
[2024-04-22 17:37:22,397] torch.distributed.elastic.multiprocessing.redirects: [WARNING] NOTE: Redirects are currently not supported in Windows or MacOs.
[WARNING] No module named 'pycocotools'
[2024-04-22 17:37:32] WARNING - env_sanity_check.py - Failed to verify operating system: Deci officially supports only Linux kernels. Some features may not work as expected.
[2024-04-22 17:37:33] WARNING - checkpoint_utils.py - :warning: The pre-trained models provided by SuperGradients may have their own licenses or terms and conditions derived from the dataset used for pre-training. It is your responsibility to determine whether you have permission to use the models for your use case. The model you have requested was pre-trained on the coco dataset, published under the following terms: https://cocodataset.org/#termsofuse
[2024-04-22 17:37:33] INFO - checkpoint_utils.py - License Notification: YOLO-NAS pre-trained weights are subjected to the specific license terms and conditions detailed in https://github.com/Deci-AI/super-gradients/blob/master/LICENSE.YOLONAS.md By downloading the pre-trained weight files you agree to comply with these terms.
[2024-04-22 17:37:33] INFO - checkpoint_utils.py - Successfully loaded pretrained weights for architecture yolo_nas_s
C:\Users\MSI GF 63\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\amp\autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
  warnings.warn(
[2024-04-22 17:37:33] INFO - pipelines.py - Fusing some of the model's layers. If this takes too much memory, you can deactivate it by setting fuse_model=False
Traceback (most recent call last):
  File "D:\5.1 YOLO_NAS_Object_Tracking_Pycharm\YOLO_NAS_Object_Tracking_Pycharm\Lecture6_Object_Detection_YOLONAS_Video\object_detection_video.py", line 32, in <module>
    result = list(model.predict(frame, conf=0.35))[0]
TypeError: 'ImageDetectionPrediction' object is not iterable
```
Here is the program code:
```python
import cv2
import torch
from super_gradients.training import models
import numpy as np
import math

cap = cv2.VideoCapture("../Video/video1.mp4")
frame_width = int(cap.get(3))
frame_height = int(cap.get(4))
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
model = models.get('yolo_nas_s', pretrained_weights="coco").to(device)
count = 0
classNames = ["person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train", "truck", "boat",
              "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat",
              "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella",
              "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat",
              "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup",
              "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli",
              "carrot", "hot dog", "pizza", "donut", "cake", "chair", "sofa", "pottedplant", "bed",
              "diningtable", "toilet", "tvmonitor", "laptop", "mouse", "remote", "keyboard", "cell phone",
              "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors",
              "teddy bear", "hair drier", "toothbrush"
              ]
out = cv2.VideoWriter('Output.avi', cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), 10, (frame_width, frame_height))

while True:
    ret, frame = cap.read()
    count += 1
    if ret:
        result = list(model.predict(frame, conf=0.35))[0]
        bbox_xyxys = result.prediction.bboxes_xyxy.tolist()
        confidences = result.prediction.confidence
        labels = result.prediction.labels.tolist()
        for (bbox_xyxy, confidence, cls) in zip(bbox_xyxys, confidences, labels):
            bbox = np.array(bbox_xyxy)
            x1, y1, x2, y2 = bbox[0], bbox[1], bbox[2], bbox[3]
            x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)
            classname = int(cls)
            class_name = classNames[classname]
            conf = math.ceil((confidence * 100)) / 100
            label = f'{class_name}{conf}'
            print("Frame N", count, "", x1, y1, x2, y2)
            t_size = cv2.getTextSize(label, 0, fontScale=1, thickness=2)[0]
            c2 = x1 + t_size[0], y1 - t_size[1] - 3
            cv2.rectangle(frame, (x1, y1), c2, [255, 0, 255], -1, cv2.LINE_AA)
            cv2.putText(frame, label, (x1, y1 - 2), 0, 1, [255, 255, 255], thickness=1, lineType=cv2.LINE_AA)
            cv2.rectangle(frame, (x1, y1), (x2, y2), (255, 0, 255), 3)
        resize_frame = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
        out.write(frame)
        cv2.imshow("Frame", resize_frame)
        if cv2.waitKey(1) & 0xFF == ord('1'):
            break
    else:
        break

out.release()
cap.release()
cv2.destroyAllWindows()
```
I am using super-gradients version 3.7.0. Can you help me solve this error?
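For context on what the traceback means: wrapping any non-iterable object in `list()` raises exactly this kind of `TypeError`, which suggests `model.predict()` is returning a single prediction object rather than a list of them. The minimal, self-contained sketch below reproduces the failure pattern using `SimpleNamespace` as a hypothetical stand-in for the real `ImageDetectionPrediction` class (the attribute names mirror the ones used in the code above, but this is not the actual super-gradients API):

```python
from types import SimpleNamespace

# Hypothetical stand-in for a single-image prediction result:
# a plain object exposing a .prediction attribute, not an iterable.
fake_result = SimpleNamespace(
    prediction=SimpleNamespace(
        bboxes_xyxy=[[10.0, 20.0, 110.0, 220.0]],  # one example box
        confidence=[0.87],
        labels=[0],
    )
)

# Calling list() on a non-iterable object raises TypeError,
# matching the shape of the traceback above.
try:
    boxes = list(fake_result)
except TypeError as exc:
    print(f"TypeError: {exc}")

# Accessing the attributes directly works without any iteration.
print(fake_result.prediction.bboxes_xyxy[0])
```

If `predict()` really does return a single prediction object in this version, dropping the `list(...)[0]` wrapper and using the return value directly would avoid the iteration entirely.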