Provided model doesn't work [HAILO8] #81
Hi @almex-m-bulbul, Regards,
@omerwer The Hailo chip is connected. The same issue was also hit by another teammate on a different machine.

$ hailortcli scan
Hailo Devices:
[-] Device: 0000:04:00.0
$ hailortcli --version
HailoRT-CLI version 4.15.0
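Side note, purely as a guess on my part: one common failure mode is a HEF compiled against a HailoRT release newer than the installed 4.15.0, in which case the runtime may refuse to load it. The check itself is just a dotted-version comparison; a minimal illustrative helper (nothing here calls HailoRT, and the "required" version is a placeholder):

```python
# Illustrative only: compare dotted version strings such as "4.15.0".
# This just formalizes the "is my runtime new enough for this HEF?"
# check one would otherwise do by hand.

def parse_version(v: str) -> tuple:
    """Turn '4.15.0' into (4, 15, 0) so tuples compare numerically."""
    return tuple(int(part) for part in v.split("."))

def runtime_at_least(installed: str, required: str) -> bool:
    """True if the installed runtime is >= the version the HEF needs."""
    return parse_version(installed) >= parse_version(required)

# A HEF needing a hypothetical 4.16.0 would not load on the 4.15.0 above.
print(runtime_at_least("4.15.0", "4.16.0"))  # False
print(runtime_at_least("4.15.0", "4.14.5"))  # True
```

If the versions do turn out to be mismatched, upgrading HailoRT (or downloading the HEF compiled for the matching release from the Model Zoo) would be the fix to try first.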
It works perfectly when checked with other models:

$ hailortcli benchmark repvgg_a0_person_reid_2048.hef
Starting Measurements...
Measuring FPS in hw_only mode
Network repvgg_a0_person_reid_2048/repvgg_a0_person_reid_2048: 100% | 15382 | FPS: 1025.34 | ETA: 00:00:00
Measuring FPS and Power in streaming mode
[HailoRT] [warning] Using the overcurrent protection dvm for power measurement will disable the overcurrent protection.
If only taking one measurement, the protection will resume automatically.
If doing continuous measurement, to enable overcurrent protection again you have to stop the power measurement on this dvm.
Network repvgg_a0_person_reid_2048/repvgg_a0_person_reid_2048: 100% | 15386 | FPS: 1025.34 | ETA: 00:00:00
Measuring HW Latency
Network repvgg_a0_person_reid_2048/repvgg_a0_person_reid_2048: 100% | 6335 | HW Latency: 1.93 ms | ETA: 00:00:00
=======
Summary
=======
FPS (hw_only) = 1025.42
(streaming) = 1025.36
Latency (hw) = 1.93251 ms
Device 0000:04:00.0:
Power in streaming mode (average) = 1.68351 W
(max) = 1.70233 W |
We are trying to use the `facial landmark detection` model on a Hailo-8 device, but apparently it is not working.
model : tddfa_mobilenet_v1.hef
link: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.10.0/hailo8/tddfa_mobilenet_v1.hef
log:
I have also tried to compile the ONNX model myself; the compilation goes smoothly, but running the resulting HEF shows the same error.
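For reference, this is roughly how we drive the HEF from Python, in case it helps pinpoint where the failure occurs. Treat it as a sketch of the HailoRT 4.x Python API (`hailo_platform`) rather than a verified snippet; the import guard makes it a no-op on machines without the package and the device, and the zero-filled input frame is only a placeholder:

```python
# Sketch: synchronous inference on a HEF via the HailoRT Python API
# (hailo_platform). Guarded so it does nothing where the package (or
# numpy) is not installed; requires an actual Hailo-8 device to run.
try:
    import numpy as np
    from hailo_platform import (HEF, VDevice, ConfigureParams,
                                HailoStreamInterface, InferVStreams,
                                InputVStreamParams, OutputVStreamParams)
    HAILO_AVAILABLE = True
except ImportError:
    HAILO_AVAILABLE = False

def run_hef(hef_path: str):
    """Run one dummy frame through the network; None if HailoRT is absent."""
    if not HAILO_AVAILABLE:
        return None
    hef = HEF(hef_path)
    with VDevice() as device:
        params = ConfigureParams.create_from_hef(
            hef, interface=HailoStreamInterface.PCIe)
        network_group = device.configure(hef, params)[0]
        in_params = InputVStreamParams.make(network_group)
        out_params = OutputVStreamParams.make(network_group)
        in_info = hef.get_input_vstream_infos()[0]
        # Placeholder input: one zero-filled frame in the network's shape.
        frame = np.zeros((1, *in_info.shape), dtype=np.uint8)
        ng_params = network_group.create_params()
        with network_group.activate(ng_params):
            with InferVStreams(network_group, in_params, out_params) as pipe:
                return pipe.infer({in_info.name: frame})

if __name__ == "__main__":
    print(run_hef("tddfa_mobilenet_v1.hef"))
```

With this structure the error should surface either at `HEF(...)` (a parsing/version problem with the file itself) or at `device.configure(...)` (a device/runtime problem), which may help narrow down where `tddfa_mobilenet_v1.hef` fails.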