Hello guys, thank you all for everything.
I am trying to run the SFace face recognition model (face_recognition_sface) with TensorRT:
https://github.com/opencv/opencv_zoo/tree/main/models/face_recognition_sface
To start, I inspected the model with Netron to check its inputs and outputs:
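A minimal sketch of how the same I/O can be double-checked from code with OpenCV's DNN module (the file name and the 112x112 input / 128-d output shapes are assumptions based on the zoo's SFace release):

```cpp
#include <opencv2/dnn.hpp>
#include <iostream>

int main() {
    // Load the ONNX model from the zoo (file name assumed).
    cv::dnn::Net net = cv::dnn::readNetFromONNX("face_recognition_sface_2021dec.onnx");

    // Dummy 112x112 BGR image, raw 0-255 values, no normalization.
    cv::Mat img(112, 112, CV_8UC3, cv::Scalar(127, 127, 127));
    cv::Mat blob = cv::dnn::blobFromImage(img);   // 1x3x112x112, float32, NCHW

    net.setInput(blob);
    cv::Mat feat = net.forward();                 // expected: 1x128 feature vector
    std::cout << "output: " << feat.rows << " x " << feat.cols << std::endl;
    return 0;
}
```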
Now, let's compare two photos as a test:
My program is at this address:
https://github.com/sayyid-abolfazl/sface_trt
and its output is not correct:
(Screenshot of the incorrect TRT output; the original image link has expired.)
The problem is that the feature vectors produced by the ONNX model are completely different from those produced by the NVIDIA TensorRT version, and I don't know where the problem lies.
For reference, the OpenCV demo I am comparing against is:
https://github.com/opencv/opencv_zoo/blob/main/models/face_recognition_sface/demo.cpp
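For comparison, this is roughly how the reference features can be extracted with OpenCV's FaceRecognizerSF, following demo.cpp (model file names and image paths are placeholders); the TRT engine should produce nearly the same 128-d vectors for the same aligned crops:

```cpp
#include <opencv2/objdetect.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>

int main() {
    cv::Mat img1 = cv::imread("image1.jpg");
    cv::Mat img2 = cv::imread("image2.jpg");

    auto detector = cv::FaceDetectorYN::create(
        "face_detection_yunet_2023mar.onnx", "", img1.size());
    auto recognizer = cv::FaceRecognizerSF::create(
        "face_recognition_sface_2021dec.onnx", "");

    cv::Mat faces1, faces2;
    detector->setInputSize(img1.size());
    detector->detect(img1, faces1);   // each row: bbox(4) + 5 landmarks(10) + score(1)
    detector->setInputSize(img2.size());
    detector->detect(img2, faces2);
    if (faces1.empty() || faces2.empty()) {
        std::cerr << "no face detected" << std::endl;
        return 1;
    }

    cv::Mat aligned1, aligned2, feat1, feat2;
    recognizer->alignCrop(img1, faces1.row(0), aligned1);   // 112x112 aligned crop
    recognizer->alignCrop(img2, faces2.row(0), aligned2);
    recognizer->feature(aligned1, feat1);                   // 1x128 feature vector
    recognizer->feature(aligned2, feat2);

    double cosine = recognizer->match(feat1, feat2,
                                      cv::FaceRecognizerSF::DisType::FR_COSINE);
    std::cout << "cosine similarity: " << cosine << std::endl;
    return 0;
}
```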
It should be noted that if you use the align-and-crop method, the results are slightly different from just sending the cropped face image without alignment, but with TRT the results are completely different.
I tried several different approaches, such as changing the color channels, the channel order, and how the network blob is created, but I still did not get the right output.
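As far as I can tell from the zoo's Python demo, the ONNX path feeds the network a plain blobFromImage of the 112x112 aligned BGR crop: float32, NCHW, raw 0-255 values, no mean/std normalization and no R/B swap. This is an assumption, but if the TRT input is normalized differently the features will diverge. A sketch of building that exact buffer for the TRT binding:

```cpp
#include <opencv2/dnn.hpp>
#include <vector>

// Build the input buffer the way the OpenCV wrapper appears to do it:
// float32, NCHW, BGR order, raw 0-255 values, no mean/std normalization.
std::vector<float> makeSfaceInput(const cv::Mat& alignedBgr /* 112x112, CV_8UC3 */) {
    cv::Mat blob = cv::dnn::blobFromImage(alignedBgr, 1.0, cv::Size(112, 112),
                                          cv::Scalar(), /*swapRB=*/false, /*crop=*/false);
    // blob is 1x3x112x112 and contiguous, so it can be copied straight into the
    // TensorRT input binding (assuming the engine was built from the same ONNX
    // with no extra preprocessing baked in).
    return std::vector<float>(blob.ptr<float>(), blob.ptr<float>() + blob.total());
}
```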
I am also trying to use the model's own alignment-and-cropping mode and to send it the image together with the 15 face coordinates, but currently the output in normal mode is already completely wrong.
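About the 15 facial coordinates: my understanding (to be verified) is that alignCrop expects them in the FaceDetectorYN row layout, i.e. the bounding box, five landmarks as (x, y) pairs, and the score. A sketch of building such a row by hand:

```cpp
#include <opencv2/core.hpp>
#include <array>

// Build the 1x15 face row that alignCrop expects, in the layout produced by
// FaceDetectorYN (my reading of it -- verify against a real detector output):
// x, y, w, h, then five landmarks as (x, y) pairs
// (right eye, left eye, nose tip, right mouth corner, left mouth corner),
// and finally the detection score.
cv::Mat makeFaceRow(float x, float y, float w, float h,
                    const std::array<cv::Point2f, 5>& landmarks, float score) {
    cv::Mat row(1, 15, CV_32F);
    float* p = row.ptr<float>();
    p[0] = x; p[1] = y; p[2] = w; p[3] = h;
    for (int i = 0; i < 5; ++i) {
        p[4 + 2 * i]     = landmarks[i].x;
        p[4 + 2 * i + 1] = landmarks[i].y;
    }
    p[14] = score;
    return row;
}

// Usage: recognizer->alignCrop(image, makeFaceRow(...), aligned);
```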