# Deploy the yolort on mobile by react-native-pytorch-core #378
Hi @JohnZcp, thanks for reporting this problem to us. Is there a minimal code example that reproduces this bug? That would help us resolve the issue more quickly.
@zhiqwang I just got a response; the developer will share a repo in this post soon.
Hi @zhiqwang, @JohnZcp reached out and asked if I could provide a repro for the crash when loading the TorchScript yolort model. I created a repro example based on the PyTorch Android HelloWorldApp demo. You can access the GitHub repo at https://github.com/raedle/c/tree/yolov5s/HelloWorldApp. The detailed steps are below or in the HelloWorldApp repo. Let me know if you have any questions!

### Quickstart

HelloWorld is a simple image classification application that demonstrates how to use the PyTorch Android API.

### 1. Model

The TorchScript yolort model is part of the repo.

### 2. Cloning from GitHub
### 3. Build and install debug build

If the Android SDK and Android NDK are already installed, you can install this application on a connected Android device or emulator with:
We recommend opening this project in Android Studio 3.5.1+ (at the moment, PyTorch Android and the demo applications use Android Gradle plugin version 3.5.0, which is supported only by Android Studio 3.5.1 and higher).

### 4. Gradle dependencies

PyTorch Android is added to HelloWorld as Gradle dependencies in build.gradle:
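The dependency snippet itself did not survive in this snapshot. A typical block for the stock PyTorch Android HelloWorld setup looks like the following (the version numbers are illustrative, matching the PyTorch 1.10.0 environment reported below; the repro may pin different ones):

```gradle
dependencies {
    // Core PyTorch Android runtime (JNI bindings + TorchScript interpreter)
    implementation 'org.pytorch:pytorch_android:1.10.0'
    // Image <-> tensor conversion utilities used by the demo app
    implementation 'org.pytorch:pytorch_android_torchvision:1.10.0'
}
```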
The JNI bits of the PyTorch Android dependency will not be used; instead, it extracts the PyTorch C++ frontend API.

### 5. Get Crash Report

The app will crash instantly on start with the following error:
Start logcat to receive the crash report, then run the app.

### 6. When it does not crash anymore

If the model works with PyTorch Mobile, the app should not crash.
### 7. Alternative Model

Check that an alternative model works: swap in the alternative model, then rebuild the debug build and install it again.
The app will load successfully without crashing.
@zhiqwang How is this progressing? Have you found the cause of the bug?
🐛 Describe the bug
This post follows the recent discussion about the problem of deploying yolort on mobile (see facebookresearch/playtorch#10). I received the following feedback from a PyTorch Live developer:
```
The TorchScripted yolort model fails inference with the following error:

terminating with uncaught exception of type c10::Error: isIntList()INTERNAL ASSERT FAILED at "../../../../src/main/cpp/libtorch_include/x86/ATen/core/ivalue_inl.h":1808, please report a bug to PyTorch. Expected IntList but got GenericList
Exception raised from toIntList at ../../../../src/main/cpp/libtorch_include/x86/ATen/core/ivalue_inl.h:1808 (most recent call first):
(no backtrace available)" failed
```
It seems there is a datatype conflict between IntList and GenericList, which could be resolved on either side. Is anyone familiar with this error? I would be happy to relay messages between the yolort team and the PyTorch Mobile team.
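For background on what the assertion means: TorchScript's IValue layer stores a `List[int]` as a specialized IntList, while a list whose element type is not statically uniform becomes a GenericList, and `toIntList()` rejects the latter. A minimal pure-Python sketch of that distinction (the helper names below are illustrative, not the real ATen API):

```python
# Illustration of the IntList vs. GenericList distinction enforced by
# the ATen IValue layer. A value only counts as an "IntList" if every
# element is a plain int; anything else behaves like a "GenericList".
# These helpers are illustrative only, not real PyTorch APIs.

def is_int_list(values):
    """Mimic the spirit of c10::IValue::isIntList(): all elements must be ints."""
    return isinstance(values, list) and all(
        isinstance(v, int) and not isinstance(v, bool) for v in values
    )

def to_int_list(values):
    """Mimic toIntList(): assert the specialized type before converting."""
    if not is_int_list(values):
        # Mirrors the INTERNAL ASSERT "Expected IntList but got GenericList"
        raise TypeError("Expected IntList but got GenericList")
    return values

print(to_int_list([640, 640]))        # uniform ints: accepted
try:
    to_int_list([640, (320, 320)])    # mixed element types: rejected
except TypeError as e:
    print(e)
```

In practice this kind of mismatch often comes from an argument that was scripted as a generic or heterogeneous list where the operator schema expects `List[int]`, so tightening the type annotation on the yolort side is one plausible direction to investigate.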
Versions
Collecting environment information...
PyTorch version: 1.10.0+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.12.0
Libc version: glibc-2.26
Python version: 3.7.13 (default, Mar 16 2022, 17:37:17) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: False
CUDA runtime version: 11.1.105
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.10.0+cu111
[pip3] torchaudio==0.10.0+cu111
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.11.0
[pip3] torchvision==0.11.1+cu111
[conda] Could not collect