In the README of the CUDA_BEVFusion project, they wrote: "The data in the performance table was obtained by us on the Nvidia Orin platform, using TensorRT-8.6, cuda-11.4 and cudnn8.6 statistics."
But when I try to build it with CUDA 11.4 and TensorRT 8.5.2, the build asks for libcublas.so.12, which is part of CUDA 12.
What is wrong in my setup?
My machine is a Jetson AGX Orin (aarch64):
L4T: 35.3.1
Ubuntu: 20.04
JetPack: 5.1.1
Python: 3.8
CUDA: 11.4.314
TensorRT: 8.5.2.2
Is there anything wrong with the documentation, and how can I fix this problem?
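A quick way to see where the CUDA 12 requirement comes from is to inspect the produced binary with `ldd` (the binary path below is hypothetical; substitute your actual build artifact):

```shell
# Hypothetical path to the build output; adjust for your checkout.
# List the CUDA-related shared libraries the binary links against:
ldd build/bevfusion | grep -iE 'cublas|cudart|nvinfer'

# A line like "libcublas.so.12 => not found" means some component
# (e.g. a prebuilt plugin or tool) was compiled against CUDA 12
# rather than the CUDA 11.4 that JetPack 5.1.1 provides.
```

If `libcublas.so.12` shows up as "not found", the mismatch is baked into whatever object linked it, not into your environment variables.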
I feel there are some mistakes in the documentation of the CUDA_BEVFusion project, because TensorRT 8.6 with CUDA 11.4 is not supported on Jetson Orin so far; the supported combination is CUDA 12 with TensorRT 8.6.
Am I right?
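To double-check which TensorRT and CUDA versions are actually installed on the Jetson before comparing against the README, a few standard queries (assuming the usual JetPack debs and the `tensorrt` Python wheel are present):

```shell
# Installed TensorRT debs (JetPack ships TensorRT as Debian packages):
dpkg -l | grep -Ei 'tensorrt|libnvinfer'

# Version reported by the Python binding, if installed:
python3 -c "import tensorrt; print(tensorrt.__version__)"

# CUDA toolkit version:
nvcc --version
```

If these report TensorRT 8.5.x and CUDA 11.4, then any demand for `libcublas.so.12` has to come from a component built elsewhere against CUDA 12.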