Description

I'm having a similar problem in C++ using TensorRT and CUDA. My ONNX model has two inputs and one output. My inference code is:
```cpp
m_context = std::unique_ptr<nvinfer1::IExecutionContext>(m_engine->createExecutionContext());

// Set input dimensions (the input shape is dynamic).
m_context->setInputShape("im0", dims);
m_context->setInputShape("im1", dims);

// Bind device buffers to the engine's I/O tensors.
status = m_context->setTensorAddress("im0", im0_device_ptr);
assert(status);
status = m_context->setTensorAddress("im1", im1_device_ptr);
assert(status);
status = m_context->setTensorAddress("disparity_map", out_disparity_map_device_ptr);
assert(status);

status = m_context->enqueueV3(stream);
assert(status);
```
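For context, the device buffers are sized from the shapes the context reports once both input shapes are set; roughly like the sketch below (the `volume` helper, the function name, and the FP32 assumption are illustrative, not verbatim from my code):

```cpp
#include <cassert>
#include <cstdint>
#include <cuda_runtime_api.h>
#include <NvInfer.h>

// Illustrative helper (not from my actual code): element count of a fully
// resolved shape; asserts that no dimension is still dynamic (-1).
static int64_t volume(const nvinfer1::Dims& d) {
    int64_t v = 1;
    for (int i = 0; i < d.nbDims; ++i) {
        assert(d.d[i] >= 0);
        v *= d.d[i];
    }
    return v;
}

// After setInputShape() has been called for both inputs, the context can
// report the resolved shape of any tensor, including the output.
static void* allocDeviceBuffer(nvinfer1::IExecutionContext& ctx, const char* name) {
    const nvinfer1::Dims dims = ctx.getTensorShape(name);
    void* ptr = nullptr;
    // Assuming FP32 tensors here; query the engine's getTensorDataType(name)
    // for the real element size.
    const cudaError_t err = cudaMalloc(&ptr, volume(dims) * sizeof(float));
    assert(err == cudaSuccess);
    return ptr;
}
```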
The error I'm getting is:
```
[ERROR] [] TensorRT error: IExecutionContext::enqueueV3: Error Code 1: Cask (Cask Pooling Runner Execute Failure)
```
I've also tried changing `enqueueV3` to `executeV2`; I still hit the same error.
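To rule out an asynchronous CUDA failure being misattributed to the enqueue, the launch can be followed by an explicit synchronize and error check; a minimal sketch using `m_context` and `stream` from the code above (standard CUDA runtime calls; the exact placement is my assumption):

```cpp
#include <cassert>
#include <cuda_runtime_api.h>

// enqueueV3() only launches the work; synchronizing the stream forces any
// deferred CUDA failure to surface here, at a known point.
bool ok = m_context->enqueueV3(stream);
assert(ok);

cudaError_t err = cudaStreamSynchronize(stream);
assert(err == cudaSuccess);

// cudaGetLastError() returns (and resets) the last runtime error, if any.
err = cudaGetLastError();
assert(err == cudaSuccess);
```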
I'm 99.99% sure `im0_device_ptr`, `im1_device_ptr`, and `out_disparity_map_device_ptr` are valid pointers to device memory. The thing is, I have essentially the same code in Python + PyTorch, and it works with the same engine.
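In case it helps, this is the kind of runtime check behind that claim; a sketch using `cudaPointerGetAttributes` (the `isDevicePointer` helper name is mine, not from my actual code):

```cpp
#include <cassert>
#include <cuda_runtime_api.h>

// Illustrative helper (not from my actual code): returns true only if the
// pointer refers to device memory, rather than host or unregistered memory.
static bool isDevicePointer(const void* p) {
    cudaPointerAttributes attr{};
    if (cudaPointerGetAttributes(&attr, p) != cudaSuccess) {
        return false;
    }
    return attr.type == cudaMemoryTypeDevice;
}

// Usage with the buffers from the snippet above:
// assert(isDevicePointer(im0_device_ptr));
// assert(isDevicePointer(im1_device_ptr));
// assert(isDevicePointer(out_disparity_map_device_ptr));
```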
Environment

TensorRT Version: 10.8
NVIDIA GPU: RTX 2060 SUPER
NVIDIA Driver Version: 560.35.05
CUDA Version: 12.6
CUDNN Version: -
Operating System: Ubuntu 24.04
Python Version (if applicable): 3.11
Tensorflow Version (if applicable):
PyTorch Version (if applicable): 2.5.1
Baremetal or Container (if so, version): Baremetal
Relevant Files

Model link:
https://drive.google.com/file/d/108rgI-m2-3Xg17vRDdvXf3yS-dX8fFHV/view?usp=drive_link
Steps To Reproduce

Commands or scripts:
Have you tried the latest release?:
Can this model run on other frameworks? For example, run the ONNX model with ONNX Runtime (`polygraphy run <model.onnx> --onnxrt`):