
XXX failure of TensorRT X.Y when running XXX on GPU XXX #4362

Open
loryruta opened this issue Feb 21, 2025 · 0 comments
Description

I'm having a similar problem in C++ using TensorRT and CUDA. My ONNX model has two inputs and one output. My inference code is:

```cpp
// Create an execution context from the deserialized engine
m_context = std::unique_ptr<nvinfer1::IExecutionContext>(m_engine->createExecutionContext());

// Set input dimensions (the input shapes are dynamic)
m_context->setInputShape("im0", dims);
m_context->setInputShape("im1", dims);

// Bind the device buffers to the engine's I/O tensors
bool status = m_context->setTensorAddress("im0", im0_device_ptr);
assert(status);
status = m_context->setTensorAddress("im1", im1_device_ptr);
assert(status);
status = m_context->setTensorAddress("disparity_map", out_disparity_map_device_ptr);
assert(status);

// Launch inference on the CUDA stream
status = m_context->enqueueV3(stream);
assert(status);
```

The error I'm getting is:

[ERROR] [] TensorRT error: IExecutionContext::enqueueV3: Error Code 1: Cask (Cask Pooling Runner Execute Failure)

I've also tried replacing enqueueV3 with executeV2, but the issue persists.

I'm 99.99% sure im0_device_ptr, im1_device_ptr and out_disparity_map_device_ptr are valid pointers to device memory.

The thing is, I have nearly identical code in Python + PyTorch, and it works with the same engine.

Environment

TensorRT Version: 10.8

NVIDIA GPU: RTX 2060 SUPER

NVIDIA Driver Version: 560.35.05

CUDA Version: 12.6

CUDNN Version: -

Operating System: Ubuntu 24.04

Python Version (if applicable): 3.11

Tensorflow Version (if applicable):

PyTorch Version (if applicable): 2.5.1

Baremetal or Container (if so, version): Baremetal

Relevant Files

Model link:

https://drive.google.com/file/d/108rgI-m2-3Xg17vRDdvXf3yS-dX8fFHV/view?usp=drive_link

Steps To Reproduce

Commands or scripts:

Have you tried the latest release?:

Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt):
