
cudnn dot_product_attention encounters Failed to capture gpu graph when running on 2 NVIDIA A6000 Ada cards #27599


Closed
Methylamphetamine opened this issue Mar 30, 2025 · 8 comments
Labels
bug (Something isn't working) · NVIDIA GPU (Issues specific to NVIDIA GPUs)

Comments

@Methylamphetamine

Description

Hi,

I was trying to use jax.nn.dot_product_attention with implementation='cudnn' as the attention_fn for flax.linen.MultiHeadDotProductAttention. However, I ran into the following error when running on multiple NVIDIA GPUs; the code works fine on a single GPU. Here is a minimal reproducing example from my end.

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'

import jax
from jax import jit, numpy as jnp, grad, vmap, random, lax
from jax.experimental.shard_map import shard_map
from jax.sharding import Mesh, PartitionSpec as P, NamedSharding
from jaxtyping import Array
from typing import Callable, Any, Optional

from flax import linen as nn

import functools

jax.print_environment_info()

print('============================================================')
print('Start testing')
print('============================================================')

mesh = Mesh(jax.devices(), ('data'))

def default_attention_fn(query: Array,
                        key: Array,
                        value: Array,
                        bias: Optional[Array] = None,
                        mask: Optional[Array] = None,
                        broadcast_dropout: bool = True,
                        dropout_rng: Optional[Array] = None,
                        dropout_rate: float = 0.,
                        deterministic: bool = False,
                        dtype: Optional[Any] = None,
                        precision: Any = None):
    
    dtype = value.dtype
    query = query.astype(jnp.bfloat16)
    key = key.astype(jnp.bfloat16)
    value = value.astype(jnp.bfloat16)

    out = jax.nn.dot_product_attention(
        query, key, value,
        bias=bias, mask=mask,
        scale=None, is_causal=None,
        query_seq_lengths=None, key_value_seq_lengths=None,
        local_window_size=None,
        implementation='cudnn')

    return out.astype(dtype)




class Attention(nn.Module):
    @nn.compact
    def __call__(self, x, training):
        return nn.MultiHeadAttention(num_heads=4, attention_fn=default_attention_fn, deterministic=not training)(x)


key = random.PRNGKey(0)

x = random.normal(key, (4, 1024, 512))

net = Attention()

@jit
@functools.partial(
        shard_map,
        in_specs =(P(), P('data')),
        out_specs=P(),
        mesh=mesh,
        check_rep=False
)
def init_fn(key, x):
    return net.init(key, x, training=False)
@jit
def loss_fn(params, x, key):
    out = net.apply(params, x, training=True, rngs={'dropout': key})
    return jnp.sum(out**2)



x = jax.device_put(x, NamedSharding(mesh, P('data')))

params = init_fn(key, x)


loss_fn(params, x, key)

print('Test passed')

And here is the error message.

============================================================
Start testing
============================================================
E0330 13:48:04.005107   26518 pjrt_stream_executor_client.cc:3077] Execution of replica 0 failed: INTERNAL: Failed to capture gpu graph: execute(handle, plan->get_raw_desc(), variant_pack_descriptor.get_ptr()) failed with message: plan.getEnginePtr()->execute(vars, handle->streamId), and code: CUDNN_STATUS_EXECUTION_FAILED
in external/xla/xla/stream_executor/cuda/cuda_dnn.cc(8679): 'graph_.execute(cudnn.handle(), tensor_to_ptr_map, workspace.opaque())' 
2025-03-30 13:48:14.004812: E external/xla/xla/service/rendezvous.cc:99] This thread has been waiting for `thunk initialization completion for device ordinal 0; run_id=1145151835` for 10 seconds and may be stuck. Expected 2 threads to join the rendezvous, but not all of them arrived on time.
F0330 13:48:14.005213   26400 pjrt_stream_executor_client.cc:3255] Replicated computation launch failed, but not all replicas terminated. Aborting process to work around deadlock. Failure message (there may have been multiple failures, see the error log for all failures): 

Failed to capture gpu graph: execute(handle, plan->get_raw_desc(), variant_pack_descriptor.get_ptr()) failed with message: plan.getEnginePtr()->execute(vars, handle->streamId), and code: CUDNN_STATUS_EXECUTION_FAILED
in external/xla/xla/stream_executor/cuda/cuda_dnn.cc(8679): 'graph_.execute(cudnn.handle(), tensor_to_ptr_map, workspace.opaque())' 
*** Check failure stack trace: ***
    @     0x7f217e467874  absl::lts_20230802::log_internal::LogMessage::SendToLog()
    @     0x7f217e4676e4  absl::lts_20230802::log_internal::LogMessage::Flush()
    @     0x7f217e467c19  absl::lts_20230802::log_internal::LogMessageFatal::~LogMessageFatal()
    @     0x7f2176150b84  xla::PjRtStreamExecutorLoadedExecutable::Execute()
    @     0x7f21760bf666  pjrt::PJRT_LoadedExecutable_Execute()
    @     0x7f218a263c32  xla::PjRtCApiLoadedExecutable::Execute()
    @     0x7f219174b539  xla::ifrt::PjRtLoadedExecutable::Execute()
    @     0x7f21907792e8  xla::(anonymous namespace)::ExecuteShardedOnLocalDevicesInternal<>()
    @     0x7f219077a8ec  xla::PyLoadedExecutable::ExecuteSharded()
    @     0x7f218a124975  xla::ValueOrThrowWrapper<>::operator()()
    @     0x7f218a1247bd  nanobind::detail::func_create<>()::{lambda()#1}::__invoke()
    @     0x7f21925c1ec8  nanobind::detail::nb_func_vectorcall_complex()
    @     0x56135638eeac  PyObject_Vectorcall
Aborted (core dumped)

Thank you in advance for the help!

System info (python version, jaxlib version, accelerator, etc.)

jax:    0.5.3
jaxlib: 0.5.3
numpy:  2.1.3
python: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0]
device info: NVIDIA RTX 6000 Ada Generation-2, 2 local devices"
process_count: 1
platform: uname_result(system='Linux', node='envious.seas.upenn.edu', release='6.4.0-150600.23.33-default', version='#1 SMP PREEMPT_DYNAMIC Thu Jan  9 14:10:22 UTC 2025 (ba46628)', machine='x86_64')


$ nvidia-smi
Sun Mar 30 13:46:31 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.57.01              Driver Version: 565.57.01      CUDA Version: 12.7     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX 6000 Ada Gene...    On  |   00000000:04:00.0 Off |                  Off |
| 30%   27C    P0             26W /  300W |     440MiB /  49140MiB |      3%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA RTX 6000 Ada Gene...    On  |   00000000:43:00.0 Off |                  Off |
| 30%   32C    P0             23W /  300W |     436MiB /  49140MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  NVIDIA RTX 6000 Ada Gene...    On  |   00000000:88:00.0 Off |                  Off |
| 30%   31C    P8             21W /  300W |       2MiB /  49140MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   3  NVIDIA RTX 6000 Ada Gene...    On  |   00000000:C4:00.0 Off |                  Off |
| 30%   30C    P8             21W /  300W |       2MiB /  49140MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A     24996      C   python                                        430MiB |
|    1   N/A  N/A     24996      C   python                                        426MiB |
+-----------------------------------------------------------------------------------------+

Methylamphetamine added the bug label Mar 30, 2025
superbobry added the NVIDIA GPU label Mar 31, 2025
@Cjkkkk
Contributor

Cjkkkk commented Mar 31, 2025

Couldn't reproduce on A100/H100. Will try to find an A6000 machine to test.

@Methylamphetamine
Author

Methylamphetamine commented Mar 31, 2025

I can't reproduce the error on H200 either. However, on an A6000 machine the same error occurs. Please see the system info and error message below.

jax:    0.5.0
jaxlib: 0.5.0
numpy:  1.26.1
python: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0]
device info: NVIDIA RTX A6000-2, 2 local devices"
process_count: 1
platform: uname_result(system='Linux', node='blade.seas.upenn.edu', release='6.4.0-150600.23.33-default', version='#1 SMP PREEMPT_DYNAMIC Thu Jan  9 14:10:22 UTC 2025 (ba46628)', machine='x86_64')


$ nvidia-smi
Mon Mar 31 16:57:11 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.02              Driver Version: 555.42.02      CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX A6000               On  |   00000000:01:00.0 Off |                  Off |
| 30%   34C    P2             40W /  300W |     274MiB /  49140MiB |      2%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA RTX A6000               On  |   00000000:25:00.0 Off |                  Off |
| 30%   33C    P2             27W /  300W |     270MiB /  49140MiB |      2%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  NVIDIA RTX A6000               On  |   00000000:41:00.0 Off |                  Off |
| 30%   33C    P8             25W /  300W |       2MiB /  49140MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   3  NVIDIA RTX A6000               On  |   00000000:61:00.0 Off |                  Off |
| 30%   33C    P8             27W /  300W |       2MiB /  49140MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   4  NVIDIA RTX A6000               On  |   00000000:81:00.0 Off |                  Off |
| 30%   40C    P8             25W /  300W |       2MiB /  49140MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   5  NVIDIA RTX A6000               On  |   00000000:A1:00.0 Off |                  Off |
| 60%   83C    P2            291W /  300W |   36902MiB /  49140MiB |    100%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   6  NVIDIA RTX A6000               On  |   00000000:C1:00.0 Off |                  Off |
| 30%   58C    P2             87W /  300W |   36880MiB /  49140MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   7  NVIDIA RTX A6000               On  |   00000000:E1:00.0 Off |                  Off |
| 30%   34C    P8             21W /  300W |       2MiB /  49140MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+


============================================================
Start testing
============================================================
E0331 16:57:18.024739  120005 pjrt_stream_executor_client.cc:3045] Execution of replica 0 failed: INTERNAL: Failed to capture gpu graph: execute(handle, plan->get_raw_desc(), variant_pack_descriptor.get_ptr()) failed with message: plan.getEnginePtr()->execute(vars, handle->streamId), and code: CUDNN_STATUS_EXECUTION_FAILED
in external/xla/xla/stream_executor/cuda/cuda_dnn.cc(8572): 'graph_.execute(cudnn.handle(), tensor_to_ptr_map, workspace.opaque())'
2025-03-31 16:57:28.024034: E external/xla/xla/service/rendezvous.cc:98] This thread has been waiting for `thunk initialization completion for device ordinal 0; run_id=202549451` for 10 seconds and may be stuck. Expected 2 threads to join the rendezvous, but not all of them arrived on time.
F0331 16:57:28.024942  119476 pjrt_stream_executor_client.cc:3222] Replicated computation launch failed, but not all replicas terminated. Aborting process to work around deadlock. Failure message (there may have been multiple failures, see the error log for all failures):

Failed to capture gpu graph: execute(handle, plan->get_raw_desc(), variant_pack_descriptor.get_ptr()) failed with message: plan.getEnginePtr()->execute(vars, handle->streamId), and code: CUDNN_STATUS_EXECUTION_FAILED
in external/xla/xla/stream_executor/cuda/cuda_dnn.cc(8572): 'graph_.execute(cudnn.handle(), tensor_to_ptr_map, workspace.opaque())'
*** Check failure stack trace: ***
    @     0x7f0365fe38f4  absl::lts_20230802::log_internal::LogMessage::SendToLog()
    @     0x7f0365fe3764  absl::lts_20230802::log_internal::LogMessage::Flush()
    @     0x7f0365fe3c99  absl::lts_20230802::log_internal::LogMessageFatal::~LogMessageFatal()
    @     0x7f035e166836  xla::PjRtStreamExecutorLoadedExecutable::Execute()
    @     0x7f035e0dcf7f  pjrt::PJRT_LoadedExecutable_Execute()
    @     0x7f03716660e2  xla::PjRtCApiLoadedExecutable::Execute()
    @     0x7f03786055d3  xla::ifrt::PjRtLoadedExecutable::Execute()
    @     0x7f037776e469  xla::(anonymous namespace)::ExecuteShardedOnLocalDevicesInternal<>()
    @     0x7f037776fa20  xla::PyLoadedExecutable::ExecuteSharded()
    @     0x7f0371519675  xla::ValueOrThrowWrapper<>::operator()()
    @     0x7f03715194bd  nanobind::detail::func_create<>()::{lambda()#1}::__invoke()
    @     0x7f03785d5278  nanobind::detail::nb_func_vectorcall_complex()
    @     0x562bb1ed0eac  PyObject_Vectorcall
Aborted (core dumped)

@Cjkkkk
Contributor

Cjkkkk commented Apr 1, 2025

Hi, could you try with

CUDNN_FRONTEND_LOG_FILE=fe.log CUDNN_FRONTEND_LOG_INFO=1 CUDNN_LOGLEVEL_DBG=3 CUDNN_LOGDEST_DBG=be.log

to generate the cudnn logs for me?
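
For reference, a minimal sketch of one way to set these variables (an illustration, not part of the original thread): they can be exported in the shell before launching the script, or set from Python before importing jax, the same way the reproducer above sets CUDA_VISIBLE_DEVICES. The file names fe.log and be.log are just the ones requested here.

# Minimal sketch (assumption: setting these before importing jax is early enough
# for the cuDNN frontend/backend libraries to pick them up, as with
# CUDA_VISIBLE_DEVICES in the reproducer above).
import os
os.environ['CUDNN_FRONTEND_LOG_FILE'] = 'fe.log'   # cudnn-frontend log destination
os.environ['CUDNN_FRONTEND_LOG_INFO'] = '1'        # enable frontend logging
os.environ['CUDNN_LOGLEVEL_DBG'] = '3'             # verbose cuDNN backend logging
os.environ['CUDNN_LOGDEST_DBG'] = 'be.log'         # backend log destination

import jax  # import jax only after the logging variables are in place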

@Methylamphetamine
Author

Hi, could you try with

CUDNN_FRONTEND_LOG_FILE=fe.log CUDNN_FRONTEND_LOG_INFO=1 CUDNN_LOGLEVEL_DBG=3 CUDNN_LOGDEST_DBG=be.log

to generate the cudnn logs for me?

Sure, I have attached the generated fe.log and be.log. Please let me know if I have missed anything.

be.log
fe.log

@Cjkkkk
Contributor

Cjkkkk commented Apr 1, 2025

Could you also try running with compute-sanitizer?

/usr/local/cuda/bin/compute-sanitizer python3 test.py

@Methylamphetamine
Author

could you also try running with compute sanitizer?

/usr/local/cuda/bin/compute-sanitizer python3 test.py

Here you go

========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:initTransportsRank(ncclComm*, ncclComm*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:460 [0xaabd8]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):165 [0x6ac0c]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame: [0xa758b]
=========                in
=========     Host Frame: [0x12ea27]
=========                in
=========
========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame: [0xaaa8b]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):183 [0x6ab58]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in
=========     Host Frame: [0x12ea27]
=========                in
=========
========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:initTransportsRank(ncclComm*, ncclComm*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:460 [0xaabd8]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):165 [0x6ac0c]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in
=========     Host Frame: [0x12ea27]
=========                in
=========
========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame: [0xaaa8b]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):183 [0x6ab58]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in
=========     Host Frame: [0x12ea27]
=========                in
=========
========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:initTransportsRank(ncclComm*, ncclComm*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:460 [0xaabd8]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):165 [0x6ac0c]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in
=========     Host Frame: [0x12ea27]
=========                in
=========
========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame: [0xaaa8b]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):183 [0x6ab58]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in
=========     Host Frame: [0x12ea27]
=========                in
=========
========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:initTransportsRank(ncclComm*, ncclComm*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:460 [0xaabd8]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):165 [0x6ac0c]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in
=========     Host Frame: [0x12ea27]
=========                in
=========
========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame: [0xaaa8b]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):183 [0x6ab58]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in
=========     Host Frame: [0x12ea27]
=========                in
=========
========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:initTransportsRank(ncclComm*, ncclComm*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:460 [0xaabd8]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):165 [0x6ac0c]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in cudaGetLastError
=========     Host Frame:cudaErrorPeerAccessAlreadyEnabled in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x12ea27]
=========                in cudaGetLastError
=========
========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:peer access is already enabled in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:511 [0xaaa8b]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):183 [0x6ab58]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in cudaGetLastError
=========     Host Frame:cudaErrorPeerAccessAlreadyEnabled in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x12ea27]
=========                in cudaGetLastError
=========
========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:initTransportsRank(ncclComm*, ncclComm*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:460 [0xaabd8]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):165 [0x6ac0c]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in cudaGetLastError
=========     Host Frame:cudaErrorPeerAccessAlreadyEnabled in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x12ea27]
=========                in cudaGetLastError
=========
========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:peer access is already enabled in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:511 [0xaaa8b]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):183 [0x6ab58]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in cudaGetLastError
=========     Host Frame:cudaErrorPeerAccessAlreadyEnabled in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x12ea27]
=========                in cudaGetLastError
=========
========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:initTransportsRank(ncclComm*, ncclComm*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:460 [0xaabd8]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):165 [0x6ac0c]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in cudaGetLastError
=========     Host Frame:cudaErrorPeerAccessAlreadyEnabled in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x12ea27]
=========                in cudaGetLastError
=========
========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:peer access is already enabled in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:511 [0xaaa8b]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):183 [0x6ab58]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in cudaGetLastError
=========     Host Frame:cudaErrorPeerAccessAlreadyEnabled in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x12ea27]
=========                in cudaGetLastError
=========
========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:initTransportsRank(ncclComm*, ncclComm*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:460 [0xaabd8]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):165 [0x6ac0c]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in cudaGetLastError
=========     Host Frame:cudaErrorPeerAccessAlreadyEnabled in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x12ea27]
=========                in cudaGetLastError
=========
========= Program hit error 704 on CUDA API call.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x405465]
=========                in transport/p2p.cc
=========     Host Frame: [0xf41ae]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclTransportP2pSetup(ncclComm*, ncclTopoGraph*, int, int*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:307 [0xaa894]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:peer access is already enabled in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/transport.cc:511 [0xaaa8b]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/init.cc in ncclCommInitRankFunc(ncclAsyncJob*):183 [0x6ab58]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:ncclAsyncJobMain(void*) in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1263 [0x57128]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:start_thread in /dvs/p4/build/sw/gpgpu/nccl/gitfusion/stable/src/group.cc:1548 [0x58e36]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:/lib64/libc.so.6 in __GI___clone3:62 [0x4e9e6]
=========                in p2pSendConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*)
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in cudaGetLastError
=========     Host Frame:cudaErrorPeerAccessAlreadyEnabled in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x12ea27]
=========                in cudaGetLastError
=========
========= Program hit CUDA_ERROR_INVALID_HANDLE (error 400) due to "invalid resource handle" on CUDA API call to cuLaunchKernel.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x2c9c46]
=========                in transport/p2p.cc
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xe842b]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/../../nvidia/cudnn/lib/libcudnn_engines_runtime_compiled.so.9
=========     Host Frame:cudnn::backend::execute(cudnnContext*, cudnn::backend::ExecutionPlan const&, cudnn::backend::VariantPack&) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x128134]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/../../nvidia/cudnn/lib/libcudnn_graph.so.9
=========     Host Frame:cudnnBackendExecute in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x1295de]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/../../nvidia/cudnn/lib/libcudnn_graph.so.9
=========     Host Frame:cudnn_frontend::detail::execute(cudnnContext*, cudnn_frontend::ExecutionPlan_v8*, std::vector<void*, std::allocator<void*> >&, std::vector<long, std::allocator<long> > const&, void*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x4be9699]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:cudnn_frontend::ICudnn::execute_cudnn_plan_with_uid(cudnnContext*, std::unordered_map<long, void*, std::hash<long>, std::equal_to<long>, std::allocator<std::pair<long const, void*> > > const&, void*, long) const in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x8486ca0]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:cudnn_frontend::graph::Graph::execute(cudnnContext*, std::unordered_map<long, void*, std::hash<long>, std::equal_to<long>, std::allocator<std::pair<long const, void*> > >&, void*) const in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x8385e56]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:stream_executor::gpu::CudnnGraph::Execute(stream_executor::Stream&, absl::lts_20230802::Span<stream_executor::DeviceMemoryBase>, long) const in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x8384bf5]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:absl::lts_20230802::Status absl::lts_20230802::functional_internal::InvokeObject<xla::gpu::CuDnnCmd::Record(xla::gpu::Thunk::ExecuteParams const&, xla::gpu::CommandBufferCmd::RecordParams const&, stream_executor::CommandBuffer*)::$_1, absl::lts_20230802::Status, stream_executor::Stream*>(absl::lts_20230802::functional_internal::VoidPtr, absl::lts_20230802::functional_internal::ForwardT<stream_executor::Stream*>::type) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb5438b]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:absl::lts_20230802::Status absl::lts_20230802::internal_any_invocable::LocalInvoker<false, absl::lts_20230802::Status, absl::lts_20230802::FunctionRef<absl::lts_20230802::Status (stream_executor::Stream*)>&, stream_executor::Stream*>(absl::lts_20230802::internal_any_invocable::TypeErasedState*, absl::lts_20230802::internal_any_invocable::ForwardedParameter<stream_executor::Stream*>::type) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb52f11]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:absl::lts_20230802::Status absl::lts_20230802::internal_any_invocable::LocalInvoker<false, absl::lts_20230802::Status, stream_executor::TraceCommandBufferFactory::Create(stream_executor::StreamExecutor*, stream_executor::Stream*, absl::lts_20230802::AnyInvocable<absl::lts_20230802::Status (stream_executor::Stream*)>, stream_executor::CommandBuffer::Mode)::$_0&>(absl::lts_20230802::internal_any_invocable::TypeErasedState*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb55618]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:stream_executor::gpu::CudaCommandBuffer::Trace(stream_executor::Stream*, absl::lts_20230802::AnyInvocable<absl::lts_20230802::Status ()>) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x8303783]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:stream_executor::TraceCommandBufferFactory::Create(stream_executor::StreamExecutor*, stream_executor::Stream*, absl::lts_20230802::AnyInvocable<absl::lts_20230802::Status (stream_executor::Stream*)>, stream_executor::CommandBuffer::Mode) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb55454]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::TracedCommandBuffer::GetOrTraceCommandBuffer(xla::gpu::BufferAllocations const*, stream_executor::StreamExecutor*, stream_executor::Stream*, absl::lts_20230802::FunctionRef<absl::lts_20230802::Status (stream_executor::Stream*)>) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb3d8a4]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::TracedCommandBufferCmd::AddTracedCommandBuffer(xla::gpu::Thunk::ExecuteParams const&, xla::gpu::CommandBufferCmd::RecordParams const&, stream_executor::CommandBuffer*, absl::lts_20230802::FunctionRef<absl::lts_20230802::Status (stream_executor::Stream*)>) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb3df64]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::CuDnnCmd::Record(xla::gpu::Thunk::ExecuteParams const&, xla::gpu::CommandBufferCmd::RecordParams const&, stream_executor::CommandBuffer*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb4646e]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::CommandBufferCmdSequence::Record(xla::gpu::Thunk::ExecuteParams const&, xla::gpu::CommandBufferCmd::RecordParams const&, stream_executor::CommandBuffer*, xla::gpu::CommandBufferCmdSequence::RecordMode) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb3c725]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::CommandBufferThunk::Initialize(xla::gpu::Thunk::InitializeParams const&) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb38890]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::SequentialThunk::Initialize(xla::gpu::Thunk::InitializeParams const&) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x481e91e]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::GpuExecutable::ExecuteAsyncOnStreamImpl(xla::ServiceExecutableRunOptions const*, std::variant<absl::lts_20230802::Span<xla::ShapedBuffer const* const>, absl::lts_20230802::Span<xla::ExecutionInput> >) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x48162ac]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::GpuExecutable::ExecuteAsyncOnStream(xla::ServiceExecutableRunOptions const*, std::vector<xla::ExecutionInput, std::allocator<xla::ExecutionInput> >) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x481441e]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::Executable::ExecuteAsyncOnStreamWrapper(xla::ServiceExecutableRunOptions const*, std::vector<xla::ExecutionInput, std::allocator<xla::ExecutionInput> >) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x4d7b812]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::LocalExecutable::RunAsync(absl::lts_20230802::Span<xla::Shape const* const>, std::vector<xla::ExecutionInput, std::allocator<xla::ExecutionInput> >, xla::ExecutableRunOptions) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x7bf143]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::LocalExecutable::RunAsync(std::vector<xla::ExecutionInput, std::allocator<xla::ExecutionInput> >, xla::ExecutableRunOptions) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x7bf985]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::PjRtStreamExecutorLoadedExecutable::EnqueueExecution(absl::lts_20230802::Span<xla::PjRtBuffer* const>, int, int, int, xla::RunId const&, xla::ExecuteOptions const&, xla::PjRtDevice*, std::vector<xla::PjRtStreamExecutorBuffer::ScopedHold, std::allocator<xla::PjRtStreamExecutorBuffer::ScopedHold> >*, std::shared_ptr<xla::DeviceAssignment>, std::vector<std::function<void ()>, std::allocator<std::function<void ()> > >&) const in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x75fb61]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::PjRtStreamExecutorLoadedExecutable::ExecuteHelper(absl::lts_20230802::Span<xla::PjRtBuffer* const>, int, int, xla::RunId const&, xla::ExecuteOptions const&, bool, xla::PjRtDevice*) const in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x762df5]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:void absl::lts_20230802::internal_any_invocable::RemoteInvoker<false, void, xla::PjRtStreamExecutorLoadedExecutable::Execute(absl::lts_20230802::Span<std::vector<xla::PjRtBuffer*, std::allocator<xla::PjRtBuffer*> > const>, xla::ExecuteOptions const&, std::optional<std::vector<xla::PjRtFuture<void>, std::allocator<xla::PjRtFuture<void> > > >&)::$_1&&>(absl::lts_20230802::internal_any_invocable::TypeErasedState*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x7818c6]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::WorkerThread::WorkLoop() in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x7c89d1]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:tsl::(anonymous namespace)::PThread::ThreadFn(void*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x84d54ca]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in cudaGetLastError
=========     Host Frame:cudaErrorPeerAccessAlreadyEnabled in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x12ea27]
=========                in cudaGetLastError
=========
========= Program hit CUDA_ERROR_INVALID_HANDLE (error 400) due to "invalid resource handle" on CUDA API call to cuLaunchKernel.
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x2c9c46]
=========                in transport/p2p.cc
=========     Host Frame:p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5] in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xe842b]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/../../nvidia/cudnn/lib/libcudnn_engines_runtime_compiled.so.9
=========     Host Frame:cudnn::backend::execute(cudnnContext*, cudnn::backend::ExecutionPlan const&, cudnn::backend::VariantPack&) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x128134]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/../../nvidia/cudnn/lib/libcudnn_graph.so.9
=========     Host Frame:cudnnBackendExecute in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x1295de]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/../../nvidia/cudnn/lib/libcudnn_graph.so.9
=========     Host Frame:cudnn_frontend::detail::execute(cudnnContext*, cudnn_frontend::ExecutionPlan_v8*, std::vector<void*, std::allocator<void*> >&, std::vector<long, std::allocator<long> > const&, void*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x4be9699]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:cudnn_frontend::ICudnn::execute_cudnn_plan_with_uid(cudnnContext*, std::unordered_map<long, void*, std::hash<long>, std::equal_to<long>, std::allocator<std::pair<long const, void*> > > const&, void*, long) const in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x8486ca0]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:cudnn_frontend::graph::Graph::execute(cudnnContext*, std::unordered_map<long, void*, std::hash<long>, std::equal_to<long>, std::allocator<std::pair<long const, void*> > >&, void*) const in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x8385e56]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:stream_executor::gpu::CudnnGraph::Execute(stream_executor::Stream&, absl::lts_20230802::Span<stream_executor::DeviceMemoryBase>, long) const in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x8384c47]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:absl::lts_20230802::Status absl::lts_20230802::functional_internal::InvokeObject<xla::gpu::CuDnnCmd::Record(xla::gpu::Thunk::ExecuteParams const&, xla::gpu::CommandBufferCmd::RecordParams const&, stream_executor::CommandBuffer*)::$_1, absl::lts_20230802::Status, stream_executor::Stream*>(absl::lts_20230802::functional_internal::VoidPtr, absl::lts_20230802::functional_internal::ForwardT<stream_executor::Stream*>::type) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb5438b]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:absl::lts_20230802::Status absl::lts_20230802::internal_any_invocable::LocalInvoker<false, absl::lts_20230802::Status, absl::lts_20230802::FunctionRef<absl::lts_20230802::Status (stream_executor::Stream*)>&, stream_executor::Stream*>(absl::lts_20230802::internal_any_invocable::TypeErasedState*, absl::lts_20230802::internal_any_invocable::ForwardedParameter<stream_executor::Stream*>::type) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb52f11]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:absl::lts_20230802::Status absl::lts_20230802::internal_any_invocable::LocalInvoker<false, absl::lts_20230802::Status, stream_executor::TraceCommandBufferFactory::Create(stream_executor::StreamExecutor*, stream_executor::Stream*, absl::lts_20230802::AnyInvocable<absl::lts_20230802::Status (stream_executor::Stream*)>, stream_executor::CommandBuffer::Mode)::$_0&>(absl::lts_20230802::internal_any_invocable::TypeErasedState*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb55618]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:stream_executor::gpu::CudaCommandBuffer::Trace(stream_executor::Stream*, absl::lts_20230802::AnyInvocable<absl::lts_20230802::Status ()>) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x8303783]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:stream_executor::TraceCommandBufferFactory::Create(stream_executor::StreamExecutor*, stream_executor::Stream*, absl::lts_20230802::AnyInvocable<absl::lts_20230802::Status (stream_executor::Stream*)>, stream_executor::CommandBuffer::Mode) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb55454]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::TracedCommandBuffer::GetOrTraceCommandBuffer(xla::gpu::BufferAllocations const*, stream_executor::StreamExecutor*, stream_executor::Stream*, absl::lts_20230802::FunctionRef<absl::lts_20230802::Status (stream_executor::Stream*)>) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb3d8a4]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::TracedCommandBufferCmd::AddTracedCommandBuffer(xla::gpu::Thunk::ExecuteParams const&, xla::gpu::CommandBufferCmd::RecordParams const&, stream_executor::CommandBuffer*, absl::lts_20230802::FunctionRef<absl::lts_20230802::Status (stream_executor::Stream*)>) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb3df64]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::CuDnnCmd::Record(xla::gpu::Thunk::ExecuteParams const&, xla::gpu::CommandBufferCmd::RecordParams const&, stream_executor::CommandBuffer*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb4646e]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::CommandBufferCmdSequence::Record(xla::gpu::Thunk::ExecuteParams const&, xla::gpu::CommandBufferCmd::RecordParams const&, stream_executor::CommandBuffer*, xla::gpu::CommandBufferCmdSequence::RecordMode) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb3c725]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::CommandBufferThunk::Initialize(xla::gpu::Thunk::InitializeParams const&) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xb38890]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::SequentialThunk::Initialize(xla::gpu::Thunk::InitializeParams const&) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x481e91e]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::GpuExecutable::ExecuteAsyncOnStreamImpl(xla::ServiceExecutableRunOptions const*, std::variant<absl::lts_20230802::Span<xla::ShapedBuffer const* const>, absl::lts_20230802::Span<xla::ExecutionInput> >) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x48162ac]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::gpu::GpuExecutable::ExecuteAsyncOnStream(xla::ServiceExecutableRunOptions const*, std::vector<xla::ExecutionInput, std::allocator<xla::ExecutionInput> >) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x481441e]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::Executable::ExecuteAsyncOnStreamWrapper(xla::ServiceExecutableRunOptions const*, std::vector<xla::ExecutionInput, std::allocator<xla::ExecutionInput> >) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x4d7b812]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::LocalExecutable::RunAsync(absl::lts_20230802::Span<xla::Shape const* const>, std::vector<xla::ExecutionInput, std::allocator<xla::ExecutionInput> >, xla::ExecutableRunOptions) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x7bf143]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::LocalExecutable::RunAsync(std::vector<xla::ExecutionInput, std::allocator<xla::ExecutionInput> >, xla::ExecutableRunOptions) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x7bf985]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::PjRtStreamExecutorLoadedExecutable::EnqueueExecution(absl::lts_20230802::Span<xla::PjRtBuffer* const>, int, int, int, xla::RunId const&, xla::ExecuteOptions const&, xla::PjRtDevice*, std::vector<xla::PjRtStreamExecutorBuffer::ScopedHold, std::allocator<xla::PjRtStreamExecutorBuffer::ScopedHold> >*, std::shared_ptr<xla::DeviceAssignment>, std::vector<std::function<void ()>, std::allocator<std::function<void ()> > >&) const in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x75fb61]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::PjRtStreamExecutorLoadedExecutable::ExecuteHelper(absl::lts_20230802::Span<xla::PjRtBuffer* const>, int, int, xla::RunId const&, xla::ExecuteOptions const&, bool, xla::PjRtDevice*) const in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x762df5]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:void absl::lts_20230802::internal_any_invocable::RemoteInvoker<false, void, xla::PjRtStreamExecutorLoadedExecutable::Execute(absl::lts_20230802::Span<std::vector<xla::PjRtBuffer*, std::allocator<xla::PjRtBuffer*> > const>, xla::ExecuteOptions const&, std::optional<std::vector<xla::PjRtFuture<void>, std::allocator<xla::PjRtFuture<void> > > >&)::$_1&&>(absl::lts_20230802::internal_any_invocable::TypeErasedState*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x7818c6]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:xla::WorkerThread::WorkLoop() in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x7c89d1]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:tsl::(anonymous namespace)::PThread::ThreadFn(void*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x84d54ca]
=========                in /scratch/wangh19/anaconda3/envs/sota/lib/python3.11/site-packages/jax_plugins/xla_cuda12/xla_cuda_plugin.so
=========     Host Frame:p2pRecvConnect(ncclComm*, ncclConnect*, int, int, ncclConnector*) in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0xa758b]
=========                in cudaGetLastError
=========     Host Frame:cudaErrorPeerAccessAlreadyEnabled in p2pMap(ncclComm*, ncclProxyConnector*, ncclPeerInfo*, ncclPeerInfo*, ncclP2pBuff*, void**, void**) [clone .isra.5]:0 [0x12ea27]
=========                in cudaGetLastError
=========
E0401 18:50:34.459616  247561 pjrt_stream_executor_client.cc:3045] Execution of replica 0 failed: INTERNAL: Failed to capture gpu graph: execute(handle, plan->get_raw_desc(), variant_pack_descriptor.get_ptr()) failed with message: plan.getEnginePtr()->execute(vars, handle->streamId), and code: CUDNN_STATUS_EXECUTION_FAILED
in external/xla/xla/stream_executor/cuda/cuda_dnn.cc(8572): 'graph_.execute(cudnn.handle(), tensor_to_ptr_map, workspace.opaque())'
2025-04-01 18:50:43.880254: E external/xla/xla/service/rendezvous.cc:98] This thread has been waiting for `thunk initialization completion for device ordinal 0; run_id=-1494226316` for 10 seconds and may be stuck. Expected 2 threads to join the rendezvous, but not all of them arrived on time.
F0401 18:50:44.459808  247001 pjrt_stream_executor_client.cc:3222] Replicated computation launch failed, but not all replicas terminated. Aborting process to work around deadlock. Failure message (there may have been multiple failures, see the error log for all failures):

Failed to capture gpu graph: execute(handle, plan->get_raw_desc(), variant_pack_descriptor.get_ptr()) failed with message: plan.getEnginePtr()->execute(vars, handle->streamId), and code: CUDNN_STATUS_EXECUTION_FAILED
in external/xla/xla/stream_executor/cuda/cuda_dnn.cc(8572): 'graph_.execute(cudnn.handle(), tensor_to_ptr_map, workspace.opaque())'
*** Check failure stack trace: ***
    @     0x7f27e39e38f4  absl::lts_20230802::log_internal::LogMessage::SendToLog()
    @     0x7f27e39e3764  absl::lts_20230802::log_internal::LogMessage::Flush()
    @     0x7f27e39e3c99  absl::lts_20230802::log_internal::LogMessageFatal::~LogMessageFatal()
    @     0x7f27dbb66836  xla::PjRtStreamExecutorLoadedExecutable::Execute()
    @     0x7f27dbadcf7f  pjrt::PJRT_LoadedExecutable_Execute()
    @     0x7f27ef0660e2  xla::PjRtCApiLoadedExecutable::Execute()
    @     0x7f27f60055d3  xla::ifrt::PjRtLoadedExecutable::Execute()
    @     0x7f27f516e469  xla::(anonymous namespace)::ExecuteShardedOnLocalDevicesInternal<>()
    @     0x7f27f516fa20  xla::PyLoadedExecutable::ExecuteSharded()
    @     0x7f27eef19675  xla::ValueOrThrowWrapper<>::operator()()
    @     0x7f27eef194bd  nanobind::detail::func_create<>()::{lambda()#1}::__invoke()
    @     0x7f27f5fd5278  nanobind::detail::nb_func_vectorcall_complex()
    @     0x55b75280aeac  PyObject_Vectorcall
========= Error: process didn't terminate successfully
========= Target application returned an error
========= ERROR SUMMARY: 18 errors

@Cjkkkk
Contributor

Cjkkkk commented Apr 2, 2025

Unfortunately I don't have access to A6000 cards right now, and the error messages don't make the root cause clear. I tried on a similar card (A40) and it passed. I noticed that you are using cuDNN 9.1; could you try cuDNN 9.8 if possible?
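
For reference, one quick way to see which cuDNN build the environment resolves to is sketched below. This is a minimal, assumption-laden snippet (it assumes cuDNN comes from the nvidia-cudnn-cu12 pip wheel rather than a conda or system install); it is not the exact check used in this thread.

# Sketch: report the installed cuDNN pip wheel version, if any.
import importlib.metadata as md

try:
    print("nvidia-cudnn-cu12 wheel:", md.version("nvidia-cudnn-cu12"))
except md.PackageNotFoundError:
    print("no nvidia-cudnn-cu12 wheel found; cuDNN may come from conda or a system library")

# jax.print_environment_info() (already called in the repro above) prints the
# jax/jaxlib versions and GPU driver details, which helps cross-check for a
# mismatch between the wheel and the library actually loaded at runtime.
import jax
jax.print_environment_info()

If the wheel version and the library loaded at runtime disagree (for example, a conda-installed cuDNN shadowing the pip wheel), mixed-version symptoms like the one reported here are plausible.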

@Methylamphetamine
Author

Unfortunately I don't have access to A6000 cards right now, and the error messages don't make the root cause clear. I tried on a similar card (A40) and it passed. I noticed that you are using cuDNN 9.1; could you try cuDNN 9.8 if possible?

Updating cuDNN to 9.8.0 does solve the issue. It looks like my conda environment had a conflict between different cuDNN versions. Thank you @Cjkkkk for the help!
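
For reference, if upgrading cuDNN were not an option, a minimal fallback sketch (keeping the rest of the repro above unchanged) would be to let XLA lower the attention instead of routing it through the cuDNN fused kernel. The names below refer to the arguments of default_attention_fn in the repro:

    # Fallback sketch: use the XLA lowering instead of the cuDNN kernel.
    # This sidesteps the cuDNN graph capture entirely, at some performance cost.
    out = jax.nn.dot_product_attention(query, key, value, bias=bias, mask=mask,
                                        implementation='xla')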
