[DO NOT MERGE] Log test results to file #6627

Draft: wants to merge 66 commits into master
Conversation

tohtana commented on Oct 15, 2024

This PR logs test runs to a file, which is useful for debugging tests that occasionally fail or get stuck.
Logging is enabled when RUNNING_TEST_LOG_FILE is set.
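
For illustration, a rough sketch of how this kind of per-test logging can be hooked up (the helper name is hypothetical, and it assumes RUNNING_TEST_LOG_FILE is an environment variable; the PR's actual wiring may differ):

import os

def log_test_event(worker_id, test_class, test_name, message):
    # Append one line per test event to the file named by RUNNING_TEST_LOG_FILE.
    log_file = os.environ.get("RUNNING_TEST_LOG_FILE")
    if not log_file:
        return  # logging disabled
    line = f"[xdist_worker={worker_id}][{test_class}][{test_name}] {message}\n"
    with open(log_file, "a") as f:
        f.write(line)

# e.g. log_test_event(0, "TestCPULionGPUError", "test_cpu_lion_gpu_error",
#                     "Running with 2 processes")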

tohtana commented on Oct 16, 2024

The run https://github.com/microsoft/DeepSpeed/actions/runs/11356676744/job/31588289596?pr=6627 got stuck. The log looks like this:

[xdist_worker=0][TestCPULionGPUError][test_cpu_lion_gpu_error] Running with 2 processes
[xdist_worker=3][TestRead][test_parallel_read[True-True-True-True]] Running with 1 processes
[xdist_worker=2][TestLegacyCurriculumScheduler][test_fixed_discrete] Running with 2 processes
[xdist_worker=1][TestZeROUniversalCheckpointDP][test_dp_world_size_2to2[True-True-3-dtype1]] Running with 2 processes
[xdist_worker=1][TestZeROUniversalCheckpointDP][test_dp_world_size_2to2[True-True-3-dtype1]] Finished with 2 processes. elapsed_time=11.41s
[xdist_worker=1][TestZeROUniversalCheckpointDP][test_dp_world_size_2to4[False-True-3-dtype0]] Running with 4 processes
[xdist_worker=3][TestRead][test_parallel_read[True-True-True-True]] Finished with 1 processes. elapsed_time=48.19s
[xdist_worker=3][TestRead][test_async_read[False-True-False-True]] Running with 1 processes
[xdist_worker=1][TestZeROUniversalCheckpointDP][test_dp_world_size_2to4[False-True-3-dtype0]] Finished with 4 processes. elapsed_time=13.50s
[xdist_worker=2][TestLegacyCurriculumScheduler][test_fixed_discrete] Finished with 2 processes. elapsed_time=53.14s
[xdist_worker=2][TestLegacyCurriculumScheduler][test_fixed_linear] Running with 2 processes
[xdist_worker=3][TestRead][test_async_read[False-True-False-True]] Finished with 1 processes. elapsed_time=9.73s
[xdist_worker=0][TestCPULionGPUError][test_cpu_lion_gpu_error] Finished with 2 processes. elapsed_time=58.58s
[xdist_worker=3][TestRead][test_async_read[True-True-False-True]] Running with 1 processes
[xdist_worker=0][TestCPULion][test_fused_lion_equal[1048576-fp16]] Running with 1 processes
[xdist_worker=1][TestZeROUniversalCheckpointDP][test_dp_world_size_2to4[True-False-3-dtype2]] Running with 4 processes

We expect every test to show a single Running/Finished pair, but the following tests show something different (multiple Running/Finished pairs, a Failed, or no Finished at all).

key: [xdist_worker=3][TestDistInferenceAllReduce][test[dtype2]] value: ['Running', 'Finished', 'Running', 'Finished', 'Running', 'Finished']
key: [xdist_worker=3][TestDistInferenceAllReduce][test[dtype0]] value: ['Running', 'Finished', 'Running', 'Finished', 'Running', 'Finished']
key: [xdist_worker=3][TestDistInferenceAllReduce][test[dtype1]] value: ['Running', 'Finished', 'Running', 'Finished', 'Running', 'Finished']
key: [xdist_worker=3][TestDistAllReduce][test] value: ['Running', 'Finished', 'Running', 'Finished', 'Running', 'Finished']
key: [xdist_worker=1][TestFP16OptimizerForMoE][test_fused_gradnorm] value: ['Running', 'Failed']
key: [xdist_worker=0][TestQuantizedInt][test_zero3_int4_quantized_initialization_nvme_offload] value: ['Running']
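
These anomalies can be surfaced mechanically. A small illustrative script (not part of the PR) that groups log lines by test key and flags any event sequence other than a single Running/Finished pair, assuming the "[worker][class][test] Event ..." line format shown above:

from collections import defaultdict

EVENTS = ("Running", "Finished", "Failed")

def find_anomalies(log_path):
    history = defaultdict(list)
    with open(log_path) as f:
        for line in f:
            for event in EVENTS:
                marker = f"] {event} "
                if marker in line:
                    # Everything before the marker (plus the closing bracket) is the test key.
                    key = line.split(marker)[0] + "]"
                    history[key].append(event)
                    break
    # A healthy test produces exactly one Running followed by one Finished.
    return {k: v for k, v in history.items() if v != ["Running", "Finished"]}

for key, value in find_anomalies("test_run.log").items():
    print(f"key: {key} value: {value}")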

The error for TestFP16OptimizerForMoE:

[xdist_worker=1][TestFP16OptimizerForMoE][test_fused_gradnorm] Failed with 2 processes. elapsed_time=606.12s exc_type=<class 'torch.distributed.DistBackendError'> exc_val=[1] is setting up NCCL communicator and retrieving ncclUniqueId from [0] via c10d key-value store by key '1', but store->get('1') got error: Connection reset by peer
Exception raised from recvBytes at ../torch/csrc/distributed/c10d/Utils.hpp:672 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7f213425af86 in /tmp/azureml/cr/j/0e8eb6da52274c4cbb364c23c2f2e33b/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5d011ce (0x7f216f2b01ce in /tmp/azureml/cr/j/0e8eb6da52274c4cbb364c23c2f2e33b/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #2: c10d::TCPStore::doWait(c10::ArrayRef<std::string>, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x2c7 (0x7f216f2aaa67 in /tmp/azureml/cr/j/0e8eb6da52274c4cbb364c23c2f2e33b/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #3: c10d::TCPStore::doGet(std::string const&) + 0x32 (0x7f216f2aad92 in /tmp/azureml/cr/j/0e8eb6da52274c4cbb364c23c2f2e33b/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::get(std::string const&) + 0xa1 (0x7f216f2abf81 in /tmp/azureml/cr/j/0e8eb6da52274c4cbb364c23c2f2e33b/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f216f2609d1 in /tmp/azureml/cr/j/0e8eb6da52274c4cbb364c23c2f2e33b/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f216f2609d1 in /tmp/azureml/cr/j/0e8eb6da52274c4cbb364c23c2f2e33b/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #7: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f216f2609d1 in /tmp/azureml/cr/j/0e8eb6da52274c4cbb364c23c2f2e33b/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #8: c10d::ProcessGroupNCCL::broadcastUniqueNCCLID(ncclUniqueId*, bool, std::string const&, int) + 0xaf (0x7f213555691f in /tmp/azureml/cr/j/0e8eb6da52274c4cbb364c23c2f2e33b/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #9: c10d::ProcessGroupNCCL::getNCCLComm(std::string const&, c10::Device&, c10d::OpType, int, bool) + 0x114c (0x7f21355626fc in /tmp/azureml/cr/j/0e8eb6da52274c4cbb364c23c2f2e33b/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #10: <unknown function> + 0x11dbb5f (0x7f213556ab5f in /tmp/azureml/cr/j/0e8eb6da52274c4cbb364c23c2f2e33b/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #11: c10d::ProcessGroupNCCL::allreduce_impl(at::Tensor&, c10d::AllreduceOptions const&) + 0x10 (0x7f213556bf90 in /tmp/azureml/cr/j/0e8eb6da52274c4cbb364c23c2f2e33b/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #12: c10d::ProcessGroupNCCL::barrier(c10d::BarrierOptions const&) + 0x69c (0x7f213557909c in /tmp/azureml/cr/j/0e8eb6da52274c4cbb364c23c2f2e33b/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)

tohtana marked this pull request as draft on October 22, 2024 06:53
tohtana changed the title from "Log test results to file" to "[DO NOT MERGE] Log test results to file" on Oct 22, 2024
tohtana commented on Oct 23, 2024

Tests still get stuck. I merged master to investigate remaining issues.

tohtana commented on Oct 24, 2024

Processes on the runner:

$ ps -ef | grep aiscuser | grep pytest
aiscuser 2142175       7  0 Oct22 pts/2    00:00:00 [pytest-xdist running] unit/pipe/test_pipe_module.py::TestPipeModuleSequential::test[False]
aiscuser 2197415       7  0 Oct22 pts/2    00:00:00 [pytest-xdist running] unit/inference/quantization/test_intX_quantization.py::TestQuantizedInt::test_zero3_int4_post_init_quant_cpu_offload[8bits]
aiscuser 2214907       7  0 Oct22 pts/2    00:00:00 [pytest-xdist running] unit/compression/test_dequantization.py::TestDequantization::test_dequantize
aiscuser 2236794       7  0 Oct22 pts/2    00:00:00 [pytest-xdist running] unit/runtime/utils/test_partition.py::TestPartitionedTensor::test
aiscuser 2783952 2783951  0 16:43 pts/2    00:00:12 /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/pytest --color=yes --durations=0 --verbose -rF --forked -n 4 unit/ --torch_ver=2.5 --cuda_ver=12.1
aiscuser 2784003 2783952  0 16:43 pts/2    00:00:23 [pytest-xdist running] unit/comm/test_dist.py::TestInit::test
aiscuser 2784006 2783952  0 16:43 pts/2    00:00:29 [pytest-xdist idle]
aiscuser 2784009 2783952  0 16:43 pts/2    00:00:28 [pytest-xdist idle]
aiscuser 2784012 2783952  0 16:43 pts/2    00:00:28 [pytest-xdist idle]
aiscuser 2862445 2784003  0 17:13 pts/2    00:00:00 [pytest-xdist running] unit/comm/test_dist.py::TestInit::test

tohtana commented on Oct 24, 2024

We still have an issue with the process group even after switching to a file store.
c10d::FileStore::get threw an error in the following case (a sketch of a FileStore-based setup is included after the traceback).

[xdist_worker=1][TestPartitionedTensor][test] [pid=2236799,master_port=30500,local_rank=2,num_procs=4 [exec _dist_run] Failed with 4 processes. elapsed_time=1801.18s exc_type=<class 'torch.distributed.DistBackendError'> exc_val=[2] is setting up NCCL communicator and retrieving ncclUniqueId from [0] via c10d key-value store by key '0', but store->get('0') got error: Timeout waiting for key: default_pg/0//1//cuda//0 after 1800000 ms
Exception raised from get at ../torch/csrc/distributed/c10d/FileStore.cpp:384 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7fba12fd4446 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7fba12f7e6e4 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: c10d::FileStore::get(std::string const&) + 0xccc (0x7fba4e620b3c in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #3: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7fba4e654bc1 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7fba4e654bc1 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7fba4e654bc1 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7fba4e654bc1 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #7: c10d::ProcessGroupNCCL::broadcastUniqueNCCLID(ncclUniqueId*, bool, std::string const&, int) + 0xaf (0x7fba1431aeaf in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #8: c10d::ProcessGroupNCCL::getNCCLComm(std::string const&, c10::Device&, c10d::OpType, int, bool) + 0xfbd (0x7fba14326e4d in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #9: c10d::ProcessGroupNCCL::broadcast(std::vector<at::Tensor, std::allocator<at::Tensor> >&, c10d::BroadcastOptions const&) + 0x6eb (0x7fba1433404b in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #10: <unknown function> + 0x5f8f8e6 (0x7fba4e6488e6 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #11: <unknown function> + 0x5f9ab96 (0x7fba4e653b96 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #12: <unknown function> + 0x55b224b (0x7fba4dc6b24b in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #13: <unknown function> + 0x55afad9 (0x7fba4dc68ad9 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #14: <unknown function> + 0x1a8c3f8 (0x7fba4a1453f8 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #15: <unknown function> + 0x5f9f9ba (0x7fba4e6589ba in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #16: <unknown function> + 0x5faf59c (0x7fba4e66859c in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #17: <unknown function> + 0xdf99f5 (0x7fba5e1179f5 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #18: <unknown function> + 0x4cb474 (0x7fba5d7e9474 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #19: /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python() [0x4fdc87]
frame #20: _PyObject_MakeTpCall + 0x25b (0x4f741b in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #21: /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python() [0x509cbf]
frame #22: _PyEval_EvalFrameDefault + 0x4b26 (0x4f2c16 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #23: _PyFunction_Vectorcall + 0x6f (0x4fe0cf in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #24: PyObject_Call + 0xb8 (0x50a508 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #25: _PyEval_EvalFrameDefault + 0x2b79 (0x4f0c69 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #26: _PyFunction_Vectorcall + 0x6f (0x4fe0cf in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #27: _PyEval_EvalFrameDefault + 0x13b3 (0x4ef4a3 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #28: _PyFunction_Vectorcall + 0x6f (0x4fe0cf in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #29: PyObject_Call + 0xb8 (0x50a508 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #30: _PyEval_EvalFrameDefault + 0x2b79 (0x4f0c69 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #31: /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python() [0x5099ce]
frame #32: _PyEval_EvalFrameDefault + 0x13b3 (0x4ef4a3 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #33: _PyFunction_Vectorcall + 0x6f (0x4fe0cf in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #34: PyObject_Call + 0xb8 (0x50a508 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #35: _PyEval_EvalFrameDefault + 0x2b79 (0x4f0c69 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #36: _PyFunction_Vectorcall + 0x6f (0x4fe0cf in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #37: _PyEval_EvalFrameDefault + 0x13b3 (0x4ef4a3 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #38: /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python() [0x509c57]
frame #39: _PyEval_EvalFrameDefault + 0x2b79 (0x4f0c69 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #40: /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python() [0x509c57]
frame #41: _PyEval_EvalFrameDefault + 0x2b79 (0x4f0c69 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #42: /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python() [0x509b26]
frame #43: /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python() [0x600a59]
frame #44: /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python() [0x516b14]
frame #45: /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python() [0x54eaf0]
frame #46: _PyEval_EvalFrameDefault + 0x31f (0x4ee40f in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #47: _PyFunction_Vectorcall + 0x6f (0x4fe0cf in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #48: _PyEval_EvalFrameDefault + 0x2b79 (0x4f0c69 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #49: _PyFunction_Vectorcall + 0x6f (0x4fe0cf in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #50: _PyEval_EvalFrameDefault + 0x2b79 (0x4f0c69 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #51: _PyFunction_Vectorcall + 0x6f (0x4fe0cf in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #52: _PyEval_EvalFrameDefault + 0x731 (0x4ee821 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #53: _PyFunction_Vectorcall + 0x6f (0x4fe0cf in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #54: _PyEval_EvalFrameDefault + 0x731 (0x4ee821 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #55: _PyFunction_Vectorcall + 0x6f (0x4fe0cf in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #56: _PyEval_EvalFrameDefault + 0x4b26 (0x4f2c16 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #57: _PyFunction_Vectorcall + 0x6f (0x4fe0cf in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #58: _PyEval_EvalFrameDefault + 0x31f (0x4ee40f in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #59: _PyFunction_Vectorcall + 0x6f (0x4fe0cf in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #60: PyObject_Call + 0xb8 (0x50a508 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #61: _PyEval_EvalFrameDefault + 0x2b79 (0x4f0c69 in /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python)
frame #62: /tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/bin/python() [0x5950f2]
. This may indicate a possible application crash on rank 0 or a network set up issue.   File "/tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/tests/unit/common.py", line 411, in _dist_run
    self.run(**self._fixture_kwargs)
  File "/tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/tests/unit/common.py", line 569, in run
    self._current_test(**fixture_kwargs)
  File "/tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/tests/unit/runtime/utils/test_partition.py", line 32, in test
    dist.broadcast(full, src=0, group=group)
  File "/tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/deepspeed/comm/comm.py", line 117, in log_wrapper
    return func(*args, **kwargs)
  File "/tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/deepspeed/comm/comm.py", line 224, in broadcast
    return cdb.broadcast(tensor=tensor, src=src, group=group, async_op=async_op)
  File "/tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
    return fn(*args, **kwargs)
  File "/tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/deepspeed/comm/torch.py", line 200, in broadcast
    return torch.distributed.broadcast(tensor=tensor, src=src, group=group, async_op=async_op)
  File "/tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 83, in wrapper
    return func(*args, **kwargs)
  File "/tmp/azureml/cr/j/afa0cce3d09645338aa6c1690e718803/exe/wd/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2421, in broadcast
    work = group.broadcast([tensor], opts)
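
For reference, a minimal sketch of what a FileStore-based process-group setup looks like; the function name and path here are illustrative, not the harness's actual code:

import torch.distributed as dist

def init_pg_with_file_store(rank: int, world_size: int, store_path: str):
    # FileStore replaces the TCPStore rendezvous: ranks exchange keys
    # (e.g. the ncclUniqueId) through a file on a shared filesystem
    # instead of a TCP connection to rank 0.
    store = dist.FileStore(store_path, world_size)
    dist.init_process_group(backend="nccl", store=store, rank=rank, world_size=world_size)

# Each worker process would call, e.g.:
# init_pg_with_file_store(rank=local_rank, world_size=4,
#                         store_path="/tmp/ds_test_filestore")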
