
torch.OutOfMemoryError: CUDA out of memory while performing PEFT curation with SDG on default configs #520

Open
mohit5tech opened this issue Feb 5, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@mohit5tech

Steps/Code to Reproduce Bug
Please provide minimal steps or a code snippet to reproduce the bug.

- Using a dataset of 749,000 samples.
- Running fine-tuning on allenai/tulu-3-sft-olmo-2-mixture.
- Using the latest NVIDIA drivers.

2025-02-05 07:27:31,421 - distributed.worker - ERROR - Compute Failed
Key: ('lambda-619f7ac64f13a38ca6c6546e6af3af28', 10)
State: executing
Task: <Task ('lambda-619f7ac64f13a38ca6c6546e6af3af28', 10) reify(...)>
Exception: "OutOfMemoryError('CUDA out of memory. Tried to allocate 59.96 GiB. GPU 0 has a total capacity of 79.10 GiB of which 17.46 GiB is free. Process 363562 has 61.61 GiB memory in use. Of the allocated memory 60.08 GiB is allocated by PyTorch, and 284.75 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)')"
Traceback:
  File "/usr/local/lib/python3.10/dist-packages/dask/bag/core.py", line 1875, in reify
    seq = list(seq)
  File "/usr/local/lib/python3.10/dist-packages/dask/bag/core.py", line 2063, in next
    return self.f(*vals)
  File "/usr/local/lib/python3.10/dist-packages/nemo_curator/modules/semantic_dedup.py", line 524, in <lambda>
    lambda cluster_id: get_semantic_matches_per_cluster(
  File "/usr/local/lib/python3.10/dist-packages/nemo_curator/utils/semdedup_utils.py", line 272, in get_semantic_matches_per_cluster
    M, M1 = _semdedup(cluster_reps, "cuda")
  File "/usr/local/lib/python3.10/dist-packages/nemo_curator/utils/semdedup_utils.py", line 193, in _semdedup
    triu_sim_mat = torch.triu(pair_w_sim_matrix, diagonal=1)
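For context on why this step runs out of memory: the traceback ends in `torch.triu(pair_w_sim_matrix, diagonal=1)`, i.e. the full n×n pairwise similarity matrix for a cluster is materialized on one GPU before the upper triangle is taken; for a large cluster that matrix alone can be tens of GiB. The sketch below is NOT NeMo Curator's implementation, just a minimal NumPy illustration (function name and block size are made up) of how the same "max similarity to any earlier item" quantity can be computed block-by-block so peak memory scales with the block size rather than n²:

```python
import numpy as np

def max_upper_tri_sim(embs: np.ndarray, block: int = 256) -> np.ndarray:
    """For each row i, the max cosine similarity to any earlier row j < i.

    Computed in row blocks of size `block`, so peak memory is
    O(block * n) instead of the O(n^2) full similarity matrix.
    Illustrative sketch only, not NeMo Curator's code.
    """
    n = embs.shape[0]
    # Normalize rows so a dot product equals cosine similarity.
    normed = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    out = np.zeros(n)  # row 0 has no earlier neighbour, stays 0.0
    for start in range(0, n, block):
        stop = min(start + block, n)
        sims = normed[start:stop] @ normed.T  # shape (block, n)
        for i in range(start, stop):
            if i > 0:
                # Only rows strictly before i (the strict upper triangle).
                out[i] = sims[i - start, :i].max()
    return out
```

The point of the illustration is that the reduction over the upper triangle never needs the whole matrix resident at once, which is one direction a fix or workaround could take.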

#####################

How do I launch this script across multiple GPUs to avoid the CUDA out-of-memory error?
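Two things worth trying, sketched below under stated assumptions: the `PYTORCH_CUDA_ALLOC_CONF` setting is the one the error message itself suggests, and the commented-out multi-GPU part assumes the `dask_cuda` package (which NeMo Curator uses for GPU Dask clusters); the cluster arguments shown are illustrative, not a verified fix:

```python
import os

# Must be set before torch initializes CUDA; suggested by the error
# message to reduce fragmentation of reserved-but-unallocated memory.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

# Hedged sketch of spreading the Dask workload over several GPUs
# (assumes dask_cuda is installed; device list and pool size are
# placeholders for your hardware):
# from dask_cuda import LocalCUDACluster
# from dask.distributed import Client
# cluster = LocalCUDACluster(CUDA_VISIBLE_DEVICES="0,1,2,3")
# client = Client(cluster)
```

Note that spreading work across GPUs helps only if the per-cluster similarity matrix itself fits on a single GPU; if one cluster is too large, the allocator setting (or reducing cluster sizes) is the more relevant lever.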

@mohit5tech added the bug label Feb 5, 2025
@ayushdg
Collaborator

ayushdg commented Feb 6, 2025

cc: @VibhuJawa , @sarahyurick & @ruchaa-apte in case you have suggestions.
