
Solved #2

Open
muses0229 opened this issue Jul 10, 2024 · 8 comments

Comments

@muses0229

muses0229 commented Jul 10, 2024

No description provided.

@apchenstu
Collaborator

Great question. I used 4 A100-40G GPUs with a batch size of 3. I haven't tried training on a single GPU, but I believe the current implementation supports small batch sizes such as 1 or 2.
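For reference, that setup implies a global batch of 12 samples per optimizer step (4 GPUs × batch size 3). A back-of-the-envelope sketch, using only the numbers from this comment, of how gradient accumulation could approximate the same effective batch on a single GPU (the single-GPU batch of 2 is an assumption, not a tested configuration):

```python
# Numbers from this thread: 4 x A100-40G, per-GPU batch size 3.
num_gpus = 4
per_gpu_batch = 3
global_batch = num_gpus * per_gpu_batch  # 12 samples per optimizer step

# Assumed: a single GPU that only fits a batch of 2. Gradient accumulation
# over several forward/backward passes approximates the same global batch.
single_gpu_batch = 2
accum_steps = global_batch // single_gpu_batch  # 6 passes per optimizer step
print(f"global batch {global_batch}, accumulation steps {accum_steps}")
```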

@yosun

yosun commented Jul 11, 2024

yejr - did you get it to work on an RTX 3090?

@ChenYutongTHU

> yejr - did you get it to work on RTX3090

Hi, I also experienced this problem. When running on an RTX 3090:

CUDA out of memory. Tried to allocate 131062.89 GiB. GPU 0 has a total capacty of 23.69 GiB of which 22.45 GiB is free. Including non-PyTorch memory, this process has 1.24 GiB memory in use. Of the allocated memory 826.90 MiB is allocated by PyTorch.

When running on an RTX 4090, there are no OOM issues.
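For scale: 131,062.89 GiB is far beyond any GPU (or host), which points to a corrupted tensor shape rather than genuine memory pressure. A quick check, in plain Python, of what that request would imply:

```python
# The CUDA allocator reports the failed request in GiB; convert back
# to bytes and float32 element counts to see how implausible it is.
requested_gib = 131062.89
requested_bytes = requested_gib * 2**30   # on the order of 1e14 bytes
elements_fp32 = requested_bytes / 4       # on the order of 1e13 float32 values
print(f"{requested_bytes:.3e} bytes ~= {elements_fp32:.3e} fp32 elements")
# No plausible rasterization intermediate has tens of trillions of elements,
# so a dimension has almost certainly been filled with a garbage value.
```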

@apchenstu
Collaborator

Hmm, 131062.89 GiB looks like a bug in the code, generally caused by a matrix shape misalignment. Where did this error occur?

@ChenYutongTHU

> Hmm, 131062.89 GiB looks like a bug in the code, generally caused by a matrix shape misalignment. Where did this error occur?

It happens in the 2DGS rasterization. I used the same code and the same conda environment on both GPU types, under Slurm. However, it may be because the 2DGS diff-rasterization package was installed in a 4090 environment; when it runs in a 3090 environment, it has problems.
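A common cause of this pattern is a CUDA extension compiled only for the architecture of the machine it was built on: the RTX 4090 is compute capability 8.9 (sm_89) while the RTX 3090 is 8.6 (sm_86), so kernels built on the 4090 may misbehave on the 3090. A sketch of choosing a `TORCH_CUDA_ARCH_LIST` that covers both when rebuilding (the GPU-to-capability mapping is standard NVIDIA data; the exact package to reinstall depends on this repo's setup):

```python
# Compute capabilities for the GPUs discussed in this thread (NVIDIA specs).
COMPUTE_CAPABILITY = {
    "RTX 3090": "8.6",   # Ampere, sm_86
    "RTX 4090": "8.9",   # Ada Lovelace, sm_89
    "A100": "8.0",       # Ampere, sm_80
}

def arch_list(*gpus: str) -> str:
    """Value for TORCH_CUDA_ARCH_LIST covering all listed GPUs."""
    return ";".join(sorted({COMPUTE_CAPABILITY[g] for g in gpus}))

# Building the rasterizer with both architectures lets one install run on
# either machine, e.g. (package name depends on the repo's install docs):
#   TORCH_CUDA_ARCH_LIST="8.6;8.9" pip install <diff-rasterization pkg>
print(arch_list("RTX 3090", "RTX 4090"))
```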

@apchenstu
Collaborator

Do you have the issue when using the 4090?

@ChenYutongTHU

ChenYutongTHU commented Aug 26, 2024 via email

@apchenstu
Collaborator

I see, could you please try reinstalling it on the 3090?

@muses0229 muses0229 changed the title Memory requirements Solved Dec 30, 2024