CUDA out of memory #195
I trained a model with `l_max = 3`. Training itself ran fine, but I'm now running out of CUDA memory when running the trained model, which didn't happen with lower `l_max` values. Thanks in advance!
Hi @adam-norris, yes, higher `l`'s are expected to have poor scaling: the cost of the tensor products grows steeply as a high power of `l_max`, so memory use climbs quickly as you raise it. If you're sure you want/need `l = 3`, that extra memory cost is something you'll have to budget for. The other thing is obviously to reduce the batch size (often 1 actually works really well if you're training on forces). Let us know if you have other questions.
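To get some intuition for why higher `l`'s scale poorly, here is a small sketch (an illustrative assumption, not NequIP's exact cost model) that counts the allowed `(l1, l2, l_out)` tensor-product paths for a given `l_max`; the path count alone grows rapidly, and each path's Clebsch-Gordan contraction gets more expensive too:

```python
def count_tp_paths(l_max):
    """Count allowed (l1, l2, l_out) tensor-product paths with all l <= l_max.

    A path is allowed when l_out satisfies the triangle inequality
    |l1 - l2| <= l_out <= l1 + l2.
    """
    count = 0
    for l1 in range(l_max + 1):
        for l2 in range(l_max + 1):
            for l_out in range(abs(l1 - l2), min(l1 + l2, l_max) + 1):
                count += 1
    return count

for l in range(4):
    print(l, count_tp_paths(l))
# The count climbs quickly: 1, 5, 15, 34 for l_max = 0..3.
```

This only counts paths; the actual per-path work and intermediate activations grow with `l` as well, so the memory footprint of an `l = 3` model is much larger than an `l = 1` one.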
Does this mean you are using `nequip-evaluate`? `nequip-evaluate` uses its own batch size, taken from a command-line option (see `nequip-evaluate --help`), and the default is pretty big to give people decent speed by default. For a bigger model, like the `l = 3` one you are using, though, you may need to set a lower batch size at the command line when you run `nequip-evaluate`.
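As a sketch, lowering the batch size would look something like the following (the exact option names and defaults may differ between versions, so check `nequip-evaluate --help` on your installation first):

```shell
# Evaluate with a smaller batch size to avoid CUDA out-of-memory errors.
# `path/to/training/run` is a placeholder for your training output directory.
nequip-evaluate \
    --train-dir path/to/training/run \
    --batch-size 4
```

If 4 still runs out of memory, keep halving it; even a batch size of 1 is fine for evaluation, it just takes longer.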