Autograd/autocast error when training with OE #555
I have updated to 0.15.0 from 0.7.1 to try training with OpenEquivariance as described here: https://nequip.readthedocs.io/en/latest/guide/accelerations/openequivariance.html
Hi @apoletayev, I'm guessing that PyTorch 2.6 might not have the support needed here. Sorry for the inconvenience. Are there specific barriers to compiling a more recent PyTorch version from source (if you're already compiling PyTorch from source anyway)? Another thing to try is downgrading the OEQ version, though I recall making some changes to our OEQ integration over the past few NequIP versions for compatibility. One thing for us to do might be to nail down a NequIP vs. OEQ vs. PyTorch version compatibility table, which could be helpful for users.
Sorry about this, yes, this seems to be a PyTorch versioning issue; PT2.7 should work fine. The API evolves so rapidly that supporting multiple PyTorch versions quickly spirals into a big engineering effort. We might take a look at adding a guard for this in our code for PT2.6.
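For anyone hitting this before such a guard lands, here is a minimal sketch of what a check like that could look like. The function name, error message, and the "PyTorch >= 2.7" cutoff are assumptions taken from this thread, not the actual NequIP implementation:

```python
# Hypothetical sketch (not the actual NequIP code): fail early with a clear
# message when the installed PyTorch is too old for the OEQ path, instead of
# surfacing an opaque autograd/autocast error mid-training.
import torch
from packaging.version import Version

# Assumed minimum version, based on "PT2.7 should work fine" in this thread.
_MIN_TORCH_FOR_OEQ = Version("2.7")


def check_torch_version_for_oeq() -> None:
    # Strip local version identifiers like "+cu124" before comparing.
    installed = Version(torch.__version__.split("+")[0])
    if installed < _MIN_TORCH_FOR_OEQ:
        raise RuntimeError(
            f"OpenEquivariance acceleration appears to require PyTorch >= "
            f"{_MIN_TORCH_FOR_OEQ}, but {torch.__version__} is installed. "
            "Please upgrade PyTorch or disable OEQ."
        )
```

Calling this once before the OEQ-accelerated modules are built would turn the autograd/autocast failure into an immediate, readable error.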