Passing ensemble predictions to MD engines #650
Comments
In i-PI this should be implemented at the level of the metatensor driver, but yes, it should be feasible. Davide and Matthias are best positioned to help.
I'm not sure I understand what you mean here. If we are changing the property name in the output, this will have to be a different TensorMap. I also don't think that changing behavior based on property names is desirable, since it makes the whole thing more complex to explain to users and engine developers alike. My favorite solution here would be another output name altogether, with its own specification. So for example ...

In general, what's the use case for this? Do we actually need the full ensemble of predictions, or will this always be used to compute the mean and standard deviation for uncertainty quantification purposes?
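A minimal sketch of how such a dedicated output could be declared, assuming a hypothetical `energy_ensemble` output name (not an agreed-upon part of the specification) and the existing `ModelOutput` class:

```python
# Hedged sketch: a dedicated ensemble output registered next to the regular
# "energy" output. The name "energy_ensemble" is hypothetical and would need
# its own entry in the outputs specification.
from metatensor.torch.atomistic import ModelOutput

outputs = {
    "energy": ModelOutput(quantity="energy", unit="eV", per_atom=False),
    # hypothetical output carrying one prediction per ensemble member
    "energy_ensemble": ModelOutput(quantity="energy", unit="eV", per_atom=False),
}
```

This would keep the regular `energy` output unchanged for engines that do not know about ensembles.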
@ceriottm can expand, but passing all the individual predictions is necessary.
Given #648, I thought it was a good idea to open an issue for forwarding ensemble predictions to MD engines, which we will need relatively soon.
The design could be as simple as having a new optional property name (something like `ensemble_member`) for any registered output in `metatensor.torch.atomistic`, and then passing this to the metatensor interface to the MD engine, which can decide how to handle it (a sketch of what this could look like follows below).

i-PI should be well-equipped for this. ASE is also very customizable, so we should be able to make the ensemble predictions available to the user relatively easily. I don't know LAMMPS very well, but perhaps we could code something in our driver, or perhaps leave it alone and error out if ensemble predictions are received.
@Luthaf @ceriottm
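A minimal sketch of what an output carrying the proposed `ensemble_member` property could look like; the sample name, block layout, and ensemble size here are illustrative assumptions, not a fixed specification:

```python
import torch
from metatensor.torch import Labels, TensorBlock, TensorMap

n_members = 5  # illustrative ensemble size

# one block with one energy prediction per ensemble member for a single system,
# the ensemble index stored along the property axis as "ensemble_member"
block = TensorBlock(
    values=torch.rand(1, n_members, dtype=torch.float64),
    samples=Labels.range("system", 1),
    components=[],
    properties=Labels.range("ensemble_member", n_members),
)
energies = TensorMap(keys=Labels.single(), blocks=[block])

# an engine interested only in uncertainty quantification could reduce this to
mean = energies.block().values.mean(dim=1)
std = energies.block().values.std(dim=1)
```

An engine that does not understand the extra property dimension could then either reduce it as above or error out, as suggested for LAMMPS.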