
metatensor-operations finite difference test for add #479

Open
wants to merge 2 commits into base: master

Conversation


@agoscinski agoscinski commented Feb 2, 2024

This PR adds utilities for running finite-difference tests on the metatensor-operations, and uses them in the simple example of the add operation.
It is the first PR addressing issue #84; the idea for the testing utilities originates from issue #469.

In a follow-up on this PR I want to dispatch the testing utilities so they can also be used with torch, but before that I would like some general feedback on whether this is going in a good direction.
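To make the idea concrete, here is a minimal NumPy sketch of a finite-difference gradient check in the spirit of this PR. `cartesian_cubic` and `check_finite_differences` below are illustrative stand-ins under assumed names, not the actual metatensor-operations API: values and gradients are plain arrays rather than TensorMaps.

```python
import numpy as np

def cartesian_cubic(x):
    # Stand-in scalar field: value is sum(x**3), analytical gradient is 3*x**2.
    return np.sum(x**3), 3 * x**2

def check_finite_differences(function, x, displacement=1e-6, rtol=1e-5):
    # Compare the analytical gradient returned by `function` against
    # central finite differences of its value, entry by entry.
    _, gradient = function(x)
    fd_gradient = np.zeros_like(x)
    for i in np.ndindex(x.shape):
        xp = x.copy()
        xp[i] += displacement
        xm = x.copy()
        xm[i] -= displacement
        fd_gradient[i] = (function(xp)[0] - function(xm)[0]) / (2 * displacement)
    np.testing.assert_allclose(gradient, fd_gradient, rtol=rtol)

rng = np.random.default_rng(0)
check_finite_differences(cartesian_cubic, rng.standard_normal((4, 3)))
```

The central difference has O(h²) truncation error, so with h = 1e-6 a relative tolerance of 1e-5 leaves comfortable margin for a smooth cubic field.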

TODOs:

  • The documentation still needs some work; the testing utilities should also be added to the docs.
  • The function cartesian_linear still needs to be implemented.

Contributor (creator of pull-request) checklist

  • Tests updated (for new features and bugfixes)?
  • Documentation updated (for new features)?
  • Issue referenced (for PRs that solve an issue)?

Reviewer checklist

  • CHANGELOG updated with public API or any other important changes?

📚 Documentation preview 📚: https://metatensor--479.org.readthedocs.build/en/479/


📚 Download documentation preview for this pull-request


github-actions bot commented Feb 2, 2024

Here is a pre-built version of the code in this pull request: wheels.zip. You can install it locally by unzipping wheels.zip and using pip to install the wheel matching your system.

@DavideTisi (Contributor) left a comment

Very good; I think it is also very clear.



def finite_differences(
function: Callable[[Union[np.ndarray, TorchTensor], bool], TensorMap],
Contributor:

How about

Suggested change:
-   function: Callable[[Union[np.ndarray, TorchTensor], bool], TensorMap],
+   function: Callable[[TensorMap], TensorMap],

And then in this function define

def full_function():
    data = cartesian_cubic()
    return function(data)

This should remove a lot of assumptions from the API!
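A minimal sketch of this wrapping idea, with hypothetical stand-ins (`cartesian_cubic` here is a plain data generator and arrays stand in for TensorMaps): the tested `function` only sees the narrow data-to-result signature, while the utility owns data generation internally.

```python
import numpy as np

def cartesian_cubic():
    # Hypothetical data generator: produces a fixed test input.
    return np.arange(6.0).reshape(2, 3)

def finite_differences(function):
    # The tested `function` only needs the narrow signature data -> result;
    # data generation is hidden inside an internal wrapper, removing
    # assumptions about inputs from the public API.
    def full_function():
        data = cartesian_cubic()
        return function(data)
    return full_function()

result = finite_differences(lambda data: data + 1.0)
```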

Contributor:

ah, we potentially need multiple inputs:

Suggested change:
-   function: Callable[[Union[np.ndarray, TorchTensor], bool], TensorMap],
+   function: Callable[[TensorMap, TensorMap], TensorMap],

and give the function two different random inputs (the function is free to ignore one of them, e.g. for pow). Ideally two inputs with different sets of gradient samples would be nice, but we can start with identical sets of gradients.

Contributor (Author):

I will try it out to see how well this works.

@agoscinski (Contributor, Author) commented Feb 16, 2024

I am not sure about taking two TensorMaps as input, since I don't see at the moment what kind of bugs this could catch that are not covered by the current version, and supporting two tensor maps makes everything a bit more cumbersome. The idea in the current implementation is that you can still create two different sets of gradients derived from the same input, so the finite-difference test is easier to perform. In the test for add we first create two different gradients by applying cartesian_cubic and cartesian_linear to the input, and then pass the resulting TensorMaps to add. So we are testing add(f(X), g(X)); I am not sure what additional kind of bug add(f(X), g(Y)) would catch.

What I see missing in the current design is a way to also test operations that affect the samples, as sort and reduce over samples do.
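The add(f(X), g(X)) pattern discussed above can be sketched in plain NumPy. The names `f`, `g`, and `add` below are illustrative stand-ins for cartesian_cubic, cartesian_linear, and the metatensor add operation; (value, gradient) tuples stand in for TensorMaps with gradient blocks.

```python
import numpy as np

# Two stand-in scalar fields sharing the same input X, each returning
# (value, gradient) -- mirroring add(f(X), g(X)).
def f(x):  # plays the role of cartesian_cubic
    return np.sum(x**3), 3 * x**2

def g(x):  # plays the role of cartesian_linear
    return np.sum(2.0 * x), np.full_like(x, 2.0)

def add(a, b):
    # Stand-in for the add operation: combines values and gradients
    # component-wise.
    return a[0] + b[0], a[1] + b[1]

def fd_gradient(function, x, h=1e-6):
    # Central finite differences of the value part of `function`.
    out = np.zeros_like(x)
    for i in np.ndindex(x.shape):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        out[i] = (function(xp)[0] - function(xm)[0]) / (2 * h)
    return out

x = np.random.default_rng(42).standard_normal((5, 3))
combined = lambda y: add(f(y), g(y))
np.testing.assert_allclose(combined(x)[1], fd_gradient(combined, x), rtol=1e-5)
```

Both gradient sets derive from the same input X, so a single finite-difference sweep over X validates the combined gradient of the add result.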

3 participants