Closed
Labels: enhancement (New feature or request)
Description
You can see the discrepancy if you install numpy-2.x and then run, in the unit tests:
pytest -vs test_analyze.py::test_analyze_ff
When the calculated JSON is compared to the pre-calculated one, the difference is quite large, sometimes more than 1%. It probably has almost no scientific impact, but it is suspicious and I would not leave the code as it is. For comparison: the error in the OP values is less than 1e-6. Currently this causes the unit tests to fail under numpy-2.x, and I would say that failure is correct. It could also trigger failures in other build environments, but I have not seen that yet.
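A failure like this is typical when a test compares floating-point results stored in JSON for exact equality. A minimal sketch of one way around it, comparing numeric leaves with an explicit relative tolerance instead (the helper name, the `"OP"` key, and the tolerance value are illustrative assumptions, not code from this repository):

```python
import numpy as np

def compare_json_numbers(calculated, reference, rtol=1e-5):
    """Recursively compare two JSON-like structures, allowing a
    relative tolerance on float leaves instead of exact equality."""
    if isinstance(calculated, dict):
        assert calculated.keys() == reference.keys()
        for key in calculated:
            compare_json_numbers(calculated[key], reference[key], rtol)
    elif isinstance(calculated, list):
        assert len(calculated) == len(reference)
        for c, r in zip(calculated, reference):
            compare_json_numbers(c, r, rtol)
    elif isinstance(calculated, float):
        # Tolerant numeric comparison; raises AssertionError on mismatch.
        np.testing.assert_allclose(calculated, reference, rtol=rtol)
    else:
        assert calculated == reference

# Hypothetical usage: small numerical drift passes, gross drift fails.
calc = {"OP": [0.1234567, 0.7654321]}
ref = {"OP": [0.1234568, 0.7654320]}
compare_json_numbers(calc, ref, rtol=1e-5)
```

With a check like this, a numpy version bump that perturbs results at the 1e-6 level would not break the test, while a real 1% discrepancy would still fail.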