Regression test discrepancies
Running the failing cases using MATLAB 2024 to reproduce the results.
The core issue appears to be the stochastic sampling nature of the code: for certain input files, the output varies from run to run, which is obviously problematic for a simple comparison-based test framework.
The two failing tests are US-Syv* and US-ARM*.
Comparing the reports generated by our refactored MATLAB code against the provided reports for the different sites may therefore not be possible in some cases.
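To make the non-determinism concrete, here is a minimal illustration (in Python, with a stand-in series rather than real site data) of why bootstrap resampling defeats byte-for-byte report comparison unless the random generator is seeded:

```python
# Minimal illustration (not project code): bootstrap resampling varies
# run-to-run unless the random generator is seeded, so the same input
# file can yield different reports.
import numpy as np

data = np.arange(100, dtype=float)  # stand-in for one site's series

def bootstrap_mean(rng: np.random.Generator) -> float:
    """One bootstrap draw: resample with replacement, return the mean."""
    sample = rng.choice(data, size=data.size, replace=True)
    return float(sample.mean())

print(bootstrap_mean(np.random.default_rng()))    # differs across runs
print(bootstrap_mean(np.random.default_rng(42)))  # identical every run
print(bootstrap_mean(np.random.default_rng(42)))  # same value as above
```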
Options:

For testing our refactored MATLAB code:

- Pass the same bootstrap data to both the refactored and non-refactored MATLAB `ustar_cp` (setting a seed may be sufficient to produce identical output reports); see the first sketch after this list.

For testing our Python translation:

- Pass the same bootstrap data to MATLAB `ustar_cp` and Python `ustar_cp`, and ensure that all functions produce the same numerical output to a few significant figures (second sketch below).
- Test that the distribution of samples drawn from the site data is similar when using Python vs MATLAB (third sketch below).
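For the shared-bootstrap-data option, a hedged sketch of one way to do it: generate the bootstrap indices once and save them in `.mat` form so both the original and refactored MATLAB `ustar_cp` (and, later, the Python translation) resample from identical draws. The file name, variable name, and array shape below are illustrative assumptions, not project conventions:

```python
# Hedged sketch: fix the bootstrap draws once and share them between
# implementations. Names and shapes here are illustrative assumptions.
import numpy as np
from scipy.io import savemat

rng = np.random.default_rng(2024)
n_records, n_boot = 17520, 100  # e.g. one year of half-hourly data, 100 draws
indices = rng.integers(0, n_records, size=(n_boot, n_records))

# Store 1-based indices so MATLAB can use them directly;
# the Python side should subtract 1 before indexing.
savemat("bootstrap_indices.mat", {"bootstrapIndices": indices + 1})
```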
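For the numerical comparison, a sketch of what "same output to a few significant figures" could look like; the helper below is hypothetical and assumes the two reports have already been parsed into arrays:

```python
# Hedged sketch: tolerance-based comparison instead of exact equality.
import numpy as np

def assert_reports_match(matlab_values, python_values, sig_figs=4):
    """Assert two parsed report arrays agree to ~sig_figs significant figures."""
    # A relative tolerance of 10**(-sig_figs) approximates agreement
    # to sig_figs significant figures.
    np.testing.assert_allclose(np.asarray(matlab_values),
                               np.asarray(python_values),
                               rtol=10.0 ** (-sig_figs))
```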
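For the distribution check, a two-sample Kolmogorov-Smirnov test is one reasonable choice; the significance threshold below is an assumption:

```python
# Hedged sketch: compare the *distributions* of bootstrap samples drawn
# by the two implementations rather than the exact values.
from scipy.stats import ks_2samp

def distributions_similar(matlab_samples, python_samples, alpha=0.05):
    """True if a KS test fails to reject 'same distribution' at level alpha."""
    statistic, p_value = ks_2samp(matlab_samples, python_samples)
    return p_value > alpha
```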
We should also note whether using MATLAB 2024 vs 2018a matters for the testing.