Replies: 3 comments 2 replies
-
My initial thought is just to make sure whatever is done also works as expected when running under …
-
If you didn't know... I think the obvious method is to use the "catch_warnings" context manager. Some time back, we added such a context manager to all our gallery/example tests, to ensure that the examples don't raise deprecation warnings. It seems that pytest can also filter warnings at the module level, but I think we need to commit to pytest as our standard framework before we can use this.
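For illustration, a minimal sketch of both approaches (the standard-library "catch_warnings" context manager, and pytest's filterwarnings marker); `run_example` is a hypothetical stand-in, not Iris's actual gallery test code:

```python
# A minimal sketch, not Iris's actual test code: "run_example" stands in for
# a gallery example under test.
import warnings

import pytest


def run_example():
    """Hypothetical stand-in for a gallery/example script."""


def test_example_no_deprecations_stdlib():
    # Standard-library approach: within this block, promote
    # DeprecationWarning to an error so the test fails if one is emitted.
    with warnings.catch_warnings():
        warnings.simplefilter("error", DeprecationWarning)
        run_example()


# pytest approach: the same effect via the filterwarnings marker; this can
# also be applied module-wide (pytestmark) or in the pytest configuration.
@pytest.mark.filterwarnings("error::DeprecationWarning")
def test_example_no_deprecations_pytest():
    run_example()
```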
-
This is now an active issue: #5466
-
Running the Iris tests generates a number of warnings that are expected, e.g.
/iris/lib/iris/fileformats/_nc_load_rules/helpers.py:645: UserWarning: Ignoring netCDF variable 'time' invalid units 'wibble'
  warnings.warn(msg)
There are also others that are less clearly expected. The sheer number of these warnings makes it likely that one might miss a new or unexpected warning (e.g. a numpy deprecation, as spotted in #4374).
In discussion with @trexfeathers yesterday, we considered that the expected warnings could be suppressed so that it's easier to spot the important ones. We could additionally test that code raises the warnings it's meant to raise, and/or have unsuppressed warnings cause tests to fail so that we notice them.
Is this something that seems valuable enough to make an issue and start working towards implementing (initially picking off warnings as tests are revisited for other reasons, then pushing to clear the remainder at the end)?
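For concreteness, here is a sketch of how that could look with pytest (the names are hypothetical, not Iris's real tests): expected warnings are asserted explicitly with pytest.warns, and anything unsuppressed is escalated to a failure.

```python
# A sketch assuming pytest; "load_with_bad_units" is a hypothetical stand-in
# for Iris code that emits an expected warning.
import warnings

import pytest

# Escalate any warning that is not explicitly expected into a test failure.
# The same effect can be applied project-wide with
# filterwarnings = ["error", ...] in the pytest configuration.
pytestmark = pytest.mark.filterwarnings("error")


def load_with_bad_units():
    """Hypothetical stand-in for code that warns about invalid units."""
    warnings.warn("Ignoring netCDF variable 'time' invalid units 'wibble'",
                  UserWarning)


def test_bad_units_warns():
    # pytest.warns fails the test if the expected warning is NOT raised, and
    # captures it so it never reaches the error-raising filter above.
    with pytest.warns(UserWarning, match="invalid units"):
        load_with_bad_units()
```

Known-noisy warnings that can't be dealt with immediately could then sit as explicit "ignore" entries in the same configuration and be whittled down over time.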