- Check that `mu` is computed as the correct function of latents & data? (Add tests to check correctness of model's `mu` computation, #52.)
- Check that the response distribution has the correct parameters? (`test_expected_response_codegen` might already do this to some degree. Perhaps that can be reworked along the lines of `test_mu_correctness` -- using `.fitted('expectation')` to generate actual values, and comparing against the output of the `mean` method of a Pyro distribution; see the sketch after this list.)
- Extend the code gen tests to check that the response is observed, and that it comes from the expected family?
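For concreteness, here's a minimal sketch of the second item, assuming a Gaussian response and a fit object exposing `.fitted('expectation')` (mentioned above); the `mu`/`sigma` arguments and the harness itself are hypothetical stand-ins for whatever the reworked test recovers from the posterior samples:

```python
import numpy as np
import torch
import pyro.distributions as dist

def test_fitted_expectation_matches_dist_mean(fit, mu, sigma):
    # `fit.fitted('expectation')` is the method mentioned in this issue;
    # `mu` and `sigma` (hypothetical arguments) are the response
    # distribution's parameters, recovered from the same posterior samples.
    actual = fit.fitted('expectation')
    # For a Gaussian response the expectation is exactly the distribution's
    # mean, so compare against the `mean` of the Pyro distribution.
    expected = dist.Normal(torch.as_tensor(mu), torch.as_tensor(sigma)).mean.numpy()
    assert np.allclose(actual, expected)
```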
I like this because such tests are essentially deterministic, and they're easy to write. While it wouldn't guarantee that generated models have the correct semantics, it would give us confidence that all backends compute `mu` in the same way, and provide reassurance when making changes to e.g. code generation (such as #10).
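A minimal sketch of such a deterministic cross-backend test, assuming a hand-written expected function for a simple `y ~ 1 + x` model and a hypothetical per-backend `compute_mu` hook (all names here are illustrative, not the current API):

```python
import numpy as np

def expected_mu(data, coefs):
    # Hand-written reference: mu for a simple linear model y ~ 1 + x,
    # evaluated directly from the data and a fixed set of latent values.
    return coefs['intercept'] + coefs['b_x'] * data['x']

def test_mu_agrees_across_backends(backends, data, coefs):
    # Hypothetical harness: each backend evaluates the generated model's mu
    # at the same fixed latents, making the comparison deterministic.
    reference = expected_mu(data, coefs)
    for backend in backends:
        actual = backend.compute_mu(data, coefs)  # hypothetical hook
        assert np.allclose(actual, reference), backend.name
```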
Eventually we might even consider generating the expected functions from the model definition itself. I guess this would be of most interest if we were also generating model descriptions in statistical notation (#33). If these shared a common implementation (you'd need to generate something like the expected function when generating the math description), then these tests would help convince us that the math and code we generate are consistent.
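As a very rough sketch of that shared-implementation idea (everything here is hypothetical: the `model.terms` structure, the data/sample layout, and both function names), the same term-by-term description of `mu` could drive both the expected-value computation and the rendered math:

```python
import numpy as np

def expected_mu(model, data, samples):
    # Assumed structure: `model.terms` lists coefficient names, `data` maps
    # each term to its design-matrix column (length N), and `samples` maps
    # each term to S posterior draws. Returns an (S, N) array of
    # mu = sum_t b_t * x_t for a linear model.
    return sum(samples[t][:, None] * data[t][None, :] for t in model.terms)

def mu_as_math(model):
    # The same structure rendered in statistical notation (cf. #33).
    return r'\mu_i = ' + ' + '.join(rf'b_\text{{{t}}} x_{{{t},i}}' for t in model.terms)
```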