1159 Add diffusive ABM and SMM #1162
base: main
Conversation
Codecov Report
Attention: Patch coverage is …
Additional details and impacted files
@@ Coverage Diff @@
## main #1162 +/- ##
==========================================
+ Coverage 97.13% 97.15% +0.02%
==========================================
Files 149 154 +5
Lines 13999 14208 +209
==========================================
+ Hits 13598 13804 +206
- Misses 401 404 +3
☔ View full report in Codecov by Sentry.
I seem to barely document code I write under a deadline. I will continue this with more suggestions later, but feel free to edit them or add your own documentation; there is a lot to do...
The directory structure and naming look pretty good. I have not checked the tests yet.
cpp/tests/test_d_abm_model.cpp
Outdated
QuadWellModel<InfectionState>::Agent a1{Eigen::Vector2d{-1, 1}, InfectionState::S};
QuadWellModel<InfectionState>::Agent a2{Eigen::Vector2d{-1.2, 1}, InfectionState::I};
QuadWellModel<InfectionState> qw({a1, a2}, {{InfectionState::S,
                                             InfectionState::E,
                                             mio::dabm::Region(0),
                                             0.1,
                                             {InfectionState::C, InfectionState::I},
                                             {1, 0.5}}});
EXPECT_EQ(qw.adoption_rate(a1, InfectionState::E), 0.025);
Other branches that are not checked (a rough sketch follows below):
- the rate for a2 to E is 0
- another isolated infected agent does not influence the rate
- the rate of a1 scales correctly with more nearby infected and non-infected agents
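As a sketch of what such checks could look like, reusing the constructor call from the snippet above; the extra agent a3, its placement, and the expected values are assumptions for illustration, not taken from the PR:

// Sketch only: a3 is a hypothetical infected agent placed in another well, far away from a1.
QuadWellModel<InfectionState>::Agent a3{Eigen::Vector2d{1.5, 1.5}, InfectionState::I};
QuadWellModel<InfectionState> qw2({a1, a2, a3}, {{InfectionState::S,
                                                  InfectionState::E,
                                                  mio::dabm::Region(0),
                                                  0.1,
                                                  {InfectionState::C, InfectionState::I},
                                                  {1, 0.5}}});
// The already infected agent a2 has no rate to adopt E.
EXPECT_EQ(qw2.adoption_rate(a2, InfectionState::E), 0.0);
// The isolated infected a3 does not change a1's rate compared to the two-agent setup above.
EXPECT_EQ(qw2.adoption_rate(a1, InfectionState::E), 0.025);

The scaling branch could be covered the same way, by adding a second infected agent next to a1 and checking (with EXPECT_GT or an exact value) that a1's rate increases accordingly.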
Can you add descriptions to the tests of what it is you check? Otherwise it is hard to tell what is going on here.
This is true for all added tests. For example, in TestQuadWell.well(_index?) one comment at the top like "check that well_index returns indices in the correct order" would be enough.
Here, in TestQuadWell.adoptionRate, you could write down what the agents and rates are used for, and/or explain what the EXPECTs do. Like "check that a1 is affected by a2, but not by the far-off a3" for the first EXPECT call, and so on; see the sketch below.
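For illustration only (not PR code), the existing snippet with the suggested kind of comments added; the wording is just an example:

TEST(TestQuadWell, adoptionRate)
{
    // Check that the adoption rate S->E of a susceptible agent is driven by nearby infected
    // agents, using the influence weights passed to the model.
    QuadWellModel<InfectionState>::Agent a1{Eigen::Vector2d{-1, 1}, InfectionState::S};
    QuadWellModel<InfectionState>::Agent a2{Eigen::Vector2d{-1.2, 1}, InfectionState::I};
    QuadWellModel<InfectionState> qw({a1, a2}, {{InfectionState::S,
                                                 InfectionState::E,
                                                 mio::dabm::Region(0),
                                                 0.1,
                                                 {InfectionState::C, InfectionState::I},
                                                 {1, 0.5}}});
    // a1 is affected by the nearby infected agent a2.
    EXPECT_EQ(qw.adoption_rate(a1, InfectionState::E), 0.025);
}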
Co-authored-by: reneSchm <[email protected]>
#ifndef ADOPTIONRATE_H
#define ADOPTIONRATE_H
-#ifndef ADOPTIONRATE_H
-#define ADOPTIONRATE_H
+#ifndef MIO_EPI_ADOPTIONRATE_H
+#define MIO_EPI_ADOPTIONRATE_H
Status to; // j
mio::regions::Region region; // k
ScalarType factor; // gammahat_{ij}^k
std::vector<Influence<Status>> influences;
-std::vector<Influence<Status>> influences;
+std::vector<Influence<Status>> influences; // influences[tau] = ( Psi_{i,j,tau} , gammahat_{i,j,tau} )
I think that should be the symbols, right? Do we reference our paper in both model Readmes?
Now we do
namespace regions
{

/// @brief Index for enumerating subregions (cities, counties, etc.) of the modelled area.
struct Region : public mio::Index<Region> {
I think it would be fine to put Region into the mio namespace directly. Or do you prefer to "hide" it here?
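For comparison, a minimal sketch of that alternative; the constructor shown here is an assumption mirroring how mio::Index wrappers are typically written, not the PR's actual code:

// Sketch only: Region declared directly in namespace mio, without the regions sub-namespace.
namespace mio
{
/// @brief Index for enumerating subregions (cities, counties, etc.) of the modelled area.
struct Region : public Index<Region> {
    explicit Region(size_t id) // assumed constructor, forwarding to mio::Index
        : Index<Region>(id)
    {
    }
};
} // namespace mio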
{
    return "AdoptionRates";
}
};
This header is empty, should we remove it?
@@ -24,11 +24,13 @@
#include "memilio/math/eigen.h"
I can't reach the include guard to comment on it; please change it to MIO_D_ABM_QUAD_WELL_H.
cpp/tests/test_smm_model.cpp
Outdated
EXPECT_EQ(sim.get_result().get_last_time(), 30.);
}

TEST(TestSMMSimulation, covergence)
-TEST(TestSMMSimulation, covergence)
+TEST(TestSMMSimulation, convergence)
cpp/tests/test_smm_model.cpp
Outdated
std::abs(rate * pop2 - std::abs(std::accumulate(transitions2.begin(), transitions2.end(), 0.0) / num_runs2)) /
    (rate * pop2);

EXPECT_GE(rel_diff1, rel_diff2);
Why do you compare the two models with each other rather than with the expected result? The tolerance for comparing with the expected result is probably fairly large, but I think that would still be better than comparing two "random" values.
Also, why the greater-equal? If both models converge, rel_diff1 < rel_diff2 should be possible, right? Maybe use one model and compare the simulated rate against the expected rate, with some tolerance (which can be fairly large); see the sketch below.
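A minimal sketch of that comparison, keeping the variable names from the snippet above; the 50% tolerance is only a placeholder that would need tuning:

// Sketch only: compare one model's simulated transitions against the expected value rate * pop2.
const double expected  = rate * pop2;
const double simulated = std::accumulate(transitions2.begin(), transitions2.end(), 0.0) / num_runs2;
EXPECT_NEAR(simulated, expected, 0.5 * expected); // 50% tolerance, see the discussion below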
Moreover, should this perhaps be a random test, so that it checks more than just one specific seed? It is okay for a random test to possibly fail, if it is very unlikely that it does. You can run it a couple thousand times to check that the tolerance is large enough for most edge cases.
What I do here is check whether the simulated transition rate converges to the expected transition rate as num_runs and pop_size go to infinity. I realize this by checking whether the relative difference between the simulated and the expected number of transitions gets smaller when I increase the number of runs as well as the number of agents.
I could also use only one model and compare the simulated and expected number of transitions, but what would be an appropriate tolerance? 1%, 50%? And how many runs do we require to meet that tolerance?
And I realize this by checking whether the relative difference between the simulated number of transitions and the expected number of transitions gets smaller if I increase the number of runs as well as the number of agents.
Correct me if I'm wrong, but this is not what you test here. You only test two samples on the way from (num_runs, pop) to infinity, and with fixed seeds. The result could show pretty much anything if you were to select the seeds intentionally. And as I said, I am pretty sure rel_diff1 >= rel_diff2 does not hold in general. On average, probably; in the limit, almost surely. But two samples are hardly enough to convince me of either.
That is why I would aim to test, for one pair (num_runs, pop), that the average perceived rate over some number of runs is close to the expected rate. Mathematically, the test result is a lot weaker than what you aim for, but it allows us to make the test itself stronger.
but what would be an appropriate tolerance? 1%, 50%? And how many runs do we require to meet that tolerance?
That part is finicky and will depend on the model setup. Since we want to keep the test runtime low, we cannot aim for 1e-10 or so; I actually think 50% would be fine, though smaller would of course be nicer. I would start with averaging over a thousand runs, and do that a thousand times (with a temporary loop or via bash) to check how the variance looks; a sketch follows below. Then iterate from there by de-/increasing runs, pop, or the tolerance. You can additionally check the average after each run to see whether it is "stable" enough at 1000 runs or still fluctuating too much.
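Not meant as PR code, just a sketch of such a temporary check; average_transitions is a hypothetical stand-in for the existing test setup that averages the simulated number of transitions over the given number of runs, and rate/pop2 are the values from the test above:

// Temporary loop (to be removed again) to see how much the averaged result fluctuates:
// run the averaged estimate 1000 times and print its relative error against rate * pop2.
// (requires <cmath> and <iostream>)
for (int trial = 0; trial < 1000; ++trial) {
    const double simulated = average_transitions(/*num_runs=*/1000, /*pop=*/pop2); // hypothetical helper
    const double rel_error = std::abs(simulated - rate * pop2) / (rate * pop2);
    std::cout << "trial " << trial << ": relative error = " << rel_error << "\n";
}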
Co-authored-by: reneSchm <[email protected]>
Changes and Information
Please briefly list the changes (main added features, changed items, or corrected bugs) made:
Merge Request - Guideline Checklist
Please check our git workflow. Use the draft feature if the Pull Request is not yet ready to review.
Checks by code author
Checks by code reviewer(s)
closes #1159