Epi Scenario 2: Improving Forecasts Through Model Updates #72
Note: Although many compartmental models include an Exposed compartment, we are omitting it for this scenario for simplification reasons.

- To compensate for the fact that we don't have an Exposed compartment in this model, we lower the total population N to 150e6 people, rather than use the actual total population of the United States. This is meant to approximate the situation where some individuals were exercising caution during the winter of 2021-2022 and were not exposed to Covid-19.
For initial conditions, please pull values from the gold standard cases and deaths data from the Covid-19 ForecastHub, and HHS hospitalization data from https://healthdata.gov/Hospital/COVID-19-Reported-Patient-Impact-and-Hospital-Capa/g62h-syeh.
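A minimal data-pull sketch for the hospitalization series, assuming the dataset's standard Socrata CSV export URL and the column names from the published HHS data dictionary (both are assumptions to verify against the portal; cases and deaths truth data can be pulled from the ForecastHub repository similarly):

```python
import pandas as pd

# Assumed Socrata CSV export for dataset g62h-syeh; verify the URL on healthdata.gov.
HHS_URL = "https://healthdata.gov/api/views/g62h-syeh/rows.csv?accessType=DOWNLOAD"

hosp = pd.read_csv(HHS_URL, parse_dates=["date"])

# Column names below are assumptions from the HHS data dictionary; adjust if they differ.
cols = ["total_adult_patients_hospitalized_confirmed_covid",
        "total_pediatric_patients_hospitalized_confirmed_covid"]

# National daily hospital census: sum adult + pediatric confirmed-Covid patients over states.
national = hosp.groupby("date")[cols].sum().sum(axis=1).sort_index()

H0 = national.loc["2021-12-01"]  # candidate initial condition for the H compartment
```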
1. Model Calibration: Using the given parameter values as initial guesses, calibrate the starting model with data from the first month of the retrospective analysis: December 1st, 2021, through December 31st, 2021. You may decide which parameter values you are confident about and don't need to calibrate, and the min/max ranges for the ones you would like to calibrate. Include plots of your calibrated model outputs compared to actual data for this time period.
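A minimal calibration sketch, assuming a plain SIRHD right-hand side and daily observation arrays (`obs_hosp`, `obs_deaths`) for December 2021; the rate structure, initial guesses, and bounds here are all illustrative, not the scenario's official values:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

N = 150e6  # total population, per the scenario note

def sirhd_rhs(t, y, beta, gamma, h, rho, mu):
    # One common SIRHD parameterization (an assumption, not the official starter model).
    S, I, R, H, D = y
    new_inf = beta * S * I / N
    return [-new_inf,                      # dS/dt
            new_inf - (gamma + h) * I,     # dI/dt
            gamma * I + rho * H,           # dR/dt
            h * I - (rho + mu) * H,        # dH/dt
            mu * H]                        # dD/dt

def residuals(theta, y0, t_eval, obs_hosp, obs_deaths):
    sol = solve_ivp(sirhd_rhs, (t_eval[0], t_eval[-1]), y0,
                    t_eval=t_eval, args=tuple(theta), rtol=1e-6)
    return np.concatenate([sol.y[3] - obs_hosp, sol.y[4] - obs_deaths])

theta0 = [0.4, 0.10, 0.01, 0.07, 0.02]      # beta, gamma, h, rho, mu (initial guesses)
bounds = ([0.1, 0.05, 1e-4, 0.01, 1e-4],    # the min/max ranges you choose to explore
          [1.5, 0.50, 1e-1, 0.30, 1e-1])
# fit = least_squares(residuals, theta0, bounds=bounds,
#                     args=(y0, t_eval, obs_hosp, obs_deaths))
```

Fixing a parameter you trust amounts to dropping it from `theta` and hard-coding it in the right-hand side.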
2. Single Model Forecast:
a. Using your calibrated model, forecast cases, hospitalizations, and deaths for the test period (January 1st, 2022 – March 1st, 2022).
b. Plot your forecast against actual observational data from this time period, and calculate Absolute Error.
c. How does your forecast's Absolute Error over the first 4 weeks of this time period compare against forecasts from other compartmental models in the Covid-19 ForecastHub during the same period? Compare specifically against the UCLA-SuEIR and BPagano models. You can find forecast data and error scores for these two models in the supplementary materials. All model forecasts in the ForecastHub are located here: https://github.com/reichlab/covid19-forecast-hub/tree/master/data-processed
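A hedged scoring sketch. ForecastHub submission CSVs share a standard schema (columns `forecast_date`, `target`, `target_end_date`, `location`, `type`, `quantile`, `value`); the file path below assumes a hypothetical local checkout of the repository:

```python
import numpy as np
import pandas as pd

def absolute_error(forecast, observed):
    """Elementwise absolute error between point forecasts and observations."""
    return np.abs(np.asarray(forecast, float) - np.asarray(observed, float))

# Pull a comparison model's 1-4 week ahead national point forecasts of incident deaths.
fh = pd.read_csv("data-processed/UCLA-SuEIR/2022-01-03-UCLA-SuEIR.csv")
targets = [f"{k} wk ahead inc death" for k in range(1, 5)]
point = (fh[(fh["type"] == "point") & (fh["location"] == "US")
            & fh["target"].isin(targets)]
         .sort_values("target_end_date"))
# ae_theirs = absolute_error(point["value"], truth_weekly)  # truth_weekly: observed series
```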
3. Ensemble Forecast: You hypothesize that the …
a. Create 3 different configurations of the model from Q1, each with a different value of …
b. Forecast cases, hospitalizations, and deaths for the test period (January 1st, 2022 – March 1st, 2022).
c. For each outcome (cases, hospitalizations, deaths), plot your forecast against actual observational data from this time period, and calculate Absolute Error.
d. How does your forecast's Absolute Error over the first 4 weeks of this time period compare against one of the ForecastHub ensembles (e.g. 'COVIDhub-4_week_ensemble')? You can find forecast data and error scores for this ensemble in the supplementary materials. All forecast data from the ForecastHub ensembles are here: https://github.com/reichlab/covid19-forecast-hub/tree/master/data-processed
e. How does your forecast performance compare against the results of Q2?
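A minimal ensemble sketch: an equally weighted mean of the three configurations' point forecasts (the ForecastHub ensembles combine quantiles, so this is a deliberate simplification):

```python
import numpy as np

def ensemble_mean(forecasts):
    """forecasts: list of equal-length trajectories, one per model configuration."""
    return np.mean(np.stack(forecasts), axis=0)

# cases_ens = ensemble_mean([cases_cfg1, cases_cfg2, cases_cfg3])
```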
4. Model Update: Now update your model to include vaccination. Ensure this is done in a way that can support interventions around vaccination (e.g. incorporate a vaccination policy or requirement that increases the rate of vaccination). For this question, consider only one vaccine type and assume one dose of this vaccine is all that's required to achieve 'fully vaccinated' status. You will consider multiple doses in a later question.
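One way to sketch this update, assuming the form rather than prescribing it: add a vaccinated compartment V fed from S at a time-varying rate ν(t) (the lever a vaccination policy raises), with vaccine efficacy ε discounting the force of infection on V; the H, R, and D equations are unchanged:

```latex
\begin{aligned}
\frac{dS}{dt} &= -\beta \frac{S I}{N} - \nu(t)\, S \\
\frac{dV}{dt} &= \nu(t)\, S - (1 - \varepsilon)\, \beta \frac{V I}{N} \\
\frac{dI}{dt} &= \beta \frac{\left(S + (1 - \varepsilon) V\right) I}{N} - (\gamma + h)\, I
\end{aligned}
```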
5. Find Parameters: Your updated model from Q4 should have additional variables and new parameters. What is the updated parameter table that you will be using? As with Scenario 1, you may include multiple rows for the same parameter (e.g. perhaps you find different values from different reputable sources), with a 'summary' row indicating the final value or range of values you decide to use. If there are required parameters for your model that you can't find sources for in the literature, you may find data to calibrate your model with, or make reasonable assumptions about sensible values (with rationale). You may use any sources, including the following references on vaccine efficacy for the Moderna, Pfizer, and J&J vaccines:
- Estimates of decline of vaccine effectiveness over time: https://www.science.org/doi/10.1126/science.abm0620
- CDC vaccine efficacy data: https://covid.cdc.gov/covid-data-tracker/#vaccine-effectiveness
- Vaccination data sources: https://data.cdc.gov/Vaccinations/COVID-19-Vaccinations-in-the-United-States-Jurisdi/unsk-b7fc
6. Model Checks: Implement common-sense checks on the model structure and parameter space to ensure the updated model and parameterization make physical sense. Explain the checks that were implemented. For example, under the assumption that the total population is constant for the time period considered:
a. Demonstrate that population is conserved across all compartments.
b. Ensure that the total unvaccinated population over all states in the model can never increase over time, and the total vaccinated population over all states in the model can never decrease over time.
c. What other common-sense checks did you implement? Are there other checks you would have liked to implement but found too difficult to do so?
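A minimal sketch of checks a and b, assuming the solver output is a `(time, compartment)` array and that you track which columns hold vaccinated vs. unvaccinated states (the column bookkeeping is an assumption):

```python
import numpy as np

def check_conservation(traj, N, rtol=1e-6):
    """Check (a): compartments must sum to the constant total population N at all times."""
    assert np.allclose(traj.sum(axis=1), N, rtol=rtol), "population not conserved"

def check_monotone(series, direction, tol=1e-9):
    """Check (b): unvaccinated totals nonincreasing, vaccinated totals nondecreasing."""
    diffs = np.diff(series)
    ok = (diffs <= tol).all() if direction == "nonincreasing" else (diffs >= -tol).all()
    assert ok, f"series is not {direction}"

# check_conservation(traj, N=150e6)
# check_monotone(traj[:, unvax_cols].sum(axis=1), "nonincreasing")
# check_monotone(traj[:, vax_cols].sum(axis=1), "nondecreasing")
```

A nonnegativity check on every compartment (`(traj >= -tol).all()`) is another cheap common-sense test.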
7. (Optional) Single Model Forecast: Using your updated model, forecast cases, hospitalizations, and deaths for the test period (January 1st, 2022 – March 1st, 2022).
a. Plot your forecast against actual observational data from this time period, and calculate Absolute Error.
b. How does your forecast's Absolute Error over the first 4 weeks of this time period compare against forecasts from other compartmental models in the Covid-19 ForecastHub during the same period? Compare specifically against the UCLA-SuEIR and BPagano models.
c. How does your forecast performance compare with the one in Q2? If the forecast performance has improved or gotten worse, why do you think this is?
8. Model Update: During this time period, access to at-home testing was vastly expanded through distribution of free antigen tests and requirements for insurance to cover at-home tests for free. Update your model from Q4 to incorporate testing by modifying the …
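The item above is truncated in this copy, so the intended modification is not specified. Purely as a hedged illustration of one common approach: split infections into undetected ($I_u$) and detected/isolating ($I_d$) sub-compartments, with a detection rate $\tau(t)$ that expanded testing raises, and a factor $\kappa < 1$ discounting transmission from detected (isolating) individuals:

```latex
\begin{aligned}
\frac{dS}{dt} &= -\beta \frac{S\,(I_u + \kappa I_d)}{N} \\
\frac{dI_u}{dt} &= \beta \frac{S\,(I_u + \kappa I_d)}{N} - \tau(t)\, I_u - (\gamma + h)\, I_u \\
\frac{dI_d}{dt} &= \tau(t)\, I_u - (\gamma + h)\, I_d
\end{aligned}
```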
9. Model Stratification: The decision maker you're supporting is exploring targeted vaccination campaigns to boost vaccination rates for specific subpopulations. To support these questions, you decide to further extend the model from Q8 by considering several demographic subgroups, as well as vaccination dosage. Stratify the model by the following dimensions:
- Vaccination dosage (1 or 2 doses administered)
- Age group
- Sex
- Race or ethnicity
To inform initial conditions, rates of vaccination, vaccine efficacy, etc., consider the subset of vaccination datasets from the starter kit listed in 'Scenario2_VaccinationDatasets.xlsx' (in the supplementary materials). Where initial conditions are not available for a specific subgroup, make a reasonable estimate based on percentages from Census sources (e.g. https://www.census.gov/quickfacts/fact/table/US/PST045223). Where parameters for specific subgroups are unavailable, generalize based on the ones that are available. Choose the number of age and race/ethnicity groups based on the data that is available.
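A bookkeeping sketch for the stratified state vector; the group lists are placeholders (choose the actual groups from the available data, per the item above):

```python
import itertools
import numpy as np

# Placeholder strata; pick real groups from the vaccination datasets and Census sources.
AGES = ["0-17", "18-64", "65+"]
SEXES = ["F", "M"]
RACES = ["group1", "group2", "group3"]
DOSES = [0, 1, 2]                        # 0 = unvaccinated
COMPARTMENTS = ["S", "I", "R", "H", "D"]

strata = list(itertools.product(AGES, SEXES, RACES, DOSES))
index = {(c, *s): i
         for i, (c, s) in enumerate((c, s) for c in COMPARTMENTS for s in strata)}
y0 = np.zeros(len(index))                # flat state vector for the ODE solver
```

Flattening to a single vector keeps the solver interface unchanged, and the `index` dictionary makes per-group slicing trivial in later questions.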
10. Model Checks: Implement common-sense checks on the model structure and parameter space to ensure the updated model and parameterization from Q9 are structurally sound and make physical sense. Explain the checks that were implemented. For example, under the assumption that the total population is constant for the time period considered:
a. Demonstrate that population is conserved across all disease compartments, and within each demographic group (age, sex, race/ethnicity).
b. Ensure that the total unvaccinated population, and the unvaccinated population within each age group, can never increase over time, and that the total vaccinated population, and the vaccinated population within each age group, can never decrease over time.
c. What other common-sense checks did you implement? Are there others you would have liked to implement but found too difficult?
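A per-group variant of check b, reusing `check_monotone` and the `index`/`strata` bookkeeping from the earlier sketches (all names assumed from those sketches):

```python
def vaccinated_total_by_age(traj, index, strata, compartments, age):
    """Total vaccinated (dose >= 1) population in one age group, across all compartments."""
    cols = [index[(c, a, s, r, d)]
            for c in compartments
            for (a, s, r, d) in strata
            if a == age and d >= 1]
    return traj[:, cols].sum(axis=1)

# for age in AGES:
#     check_monotone(vaccinated_total_by_age(traj, index, strata, COMPARTMENTS, age),
#                    "nondecreasing")
```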
11. Single (Stratified) Model Forecast: Using your updated model from Q9, forecast cases, hospitalizations, and deaths for the test period (January 1st, 2022 – March 1st, 2022).
a. Plot your forecast against actual observational data from this time period, and calculate Absolute Error. Use observational data aggregated to the general population as well as granular data for individual demographic groups. Plot outcomes for individual demographic groups, as well as for the total population.
b. How does your forecast's Absolute Error over the first 4 weeks of this time period compare against forecasts from other compartmental models in the Covid-19 ForecastHub during the same period? Compare specifically against the UCLA-SuEIR and BPagano models.
c. How does your forecast performance compare with the one in Q2? If the forecast performance has improved or worsened, why do you think this is?
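A plotting sketch for sub-question a, aggregating the stratified trajectory both per age group and to the total population (`dates`, `traj`, `index`, and `strata` are assumed from the earlier sketches):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for age in AGES:
    cols = [index[("H", *s)] for s in strata if s[0] == age]
    ax.plot(dates, traj[:, cols].sum(axis=1), label=f"hospitalized, age {age}")

all_cols = [index[("H", *s)] for s in strata]
ax.plot(dates, traj[:, all_cols].sum(axis=1), label="hospitalized, total", linewidth=2)
ax.set_ylabel("patients")
ax.legend()
plt.show()
```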
12. Interventions: Now that you have a model that can support targeted interventions, the decision maker you support asks you to explore what would have happened during the retrospective analysis period had these interventions been implemented at that time.
a. With respect to your forecast from Q11, which demographic group had the worst outcomes during the retrospective period, and therefore should be targeted with interventions such as vaccine campaigns or increased community outreach to make testing more widely available and encouraged?
b. Implement an intervention that targets testing-related parameters (e.g. programs to increase access to tests, distribution of free tests, etc.) at the start of the forecast period, and redo the forecast from Q11. For a 1% increase in a test-related parameter (one with a net positive impact), what is the impact of the intervention on the forecast trajectory, for the affected demographic group identified in Q12a as well as for the overall population?
c. Implement another intervention that targets vaccination rate(s) at the start of the forecast period, and redo the forecast from Q11. For a 1% increase in vaccination rate, what is the impact of the intervention on the forecast trajectory, for the affected demographic group identified in Q12a as well as for the overall population?
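A counterfactual sketch for sub-question c; `run_forecast` is a hypothetical wrapper around the calibrated solver, and `nu` names the vaccination-rate parameter from the earlier vaccination sketch:

```python
# Rerun the Q11 forecast with a 1% relative increase in the vaccination rate.
base = run_forecast(params, y0, t_eval)                       # (time, state) trajectory
boosted = run_forecast({**params, "nu": params["nu"] * 1.01}, y0, t_eval)

death_cols = [index[("D", *s)] for s in strata]
impact = (boosted[:, death_cols].sum(axis=1)
          - base[:, death_cols].sum(axis=1))                  # deaths averted if negative
```

The same pattern covers sub-question b by perturbing the test-related parameter instead, and per-group impacts come from restricting `death_cols` to the Q12a group.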
Scenario 2 Summary Table
Decision-maker Panel Questions

1. What is your confidence that the modeling team developed an appropriate model and associated parameter space to sufficiently explore the scenario/problem? Select a score on a 7-point scale: 1. Very Low; 2. Low; 3. Somewhat Low; 4. Neutral; 5. Somewhat High; 6. High; 7. Very High.
Explanation: The scenario involves updating or modifying a model, and decision makers will evaluate whether this was done in a sensible way and whether the final model can support all the questions asked in the scenario.
The decision-maker confidence score should be supported by the answers to the following questions:
- Did modelers clearly explain the changes being made and key differences between the original and updated models? Did the modifications/extensions the modelers made make sense, and were they reasonable to you?
- Are you confident that the starting model was updated in ways that make sense? Is the final model structurally sound?
- As the model was updated, was the parameter space being explored reasonable and broad/complete enough to support the questions required by the scenario?
2. What is your confidence in understanding the model results and the tradeoffs between potential interventions? Select a score on a 7-point scale: 1. Very Low; 2. Low; 3. Somewhat Low; 4. Neutral; 5. Somewhat High; 6. High; 7. Very High.
Explanation: Determine your confidence in your ability to do the following, based on the information presented to you by the modelers: assess model performance, assess the effectiveness of all interventions considered in the scenario, and understand how uncertainty factors into all of this.
This score should be supported by the answers to the following questions:
- Did modelers communicate the impacts of interventions on trajectories? Was the effectiveness of interventions communicated?
- Did the models help you understand what would have happened had a different course of action been taken in the past?
- Where relevant to the question, was it clear how to interpret uncertainty in the results? Were key drivers of uncertainty in the results communicated?
Scenario 2: Improving Forecasts Through Model Updates
Estimated % of time: Baseline 50%; Workbench 40%
It is the end of 2022, and you are supporting a decision maker who is preparing for a winter Covid wave. The winter Covid wave caused by the original Omicron variant just a year earlier (end of 2021 and early 2022) was, at the US country level, the largest of the pandemic so far. Fearing another similar winter wave, the decision maker asks you to do a retrospective analysis of the prior winter. In particular, they want you to try to develop the most accurate model of the original Omicron wave, explore various interventions in the model, and assess their effects. For your retrospective analysis, consider the time period of December 1st, 2021, to March 1st, 2022, with the first month (December 1st – 31st, 2021) as the training period and the remaining time as the test period.
Starting Model: Begin with the following SIRHD model structure (Figure 1) and set of differential equations. For workbench modelers, a version of this may already exist in the workbench; if not, create it. For baseline modelers, see the accompanying code in the supplementary materials. The general form/structure of the model is below.
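Figure 1 and the equation images did not survive in this copy; as a hedged placeholder, a standard SIRHD system consistent with the compartments named above (the starter model's exact rate structure may differ):

```latex
\begin{aligned}
\frac{dS}{dt} &= -\beta \frac{S I}{N} &
\frac{dI}{dt} &= \beta \frac{S I}{N} - (\gamma + h)\, I \\
\frac{dR}{dt} &= \gamma I + \rho H &
\frac{dH}{dt} &= h I - (\rho + \mu)\, H \\
\frac{dD}{dt} &= \mu H
\end{aligned}
```

Here β is the transmission rate, γ the recovery rate from infection, h the hospitalization rate, ρ the hospital discharge (recovery) rate, and μ the hospital fatality rate; these sum-to-zero flows conserve the total population N.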