Overview

The Flu Forecasting Interpretation Visualization and Evaluation (FluFIVE) web application provides a user-friendly interface for exploring probabilistic near-term forecasts of incident influenza hospitalizations in the United States. The forecasts come from the CDC FluSight initiative, which since 2013 has served as a consortium to coordinate influenza forecasting efforts in the United States. FluFIVE provides forecast data from the 2021-22 and 2022-23 FluSight seasons, which solicited near-term (1 to 4 weeks ahead) forecasts of incident confirmed influenza hospitalizations at the state/territory and national levels.

FluFIVE was developed and is maintained by Signature Science, LLC with support from the Council of State and Territorial Epidemiologists (CSTE) via the Centers for Disease Control and Prevention (CDC) Cooperative Agreement No. NU38OT000297.

Features

Map

The FluFIVE landing page is an interactive, animated choropleth map that communicates observed and forecasted influenza hospitalizations. For each state, the number of hospitalizations is normalized by population to arrive at a rate per 100,000 people. Normalizing to a rate facilitates side-by-side comparison of activity across states. The map includes several user input widgets that control how the rates are displayed. The user can select a specific model, forecast date, and horizon. The horizon ranges from 1 to 4 weeks ahead, and also includes “horizon 0” (i.e., the observed value for the week of the forecast). The horizon can be shifted by clicking and dragging the slider, or by clicking the “play” button to animate from horizon 0 through 4. Note that all models will share the same value for horizon 0 on the given week, and only the forecasted weeks (horizons 1 to 4) will change from model to model.
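
As a concrete illustration of the normalization, the short sketch below (Python, with made-up counts and populations rather than the app's data pipeline) computes the rate per 100,000 used to color the map.

```python
import pandas as pd

# Illustrative weekly hospitalization counts and state populations
# (hypothetical values and column names, not FluFIVE's internal schema).
hosp = pd.DataFrame({
    "state": ["TX", "VT"],
    "weekly_hospitalizations": [1500, 30],
    "population": [29_500_000, 645_000],
})

# Normalize counts to a rate per 100,000 residents so that states of
# very different sizes can be compared side by side on the map.
hosp["rate_per_100k"] = hosp["weekly_hospitalizations"] / hosp["population"] * 100_000

print(hosp[["state", "rate_per_100k"]])
```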

Near-term trajectory

The near-term trajectory tab provides plots of the forecasted weekly incident hospitalizations alongside the observed values. The forecasts are presented as point estimates and 95% prediction intervals. The upper and lower bounds of the prediction intervals convey how certain each model is about its point estimates. Communicating uncertainty is important for public health interpretation, and FluSight required that all submissions include probabilistic forecasts. Users can explore these near-term forecasts by location (state or national), model, and forecast date. Multiple models and locations can be visualized simultaneously, and the observed data for horizons that have already passed are shown so that the user can see how close the forecasted values were to the actual number of weekly hospitalizations.
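
FluSight forecasts are submitted as sets of quantiles, and the 95% prediction interval shown in the trajectory plots corresponds to the outer quantile pair. The sketch below (Python, with hypothetical values and column names, not the app's internal code) illustrates how a point estimate and 95% interval could be read off such a quantile table.

```python
import pandas as pd

# Hypothetical long-format quantile forecast for one location and horizon,
# in the spirit of the FluSight quantile submission format.
forecast = pd.DataFrame({
    "quantile": [0.025, 0.25, 0.5, 0.75, 0.975],
    "value":    [120,   180,  220, 270,  360],
})

q = forecast.set_index("quantile")["value"]

point_estimate = q[0.5]   # median used as the point estimate
lower_95 = q[0.025]       # lower bound of the 95% prediction interval
upper_95 = q[0.975]       # upper bound of the 95% prediction interval

print(f"point={point_estimate}, 95% PI=({lower_95}, {upper_95})")
```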

For forecasts from the 2022-23 season, users can also toggle scenario projections for seasonal peak at each location. The projections are pulled from the Flu Scenario Modeling Hub. Projections represent the ensemble of submitted model estimates and are based on combinations of assumptions around vaccine effectiveness (VE) and prior immunity:

  • High VE = 50% against medically attended influenza illnesses and hospitalizations
  • Low VE = 30% against medically attended influenza illnesses and hospitalizations
  • Optimistic prior immunity = Same amount of prior immunity as in a typical, pre-COVID-19 pandemic season
  • Pessimistic prior immunity = 50% lower prior immunity than in a typical, pre-COVID-19 pandemic season

The assumptions are combined to form the current Round 3 scenarios as follows:

  • A: High vaccine effectiveness + optimistic immunity
  • B: High vaccine effectiveness + pessimistic immunity
  • C: Low vaccine effectiveness + optimistic immunity
  • D: Low vaccine effectiveness + pessimistic immunity

Evaluation

The evaluation tab offers summary measures of the performance of the approaches different modeling groups used to generate weekly FluSight submissions, computed within each forecast season. Only methods that were used to submit forecasts for every week of the 2021-22 or 2022-23 FluSight seasons are included in the evaluation. We present several evaluation metrics, aggregated by forecast horizon and summarized across all state and national forecasts during the season.

  • Weighted Interval Score (WIS): The WIS is a proper scoring rule for predictive distributions that weights quantiles and penalizes both overprediction and underprediction (see the illustrative sketch after this list). This metric was developed to evaluate probabilistic COVID-19 forecasts and has also been demonstrated using influenza-like illness forecasts from the 2016-2017 FluSight challenge.
  • Absolute Error: The absolute error is computed as the difference between the predicted point estimate and the observed value. While absolute error does not distinguish between over- and underprediction, it provides a clear indicator of how far off predictions were from observed values.
  • Percent Coverage: The percent coverage is summarized in the app as the proportion of forecasts for which the 95% prediction interval included the observed value. This metric provides a means to communicate the calibration of a forecast method. For example, a perfectly calibrated forecaster would include the true value 95% of the time in its 95% prediction interval.
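
For reference, a minimal implementation of the weighted interval score under the common forecast-hub definition (a median plus a set of central prediction intervals) might look like the sketch below; the values are made up and this is illustrative only, not the evaluation code used by FluFIVE.

```python
def interval_score(y, lower, upper, alpha):
    """Interval score for a single central (1 - alpha) prediction interval."""
    return (upper - lower) \
        + (2 / alpha) * max(lower - y, 0) \
        + (2 / alpha) * max(y - upper, 0)

def weighted_interval_score(y, median, intervals):
    """Weighted interval score (WIS).

    `intervals` maps alpha -> (lower, upper) for each central interval.
    Illustrative implementation, not FluFIVE's evaluation code.
    """
    K = len(intervals)
    total = 0.5 * abs(y - median)
    for alpha, (lower, upper) in intervals.items():
        total += (alpha / 2) * interval_score(y, lower, upper, alpha)
    return total / (K + 0.5)

# Hypothetical example: an observed value and a forecast summarized by its
# median plus 50% and 95% central prediction intervals.
observed = 240
wis = weighted_interval_score(
    observed,
    median=220,
    intervals={0.5: (180, 270), 0.05: (120, 360)},
)
print(round(wis, 2))
```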

Summary

The summary tab provides an overall view of forecasted flu hospitalizations for a given week. The user can select a model and forecast date to see national and state-level summaries of forecasts. The national summary provides a gauge plot of the forecasted count of flu hospitalizations at each horizon. The blue bar in the gauge represents the forecasted count, the red tick mark indicates the maximum observed weekly count during the season, and the dark grey bar spans the 25th to 75th percentiles of observed weekly counts. The arrow beneath the gauge indicates whether the forecast is increasing or decreasing relative to the previous horizon, and the number next to the arrow is the difference in forecasted hospitalizations.

The state summary provides a tile plot colored by the forecasted rate of flu hospitalizations in each state. Each column aligns to a forecast horizon, where horizon=0 represents the observed flu hospitalizations the week prior to the forecast date. The numbers in each tile represent the count of hospitalizations, which provides an indicator of absolute burden. Note that states without colored tiles were absent from the selected model for the given forecast date.
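
To make the gauge annotation concrete, the following minimal sketch (using made-up horizon counts, not data from the app) shows how the direction and size of the change between consecutive horizons could be derived.

```python
# Hypothetical forecasted national counts by horizon (0 = observed week).
forecast_by_horizon = {0: 950, 1: 1020, 2: 1100, 3: 1150, 4: 1170}

# The arrow direction and the number beside it correspond to the difference
# between the forecast at a horizon and the forecast at the previous horizon.
for h in range(1, 5):
    change = forecast_by_horizon[h] - forecast_by_horizon[h - 1]
    direction = "increasing" if change > 0 else "decreasing"
    print(f"horizon {h}: {direction} by {abs(change)}")
```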

FAQs

Why don’t I see all of the teams who submitted forecasts to FluSight in the evaluation dropdown?

While FluSight allows teams to submit forecasts for select weeks during the season, the evaluation dropdown only includes groups that submitted for every week of the season, to ensure evaluation parity. As such, forecasters who submitted to FluSight for only some weeks may not appear in the FluFIVE evaluation.