Open Access

Detecting signals of seasonal influenza severity through age dynamics

  • Elizabeth C. Lee1,
  • Cécile Viboud2,
  • Lone Simonsen3, 4,
  • Farid Khan5, 6 and
  • Shweta Bansal1, 2
BMC Infectious Diseases 2015, 15:587

https://doi.org/10.1186/s12879-015-1318-9

Received: 28 October 2015

Accepted: 11 December 2015

Published: 29 December 2015

Abstract

Background

Measures of population-level influenza severity are important for public health planning, but estimates are often based on case-fatality and case-hospitalization risks, which require multiple data sources, are prone to surveillance biases, and are typically unavailable in the early stages of an outbreak. To address the limitations of traditional indicators, we propose a novel severity index based on influenza age dynamics estimated from routine physician diagnosis data that can be used retrospectively and for early warning.

Methods

We developed a quantitative ‘ground truth’ severity benchmark that synthesizes multiple traditional severity indicators from publicly available influenza surveillance data in the United States. Observing that the age distribution of cases may signal severity early in an epidemic, we constructed novel retrospective and early warning severity indexes based on the relative risk of influenza-like illness (ILI) among working-age adults to that among school-aged children using weekly outpatient medical claims. We compared our relative risk-based indexes to the composite benchmark and estimated seasonal severity for flu seasons from 2001–02 to 2008–09 at the national and state levels.

Results

The severity classifications made by the benchmark were not uniquely captured by any single contributing metric, including pneumonia and influenza mortality; the influenza epidemics of 2003–04 and 2007–08 were correctly identified as the most severe of the study period. The retrospective index was well correlated with the severity benchmark and correctly identified the two most severe seasons. The early warning index performance varied, but it projected 2007–08 as relatively severe 10 weeks prior to the epidemic peak. Influenza severity varied significantly among states within seasons, and four states were identified as possible early warning sentinels for national severity.

Conclusions

Differences in age patterns of ILI may be used to characterize seasonal influenza severity in the United States in real-time and in a spatially resolved way. Future research on antigenic changes among circulating viruses, pre-existing immunity, and changing contact patterns may better elucidate the mechanisms underlying these indexes. Researchers and practitioners should consider the use of composite or ILI-based severity metrics in addition to traditional severity measures to inform epidemiological understanding and situational awareness in future seasonal outbreaks.

Keywords

Influenza; Influenza-like illness; Severity; Metrics; Age patterns; Epidemiology; Mortality; United States

Background

The causes and characterization of population-level severity are crucial aspects to understanding influenza epidemiology and designing effective surveillance and control programs. Variation in seasonal influenza severity may be caused by environmental [1, 2], antigenic [3], strain-dependent [4], and epidemiological [5] factors, but this research has not been synthesized across fields and the mechanisms are not fully understood.

Discussions of population-level seasonal influenza severity are traditionally tied to experiences of severe patient-level outcomes. The United States Centers for Disease Control and Prevention (CDC) characterizes seasonal severity through influenza-associated hospitalization rates and mortality due to pneumonia and influenza (Fig. 1). From these surveillance data, CDC estimated a range of 3,000 to 49,000 influenza-associated all-cause deaths and over 200,000 hospitalizations per year in the United States during the period between 1976 and 2007 [6, 7]. Clinical studies similarly focus on patient-level outcomes, where physicians use scoring techniques to rate overall patient severity or the severity of specific symptoms [8, 9].
Fig. 1

Influenza surveillance data in the United States for the 1997–98 to 2013–14 seasons (excluding 2009–10). Characterization of ILI activity as a function of: a ILI as a percentage of all outpatient visits in CDC’s ILINet and IMS Health medical claims data, b influenza subtype samples and percentage of laboratory-confirmed influenza specimens, c laboratory-confirmed influenza surveillance: cumulative hospitalization rates per 100,000 population for ages 5–17 and 18–49, and cumulative pediatric deaths (under 18 years old) over the course of the season, and d number of deaths attributed to pneumonia and influenza. The grey vertical line denotes a break in the time series for the period from October 2009 through September 2010; data for this period were not available and are not shown. e The benchmark (\(\beta_{s}\)) was constructed from surveillance data on positive percentage of influenza tests, hospitalization rates, pediatric deaths, and pneumonia and influenza deaths. Bar color corresponds to severity categories, qualitatively assigned in a textual analysis of CDC Flu Season Summaries

Many epidemiological analyses utilize aggregate measures of patient-level severity, such as case-fatality and case-hospitalization risk, to assess the severity of pandemic and emerging outbreaks [10–14]. Other studies model the relationship between excess mortality and morbidity rates [15, 16] or threshold excess pneumonia and influenza (P&I) mortality rates in order to identify severe flu seasons. The CDC has recently adopted a population-level severity framework for influenza pandemics that incorporates both clinical severity and transmissibility metrics, but the clinical severity component remains closely tied to case-fatality and similar ratios [12]. These measures of severity based on mortality and hospitalization only capture one facet of the experience of flu across the population [4, 17], and are also limited by the availability of data. P&I mortality data are not collected in real-time by many national flu surveillance systems (e.g., in the European Union), and laboratory-confirmed hospitalization and mortality data that are collected are available with some delays (e.g., U.S. hospitalization data are backfilled due to data processing times) and for limited age groups (e.g., only laboratory-confirmed pediatric mortality is reported in the U.S.). Additionally, while hospitalization and mortality remain the accepted measures of influenza severity, there is no composite quantitative metric (used by the CDC or others) that synthesizes the varying acute effects imposed by the disease.

In this work, we develop novel severity assessment metrics that synthesize traditional severity measures on viral activity, hospitalizations, and deaths in the United States, and explore how the age patterns in influenza-like illness (ILI) among the healthiest and largest segments of the population (children and working-age adults) may be used as proxies of population-level severity. Based on publicly-available epidemiological data, we first derive a composite benchmark that will serve as a quantitative ground truth for population-level influenza severity. Using a high coverage outpatient ILI dataset based on medical claims data from the United States, we then introduce two novel influenza severity metrics: 1) a retrospective index based on ILI age dynamics, which can aid in epidemiological analysis and the evaluation of public health responses using a commonly collected single data source; 2) an early warning index, estimated prior to the epidemic peak, which can help physicians improve patient-level communication, diagnosis, and treatment and inform decision makers on communication strategies regarding vaccination and antiviral usage.

Methods

Severity benchmark for each influenza season

We first created a synthetic composite benchmark for each season to represent a quantitative ‘gold standard’ indicator of severity. This benchmark integrated the following publicly available CDC surveillance data, aggregated to the flu season level: 1) percentage of influenza-positive laboratory confirmations among all tested respiratory specimens, 2) laboratory-confirmed influenza hospitalization rates among individuals five to seventeen years old and 3) eighteen to forty-nine years old, 4) number of laboratory-confirmed influenza deaths in children under 18 years, and 5) proportion of all deaths due to pneumonia and influenza (P&I) (time series displayed for the period from 1997–98 to 2013–14 in Fig. 1 b-d). Data for items one through four may be accessed online through CDC’s FluView Interactive application [18]; data for item five may be accessed through the CDC WONDER Morbidity and Mortality Weekly Report web application [19]. Historically, CDC has used these surveillance sources to qualitatively consider multiple facets of influenza season severity. CDC’s outpatient ILI surveillance system, ILINet (Fig. 1 a), was another historical indicator of severity, but it was excluded from the benchmark in order to prevent confounding when comparing the benchmark to an ILI-based severity index.

We generated a composite benchmark value for each flu season (\(\beta_{s}\), where s denotes the season) for the 16 seasons from 1997–98 to 2013–14 (excluding the 2009 H1N1 pandemic), which represented the entire period for which CDC provided public flu surveillance data. First, we applied a log transformation to the rate and count data streams (i.e., hospitalization rates, pediatric deaths) and a logit transformation to proportion and percent data (i.e., positive lab confirmations, P&I deaths) in order to put the various data types on the same scale. Second, we standardized each of these ‘raw’ metrics (\(\theta_{i, s}\), where i denotes the data stream and s denotes the season) by the mean (\(\mu _{\theta _{i}}\)) and standard deviation (\(\sigma _{\theta _{i}}\)) across all available flu seasons, such that \(\theta _{i, s}^{*} = (\theta _{i, s} - \mu _{\theta _{i}})/\sigma _{\theta _{i}}\), where * denotes the standardized metric. We took the mean of all available standardized raw metrics \(\left(\theta _{i, s}^{*}\right)\) to generate the composite benchmark for a given season, \(\beta _{s} = \left(\sum \limits _{i=1}^{n_{\theta }} \theta _{i, s}^{*}\right)/n_{\theta }\), where \(n_{\theta}\) is the number of contributing data streams. Larger values of \(\beta_{s}\) indicate more severe seasons according to the benchmark, and vice versa.
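The transform-standardize-average steps above can be sketched in a few lines of NumPy. This is a minimal illustration under our own naming conventions (the paper publishes no code); `composite_benchmark`, its arguments, and the `'rate'`/`'proportion'` labels are ours, and missing seasons are represented here as NaN so that unavailable streams simply drop out of the season mean, mirroring the handling described in the next paragraph.

```python
import numpy as np

def composite_benchmark(raw_metrics, kinds):
    """Sketch of the composite benchmark beta_s described above.

    raw_metrics: dict mapping data-stream name -> per-season values
                 (positive rates/counts or proportions in (0, 1);
                 np.nan where a stream did not report that season)
    kinds: dict mapping stream name -> 'rate' (log transform) or
           'proportion' (logit transform)
    Returns an array of beta_s values, one per season.
    """
    standardized = []
    for name, theta in raw_metrics.items():
        theta = np.asarray(theta, dtype=float)
        if kinds[name] == 'rate':
            x = np.log(theta)                  # log for rates and counts
        else:
            x = np.log(theta / (1.0 - theta))  # logit for proportions
        # standardize by the mean/sd over all available seasons
        mu, sigma = np.nanmean(x), np.nanstd(x)
        standardized.append((x - mu) / sigma)
    # beta_s: mean of the available standardized metrics for each season
    return np.nanmean(np.vstack(standardized), axis=0)
```

Because each stream is standardized before averaging, a stream reported on a very different scale (e.g., deaths vs. percent positivity) contributes equally to the composite.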

Surveillance systems did not contribute to β s when data were unavailable (Additional file 1: Table S2). Alternative standardization periods were considered in sensitivity analysis; the rank order of seasons according to β s was mildly sensitive to these methodological changes and may be appropriate for severity assessment in different research contexts (Additional file 1: Figure S2).

For comparison, we determined categorical severity classifications (i.e., mild, moderate, severe) from a qualitative analysis of CDC influenza season summaries and Morbidity and Mortality Weekly Reports. This method is further described in Additional file 1: Section 3.1 and Additional file 2. These severity categories were used to provide additional context to Fig. 1 e and Fig. 3.

ILI medical claims data

The primary data for the remainder of our analysis comprised weekly physicians’ office and outpatient visits from October 2001 to May 2009 for influenza-like illness from a records-level database of CMS-1500 U.S. medical claims (Fig. 1 a). This medical claims dataset incorporated 934 three-digit physician office U.S. zipcode prefixes (zip3s) and physician coverage increased from 22 to 70 % over the course of the study period; data were collected from 408,606 of 581,876 active physician practices during the 2008–09 flu season. We used a synthetic ILI indicator to represent influenza activity; this indicator was derived and validated in a previous study from a set of International Classification of Diseases, Ninth Revision (ICD-9) codes – influenza (487–488) or [fever and (respiratory symptoms or febrile viral illness) (780.6 and (462 or 786.2))] or prescription of oseltamivir (most commonly, 079.99) [20]. Recent analysis finds that ILI claims data accurately capture weekly fluctuations in influenza activity and season level intensity at high resolutions by age group and geographic location [20] and can be used to monitor the spatial spread of the disease [21]. See Additional file 1: SM section S1 for statements on ethics and data access. We considered the population of school-age children as 5–19 years old and working-age adults as 20–59 years old. Data were adjusted for differences in temporal coverage and age-specific care-seeking behavior. See Additional file 1: SM section S2 for further details on ILI data processing.

Retrospective and early warning severity indexes based on ILI

Based on results from exploratory analysis (see Additional file 1: SM section S4), our first step towards developing a severity index was to calculate the relative risk (RR) of adjusted ILI (see Additional file 1: SM section S2) among adults to that in school-aged children: \({RR}_{s}(t) = A_{s}(t)/C_{s}(t)\), where \(A_{s}(t)\) and \(C_{s}(t)\) are the number of ILI cases captured in the surveillance system in a given week (t) in a season (s), divided by the group’s population size, in adults and school-age children, respectively (Additional file 1: Figure S6a). Since ILI is not restricted to laboratory-confirmed flu cases and baseline ILI activity varies from year to year, we standardized each season’s weekly RR time series (\(\rho_{s}(t)\)), such that \(\rho _{s}(t) = ({RR}_{s}(t) - \mu _{{RR}_{s}})/\sigma _{{RR}_{s}}\), where \(\mu _{{RR}_{s}}\) and \(\sigma _{{RR}_{s}}\) were the mean and standard deviation of \({RR}_{s}(t)\) values during a specified baseline period. We defined this baseline period as the beginning of October to mid-November (weeks 40–46).
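The weekly standardization above can be sketched as follows; a minimal illustration with names of our own choosing, assuming the season's weekly series starts at week 40 so that its first seven entries form the weeks 40–46 baseline (the paper itself publishes no code).

```python
import numpy as np

def standardized_rr(adult_ili, child_ili, adult_pop, child_pop,
                    baseline_weeks=slice(0, 7)):
    """Weekly standardized relative risk rho_s(t) as defined above.

    adult_ili, child_ili: weekly ILI case counts over one season,
        assumed to begin at week 40 so the first 7 entries cover the
        October-to-mid-November baseline (weeks 40-46).
    Returns rho_s(t) for every week of the season.
    """
    # RR_s(t) = A_s(t) / C_s(t), each an age-specific per-capita rate
    rr = (np.asarray(adult_ili, float) / adult_pop) / \
         (np.asarray(child_ili, float) / child_pop)
    # standardize by the mean/sd of RR during the baseline period
    mu = rr[baseline_weeks].mean()
    sigma = rr[baseline_weeks].std()
    return (rr - mu) / sigma
```

By construction, \(\rho_{s}(t)\) averages to zero over the baseline weeks, so later departures measure how unusual the adult-to-child case ratio has become relative to early-season activity.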

Two severity classification periods were identified under our framework, the retrospective (r) and early warning (w) periods. These periods were the only weeks t when \(\rho_{s}(t)\) was significantly correlated with \(\beta_{s}\), which indicated the uniqueness of the signal detection during these periods. See Additional file 1: Section 5.1 and Additional file 1: Figure S4 for further detail on the identification of these periods. The retrospective period was the two-week period that began three weeks before the ILI peak in a given flu season's ILI curve; this period can only be identified retrospective to the epidemic peak. The early warning period was the two-week period that began two weeks after the Thanksgiving holiday in the United States. We defined retrospective severity (\(\overline {\rho _{s, r}}\)) as the mean of \(\rho_{s}(t)\) values during the retrospective period and early warning severity (\(\overline {\rho _{s, w}}\)) as the mean of \(\rho_{s}(t)\) values during the early warning period.
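The two index definitions reduce to window means over the \(\rho_{s}(t)\) series. A minimal sketch, with the caveat that the week arguments here are positions within the season's array (our indexing convention, not the paper's), not calendar week numbers:

```python
import numpy as np

def retrospective_severity(rho, peak_week):
    """Mean of rho_s(t) over the two-week window that begins three
    weeks before the season's ILI peak (identifiable only after the
    peak has passed)."""
    start = peak_week - 3
    return float(np.mean(rho[start:start + 2]))

def early_warning_severity(rho, thanksgiving_week):
    """Mean of rho_s(t) over the two-week window that begins two
    weeks after the Thanksgiving holiday."""
    start = thanksgiving_week + 2
    return float(np.mean(rho[start:start + 2]))
```

The retrospective window is anchored to the observed epidemic peak, so it shifts season by season, while the early warning window is fixed to the calendar and can be computed before the epidemic peaks.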

Retrospective severity captured the disease dynamics of the primary epidemic growth period and could only be assessed after the epidemic peak had passed, while early warning severity provided an earlier assessment of severity between the Thanksgiving and winter holidays. Severity was also reasonably well estimated with the age-specific ILI relative risk over the entire flu epidemic period, but use of the two-week retrospective period was preferred as it requires less data (Additional file 1: Figure S5). Early warning severity was not reported for early flu seasons (e.g., 2003–04), in which the early warning period coincided with the epidemic peak. To compare \(\beta_{s}\) to \(\overline {\rho _{s, r}}\) or \(\overline {\rho _{s, w}}\), we calculated Pearson’s R correlation coefficients (\(H_{0}: R=0\)) and reported p-values from a two-sided test of permutations without replacement. We also compared retrospective severity to traditional severity metrics, circulation of H3 strains, vaccine match, and vaccine efficacy for seasons where these data were publicly available from CDC or reported in other studies [22] (Additional file 1: SM section S5). The primary analysis constructed and validated indexes developed from the medical claims ILI data, but a secondary analysis applied the same methods to construct relative-risk-based indexes from publicly available data from CDC’s ILINet, and compared \(\beta_{s}\) to these relative-risk-based indexes, \(\overline {\rho _{s, r}^{cdc}}\) and \(\overline {\rho _{s, w}^{cdc}}\) (Additional file 1: SM section S8).
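A two-sided permutation test of Pearson's R with \(H_{0}: R=0\), as used for these comparisons, can be sketched as below. The function name and interface are ours; 1000 permutations matches the count the paper reports for its state-level tests, and we fix a random seed purely for reproducibility of the sketch.

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=1000, seed=0):
    """Two-sided permutation test (without replacement) for
    Pearson's R, H0: R = 0.

    Returns (observed R, p-value), where the p-value is the fraction
    of label permutations whose |R| meets or exceeds the observed |R|.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    r_obs = np.corrcoef(x, y)[0, 1]
    rng = np.random.default_rng(seed)
    # null distribution: correlation after shuffling y against x
    r_null = np.array([np.corrcoef(x, rng.permutation(y))[0, 1]
                       for _ in range(n_perm)])
    return r_obs, float(np.mean(np.abs(r_null) >= abs(r_obs)))
```

Permutation tests are a natural choice here because only eight seasons are available, too few to rely on the asymptotic distribution of Pearson's R.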

We assessed the sensitivity of the retrospective severity rank order to baseline period duration and found that the retrospective severity index was somewhat sensitive to changing baseline periods (Additional file 1: Figure S6b-d), but that our chosen period best represents baseline age dynamics (Additional file 1: Figure S7). We also performed analyses with ILI rates in excess of a seasonal baseline, and found that age dynamic patterns of relative risk remained similar for the medical claims data (Additional file 1: Figure S8).

State-level analyses

To study regional patterns in influenza severity, we calculated relative-risk-based severity indexes for each U.S. state with the available, aggregated zip3-level data (see Additional file 1: SM section S6). State-level retrospective severity (\(\overline {\rho _{s, r}(\tau)}\)) and early warning severity (\(\overline {\rho _{s, w}(\tau)}\)), where states are represented as τ, were calculated with methods similar to the national-level indexes. The state-level retrospective period was tied to a state’s peak ILI week. (For example, in season s, California’s retrospective severity (\(\overline {\rho _{s, r}(\tau)}\)) is the mean over the two-week period beginning three weeks before California’s peak ILI week.) In these analyses, the national retrospective and early warning indexes remain notated \(\overline {\rho _{s, r}}\) and \(\overline {\rho _{s, w}}\), respectively.

State-level retrospective severity was examined for each season. To identify states that may have had more severe or mild seasons relative to the rest of the United States, we calculated the state deviation from the national baseline as the relative difference between state and national retrospective indexes: (\((\overline {\rho _{s, r}(\tau)} - \overline {\rho _{s, r}})/|\overline {\rho _{s, r}}|\)). To identify possible “sentinel” states for national influenza severity, we compared Pearson’s R correlation coefficients (\(H_{0}: R=0\)) between state-level early warning (\(\overline {\rho _{s, w}(\tau)}\)) and national retrospective severity (\(\overline {\rho _{s, r}}\)) across seven study seasons (excludes 2003–04, where the early warning period occurred after the epidemic start). Tests across states were treated as independent, and p-values were calculated with a two-sided test of 1000 permutations without replacement.
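The state deviation formula, and the percentile rule later used to flag consistently severe states (75th or 70th percentile of per-season deviations above zero), can be sketched as follows. Function names, the dict layout, and the flagging helper are our own illustrative choices, not the paper's code.

```python
import numpy as np

def state_deviation(state_rho_r, national_rho_r):
    """Relative difference between a state's retrospective severity
    and the national index for one season, as defined above."""
    return (state_rho_r - national_rho_r) / abs(national_rho_r)

def flag_severe_states(devs_by_state, pct=75):
    """Flag states whose given percentile of per-season deviations
    exceeds zero (pct=75 for strongly flagged states, pct=70 for the
    weaker set).

    devs_by_state: dict mapping state -> list of per-season deviations.
    """
    return [state for state, devs in devs_by_state.items()
            if np.percentile(devs, pct) > 0]
```

Dividing by the absolute value of the national index keeps the sign of the deviation meaningful even in seasons where the national index itself is negative.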

Results

Severity benchmark

The composite severity benchmark (β s ) identified 1997–98, 2000–01, 2002–03, 2005–06, 2006–07, and 2011–12 as the mildest seasons and 1999-00, 2003–04, 2010–11, and 2012–13 as the most severe seasons across the period from 1997–98 to 2013–14 (excludes the 2009–10 pandemic year) (Fig. 1 e). While the peak percentage of influenza-positive test samples appeared higher among the most severe seasons, this data stream did not differentiate the mildest from the more moderate seasons (Fig. 1 b). Laboratory-confirmed hospitalization rates and pediatric deaths varied across seasons, and only in more recent years (2010–14) did these measures appear to match benchmark severity magnitude (Fig. 1 c). Peak P&I mortality was greatest among three of the four most severe seasons according to the benchmark, but the mildest and more moderate seasons had less clear separation.

Benchmark severity magnitude was not uniquely captured by any single contributing metric, supporting our use of a composite benchmark measure (Fig. 1 b-d and Additional file 1: Figure S1). For example, the 2006–07 season was one of the mildest seasons according to the benchmark, and it had the lowest rates of child and adult hospitalization and P&I mortality compared to other seasons, but relatively high counts in pediatric deaths, suggesting that seasons could have mixed indications of severity across different data streams. More severe seasons like 1999-00, 2003–04, and 2012–13 tended to have high P&I mortality at the peak, but they did not necessarily have a greater percentage of influenza-positive laboratory tests. Moreover, high P&I mortality was not a sufficient condition to indicate severity, as demonstrated by the severe 2010–11 season. In comparing the data across seasons, the benchmark integrated these indicators into a single quantitative value that captured the magnitude of these multiple facets of severity.

Measuring severity through age-specific illness risks

We were motivated to study ILI age patterns for epidemiological and empirical reasons. While elderly and young child populations are considered high-risk for severe influenza outcomes and are the traditional source of direct measurements of influenza severity, we adopted an indirect approach by considering ILI rates in high transmission age groups: adults and children. Children are thought to play an important role in influenza transmission due to high numbers of contacts [23, 24], while working-age adults represent a large part of the population, bridge contact between age groups, and have greater within-group contact heterogeneity [24, 25]. We operationalized this relationship by using weekly ILI data (Fig. 2 a) to consider a weekly proxy of age-specific disease burden, ρ s (t), which is a standardized relative risk of adult to child ILI rates at week t (Fig. 2 b). We emphasize that our metric is not a proxy for seasonal transmissibility; rather, it is formulated from the relative age distribution of cases.
Fig. 2

Influenza age dynamics differ from overall epidemic dynamics. a Medically attended outpatient ILI visits per 100,000 for the 2001–02 through 2008–09 flu seasons, adjusted for increasing surveillance data coverage and ILI care-seeking behavior, are displayed. The national early warning and retrospective classification periods are overlaid in green and black, respectively. b The normalized relative risk of adult ILI to child ILI rates (ρ s (t)), a proxy of age-specific disease burden, follows a regular seasonal pattern during the U.S. Thanksgiving and winter holiday periods, and diverges during the typical epidemic growth periods of January and February (around weeks 2–7)

We compared our relative risk-based severity measures (retrospective \(\overline {\rho _{s, r}}\) and early warning \(\overline {\rho _{s, w}}\)) with quantitative classifications such as the benchmark (\(\beta_{s}\)) and other traditional severity metrics. During the 2001–02 to 2008–09 study period, retrospective severity (\(\overline {\rho _{s, r}}\)) identified 2002–03, 2006–07, and 2008–09 as the mildest seasons and 2003–04 and 2007–08 as the most severe seasons. Retrospective severity was moderately correlated with the benchmark (Pearson’s R = 0.71, p-value = 0.05 when compared to \(\beta_{s}\) classifications) (Fig. 3 a). The early warning index (\(\overline {\rho _{s, w}}\)) projected 2007–08 as relatively severe and 2002–03 and 2006–07 as relatively mild; the correlation was weaker with the benchmark (Pearson’s R = 0.59, p-value = 0.16) (Fig. 3 b). Note that the 2003–04 season was removed from this analysis because it peaked during the early warning period (Fig. 2 a). Among traditional severity metrics, total season ILI visits also had a positive relationship with retrospective severity \(\overline {\rho _{s, r}}\) (Additional file 1: Figure S9). Proportion of H3 subtype circulation had a weak positive relationship (Additional file 1: Figure S10) while a proxy of vaccine match had a negative relationship with retrospective severity \(\overline {\rho _{s, r}}\) (Additional file 1: Figure S11).

Next, we repeated this analysis where ILINet, the traditional ILI surveillance system maintained by the CDC, was used instead of medical claims data to calculate the relative risk severity indexes. The early warning index \(\left (\overline {\rho _{s, w}^{cdc}}\right)\) did not appear to have a linear relationship with β s . Nevertheless, we found that the retrospective index \(\left (\overline {\rho _{s, r}^{cdc}}\right)\) had a strong positive relationship with β s (Pearson’s R = 0.64, p-value = 0.01) (Additional file 1: Figure S13), and that the retrospective indexes for ILINet and the medical claims had a strong positive relationship to each other (Pearson’s R = 0.78, p-value = 0.02) (Additional file 1: Figure S14).
Fig. 3

Retrospective and early warning severity indexes compared to the benchmark. a Retrospective severity (\(\overline {\rho _{s, r}}\)) has a positive relationship with the benchmark (R = 0.71, p-value = 0.05). b Early warning severity (\(\overline {\rho _{s, w}}\)) has a positive relationship with the benchmark (R = 0.59, p-value = 0.16). The 2003–04 season was removed because it was an early flu season and the early warning period occurred after the retrospective period. Point color corresponds to qualitatively-assigned severity category, where red is severe, yellow is moderate, and blue is mild

State-level severity patterns and sentinels

We examined spatial severity patterns by calculating retrospective and early warning indexes from age-specific ILI rates at the state-level (based on the medical claims data). Regardless of national retrospective severity (\(\overline {\rho _{s, r}}\)), state-level retrospective severity (\(\overline {\rho _{s, r}(\tau)}\)) could range from mild to severe in a single season (Fig. 4 a). Across the eight study seasons, the adjacent Mid-Atlantic states of Virginia and North Carolina may have experienced more severe seasons than national \(\overline {\rho _{s, r}}\) (75th percentile of state deviation was above zero), and other adjacent Mid-Atlantic and Midwestern states like Ohio, Pennsylvania, Florida, South Carolina, and Maryland may have experienced somewhat more severe seasons (70th percentile of state deviation was above zero) (Fig. 4 b). No state was highlighted for experiencing milder flu seasons than the rest of the U.S., but western states had the lowest median \(\overline {\rho _{s, r}(\tau)}\) indexes across the study period (Additional file 1: Figure S12).
Fig. 4

State-level patterns of seasonal influenza severity. a State retrospective severity (\(\overline {\rho _{s, r}(\tau)}\)) may range from mild to severe in a single season regardless of the national retrospective severity index (\(\overline {\rho _{s, r}}\)). The 2007–08 (left) and 2008–09 (right) seasons, where \(\overline {\rho _{s, r}}\) values were 16 and −9, respectively, are displayed. States in white did not have sufficient data to calculate a retrospective severity index. b Deviation between state (\(\overline {\rho _{s, r}(\tau)}\)) and national retrospective severity (\(\overline {\rho _{s, r}}\)) across the eight study seasons was used to identify states that tend to experience more severe flu seasons than other states. The 75th and 70th percentiles exceeded zero for red and orange highlighted states, respectively. c Pearson’s R correlation coefficients (\(H_{0}: R=0\)) between state early warning (\(\overline {\rho _{s, w}(\tau)}\)) and national retrospective (\(\overline {\rho _{s, r}}\)) classifications were used to suggest possible ‘sentinel’ states. Only coefficients for Illinois, Virginia, Colorado and Maine had p-values below 0.05. States in white did not have enough data to calculate at least one of the two metrics for at least one study season

In a separate analysis, we explored whether “sentinel” states, where early warning (\(\overline {\rho _{s, w}(\tau)}\)) was strongly correlated with national retrospective severity (\(\overline {\rho _{s, r}}\)), could be identified. In Fig. 4 c, we examined correlation coefficients between \(\overline {\rho _{s, w}(\tau)}\) and \(\overline {\rho _{s, r}}\) among the 36 states with data available for the seven study seasons (excludes the early 2003–04 season). Illinois and Virginia had early warning indexes \(\overline {\rho _{s, w}(\tau)}\) with strong positive correlations with \(\overline {\rho _{s, r}}\) (Pearson’s R = 0.82, 0.72; p-values = 0.01, 0.04, respectively), while Colorado and Maine had strong negative correlations with \(\overline {\rho _{s, r}}\) (Pearson’s R = −0.80, −0.71; p-values = 0.02, 0.03, respectively).

Discussion

In this study, we have developed a composite indicator that synthesizes different influenza data streams to provide a quantitative benchmark of seasonal influenza severity. We have also developed a novel severity index based on age-related patterns of influenza-like illness that can be used in both retrospective and early warning contexts. Motivated by our finding that adult ILI visits were highly correlated with traditional measures of severity like hospitalization and deaths, we developed a proxy for influenza severity based on the ratio of ILI risk among adults relative to that among children. As school-aged children and adults are at the lowest risk for seasonal influenza complications and death [26], our metric seeks to measure signals of severity indirectly through populations that are well-represented in influenza case data and well-connected to high-risk populations. The retrospective severity index had a positive correlation with the benchmark, while the early warning index tended to err conservatively from the standpoint of public health (i.e., early warning signals predicted more severe seasons than occurred).

We constructed the composite severity benchmark to synthesize publicly available influenza surveillance data in the United States, and have shown that it agrees with epidemiological understanding of historical CDC reports of past influenza seasons. The benchmark thus captures multiple facets of severity in composite form and fills a gap in the current literature where quantitative ground truth measures of population-level influenza severity are absent. With additional data availability, future applications of the benchmark may add weights to contributing data streams or apply alternative normalization methods according to researcher or practitioner needs. Despite its contribution to public health, this measure remains limited by its contributing data sources: these data streams are not available in real-time, their data collection methods and definitions may change substantially across seasons, and they are not readily collected at different spatial scales or in different countries.

Our novel relative risk-based severity indexes based on ILI age patterns aim to address the limitations of traditional severity measures. The retrospective index may inform public health systems evaluations and enable historical analysis of severe season attributes, which will improve our understanding of influenza disease ecology. In relation to existing severity measures, this index can be used with a single data stream, and a source of data (i.e., ILI) that is commonly collected in routine influenza surveillance in many countries and at local departments of health. The performance of our early warning index remains modest, perhaps owing to the limited number of seasons available for study. In theory, however, this or an improved early warning index, determined 9–12 weeks before the typical epidemic peak, may enable clinicians to make informed decisions about patient diagnosis and treatment strategies, and help hospitals to plan staffing and supply logistics during an outbreak. Individual health-related behaviors may change during an epidemic as a result of health communication campaigns regarding pharmaceutical [27] and behavioral [28, 29] interventions; the early warning index presents a novel attempt at real-time severity estimation in support of such efforts. To make the use of our metrics more intuitive and to provide an example of how they may be used in an operational context, we map the retrospective severity index to functional indicators of influenza burden, including peak ILI, hospitalization, and mortality rate in Fig. 5. (See Additional file 1: SM section S7 for the calculation of operational indicators).
Fig. 5

Translation of retrospective severity to operational indicators of the burden of influenza. The retrospective severity index (\(\overline {\rho _{s, r}}\)) may be mapped to historical data on cumulative confirmed influenza-related hospitalizations per 100,000, peak week outpatient visits due to ILI (ILINet), and seasonal excess P&I mortality rates per 100,000 in order to inform decision makers about the expected range of disease burden in a given season. Error bars represent the standard deviation in state-level variation of the excess P&I mortality rate, and bar color represents a milder to more severe retrospective severity index value (dark blue to dark red)

The extension of our index to state-level patterns highlights how scalable severity metrics have the potential to improve the observation of broader epidemiological trends and forge new directions (e.g., spatial signals of early warning) to inform public health preparedness. The low data requirements of the relative risk-based indexes enable continued future study over longer time periods, which may help elucidate the mechanisms that drive spatial variation in severity within individual seasons. Additional validation of state-level severity indexes is needed, but the future identification of robust state sentinels could improve multi-scale planning and coordination efforts months before resources are widely demanded.

Instead of focusing on the elderly and young children as traditional high-risk groups [30], our retrospective and early warning indexes look for indirect signals of severity in the disease dynamics of ‘healthier’ populations. Measurement of ILI among high-risk groups at outpatient facilities may be unreliable, as those groups may seek care at hospitals for severe pathology. Instead, we use the more reliable signals provided by measurement of ILI among working-age adults and school-aged children. We posit that school-aged children experience substantial flu morbidity every season because they have many potential disease-causing contacts [24, 31, 32] and greater susceptibility due to limited prior exposure to influenza. We hypothesize that adults have fewer contacts and greater prior exposure than children, so they experience high flu activity only when the flu season is severe, regardless of whether the cause is strain novelty, higher transmissibility, greater virulence, or some combination of factors. High connectivity between adults and other age groups [24] and the role of adults in seeding new regions [33, 34] may underlie our observation that seasons with high burden in adult populations tend to be severe for the entire population. In demonstrating the potential of this metric, we call for the continued collection of age-specific ILI data and additional research on the development of thresholds to define and differentiate mild from severe seasons.
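The core quantity behind this reasoning, the relative risk of ILI among working-age adults versus school-aged children, can be sketched as follows. This is an illustrative simplification, not the paper's exact formulation (the precise index definition, the weeks included, and the care-seeking adjustments appear in the Methods and supplementary material); the function names and sample numbers are hypothetical.

```python
def relative_risk(adult_ili, adult_pop, child_ili, child_pop):
    """Ratio of ILI attack rates: working-age adults vs school-aged children."""
    return (adult_ili / adult_pop) / (child_ili / child_pop)

def retrospective_index(weekly_adult, weekly_child, adult_pop, child_pop):
    """Average the weekly relative risk across the weeks of an epidemic period."""
    weekly_rr = [relative_risk(a, adult_pop, c, child_pop)
                 for a, c in zip(weekly_adult, weekly_child)]
    return sum(weekly_rr) / len(weekly_rr)

# Equal attack rates in both groups give a relative risk of 1
assert relative_risk(100, 10_000, 50, 5_000) == 1.0

# A season with elevated adult ILI relative to children scores higher,
# consistent with the hypothesis that adult burden signals severity
mild = retrospective_index([10, 20, 15], [40, 80, 60], 10_000, 5_000)
severe = retrospective_index([30, 60, 45], [40, 80, 60], 10_000, 5_000)
assert severe > mild
```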

Further work is needed to improve severity index signal detection in the early warning period, and extra caution should be taken when making decisions based on the early warning index. This period sometimes coincides with low influenza circulation, allowing pathogens like respiratory syncytial virus (RSV) and Haemophilus influenzae to confound the ILI age dynamics used in our index [35]; however, we note that low circulation is rare in our study period (Additional file 1: Table S3). Additionally, the fixed nature of the early warning period limits its utility for early-peaking flu seasons (e.g., 2003–04). Future research should explore methods to represent uncertainty in severity assessments; acting on incorrect predictions could overburden the health care system or lead to the inefficient use of resources, and a mismatch between expectations and reality could erode public trust in public health agencies. Moreover, the early warning index did not perform well on ILINet surveillance data; this may be explained by ILINet’s smaller sample size (roughly 1,900 providers submitted weekly reports to ILINet in 2013–14, while over 400,000 physicians reported to the medical claims data in 2008–09) and its narrower syndromic definition of flu compared to the medical claims data, both of which could limit the detection and classification of influenza activity during the early warning period (Additional file 1: Figure S13). Nevertheless, our observations of ILI age dynamics in this early warning period (around weeks 49–52) lead us to hypothesize that the predictable shifts in age dynamics during the adjacent Thanksgiving and winter holidays, possibly driven by reduced contact rates, create an insulated ‘severity testbed’ for improved signal detection during these weeks. Future research on holiday age dynamics and early flu seasons with different ILI surveillance systems may in fact reveal that the early warning index is limited to use in the United States.

The relative risk-based severity indexes are limited in their detection capabilities for influenza pandemics. Pandemic events are characterized by different distributions of age risk, which may alter the severity classifications provided by our index; an initial pandemic wave may be dominated by morbidity among school-aged children, and empirical and modeling studies suggest that adults are more likely to become infected in the season following a pandemic [5, 36–39]. Moreover, there appears to be an accumulation of heterosubtypic immunity to pandemic strains with age [40]. Our index would not capture severity in the first and second waves of pandemic virus circulation, which is why we exclude the 2009–10 season from our analysis, and unstable age dynamics in post-pandemic seasons may explain the poor performance of recent seasons in the ILINet analyses (Additional file 1: Figure S13c-d).

Our novel severity index relies on real-time age-specific medical claims data for ILI, which do not appear to share the disadvantages of flu-related ‘big data’ sources [20, 41]. Traditional ILI surveillance (e.g., ILINet) also provides real-time age-specific data, but the medical claims database represents a more obligatory form of provider reporting, captures ILI activity at least as well as traditional surveillance, and provides higher coverage, greater spatial resolution, and finer age-specific disease information due to its administrative purpose [20]. Medical claims and ILINet data are both subject to physician biases regarding the demographics and seasonality of influenza, as well as to doctor’s office closures. They also carry healthcare-seeking behavior biases; school-aged children have higher rates of healthcare-seeking behavior for ILI than adults (approximately 1.1 to 1.4 times higher) [42–44], which we consider in the construction of our index (see Additional file 1: SM section S2). Additional studies on disparities in insurance and access to care, especially in consideration of ongoing changes to the U.S. health care system, are needed to better quantify biases in medical claims data as compared to other flu surveillance systems.
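One simple way to account for the care-seeking differential described above is to deflate the observed child ILI counts by an assumed child-to-adult care-seeking ratio before computing the relative risk. This is an illustrative sketch only; the paper's actual adjustment is described in Additional file 1: SM section S2, and the function name and default ratio here are our assumptions (the literature range cited is roughly 1.1 to 1.4).

```python
def adjust_child_ili(child_ili_counts, care_seeking_ratio=1.25):
    """Deflate observed child ILI counts so that adult and child counts
    are comparable before computing relative risk.

    care_seeking_ratio: assumed ratio of child-to-adult healthcare-seeking
    rates for ILI (hypothetical default within the cited 1.1-1.4 range).
    """
    return [c / care_seeking_ratio for c in child_ili_counts]

observed = [40, 80, 60]
adjusted = adjust_child_ili(observed, care_seeking_ratio=1.25)
assert adjusted == [32.0, 64.0, 48.0]
```

Omitting this correction would systematically inflate the child attack rate and thus bias the adult-to-child relative risk downward.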

Conclusions

Traditional measures of seasonal influenza severity are limited by their need for multiple data streams and the lack of accurate hospitalization and mortality data in real-time. In our study, relative disease burden among adults and children is proposed as the basis for a novel population-level severity index with retrospective and early warning classification periods, and the index is applied to influenza-like illness data in the United States across multiple seasons and spatial scales. By correctly identifying the two most severe influenza seasons in the study period, this work represents proof of concept that influenza age dynamics may provide epidemiological understanding beyond surveillance data at face value, and our approach may be used by physicians, hospital administrators, and policy makers to make real-time decisions about clinical, logistical, and strategic responses to a seasonal influenza outbreak. While further study of the novel severity metrics is warranted, we recommend that researchers and practitioners consider the use of composite or ILI-based metrics in addition to traditional severity measures for improved epidemiological understanding and situational awareness. Our research raises new questions about causal severity mechanisms; future analyses should determine whether the age patterns characterized in our study are a harbinger or a result of population-level severity, examine the hypothesis that holiday contacts seed broader infection in different age groups [45] or new locations, and examine differences in regional subtype circulation, pre-existing immunity, age distributions, and vaccine coverage rates as mechanisms for spatial variation in severity.

Abbreviations

CDC: 

United States Centers for Disease Control and Prevention

ICD-9: 

International Classification of Diseases, Ninth Revision

ILI: 

influenza-like illness

MMWR: 

Morbidity and Mortality Weekly Reports

P&I: 

pneumonia and influenza

RR: 

relative risk

RSV: 

respiratory syncytial virus

Declarations

Acknowledgments

This work was supported by the Research and Policy for Infectious Disease Dynamics (RAPIDD) program of the Science and Technology Directorate, Department of Homeland Security (DHS), and the Fogarty International Center, National Institutes of Health (NIH). The authors thank Matthew Biggerstaff for providing data on age-specific ILI care-seeking behavior and comments on our draft, Jason Asher for useful suggestions in the state-level analyses, and Vittoria Colizza and Anne Presanis who provided valuable feedback on a previous manuscript version.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Authors’ Affiliations

(1)
Department of Biology, Georgetown University
(2)
Fogarty International Center, National Institutes of Health
(3)
Department of Global Health, George Washington University
(4)
Department of Public Health, University of Copenhagen
(5)
IMS Health
(6)
Pfizer Inc., Collegeville

References

  1. Hardelid P, Andrews N, Pebody R. Excess mortality monitoring in England and Wales during the influenza A(H1N1) 2009 pandemic. Epidemiol Infect. 2011;139(9):1431–9. doi:10.1017/S0950268811000410.
  2. The Eurowinter Group. Cold exposure and winter mortality from ischaemic heart disease, cerebrovascular disease, respiratory disease, and all causes in warm and cold regions of Europe. Lancet. 1997;349:1341–6.
  3. Wolf YI, Nikolskaya A, Cherry JL, Viboud C, Koonin E, Lipman DJ. Projection of seasonal influenza severity from sequence and serological data. PLoS Curr. 2010;2:RRN1200. doi:10.1371/currents.RRN1200.
  4. Simonsen L, Clarke MJ, Williamson GD, Stroup DF, Arden NH, Schonberger LB. The impact of influenza epidemics on mortality: introducing a severity index. Am J Public Health. 1997;87(12):1944–50.
  5. Bansal S, Pourbohloul B, Hupert N, Grenfell B, Meyers LA. The shifting demographic landscape of pandemic influenza. PLoS One. 2010;5(2):e9360. doi:10.1371/journal.pone.0009360.
  6. Thompson WW, Comanor L, Shay DK. Epidemiology of seasonal influenza: use of surveillance data and statistical models to estimate the burden of disease. J Infect Dis. 2006;194(Suppl 2):S82–91.
  7. Thompson MG, Shay DK, Zhou H, Bridges CB, Cheng PY, Burns E, et al. Estimates of deaths associated with seasonal influenza, United States, 1976–2007. Morb Mortal Wkly Rep. 2010;59(33).
  8. Fleming DM, Moult AB, Keene O. Indicators and significance of severity in influenza patients. Int Congr Ser. 2001;1219:637–43.
  9. Frank AL, Taber LH, Wells JM. Comparison of infection rates and severity of illness for influenza A subtypes H1N1 and H3N2. J Infect Dis. 1985;151(1):73–80.
  10. Presanis AM, De Angelis D, Hagy A, Reed C, Riley S, Cooper BS, et al. The severity of pandemic H1N1 influenza in the United States, from April to July 2009: a Bayesian analysis. PLoS Med. 2009;6(12):e1000207. doi:10.1371/journal.pmed.1000207.
  11. Lipsitch M, Finelli L, Heffernan RT, Leung GM, Redd SC. Improving the evidence base for decision making during a pandemic: the example of 2009 influenza A/H1N1. Biosecur Bioterror. 2011;9(2):89–115. doi:10.1089/bsp.2011.0007.
  12. Reed C, Biggerstaff M, Finelli L, Koonin LM, Beauvais D, Uzicanin A, et al. Novel framework for assessing epidemiologic effects of influenza epidemics and pandemics. Emerg Infect Dis. 2013;19(1):85–91.
  13. Garske T, Legrand J, Donnelly CA, Ward H, Cauchemez S, Fraser C, et al. Assessing the severity of the novel influenza A/H1N1 pandemic. BMJ. 2009;339:b2840.
  14. Yu H, Cowling BJ, Feng L, Lau EHY, Liao Q, Tsang TK, et al. Human infection with avian influenza A H7N9 virus: an assessment of clinical severity. Lancet. 2013;382(9887):138–45. doi:10.1016/S0140-6736(13)61207-6.
  15. Denoeud L, Turbelin C, Ansart S, Valleron AJ, Flahault A, Carrat F. Predicting pneumonia and influenza mortality from morbidity data. PLoS One. 2007;2(5):e464. doi:10.1371/journal.pone.0000464.
  16. van den Wijngaard CC, van Asten L, Meijer A, van Pelt W, Nagelkerke NJD, Donker GA, et al. Detection of excess influenza severity: associating respiratory hospitalization and mortality data with reports of influenza-like illness by primary care physicians. Am J Public Health. 2010;100(11):2248–54. doi:10.2105/AJPH.2009.168245.
  17. Simonsen L, Clarke MJ, Stroup DF, Williamson GD, Arden NH, Cox NJ. A method for timely assessment of influenza-associated mortality in the United States. Epidemiology. 1997;8(4):390–5.
  18. Centers for Disease Control and Prevention. FluView Interactive. http://www.cdc.gov/flu/weekly/fluviewinteractive.htm. Accessed: 17 Nov 2015.
  19. Centers for Disease Control and Prevention. MMWR Table III. http://wonder.cdc.gov/mmwr.html. Accessed: 17 Nov 2015.
  20. Viboud C, Charu V, Olson D, Ballesteros S, Gog J, Khan F, et al. Demonstrating the use of high-volume electronic medical claims data to monitor local and regional influenza activity in the US. PLoS One. 2014;9(7):e102429. doi:10.1371/journal.pone.0102429.
  21. Gog JR, Ballesteros S, Viboud C, Simonsen L, Bjornstad ON, Shaman J, et al. Spatial transmission of 2009 pandemic influenza in the US. PLoS Comput Biol. 2014;10(6):e1003635. doi:10.1371/journal.pcbi.1003635.
  22. Osterholm MT, Kelley NS, Sommer A, Belongia EA. Efficacy and effectiveness of influenza vaccines: a systematic review and meta-analysis. Lancet Infect Dis. 2012;12(1):36–44. doi:10.1016/S1473-3099(11)70295-X.
  23. Longini IM, Koopman JS, Monto AS, Fox JP. Estimating household and community transmission parameters for influenza. Am J Epidemiol. 1982;115(5):736–51.
  24. Mossong J, Hens N, Jit M, Beutels P, Auranen K, Mikolajczyk R, et al. Social contacts and mixing patterns relevant to the spread of infectious diseases. PLoS Med. 2008;5(3):e74. doi:10.1371/journal.pmed.0050074.
  25. Van Kerckhove K, Hens N, Edmunds WJ, Eames KTD. The impact of illness on social networks: implications for transmission and control of influenza. Am J Epidemiol. 2013;178(11):1655–62. doi:10.1093/aje/kwt196.
  26. Sebastian R, Skowronski DM, Chong M, Dhaliwal J, Brownstein JS. Age-related trends in the timeliness and prediction of medical visits, hospitalizations and deaths due to pneumonia and influenza, British Columbia, Canada, 1998–2004. Vaccine. 2008;26(10):1397–403. doi:10.1016/j.vaccine.2007.11.090.
  27. Flood EM, Rousculp MD, Ryan KJ, Beusterien KM, Divino VM, Toback SL, et al. Parents’ decision-making regarding vaccinating their children against influenza: a web-based survey. Clin Ther. 2010;32(8):1448–67. doi:10.1016/j.clinthera.2010.06.020.
  28. Park JH, Cheong HK, Son DY, Kim SU, Ha CM. Perceptions and behaviors related to hand hygiene for the prevention of H1N1 influenza transmission among Korean university students during the peak pandemic period. BMC Infect Dis. 2010;10:222. doi:10.1186/1471-2334-10-222.
  29. Timpka T, Spreco A, Gursky E, Eriksson O, Dahlström Ö, Strömgren M, et al. Intentions to perform non-pharmaceutical protective behaviors during influenza outbreaks in Sweden: a cross-sectional study following a mass vaccination campaign. PLoS One. 2014;9(3):e91060. doi:10.1371/journal.pone.0091060.
  30. Thompson WW, Shay DK, Weintraub E, Brammer L, Cox N, Anderson LJ. Mortality associated with influenza and respiratory syncytial virus in the United States. J Am Med Assoc. 2003;289(2):179–86.
  31. Kucharski AJ, Kwok KO, Wei VWI, Cowling BJ, Read JM, Lessler J, et al. The contribution of social behaviour to the transmission of influenza A in a human population. PLoS Pathog. 2014;10(6):e1004206. doi:10.1371/journal.ppat.1004206.
  32. Wallinga J, Teunis P, Kretzschmar M. Using data on social contacts to estimate age-specific transmission parameters for respiratory-spread infectious agents. Am J Epidemiol. 2006;164(10):936–44. doi:10.1093/aje/kwj317.
  33. Viboud C, Bjornstad ON, Smith DL, Simonsen L, Miller MA, Grenfell BT. Synchrony, waves, and spatial hierarchies in the spread of influenza. Science. 2006;312(5772):447–51. doi:10.1126/science.1125237.
  34. Apolloni A, Poletto C, Colizza V. Age-specific contacts and travel patterns in the spatial spread of 2009 H1N1 influenza pandemic. BMC Infect Dis. 2013;13:176. doi:10.1186/1471-2334-13-176.
  35. Falsey AR, Hennessey PA, Formica MA, Cox C, Walsh EE. Respiratory syncytial virus infection in elderly and high-risk adults. N Engl J Med. 2005;352(17):1749–59.
  36. Dávila J, Chowell G, Borja-Aburto VH, Viboud C, Muñiz CG. Substantial morbidity and mortality associated with pandemic A/H1N1 influenza in Mexico, winter 2013–2014: gradual age shift and severity. PLoS Curr Outbreaks. 2014.
  37. Gómez-Gómez A, Magaña-Aquino M, Bernal-Silva S, Araujo-Meléndez J, Comas-García A, Alonso-Zúñiga E, et al. Risk factors for severe influenza A-related pneumonia in adult cohort, Mexico, 2013–14. Emerg Infect Dis. 2014;20(9):1554–8.
  38. Rahamat-Langendoen JC, Tutuhatunewa ED, Schölvinck EH, Hak E, Koopmans M, Niesters HGM, et al. Influenza in the immediate post-pandemic era: a comparison with seasonal and pandemic influenza in hospitalized patients. J Clin Virol. 2012;54(2):135–40. doi:10.1016/j.jcv.2012.02.010.
  39. Skowronski DM, Hottes TS, Janjua NZ, Purych D, Sabaiduc S, Chan T, et al. Prevalence of seroprotection against the pandemic (H1N1) virus after the 2009 pandemic. CMAJ. 2010;182(17):1851–6. doi:10.1503/cmaj.100910.
  40. Epstein SL. Prior H1N1 influenza infection and susceptibility of Cleveland Family Study participants during the H2N2 pandemic of 1957: an experiment of nature. J Infect Dis. 2006;193(1):49–53. doi:10.1086/498980.
  41. Olson DR, Konty KJ, Paladini M, Viboud C, Simonsen L. Reassessing Google Flu Trends data for detection of seasonal and pandemic influenza: a comparative epidemiological study at three geographic scales. PLoS Comput Biol. 2013;9(10):e1003256. doi:10.1371/journal.pcbi.1003256.
  42. Biggerstaff M, Jhung M, Kamimoto L, Balluz L, Finelli L. Self-reported influenza-like illness and receipt of influenza antiviral drugs during the 2009 pandemic, United States, 2009–2010. Am J Public Health. 2012;102(10):21–6. doi:10.2105/AJPH.2012.300651.
  43. Brooks-Pollock E, Tilston N, Edmunds WJ, Eames KTD. Using an online survey of healthcare-seeking behaviour to estimate the magnitude and severity of the 2009 H1N1v influenza epidemic in England. BMC Infect Dis. 2011;11(1):68. doi:10.1186/1471-2334-11-68.
  44. Van Cauteren D, Vaux S, de Valk H, Le Strat Y, Vaillant V, Lévy-Bruhl D. Burden of influenza, healthcare seeking behaviour and hygiene measures during the A(H1N1)2009 pandemic in France: a population based study. BMC Public Health. 2012;12:947. doi:10.1186/1471-2458-12-947.
  45. Eames KTD, Tilston NL, Edmunds WJ. The impact of school holidays on the social mixing patterns of school children. Epidemics. 2011;3(2):103–8. doi:10.1016/j.epidem.2011.03.003.

Copyright

© Lee et al. 2015