Trends in parameterization, economics and host behaviour in influenza pandemic modelling: a review and reporting protocol

               Luis R Carrasco1,2, Mark Jit3, Mark I Chen4,5,6, Vernon J Lee4,7,8, George J Milne9 and Alex R Cook1,4,10

               Emerging Themes in Epidemiology 2013, 10:3

              DOI: 10.1186/1742-7622-10-3

              Received: 1 November 2012

              Accepted: 26 April 2013

              Published: 7 May 2013



               The volume of influenza pandemic modelling studies has increased dramatically in the last decade. Many models now incorporate sophisticated parameterization and validation techniques, economic analyses and the behaviour of individuals.


              We reviewed trends in these aspects in models for influenza pandemic preparedness that aimed to generate policy insights for epidemic management and were published from 2000 to September 2011, i.e. before and after the 2009 pandemic.


               We find that many influenza pandemic models rely on parameters from previous modelling studies, that models are rarely validated against observed data, and that they are seldom applied to low-income countries. Mechanisms for international data sharing would be necessary to facilitate a wider adoption of model validation. The variety of modelling decisions makes it difficult to compare and evaluate models systematically.


              We propose a model Characteristics, Construction, Parameterization and Validation aspects protocol (CCPV protocol) to contribute to the systematisation of the reporting of models with an emphasis on the incorporation of economic aspects and host behaviour. Model reporting, as already exists in many other fields of modelling, would increase confidence in model results, and transparency in their assessment and comparison.


               Keywords: Bayesian inference; Behaviour; Economic analysis; Epistemology of simulation; Influenza; Pandemic modelling


               Influenza pandemics are large-scale phenomena that may result in high morbidity, mortality and large economic impacts worldwide. The influenza pandemic of 1918–9 is believed to have caused excess mortality of 20–40 million people [1]. Influenza pandemics have occurred during the 20th century and beginning of the 21st century at intervals of between 10 and 40 years, with the latest pandemics occurring in 1918–9, 1957–8, 1968–9 [2] and 2009–10 [3]. Pharmaceutical and public health measures can help mitigate the impacts of pandemics [4, 5] and were implemented by many governments during the last pandemic in 2009–10 [6, 7].

               Because empirical or field studies of population-level strategies to control or mitigate influenza pandemics are generally either infeasible (e.g. controlling movement of people within a city) or unethical (e.g. withholding vaccination of subpopulations to assess the effect on transmission), modelling is one of the only suitable methodologies to enable multiple hypothetical pandemic preparedness and mitigation scenarios to be assessed. Epidemic models are especially useful to address epidemiological, economic, and individuals’ behavioural questions [8–10]. The usefulness of epidemic models in directing mitigation efforts has been supported by empirical findings that have echoed previous modelling predictions. For instance, models predicted that reduced international air travel would be unlikely to stop an influenza pandemic [11], a finding later verified empirically during the 2009 H1N1 pandemic [12, 13]; other models predicted the potential of antiviral prophylaxis and contact tracing to control small outbreaks [5], a prediction also verified in real-life outbreaks in semi-closed army camps [14].

              For epidemic models to produce reasonable predictions on the course of the epidemic and how it can be controlled, we need to be confident that the model captures the essential mechanisms that drive the epidemic dynamics [15]. It is therefore essential to parameterize the model from available data [15, 16] and validate the model to increase its credibility [17]. One of the main focuses of this review is to evaluate the trends in the construction and validation of mechanistic models — models that explicitly incorporate the mechanisms or processes underlying the outcomes of the system — of infection dynamics for influenza pandemic preparedness, control and mitigation. Traditionally, the main approach for mechanistic modelling of influenza pandemics has been based on compartmental models (Table 1) represented by systems of differential equations. Compartmental models represented the dynamics of a host-disease system for which a tractable analytical solution could in principle be derived through mathematical methods [8, 18]. It was not until the widespread availability of modern computing power that more complex compartmental models for which analytical solutions could not be derived and agent-based models (ABMs, Table 1) explored using computer simulation became an attractive alternative. Recent modelling work, dealing with the threat of an influenza pandemic of avian origin (A-H5N1), with the severe acute respiratory syndrome (SARS) crisis in 2003 and the H1N1 2009 pandemic, has exemplified the use of both models solved analytically and through simulation [3, 5, 19, 20].
              Table 1

              Definitions of model types

              Compartmental epidemic models

               Models that divide the population according to states relevant to the disease studied and represent the rates at which individuals change state. These models are widely used in epidemic modelling and can be represented by systems of differential or difference equations or stochastic rates. For instance, a SIR compartmental model would divide the population according to whether individuals are susceptible (S), infectious (I) or recovered (R). Basic compartmental models assume perfect mixing between homogeneous individuals but can be expanded to account, for instance, for different transmission rates between age groups (age-structured compartmental models), or other heterogeneities.
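               As an illustration of such a model, the SIR dynamics can be sketched in a few lines of Python; the function name, parameter values and initial conditions below are illustrative choices for this sketch, not taken from any of the reviewed studies.

```python
def simulate_sir(beta, gamma, s0=0.99, i0=0.01, dt=0.1, days=200):
    """Euler integration of the classic SIR equations.

    beta: transmission rate per day; gamma: recovery rate per day.
    Returns the final (S, I, R) fractions of the population.
    """
    s, i, r = s0, i0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # mass-action incidence
        new_rec = gamma * i * dt      # recoveries
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# Illustrative pandemic-like parameters: R0 = beta / gamma = 2
s_final, i_final, r_final = simulate_sir(beta=0.4, gamma=0.2)
```

               With R0 = 2 the simulated final attack rate is roughly 80%, consistent with the classical final-size relation z = 1 - exp(-R0*z).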

              Network or random graph models

               Network (graph) models characterize the contact relationships between individuals. Infection can occur only between individuals (nodes) that share a connection (arcs or edges).
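               A minimal network epidemic can be sketched as a Reed-Frost-style outbreak on an Erdős–Rényi random graph, in which each newly infected node gets one chance to infect each susceptible neighbour; all names and parameter values here are illustrative assumptions for the sketch.

```python
import random

def network_sir(n=500, p_edge=0.02, p_transmit=0.3, seed=1):
    """Reed-Frost-style SIR outbreak on an Erdos-Renyi random graph.

    Each newly infected node tries once to infect each susceptible
    neighbour with probability p_transmit. Returns the outbreak size.
    """
    rng = random.Random(seed)
    # Build the random graph as adjacency lists.
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p_edge:
                adj[u].append(v)
                adj[v].append(u)
    infected = {0}        # index case
    frontier = [0]        # currently infectious generation
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in infected and rng.random() < p_transmit:
                    infected.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(infected)

outbreak_size = network_sir()
```

               With a mean degree of about 10 and per-contact transmission probability 0.3, most seeds produce a large outbreak, while a minority fade out early, mirroring the stochastic threshold behaviour of epidemics on networks.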

              Agent-based models

               These models simulate the actions and interactions of autonomous agents with the aim of observing the aggregate patterns that result from those interactions. Their relevance in epidemic modelling stems from their capacity to represent interactions and decisions at the individual level.

              Metapopulation models

               Metapopulation models originate from ecology and are used to represent distinct populations distributed in separated and discrete habitat patches. The populations can interact through migration. These models are useful in epidemic modelling by making the patches represent cities or other levels of spatial aggregation, thus allowing for the consideration of spatial structure. Although in their original application in ecology they did not consider the dynamics within patches, they are amenable to incorporating the epidemic dynamics within each patch, e.g. using compartmental models.
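               A two-patch metapopulation with SIR dynamics inside each patch can be sketched as follows; the coupling scheme (a fraction `mix` of contacts made with the other patch) and all parameter values are illustrative assumptions.

```python
def two_patch_sir(beta=0.4, gamma=0.2, mix=0.01, dt=0.1, days=300):
    """Two coupled SIR patches (e.g. two cities).

    A fraction `mix` of each patch's contacts is made with the other
    patch; the epidemic is seeded in patch 0 only. Returns the final
    attack rate (R fraction) in each patch.
    """
    S = [0.999, 1.0]
    I = [0.001, 0.0]
    R = [0.0, 0.0]
    for _ in range(int(days / dt)):
        # Force of infection mixes local and remote prevalence.
        lam = [beta * ((1 - mix) * I[k] + mix * I[1 - k]) for k in (0, 1)]
        for k in (0, 1):
            new_inf = lam[k] * S[k] * dt
            new_rec = gamma * I[k] * dt
            S[k] -= new_inf
            I[k] += new_inf - new_rec
            R[k] += new_rec
    return R

attack_rates = two_patch_sir()
```

               Even with weak coupling, the epidemic eventually invades the second patch and both patches approach the same final size; the coupling strength mainly changes the delay between the two local epidemics.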

              Game theoretic models

               Models that study the decisions of an individual when the outcome of those decisions depends on the decisions of other individuals. These models study when cooperation or defection arises from the interaction between individuals under given circumstances. They can be useful in epidemic modelling to explore the incentives that individuals face regarding vaccination, wearing face masks or adopting other preventive behaviour.
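               A simple vaccination game of this kind can be sketched by combining the classical final-size equation with individual best responses: people vaccinate only while the infection risk they face exceeds the relative cost of vaccination. The function names, the use of bisection, and the numbers are all illustrative assumptions for this sketch.

```python
import math

def infection_risk(coverage, r0=2.0):
    """Final-size attack rate among the unvaccinated, solving the
    implicit equation z = 1 - exp(-r0 * (1 - coverage) * z) by
    fixed-point iteration."""
    z = 0.5
    for _ in range(500):
        z = 1 - math.exp(-r0 * (1 - coverage) * z)
    return z

def equilibrium_coverage(cost_ratio, r0=2.0):
    """Nash-equilibrium vaccination coverage: individuals vaccinate
    only while infection risk exceeds the relative cost of the
    vaccine (cost of vaccination / cost of infection). Found by
    bisection, since risk falls as coverage rises."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        v = (lo + hi) / 2
        if infection_risk(v, r0) > cost_ratio:
            lo = v    # risk still too high: more people vaccinate
        else:
            hi = v
    return (lo + hi) / 2

v_star = equilibrium_coverage(cost_ratio=0.1)
```

               A standard qualitative result of this class of models emerges: the equilibrium coverage stays strictly below the herd-immunity threshold 1 - 1/R0 (here 0.5), because as coverage approaches that level the residual infection risk no longer justifies the cost of vaccinating.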

              Optimal control and stochastic programming models

               These are dynamic optimization techniques that aim to find the optimal way to control a system over time. In the case of epidemic modelling, they are useful to investigate, for instance, the optimal deployment of vaccines or antivirals over time to minimize the disease burden or the overall costs generated by the epidemic. They differ from the other models in that, rather than assuming a level of control that is independent of the state of the system, they allow control to vary depending on the state of the system or the final outcome.

              Partial or general computable equilibrium models

               Partial equilibrium models are economic models based on the equilibrium of supply and demand in a single market, assuming that the prices and quantities traded in other markets do not vary. Computable general equilibrium (CGE) models, by contrast, consider the interactions between the markets composing an economy and study the price equilibrium across all the markets considered.

               Pandemic preparedness, control and mitigation modelling has been reviewed previously [21–25]. These reviews show the bewildering array of models that have been introduced, especially since the 2009 pandemic, with different purposes, outcomes and structures. Despite the usefulness of modelling, few public health practitioners or decision makers undergo explicit training in modelling techniques. Combined with the rapid growth in modelling capabilities driven by increasing computing power, and the multitude of disciplines – e.g. economics, psychology, genetics – that contribute to modelling epidemics, this makes it daunting to keep abreast of all that modelling is capable of. To facilitate model understanding, this review focuses on three aspects of pandemic modelling that are currently undergoing substantial innovation: parameterization and validation, economic aspects [26] and behaviour of the hosts [27]. Given the diversity of new techniques in these aspects of modelling, a review of common traits would help non-expert users determine which modelling techniques are most useful for the decisions they face. In addition, a protocol to guide the reporting of these aspects together with model construction would help modellers and policy makers compare and evaluate models. To this end, we review and classify models for influenza pandemic preparedness published from January 2000 to September 2011, and use the resulting analysis to develop a simple guiding protocol for reporting modelling decisions.


              Search strategy and selection criteria

              We searched Google Scholar, PubMed and ISI Web of Knowledge to identify articles focusing on influenza pandemic modelling to inform management strategies (see Additional file 1: Figure S1 in the electronic supplementary material (ESM) for a PRISMA flow diagram [28]). Our search criterion was: contains pandemic AND model* AND influenza AND policy OR policies. Our eligibility criteria were articles that: (i) were published in peer reviewed journals from January 2000 to September 2011; (ii) aimed to advise policy makers and made policy recommendations about pandemic influenza preparedness, mitigation or control; and (iii) employed mechanistic models to derive those insights. We further excluded cost-effectiveness and decision tree studies that did not incorporate disease transmission dynamics. The search in PubMed retrieved 72 articles, ISI Web of Knowledge 128, and Google Scholar 19,200 results. After an additional query refinement in Google Scholar (adding to the previous query the terms: AND preparedness OR strateg* AND simulation OR compartment*), screening of articles and further full-text assessment for their eligibility (Additional file 1: Figure S1 in ESM), 91 articles were selected for the analysis.

              Classification and evaluation of modelling traits

              We classify models into several major groups: compartmental epidemic models, network models, agent-based models, metapopulation models, game theoretic models, optimal control models and partial or general computable equilibrium models (definitions of the models can be found in Table 1). In some instances models can conform to several categories: e.g. compartmental models combined with metapopulation models.

               In addition to classifying the models, we evaluated traits common to several model types, consistent with the focus of the review on trends in parameterization and validation, the incorporation of economic aspects and host behaviour (Table 2). We evaluated the capacity of the models to answer (i) epidemiological questions: for instance, how many people, and in which age groups, were expected to become infected, hospitalized and die as a result of infection during the pandemic? To what extent would control and treatment interventions mitigate this impact?; (ii) health economic questions: what would be the economic impacts of the pandemic and which interventions would represent better value for money in reducing the health and economic impacts?; (iii) behavioural questions: would changes in the behaviour of individuals during the pandemic influence the effectiveness of the interventions?
              Table 2

              Processes for model construction and validation


               Parameterization

               The process of selecting the values or distributions of the model parameters based on empirical data, usually with a random component. Rigorous parameterization is fundamental since the parameter values largely determine the behaviour and predictions of the model.

              Sensitivity and uncertainty analysis

               The study of the influence of the model parameter values on the model outcomes. Sensitivity analysis can vary one parameter at a time (univariate) or multiple parameters simultaneously (multivariate). Comparing model predictions under the baseline parameter values with those under modified values gives an idea of how sensitive the model is to a given parameter. Sensitivity analysis is useful because it enhances the communication of the model, tests the robustness of the results (allowing us to evaluate our confidence in the predictions), increases our understanding of the system and allows the detection of implementation errors.

               Uncertainty analysis evaluates the model response across the plausible ranges of the parameters. It provides information on which parameters generate the most uncertainty in the model outcomes and can help direct data collection efforts.
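               A univariate sensitivity analysis of this kind can be sketched by perturbing one parameter (here R0) around its baseline and recording the change in a model outcome (here the final attack rate from the classical final-size equation); the baseline value and perturbation size are illustrative.

```python
import math

def attack_rate(r0):
    """Final epidemic size z solving z = 1 - exp(-r0 * z)."""
    z = 0.5
    for _ in range(200):
        z = 1 - math.exp(-r0 * z)
    return z

def univariate_sensitivity(baseline_r0=1.8, rel_change=0.2):
    """Vary R0 one-at-a-time by +/- rel_change and report the
    resulting swing in the predicted attack rate."""
    return {
        "low": attack_rate(baseline_r0 * (1 - rel_change)),
        "baseline": attack_rate(baseline_r0),
        "high": attack_rate(baseline_r0 * (1 + rel_change)),
    }

result = univariate_sensitivity()
```

               Even this toy example shows why such analyses matter: a 20% uncertainty in R0 translates into a swing of tens of percentage points in the predicted attack rate.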


               Validation

               The process of investigating whether model predictions are likely to be accurate. Two main types of validation can be distinguished: structural and predictive validation [29]. Structural validity requires that the model reproduces the observed system behaviour and is constructed in accordance with the way the real system operates, i.e. it is consistent with and based on theory. Predictive validation requires that the model accurately predicts data that were not used in its construction. It has also been argued that the credibility of a model might be provided by the credentials of the model-building techniques, which sometimes involve contrary-to-fact principles that increase the reliability of the results [30].

              Least squares

               Standard data-fitting procedure that consists of minimizing the sum of squared differences between the observed data points and the fitted values provided by the model.
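               A least-squares fit of an epidemic parameter can be sketched as follows: generate a synthetic incidence curve from an SIR model with a known transmission rate, then recover that rate by minimizing the sum of squared errors over a grid. All function names, the grid-search strategy and parameter values are illustrative assumptions for this sketch.

```python
def sir_incidence(beta, gamma=0.2, i0=0.001, days=60):
    """Daily new infections (as population fractions) from an
    Euler-integrated SIR model with one-day time steps."""
    s, i = 1.0 - i0, i0
    series = []
    for _ in range(days):
        new_inf = beta * s * i
        s -= new_inf
        i += new_inf - gamma * i
        series.append(new_inf)
    return series

def fit_beta_least_squares(observed, grid):
    """Return the beta on `grid` minimizing the sum of squared
    differences between model and observed incidence."""
    def sse(beta):
        return sum((m - o) ** 2
                   for m, o in zip(sir_incidence(beta), observed))
    return min(grid, key=sse)

# Synthetic 'observed' epidemic generated with beta = 0.35,
# then recovered by the fit.
observed = sir_incidence(0.35)
beta_hat = fit_beta_least_squares(observed, [0.20 + 0.01 * k for k in range(31)])
```

               In practice a continuous optimizer would replace the grid search, but the structure (simulate, compare, minimize) is the same.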

              Maximum likelihood estimation

              Method to estimate the parameters of a model based on data. This method chooses values for which the probability of generating the observed data is highest, given the model.
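               Maximum likelihood estimation can be sketched for the same problem by scoring candidate transmission rates with a Poisson likelihood of daily case counts, rather than squared errors; the choice of a Poisson observation model, the population size and the grid are illustrative assumptions.

```python
import math

def sir_incidence(beta, gamma=0.2, i0=0.001, days=60):
    """Daily new-infection fractions from an Euler-integrated SIR model."""
    s, i = 1.0 - i0, i0
    series = []
    for _ in range(days):
        new_inf = beta * s * i
        s -= new_inf
        i += new_inf - gamma * i
        series.append(new_inf)
    return series

def log_likelihood(beta, observed_cases, population=100_000):
    """Poisson log-likelihood of daily case counts, taking the
    deterministic SIR incidence as the Poisson mean."""
    ll = 0.0
    for frac, cases in zip(sir_incidence(beta), observed_cases):
        mu = max(population * frac, 1e-9)
        ll += cases * math.log(mu) - mu - math.lgamma(cases + 1)
    return ll

# Synthetic case counts generated with beta = 0.35; the MLE is the
# grid value maximizing the likelihood.
observed = [round(100_000 * x) for x in sir_incidence(0.35)]
beta_mle = max([0.20 + 0.01 * k for k in range(31)],
               key=lambda b: log_likelihood(b, observed))
```

               The explicit likelihood is what distinguishes this approach from least squares: it lets the observation noise model (here Poisson counts) enter the estimation directly.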

              Bayesian inference

               Method of statistical inference that estimates the parameters of a model by combining prior beliefs with the observed evidence. As more evidence is gathered, the prior distribution is updated into the posterior distribution, which represents the uncertainty over the parameter values.

              Markov chain Monte Carlo (MCMC)

               MCMC algorithms can be used to sample from the posterior distribution in Bayesian inference and are useful because they allow sampling from multi-dimensional distributions.
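               The simplest MCMC algorithm, random-walk Metropolis, can be sketched for a one-parameter epidemic model as follows; the flat prior, Poisson likelihood, step size and starting point are illustrative assumptions, and real analyses would use multiple chains and convergence diagnostics.

```python
import math
import random

def sir_incidence(beta, gamma=0.2, i0=0.001, days=60):
    """Daily new-infection fractions from an Euler-integrated SIR model."""
    s, i = 1.0 - i0, i0
    series = []
    for _ in range(days):
        new_inf = beta * s * i
        s -= new_inf
        i += new_inf - gamma * i
        series.append(new_inf)
    return series

def log_posterior(beta, observed, population=100_000):
    """Poisson log-likelihood plus a flat prior on beta in (0.1, 1)."""
    if not 0.1 < beta < 1.0:
        return -math.inf
    ll = 0.0
    for frac, cases in zip(sir_incidence(beta), observed):
        mu = max(population * frac, 1e-9)
        ll += cases * math.log(mu) - mu
    return ll

def metropolis(observed, n_iter=2000, step=0.01, seed=7):
    """Random-walk Metropolis sampler for the posterior of beta."""
    rng = random.Random(seed)
    beta = 0.3                                   # starting value
    lp = log_posterior(beta, observed)
    samples = []
    for _ in range(n_iter):
        prop = beta + rng.gauss(0.0, step)       # propose a move
        lp_prop = log_posterior(prop, observed)
        # Accept with the Metropolis probability min(1, ratio).
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            beta, lp = prop, lp_prop
        samples.append(beta)
    return samples

# Synthetic data generated with beta = 0.35.
observed = [round(100_000 * x) for x in sir_incidence(0.35)]
samples = metropolis(observed)
```

               After discarding a burn-in, the retained samples concentrate around the data-generating value, and their spread quantifies the posterior uncertainty over the parameter.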

              Particle filtering

               Particle filtering is a parameterization technique based on the simulation and sequential weighting of a sample of parameter values according to their consistency with the observed data. Particle filters are normally used to parameterize Bayesian state-space models in which unobserved variables are inferred through their connection in a Markov chain.
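               A bootstrap particle filter can be sketched for a stochastic SIR model observed through noisy daily case reports: particles carrying the latent (S, I) state are propagated, weighted by the likelihood of the observed counts, and resampled. The normal approximation to the binomial draws, the 50% reporting rate and all other values are simplifying assumptions made for this sketch.

```python
import math
import random

def binom_draw(rng, n, p):
    """Normal approximation to a binomial draw (fast, adequate here)."""
    mean = n * p
    sd = math.sqrt(max(n * p * (1 - p), 1e-12))
    return min(n, max(0, round(rng.gauss(mean, sd))))

def simulate_counts(beta=0.35, gamma=0.2, n_pop=10_000, i0=20,
                    days=30, report=0.5, seed=11):
    """Synthetic stochastic SIR epidemic with 50% case reporting."""
    rng = random.Random(seed)
    s, i = n_pop - i0, i0
    obs = []
    for _ in range(days):
        p_inf = 1 - math.exp(-beta * i / n_pop)
        new_inf = binom_draw(rng, s, p_inf)
        new_rec = binom_draw(rng, i, gamma)
        s, i = s - new_inf, i + new_inf - new_rec
        obs.append(binom_draw(rng, new_inf, report))
    return obs

def particle_filter(obs, beta=0.35, gamma=0.2, n_pop=10_000, i0=20,
                    n_particles=200, report=0.5, seed=4):
    """Bootstrap particle filter: propagate, weight, resample.
    Returns the filtered mean of the latent I at each day."""
    rng = random.Random(seed)
    parts = [(n_pop - i0, i0)] * n_particles
    means = []
    for y in obs:
        moved, logw = [], []
        for s, i in parts:
            p_inf = 1 - math.exp(-beta * i / n_pop)
            new_inf = binom_draw(rng, s, p_inf)
            new_rec = binom_draw(rng, i, gamma)
            moved.append((s - new_inf, i + new_inf - new_rec))
            mu = max(new_inf * report, 1e-9)   # expected reports
            # Poisson log-weight; the lgamma(y+1) term is constant
            # within a day and cancels in the normalization.
            logw.append(y * math.log(mu) - mu)
        top = max(logw)
        weights = [math.exp(lw - top) for lw in logw]
        parts = rng.choices(moved, weights=weights, k=n_particles)
        means.append(sum(i for _, i in parts) / n_particles)
    return means

obs = simulate_counts()
filtered_i = particle_filter(obs)
```

               The filtered mean reconstructs the unobserved prevalence trajectory from partial case reports; extending the particles to carry unknown parameters as well as states turns this into the sequential parameter estimation described above.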


               Calibration

               Here we define calibration as an iterative comparison between model predictions and observed data (e.g. attack rates, R0) without the use of standard statistical inference methods. The model is simulated for different parameter values and the results compared with the previous predictions to see whether agreement with the data improves.
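               This compare-and-adjust loop can be sketched by calibrating the transmission rate of an SIR model to a target attack rate; bisection automates the tuning that is often done by hand in published studies, and the 33% target and parameter bounds are illustrative.

```python
import math

def attack_rate(beta, gamma=0.2):
    """Final SIR epidemic size z solving z = 1 - exp(-(beta/gamma) * z)."""
    r0, z = beta / gamma, 0.5
    for _ in range(200):
        z = 1 - math.exp(-r0 * z)
    return z

def calibrate_beta(target_attack_rate, lo=0.21, hi=1.0, tol=1e-4):
    """Adjust beta until the model reproduces the target attack rate."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if attack_rate(mid) < target_attack_rate:
            lo = mid     # epidemic too small: raise transmission
        else:
            hi = mid
    return (lo + hi) / 2

# Calibrate to an illustrative 33% final attack rate.
beta_cal = calibrate_beta(0.33)
```

               Unlike the statistical methods above, calibration of this kind yields a point value with no accompanying measure of uncertainty, which is one reason the review distinguishes it from formal inference.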

              To classify models by their construction and validation techniques, we evaluated whether they (i) incorporated an assessment of the sensitivity of their results to model parameters and assumptions, (ii) were parameterized using parameters directly from other models, and (iii) were validated from empirical data. To that end, the articles were categorized according to the type of model used (Table 1), population heterogeneity level considered, parameterization procedure, consideration of economic impacts, inclusion of human behaviour and performance of validation or sensitivity analysis (see the ESM for a full list of the models and their characteristics).

              Standard reporting protocol

               To facilitate the systematization and comparison of models, we develop a guiding protocol for reporting model general Characteristics, Construction, Parameterization and Validation aspects (CCPV protocol), derived from the results of the review. The protocol builds upon previous protocols for describing ABMs [31] and retains the description of technical aspects of the models that facilitate their understanding and reproduction, for instance the aim of the model, scale, structure, model type, dynamic aspects, initialization, data inputs, the way individuals and their interactions are considered, and the inclusion of stochasticity (Table 3). The protocol was then extensively expanded to include aspects relating to model construction, parameterization, sensitivity analysis, verification and validation, and the incorporation of economics and host behaviour.
              Table 3

              Characteristics, construction, parameterization and validation aspects protocol (CCPV protocol) for influenza pandemic model reporting




              General characteristics

              Aim of the model

              What questions is the model trying to address? Is the model based on past influenza pandemics?

               Is the model aimed at generating predictions for future pandemics to inform policy making? Are the predictions intended to generate quantitative or qualitative policy insights?

              Theoretical basis

              What are the underlying assumptions that support the construction of the model or parts of the model? E.g. the law of mass action, rational choice theory.

               Scale, structure and model type

              What are the geographical and temporal scales of the model? What are the state and control variables and the parameters? Is the model solved analytically through mathematical methods or simulated? What type of model is it?

              Dynamic aspects

              Is time modelled as discrete or continuous?

              What variables and processes occur or are updated at each time step?

              Construction aspects


               Initialization

               How is the model initialized? E.g. what proportion of individuals is initially infected?


               Data inputs

               Is the model informed by data from previous pandemics? If so, what are the main sources of data in the model?


               Spatial structure

               Is the model spatially explicit or implicit? What is the spatial structure of the model?

              Are the expected heterogeneities of transmission reflected by this structure?


               Stochasticity

               Is the model stochastic or deterministic? How is stochasticity modelled?


               Interventions

               What interventions are modelled (e.g. antivirals, vaccination or isolation)? How do the interventions modify epidemiological or clinical parameters in the model?


               Individuals

               Are individuals modelled as discrete or continuous entities?

               Are individuals grouped by some characteristic (e.g. age, risk of infection)?

              Interactions leading to transmission

              How is interaction between individuals modelled? Are interactions heterogeneous among individuals or locations?

              Economic aspects

              Does the model consider the cost of the intervention and/or the economic impact of the disease?

              Does the model seek to guide decision making that will optimise net benefit? Are there groups whose infection would lead to higher economic impacts? Was this distinction considered? Are costs per reduction of disease burden provided?


               Host behaviour

               Are changes in the behaviour of individuals as a result of pandemic processes being modelled? What are the assumptions made regarding behaviour? Has the model been run without assumptions about pandemic-related changes to behaviour? How do the results differ from the model considering such changes?


               Model complexity

               Have model results been compared with simplified versions of the model? How did the results differ?

              To what extent has the increase in complexity in the model hindered its interpretability?

              Parameterization and Validation aspects

              Sensitivity and uncertainty analysis

              Have sensitivity and uncertainty analyses been undertaken? What types of analyses were done, what were the outputs and parameter ranges considered? Were there sensitive or uncertain parameters that were taken directly from previous modelling studies and that might entail a risk of bias to the predictions? Are there alternative data sets to obtain those parameters? Have alternative scenarios for values of those parameters been considered?

              Model parameterization

              Describe which parameters were parameterized from: (i) previous parameters used in other pandemic models in the literature; (ii) data published in the literature, e.g. clinical trials, cohort studies; and (iii) pandemic data, e.g. time series of number of cases, attack rates.

              For parameters taken directly from previous pandemic modelling studies, how were these derived? Do they apply to the case being studied? Is there a risk of model overfitting, e.g. by using epidemic case data to fit both transmission and infectious rate parameters?

              Model verification

              Has the model undergone standard simulation verification tests? How are results from the model observed to evaluate its functioning? E.g. production of dynamic maps of spread during the simulation.

              Model validation

              Has the model been tested for structural and/or predictive validity?

               What type of data, independent of the model parameterization, was used to test its predictive validity? If data were not available for the specific strain under study, were alternative strains or diseases considered? E.g. seasonal instead of pandemic influenza.

              Was the model able to reproduce the validation data set? If not, what changes to the structure of the model were considered? Did the updated model obtain an improved prediction?

              Was this model developed in parallel with other independent research teams?

              Review of influenza pandemic modelling

               The prevalence of modelling for pandemic influenza has increased dramatically since 2000 (Figure 1B). Of the 91 articles included in the review, more than half used compartmental models (58/91, 64%). Compartmental models were in some instances combined with a dynamic optimization framework (7/58, 12%) or game theory (2/58, 3%). Metapopulation models were employed in combination with both compartmental models and ABMs (9 and 3 models respectively). The second most common modelling approach was ABMs (22/91, 24%). The rest of the models were computable general equilibrium (CGE) models (3/91, 3%), network models (4/91, 4%) and household models (3/91, 3%) (Figure 1A).
              Figure 1

              Literature review of pandemic influenza modelling papers. A: type of compartmental and non-compartmental models and parameterization approaches used. B: cumulative number of modelling and simulation papers identified from 2000 to 2011 (left axis) and number of hits retrieved on PubMed for the query: “pandemic AND influenza” (right axis). This search is used as a surrogate for general research interest in pandemic influenza. C: proportion of models incorporating economic aspects, individuals’ behaviour, parameterization from data other than reproducing parameter value choices in previous studies and validation. ABM: agent-based model; CGE: computable or general equilibrium model; Epi. lab. case data: models are fitted to epidemiological, laboratory or case data.

               Most models that were applied to a specific geographic region focused on high-income countries. Studies not focusing on high-income economies were scarce (5/91, 6% applied to upper-middle-income countries such as Thailand or Mexico, and none applied exclusively to low-income or lower-middle-income countries), despite the higher case fatality rates expected in those countries [32]. The majority of studies were not intended to study impact in specific, localised settings such as schools or hospitals and instead represented the national or international level. A few exceptions did, however, concentrate on the effects of school closures [4, 6, 33–37] and on hospitals or hospital staff [38–40].


               Parameterization

               In reality, many models utilized multiple parameterization strategies, for instance combining estimates from the literature, censuses and maximum likelihood. For simplicity, models were categorized by the least common and most sophisticated technique used: for instance, a model using literature estimates and Bayesian inference was categorized as using Bayesian methods for parameterization. Among all models, the dominant parameterization strategy (used by 47% of the models) was to adopt parameters from previous studies, especially from other modelling studies, perpetuating the use of parameters chosen by other modellers (Figure 1A “parameterization”). 25% of the studies utilized information or parameters derived from epidemiological or laboratory data (e.g. viral shedding duration, cohort studies) or case data (e.g. epidemic curves, attack rates) from other sources to parameterize the model. It was common (60%, Figure 1C) to use some form of sensitivity analysis, and this was more frequent in models that did not directly adopt parameters from previous models, suggesting that sensitivity analysis was not used as a complement to reusing parameters from previous models. ABMs were more frequently built using parameters chosen by modellers in previous studies (70%, Figure 1C) and constructed from population demographic data, for instance from decennial censuses, rather than using empirical data or parameters obtained from epidemiological or laboratory studies.

               Although the most common approach was to parameterize models using parameter values chosen by previous modelling studies, several exceptions used alternative parameterization methods (Figure 1A shows the distribution of parameterization methods and Table 2 defines them), ranging from calibration through simulation [33, 41, 42] and maximum likelihood [12, 36] to least squares [1, 11] and Bayesian computational methods such as Markov chain Monte Carlo (MCMC) [4].

              Several real-time pandemic modelling articles involved sophisticated methods of parameterization employing on-going observed case data, such as maximum likelihood estimation [9] or sequential particle filtering within a Bayesian framework [43]. Their real-time nature enabled the possibility of continuous open validation regarding the prediction of pandemic characteristics such as the timing and height of the peak, and indeed Ong et al. [43] report posting real-time predictions on the internet.

               There were several non-real-time examples of modelling papers that parameterized compartmental models using disaggregated epidemic data such as: questionnaire or survey results [44, 45]; serological data [36, 46]; epidemic case or mortality time series [1, 47–49]; and the observed timing of pandemic peaks [11, 12]. Examples of parameterization from historical epidemic data in ABMs included calibration to reproduce attack or serological infection rates from previous pandemics [33, 41, 42, 50]. Parameterization from case data can be used to investigate policy effectiveness. For instance, by using Bayesian MCMC methods to fit an age-structured, household-based compartmental model to influenza surveillance data, Cauchemez et al. [4] evaluated the effectiveness of school closures for pandemic control in France and showed that prolonged school closures could reduce the attack rate of a pandemic by 13–17%.

               Most of the reviewed models reproduced parameter choices from previous studies. This is to be expected, as deriving parameters from outbreak data is complex; as a result, some articles specialize in the statistical analysis that leads to parameter derivation while others specialize in the analysis of broad policy questions. There is, however, the risk that this approach may perpetuate faulty parameterizations from previous studies, or apply a valid parameter value to an inappropriate setting. On the other hand, informing too many parameters in the model by fitting to epidemic time series may risk overfitting or non-identifiability. It may be most credible to inform model parameters using a combination of field or laboratory data (e.g. to fit or even directly inform parameters such as recovery rates) and epidemic case data (e.g. to fit transmission-related parameters), and then compare the fitted parameter values to those obtained in previous studies. One possible explanation for why this combination of data sources is uncommon is data paucity, which leaves the reuse of parameters from other modelling studies as one of the few alternatives. One way to increase the pool of available data for model parameterization is to establish international data-sharing mechanisms among governments and researchers, especially regarding disease transmission between individuals and surveys of population contact patterns [51], to facilitate the construction of robust models.

               Even when epidemic data are available, the small number of models parameterized from such data might also reflect statistical difficulties brought about by censoring in the data: some processes cannot be observed, and many influenza infections are not virologically confirmed, have indistinguishable symptoms, or are asymptomatic. Such censoring combines with non-independence between observations to prevent the use of standard statistical techniques. While these difficulties can be overcome, for instance using maximum likelihood estimation [9], particle filtering [43] or other likelihood-based computational methods [4] (Table 2), doing so requires at least some mastery of modern statistical techniques and may be computationally intensive. For instance, Bayesian methods that use MCMC algorithms or approximate Bayesian computation can be particularly powerful and flexible tools (Table 2) [52]. These methods allow the merging of prior knowledge on the epidemic parameters, such as that derived from datasets described in the literature, with observed data from the outbreak in question. In addition, they allow rigorous parameterization of models of the processes underlying highly censored data [44]. Bayesian computational methods can thus be used as a flexible and powerful way to perform inference on unobserved parameters. Tools such as openBUGS [53] and JAGS [54] are making the use of MCMC algorithms for model fitting accessible to non-specialists.

              Parameterization becomes more difficult for large-scale simulation models like ABMs, not only because they present many more parameters to fit but also because an explicit likelihood function is harder to derive, precluding the use of MCMC-based Bayesian computational methods or maximum likelihood estimation. One promising technique that does not require a full, explicit likelihood function, and that has been used in statistical ecology and DNA sequencing, is sequential importance sampling [55]. Sequential importance sampling, particle filtering and other sequential Monte Carlo methods can be performed using the R package POMP [56].
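The idea behind such likelihood-free sequential methods can be sketched compactly: a bootstrap particle filter estimates the likelihood of a candidate transmission rate purely by simulating a stochastic model forward, which is what makes these methods applicable to simulation models without tractable likelihoods. The chain-binomial model, observation process and all parameter values below are illustrative assumptions; a production analysis would use a dedicated tool such as POMP.

```python
import numpy as np

rng = np.random.default_rng(2)
N, GAMMA, DAYS, I0 = 1000, 0.5, 40, 20

def step(s, i, beta):
    """One day of a stochastic chain-binomial SIR, vectorised over particles."""
    new_inf = rng.binomial(s, 1 - np.exp(-beta * i / N))
    new_rec = rng.binomial(i, 1 - np.exp(-GAMMA))
    return s - new_inf, i + new_inf - new_rec, new_inf

def simulate(beta):
    """Generate one synthetic 'observed' outbreak at transmission rate beta."""
    s, i = np.array([N - I0]), np.array([I0])
    obs = []
    for _ in range(DAYS):
        s, i, inc = step(s, i, beta)
        obs.append(int(inc[0]))
    return obs

obs = simulate(0.9)

def pf_loglik(beta, n_part=500):
    """Bootstrap particle filter: estimate the log-likelihood of beta by
    forward simulation alone, with no explicit likelihood for the dynamics."""
    s = np.full(n_part, N - I0)
    i = np.full(n_part, I0)
    ll = 0.0
    for y in obs:
        s, i, inc = step(s, i, beta)
        lam = inc + 0.5  # Poisson observation; offset keeps weights positive
        logfact = np.sum(np.log(np.arange(1, y + 1)))
        w = np.exp(y * np.log(lam) - lam - logfact)  # Poisson weights
        ll += np.log(w.mean())
        idx = rng.choice(n_part, size=n_part, p=w / w.sum())  # resample
        s, i = s[idx], i[idx]
    return ll

ll_true = pf_loglik(0.9)   # candidate equal to the generating value
ll_wrong = pf_loglik(0.3)  # clearly misspecified candidate
```

The filter assigns a far higher likelihood to the generating transmission rate than to a misspecified one, even though the model's likelihood was never written down.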

              Implications for CCPV protocol

              Reporting the combination of data used for parameterization would allow model users to evaluate the reliability of a model, reduce the risk of model overfitting and allow the adequacy of each parameter for a specific setting to be assessed (Table 3 “model parameterization”). Sensitivity and uncertainty analyses are other ways to evaluate the influence of individual parameters, and of their uncertainty ranges, on model predictions (Table 2). They can be used to direct data collection efforts and should ideally be reported (Table 3 “sensitivity and uncertainty analysis”).
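A minimal one-at-a-time sensitivity analysis of the kind advocated here can be sketched as follows; the SIR model and the plus-or-minus 10% perturbations are illustrative assumptions.

```python
def attack_rate(beta, gamma, n=1000, i0=1, days=200):
    """Final attack rate of a simple discrete-time SIR model."""
    s, i = n - i0, float(i0)
    for _ in range(days):
        new_inf = min(beta * s * i / n, s)
        s, i = s - new_inf, i + new_inf - gamma * i
    return (n - s) / n

base = {"beta": 0.6, "gamma": 0.33}
baseline = attack_rate(**base)

# one-at-a-time sensitivity: vary each parameter by +/-10% and record the
# normalised change in the model output
sensitivity = {}
for name in base:
    up, down = dict(base), dict(base)
    up[name] *= 1.1
    down[name] *= 0.9
    sensitivity[name] = (attack_rate(**up) - attack_rate(**down)) / (0.2 * baseline)
```

Parameters with the largest normalised sensitivities are the natural priorities for data collection; Latin hypercube sampling or probabilistic uncertainty analysis would extend this to joint variation.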


              The review demonstrated the rarity of model validation (only 16% of compartmental models and 22% of ABMs were validated, Figure 1C), despite the importance of two types of validation – structural and predictive (Table 2) – in developing model credibility. Structural validity, which concerns the consistency of a model with theory, may be easier to establish for compartmental models as they are (usually) based on epidemic theory for which results have been derived analytically, provided they are not so oversimplified that they fail to capture the salient features of the pandemic. In some instances, modellers may use these analytically soluble models to generate qualitative insights rather than quantitative predictions to inform policy. For such models, structural validity is more relevant than comparisons with observed quantitative data.

              Predictive validity, on the other hand, is established by comparing model predictions to independently observed outcomes during a pandemic, helping to assess whether the model appropriately reflects reality, i.e. is capable of capturing the salient mechanisms governing the dynamics of the pandemic. If the agreement with validation data is poor, structural or parametric changes to the model might be needed until adequate validation can be obtained (Table 3). Compartmental models, which aggregate individuals into homogeneous compartments, are amenable to structural changes that account, for instance, for spatial and host structure by adding further compartments (only 64% of the models reviewed were exclusively compartmental, with extensions including a metapopulation approach (15%), dynamic optimization (10%) and game theory (3%); Figure 1A).

              When making structural changes, modellers face a fundamental trade-off between realism and interpretability: additional complexity increases the opacity of the model as it adds realism, potentially up to the point where the model becomes a black box. An example of a structural change is the need to capture spatial hierarchies, such as cities and countries, if space is expected to influence transmission dynamics or the roll-out or effectiveness of an intervention. Often such structure is captured using ABMs that represent individuals in different countries, provinces, cities and even districts within a city, but such finely grained structure makes analytical interpretation of the model virtually impossible. One possible compromise between ease of interpretation and complexity of spatial structure — i.e. between compartmental models and ABMs — for populations clustered in cities or countries is the metapopulation model [11, 57].
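The metapopulation compromise can be illustrated with a two-patch SIR model in which a small fraction of contacts crosses patches; the structure and parameter values below are illustrative assumptions rather than any published model.

```python
import numpy as np

def metapop_attack_rates(beta, gamma, travel, n, i0, days=400):
    """Two-patch SIR where a fraction `travel` of each patch's contacts
    occurs in the other patch; returns the final attack rate per patch."""
    n = np.asarray(n, float)
    i = np.asarray(i0, float)
    s = n - i
    mix = np.array([[1 - travel, travel],
                    [travel, 1 - travel]])
    for _ in range(days):
        foi = beta * (mix @ (i / n))      # force of infection in each patch
        new_inf = np.minimum(foi * s, s)
        s = s - new_inf
        i = i + new_inf - gamma * i
    return (n - s) / n

# an epidemic seeded only in patch 0 still reaches patch 1 via weak coupling
ar = metapop_attack_rates(beta=0.5, gamma=0.25, travel=0.05,
                          n=[5000, 5000], i0=[10, 0])
```

Each patch remains an interpretable compartmental model, yet the mixing matrix captures the between-city coupling that a single well-mixed model would miss.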

              As part of the assessment of predictive validity, it might also be useful to compare models with analogous simplified or extended versions (e.g. [58]). For example, the predictions of a spatially explicit ABM can be compared to those of its “equivalent” spatially implicit compartmental model. Because complex models such as ABMs will only be more realistic than compartmental models if there are data to support their added realism, comparisons of ABMs with their simplified compartmental ‘analogue’ will demonstrate whether the added realism of the ABM is justified by improved predictive power and whether the complexity brought about by the ABM leads to substantial losses in model interpretability (“complexity” in the CCPV protocol, Table 3).

              Comparison between models developed by different groups is another way to investigate model validity. Parallel model development – by different groups working on the same problem – makes it possible to identify inconsistencies between model results, thus highlighting aspects of the system that are insufficiently understood or outcomes that are not robust to the decisions made in model construction. Parallel model development has been applied, for instance, to malaria eradication [59], rheumatoid arthritis [60] and HIV antiretroviral treatment effectiveness [61].

              If data for validation are non-existent, reporting the alternative verification techniques used would enhance credibility. These might involve simulation-based observation techniques such as animation (e.g. reproducing maps of model predictions to identify malfunctions), degeneration tests (deactivating model functions to evaluate changes in predictions), extreme-conditions tests (checking that model predictions remain logical even under unusually extreme inputs) or face validation (showing results to experts), and can be very useful for detecting anomalies in models [62] (“model verification”, Table 3).
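Several of these verification techniques translate directly into automated checks. The sketch below applies extreme-conditions and degeneration tests, plus a simple monotonicity check of the face-validity kind, to a toy SIR implementation; the model and thresholds are illustrative assumptions.

```python
def final_attack_rate(beta, gamma, n=1000, i0=1, days=400):
    """Final attack rate of a simple discrete-time SIR model."""
    s, i = n - i0, float(i0)
    for _ in range(days):
        new_inf = min(beta * s * i / n, s)
        s, i = s - new_inf, i + new_inf - gamma * i
    return (n - s) / n

# extreme-conditions test: no transmission => no cases beyond the seed
assert final_attack_rate(beta=0.0, gamma=0.3) == 1 / 1000

# extreme-conditions test: no seed => no epidemic at all
assert final_attack_rate(beta=0.6, gamma=0.3, i0=0) == 0.0

# degeneration test: deactivating recovery should infect nearly everyone
assert final_attack_rate(beta=0.6, gamma=0.0) > 0.99

# face-validity check: more transmission should never mean a smaller epidemic
assert final_attack_rate(beta=0.8, gamma=0.3) >= final_attack_rate(beta=0.4, gamma=0.3)
```

Such checks cost little to write and, run automatically, catch coding errors that validation data would otherwise be needed to expose.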

              Implications for CCPV protocol

              Reporting the underlying assumptions governing the model, as well as their justification, would help model users evaluate the structural validity of the model (Table 3 “characteristics, theoretical basis”). Validation processes will show whether models are oversimplified and fail to capture the salient features of the pandemic. In addition, reporting structural and predictive validity together with subsequent structural changes (e.g. spatial explicitness) to models would allow policy makers to assess the reliability of model predictions, and other analysts to assess the robustness of model construction and parameterisation (Table 3 “construction aspects, space” and “model validation”). Further assistance in evaluating the validity of the model can be obtained by reporting model verification techniques and whether the model has been compared with simpler versions or with other models developed in parallel (Table 3 “model verification” and “complexity”).

              Economic aspects

              Very few pandemic preparedness models integrate transmission dynamics and economic analysis [63]. Most models reviewed could quantify the time course of an outbreak and the associated disease and health care endpoints. Metrics such as the reduction in the number infected or dying were commonly used to evaluate the effectiveness of the interventions considered. However, only a minority of studies (17% and 26% of compartmental models and ABMs respectively, Figure 1C) sought to address economic questions, whether the economic impacts of the pandemic or the value for money of the control or mitigation measures in question. In some cases, this may be because epidemiological modellers lack the expertise to identify and model economic aspects. Collaboration between epidemiological modellers and health economists may thus be mutually beneficial in exploring new interdisciplinary modelling approaches.

              While evaluation of the effectiveness of interventions such as social distancing or antiviral prophylaxis is useful in itself, and may be enough to rule an intervention out or guide policy when costs are uncertain, in many circumstances the ability to integrate effectiveness with economic concerns is critical in deciding whether to support the intervention. One way to elucidate whether economic aspects would enhance the usefulness of the model for policy makers is to ask whether the relative costs of the intervention would condition its selection. For instance, school closures of more than four weeks—although identified as effective strategies [4, 34, 64, 65]—have been shown to burden the economy and even treble the costs arising from an influenza pandemic [66]. In addition, individuals who are economically active impose a much higher economic burden through job absenteeism due to illness or care giving [67]. Considering the economic impacts of such heterogeneities at the social and individual levels may change the optimal implementation of an intervention from what would be recommended on epidemiological considerations alone (i.e. minimising disease burden). The inclusion of a cost-effectiveness outcome (e.g. cost per quality-adjusted life year (QALY) gained or per case averted) is a common approach that allows comparison of the value for money of different interventions for the same health problem (or even across health problems when generic measures such as QALYs are used as the denominator).
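The core cost-effectiveness calculation is simple to state. The sketch below computes an incremental cost-effectiveness ratio (ICER) for a hypothetical antiviral strategy against doing nothing; all costs, QALY totals and the willingness-to-pay threshold are invented for illustration.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Extra cost per extra QALY of the new strategy versus the comparator."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# illustrative numbers only: costs in dollars, outcomes in QALYs per 100,000 people
do_nothing = {"cost": 2_000_000, "qalys": 95_000}
antivirals = {"cost": 3_500_000, "qalys": 95_600}

icer_av = icer(antivirals["cost"], antivirals["qalys"],
               do_nothing["cost"], do_nothing["qalys"])

threshold = 50_000  # willingness to pay per QALY gained (illustrative benchmark)
cost_effective = icer_av < threshold
```

In a coupled analysis the cost and QALY totals would come from the transmission model's predicted case counts rather than being fixed inputs, which is what distinguishes dynamic from static economic evaluations.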

              Few of the reviewed studies incorporated economic aspects but, of those that did, several novel approaches were taken. One such approach was to couple estimates of the cost-effectiveness of vaccinating specific age and risk groups to real-time predictions [9]. These types of real-time outputs of the model, refined as the pandemic progressed, are helpful for decision makers who need to decide the number of vaccine doses to purchase and distribute, and to whom they will be allocated, based on the latest country-specific data.

              Novel insights on the optimal allocation of economic resources were also obtained from approaches embedding compartmental models into optimization frameworks such as optimal control theory or dynamic programming [39, 45, 68–71]. For instance Lee et al. [39], using optimal control theory, identified the optimal way to dynamically allocate control measures such as antiviral allocation and isolation, subject to the dynamics of the pandemic and the effects of the control measures on those dynamics. Their analysis identified aggressive allocation of antivirals at the beginning of the pandemic as an optimal strategy. Accounting for the dynamic nature of the pandemic and allowing control efforts to vary produces new dynamic insights for interventions, a fundamental difference from epidemic models that keep control efforts constant (Table 1).
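The qualitative insight that early, aggressive deployment outperforms late deployment can be reproduced even without optimal control machinery, by comparing fixed allocation schedules in a toy SIR model in which treatment doubles the recovery rate. The model and all values below are illustrative assumptions, not the cited analysis.

```python
import numpy as np

def attack_rate_with_antivirals(doses, beta=0.6, gamma=0.25,
                                n=10000, i0=10, days=200):
    """SIR in which treated infecteds recover twice as fast; `doses` gives
    the number of treatment courses available on each day."""
    s, i = n - i0, float(i0)
    for t in range(days):
        treated = min(doses[t], i)
        new_inf = min(beta * s * i / n, s)
        recoveries = gamma * (i - treated) + 2 * gamma * treated
        s, i = s - new_inf, max(i + new_inf - recoveries, 0.0)
    return (n - s) / n

stockpile, days = 4000, 200
early = np.zeros(days); early[:50] = stockpile / 50    # deploy at the start
late = np.zeros(days); late[150:] = stockpile / 50     # deploy after the peak

ar_early = attack_rate_with_antivirals(early)
ar_late = attack_rate_with_antivirals(late)
```

The same stockpile averts far more infections when deployed while prevalence is still low, echoing the front-loading result; full optimal control would additionally characterise the best possible time-varying schedule.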

              Few compartmental models were used to perform cost-effectiveness analysis; among those that were, models were integrated into cost-effectiveness analyses of antiviral prophylaxis and vaccination [72, 73]. Cost-effectiveness analyses were also incorporated into ABMs [6, 7, 74–76]. For instance, Sander et al. [77] estimated the number of QALYs lost and the economic costs due to pandemic influenza using a detailed ABM structured by age and infection risk. This model represented people interacting in known contact groups such as households, neighbourhoods, communities, schools and work groups. QALYs were obtained from clinical trial data. Direct costs such as visits to physicians and indirect costs such as job absenteeism were also computed. As a result, the cost-effectiveness of different antiviral, school closure and pre-vaccination strategies could be estimated and compared to inform policy making.

              The integration of economic and epidemic models for pandemic preparedness does not yet appear to have explored all possible model combinations, leaving large scope for modelling innovation. For instance, although advanced economic models such as computable general equilibrium (CGE) models have been applied to influenza pandemics and were able to capture the effects of job absenteeism or deaths on the affected sectors of various economies [66, 78, 79], our review did not identify any study that combined such models with dynamic epidemic models in a way that lets the two models feed back on each other. Not allowing feedback is reasonable if job absenteeism can be approximated as a sudden shock to the production systems—though in reality the shock might be progressive or present several peaks—or if no feedback from the economy into the epidemic is expected. Examples of such feedback could be changes in individuals’ commuting patterns or behaviour as the economy is affected, or a loss of the financial capacity to mitigate the epidemic at the individual and government levels.

              Implications for CCPV protocol

              Reporting the economic aspects considered in the model, the type of analysis employed, the heterogeneity of impacts across different groups and the disease burden metrics employed would help model users understand the capabilities of the model and the adequacy of the economic analysis undertaken (Table 3, “model construction, economic aspects”).

              Individual behaviour

              Behavioural aspects of infection transmission have been studied in the context of the control of other diseases (a general review is provided by [27]). The inclusion of the behaviour of individuals during an influenza pandemic has heretofore been uncommon among compartmental models and has only recently started to receive attention [10, 44, 80, 81]. Although most pandemic models represent individuals as entities whose behaviour remains invariant, in reality human behaviour might hinder or foster pandemic mitigation efforts, especially for severe pandemics like that of 1918. Very few of the compartmental models reviewed considered the effect of changes in behaviour on the impact of the pandemic (7%). New insights have been obtained by integrating compartmental models with game theory [44, 80]. For instance, Galvani et al. [44] parameterized an epidemiological game-theoretic model from questionnaires on perceptions of influenza. The model was employed to compare the self-interested vaccination behaviour of the elderly with the socially optimal behaviour, which would involve vaccinating children to reduce overall transmission. The model identified how the individual and social equilibria differed more for seasonal influenza than for pandemic influenza – because pandemic influenza might also pose a substantial risk to the young. This study illustrates how, as a result of including human behaviour in the model, the need to incentivize individuals to reduce overall influenza transmission can be identified.
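The gap between individual and social vaccination incentives can be sketched with a toy static game: individuals vaccinate until the expected cost of infection no longer exceeds the cost of the vaccine, which leaves coverage short of the herd-immunity threshold. The risk function and costs below are stylised assumptions, not the parameterisation of [44].

```python
R0 = 1.8
COST_VAX, COST_INF = 1.0, 10.0   # illustrative relative costs

def infection_risk(coverage):
    """Stylised risk to an unvaccinated person at a given vaccine coverage:
    zero once the effective reproduction number drops below one."""
    r_eff = R0 * (1 - coverage)
    return 1 - 1 / r_eff if r_eff > 1 else 0.0

def nash_coverage(step=1e-4):
    """Coverage rises while vaccinating is cheaper for an individual than the
    expected cost of infection; it stops at the point of indifference."""
    p = 0.0
    while p < 1 and infection_risk(p) * COST_INF > COST_VAX:
        p += step
    return p

p_nash = nash_coverage()
herd_threshold = 1 - 1 / R0     # socially optimal elimination coverage
free_riding_gap = herd_threshold - p_nash
```

The positive gap quantifies the free-riding problem: closing it is exactly the kind of incentive question that coupling game theory to epidemic models makes explicit.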

              In our review, the inclusion of individuals’ behaviour was more common among simulation models although, instead of basing the representation of behaviour on game or microeconomic theory, it was usually based on simple rules and assumptions. Different kinds of behaviours were considered in several models, including voluntary isolation, increased social distancing once infected, and preventive behaviour [33, 42, 66, 74, 79, 82–85]. The inclusion of behaviour can lead to substantially different conclusions. For instance, if individuals perceive an epidemic as life-threatening, they might change their commuting patterns, wear masks and take more extreme precautions [86]; as a result, a model not considering these behavioural changes would overestimate the attack rate and the number of fatalities that would eventually result from the epidemic. In a similar fashion, if individuals perceive an epidemic to be benign, vaccination rates and the adoption of precautions may drop, undermining the effectiveness of control measures (evidence of both kinds of responses was observed during the 2009 H1N1 pandemic [81]).
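A minimal way to see how behaviour changes predictions is to let the contact rate fall as prevalence rises and compare the result with the fixed-behaviour model. The prevalence-based rule and parameter values below are illustrative assumptions.

```python
def attack_rate(adaptive, beta0=0.6, gamma=0.25, n=10000, i0=10,
                days=400, k=50):
    """SIR in which, if `adaptive` is set, the contact rate falls as
    prevalence rises (a simple prevalence-based behaviour rule)."""
    s, i = n - i0, float(i0)
    for _ in range(days):
        beta = beta0 / (1 + k * i / n) if adaptive else beta0
        new_inf = min(beta * s * i / n, s)
        s, i = s - new_inf, i + new_inf - gamma * i
    return (n - s) / n

ar_fixed = attack_rate(adaptive=False)     # invariant behaviour
ar_adaptive = attack_rate(adaptive=True)   # precautions rise with prevalence
```

The fixed-behaviour model predicts a larger attack rate, illustrating the overestimation described above when protective behavioural responses are ignored.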

              The extent to which human behaviour can affect model predictions is, however, poorly understood, and further research is necessary to gauge when behaviour should be included in models. A useful practice would be to report systematically the behavioural assumptions in the model, including assumptions of homogeneity, and how incorporating individuals’ behaviour changes model predictions relative to the model without behaviour (Table 3). Data availability is also a major obstacle to incorporating human behaviour into models, and again sharing mechanisms would facilitate model development.

              Implications for CCPV protocol

              Reporting the assumptions on how behaviour is modelled would help model users interpret model results. Reporting comparisons of model results with and without behaviour would further facilitate understanding of the role of behaviour in the model (Table 3, “construction aspects, behaviour”).


              Influenza pandemic models have, over the last decade, proliferated dramatically. In parallel to the rapid increase in the number of models, many now incorporate sophisticated parameterization and validation techniques, economic analyses and the behaviour of individuals. Techniques such as Bayesian inference, agent-based modelling and the application of game theory are being newly applied to influenza, answering a more diverse set of public health questions.

              This increase in modelling diversity stems from an increase in the diversity of research questions and policy strategies. Ultimately, however, the choices made in model construction will depend critically on the data available, the research question and the trade-off between realism and interpretability of the model. Even though models need to be fit for purpose, it is noteworthy that many influenza pandemic models rely on parameters from previous modelling studies and are rarely validated using observed data.

              Although model validation is not expected in influenza pandemic modelling, it is considered a basic prerequisite for publication in other fields, such as the related discipline of ecological modelling. For instance, the editorial policy of the journal Ecological Modelling states: “Papers that only present a model without support of ecological data for calibration and hopefully also validation of the model will not be accepted because a model has in most cases no interest before it has been held up to ecological reality” [87], and a standardised ODD protocol (overview, design concepts, and details) for documenting ABMs more generally in that field has been published in the same journal [31, 88]. Guidelines also exist in the field of health economics. Examples include guidelines from the National Institute for Clinical Excellence (NICE) in the UK [89], the Drummond checklist required for economic submissions to the British Medical Journal [90], guidelines for cost-effectiveness analysis [91] and modelling guidelines from the International Society for Pharmacoeconomics and Outcomes Research [92].

              Given the large variety in modelling approaches for influenza pandemic management and to facilitate comparison between models, we developed a simple general modelling Characteristics, Construction, Parameterization and Validation aspects (CCPV) reporting protocol (Table 3). The use of the protocol together with international data sharing mechanisms would facilitate comparability between models, transparency in decisions about the kinds of models to use, and ultimately increase the confidence in the use of modelling in formulating influenza pandemic policies.



              M.I.C. and L.R.C. gratefully acknowledge research funding from the research grants NMRC/CSA/011/2009 and NMRC/H1N1R/005/2009 respectively. L.R.C. acknowledges support from the grant WBS R-154-000-527-133.

              Authors’ Affiliations

              Department of Statistics and Applied Probability, National University of Singapore
              Department of Biological Sciences, National University of Singapore
              Modelling and Economics Unit, Health Protection Agency
              Saw Swee Hock School of Public Health, National University of Singapore
              Program in Emerging Infectious Diseases, Duke-NUS Graduate Medical School
              Communicable Disease Centre, Tan Tock Seng Hospital
              Ministry of Defence, Biodefence Centre
              Centre for Health Services Research, National University of Singapore
              School of Computer Science and Software Engineering, The University of Western Australia
              Program in Health Services and Systems Research, Duke-NUS Graduate Medical School


              1. Mills C, Robins J, Lipsitch M: Transmissibility of 1918 pandemic influenza. Nature. 2004, 432: 904-906.
              2. Potter CW: A history of influenza. J Appl Microbiol. 2001, 91: 572-579.
              3. Fraser C, Donnelly C, Cauchemez S, Hanage W, Van Kerkhove M, Hollingsworth T, Griffin J, Baggaley R, Jenkins H, Lyons E, et al: Pandemic potential of a strain of influenza A (H1N1): early findings. Science. 2009, 324: 1557-1561.
              4. Cauchemez S, Valleron A, Boelle P, Flahault A, Ferguson N: Estimating the impact of school closure on influenza transmission from Sentinel data. Nature. 2008, 452: 750-754.
              5. Ferguson N, Cummings D, Cauchemez S, Fraser C, Riley S, Aronrag M, Iamsirithaworn S, Burke D: Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature. 2005, 437: 209-214.
              6. Brown ST, Tai JHY, Bailey RR, Cooley PC, Wheaton WD, Potter MA, Voorhees RE, LeJeune M, Grefenstette JJ, Burke DS, et al: Would school closure for the 2009 H1N1 influenza epidemic have been worth the cost?: a computational simulation of Pennsylvania. BMC Publ Health. 2011, 11: 353.
              7. Andradottir S, Chiu W, Goldsman D, Lee M, Tsui K-L, Sander B, Fisman D, Nizam A: Reactive strategies for containing developing outbreaks of pandemic influenza. BMC Publ Health. 2011, 11: S1.
              8. Anderson RM, May RM: Infectious Diseases of Humans: Dynamics & Control. New York, USA: Oxford University Press; 1992.
              9. Baguelin M, Hoek AJV, Jit M, Flasche S, White PJ, Edmunds WJ: Vaccination against pandemic influenza A/H1N1v in England: a real-time economic evaluation. Vaccine. 2010, 28: 2370-2384.
              10. Fenichel EP, Castillo-Chavez C, Ceddia MG, Chowell G, Parra PAG, Hickling GJ, Holloway G, Horan R, Morin B, Perrings C, et al: Adaptive human behavior in epidemiological models. Proc Natl Acad Sci U S A. 2011, 108: 6306-6311.
              11. Cooper BS, Pitman RJ, Edmunds WJ, Gay NJ: Delaying the international spread of pandemic influenza. PLoS Med. 2006, 3: e212.
              12. Bajardi P, Poletto C, Ramasco JJ, Tizzoni M, Colizza V, Vespignani A: Human mobility networks, travel restrictions, and the global spread of 2009 H1N1 pandemic. PLoS One. 2011, 6: e16591.
              13. Yu H, Cauchemez S, Donnelly CA, Zhou L, Feng L, Xiang N, Zheng J, Ye M, Huai Y, Liao Q: Transmission dynamics, border entry screening, and school holidays during the 2009 influenza A (H1N1) pandemic, China. Emerg Infect Dis. 2012, 18: 758.
              14. Lee VJ, Yap J, Cook AR, Chen MI, Tay JK, Tan BH, Loh JP, Chew SW, Koh WH, Lin R, et al: Oseltamivir ring prophylaxis for containment of 2009 H1N1 influenza outbreaks. N Engl J Med. 2010, 362: 2166-2174.
              15. Daley DJ, Gani J: Epidemic Modelling: An Introduction. Page 13. Cambridge: Cambridge University Press; 2001.
              16. Keeling MJ, Rohani P: Modeling Infectious Diseases in Humans and Animals. Page 10. Princeton: Princeton University Press; 2007.
              17. Sargent RG: Verification and validation of simulation models. In Proceedings of the 2005 Winter Simulation Conference. Edited by: Kuhl ME, Steiger NM, Armstrong FB, Joines JA. Piscataway, New Jersey, USA; 2005: 130-143.
              18. Kermack WO, McKendrick AG: A contribution to the mathematical theory of epidemics. Proc R Soc Lond A. 1927, 115: 700-721.
              19. Gumel A, Ruan S, Day T, Watmough J, Brauer F, van den Driessche P, Gabrielson D, Bowman C, Alexander M, Ardal A, et al: Modelling strategies for controlling SARS outbreaks. Proc R Soc B. 2004, 271: 2223-2232.
              20. Fraser C: Factors that make an infectious disease outbreak controllable. Proc Natl Acad Sci. 2004, 101: 6146-6151.
              21. Riley S: Large-scale spatial-transmission models of infectious disease. Science. 2007, 316: 1298-1301.
              22. Coburn B, Wagner B, Blower S: Modeling influenza epidemics and pandemics: insights into the future of swine flu (H1N1). BMC Med. 2009, 7: 30.
              23. Arino J, Bauch CT, Brauer F, Driedger SM, Greer AL, Moghadas SM, Pizzi NJ, Sander B, Tuite A, van den Driessche P, et al: Pandemic influenza: modelling and public health perspectives. Math Biosci Eng. 2011, 8: 1-20.
              24. Grassly NC, Fraser C: Mathematical models of infectious disease transmission. Nat Rev Microbiol. 2008, 6: 477-487.
              25. Lee V, Lye D, Wilder-Smith A: Combination strategies for pandemic influenza response—a systematic review of mathematical modeling studies. BMC Med. 2009, 7: 76.
              26. Pérez Velasco R, Praditsitthikorn N, Wichmann K, Mohara A, Kotirum S, Tantivess S, Vallenas C, Harmanci H, Teerawattananon Y: Systematic review of economic evaluations of preparedness strategies and interventions against influenza pandemics. PLoS One. 2012, 7: e30333.
              27. Funk S, Salathe M, Jansen VAA: Modelling the influence of human behaviour on the spread of infectious diseases: a review. J R Soc Interface. 2010, 7: 1247-1256.
              28. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009, 6: e1000097.
              29. Grüne-Yanoff T, Weirich P: The philosophy and epistemology of simulation: a review. Simul Gaming. 2010, 41: 20-50.
              30. Winsberg E: Models of success versus the success of models: reliability without truth. Synthese. 2006, 152: 1-19.
              31. Grimm V, Berger U, Bastiansen F, Eliassen S, Ginot V, Giske J, Goss-Custard J, Grand T, Heinz SK, Huse G, et al: A standard protocol for describing individual-based and agent-based models. Ecol Model. 2006, 198: 115-126.
              32. Fedson DS: Meeting the challenge of influenza pandemic preparedness in developing countries. Emerg Infect Dis. 2009, 15: 365-371.
              33. Milne G, Kelso J, Kelly H, Huband S, McVernon J: A small community model for the transmission of infectious diseases: comparison of school closure as an intervention in individual-based models of an influenza pandemic. PLoS One. 2008, 3: e4005.
              34. Glass K, Barnes B: How much would closing schools reduce transmission during an influenza pandemic?. Epidemiology. 2007, 18: 623-628.
              35. House T, Baguelin M, Van Hoek AJ, White PJ, Sadique Z, Eames K, Read JM, Hens N, Melegaro A, Edmunds WJ, Keeling MJ: Modelling the impact of local reactive school closures on critical care provision during an influenza pandemic. Proc R Soc B Biol Sci. 2011, 278: 2753-2760.
              36. Vynnycky E, Edmunds W: Analyses of the 1957 (Asian) influenza pandemic in the United Kingdom and the impact of school closures. Epidemiol Infect. 2008, 136: 166-179.
              37. Chen S, Liao C: Modelling control measures to reduce the impact of pandemic influenza among schoolchildren. Epidemiol Infect. 2008, 136: 1035-1045.
              38. Lee VJ, Chen MI: Effectiveness of neuraminidase inhibitors for preventing staff absenteeism during pandemic influenza. Emerg Infect Dis. 2007, 13: 449-457.
              39. Lee S, Chowell G, Castillo-Chávez C: Optimal control for pandemic influenza: the role of limited antiviral treatment and isolation. J Theor Biol. 2010, 265: 136-150.
              40. Cooley P, Lee BY, Brown S, Cajka J, Chasteen B, Ganapathi L, Stark JH, Wheaton WD, Wagener DK, Burke DS: Protecting health care workers: a pandemic simulation based on Allegheny County. Influenza Other Respir Viruses. 2010, 4: 61-72.
              41. Chao DL, Halloran ME, Obenchain VJ, Longini IM Jr: FluTE, a publicly available stochastic influenza epidemic simulation model. PLoS Comput Biol. 2010, 6: e1000656.
              42. Savachkin A, Uribe A: Dynamic redistribution of mitigation resources during influenza pandemics. Socio Econ Plan Sci. In press.
              43. Ong JBS, Chen MI, Cook AR, Lee HC, Lee VJ, Lin RTP, Tambyah PA, Goh LG: Real-time epidemic monitoring and forecasting of H1N1-2009 using influenza-like illness from general practice and family doctor clinics in Singapore. PLoS One. 2010, 5: e10036.
              44. Galvani A, Reluga T, Chapman G: Long-standing influenza vaccination policy is in accord with individual self-interest but not with the utilitarian optimum. Proc Natl Acad Sci U S A. 2007, 104: 5692-5697.
              45. Medlock J, Galvani A: Optimizing influenza vaccine distribution. Science. 2009, 325: 1705-1708.
              46. Mylius SD, Hagenaars TJ, Lugner AK, Wallinga J: Optimal allocation of pandemic influenza vaccine depends on age, risk and timing. Vaccine. 2008, 26: 3742-3749.
              47. Tuite AR, Fisman DN, Kwong JC, Greer AL: Optimal pandemic influenza vaccine allocation strategies for the Canadian population. PLoS One. 2010, 5: e10520.
              48. Krumkamp R, Kretzschmar M, Rudge JW, Ahmad A, Hanvoravongchai P, Westenhoefer J, Stein M, Putthasri W, Coker R: Health service resource needs for pandemic influenza in developing countries: a linked transmission dynamics, interventions and resource demand model. Epidemiol Infect. 2011, 139: 59-67.
              49. Matrajt L, Longini IM Jr: Optimizing vaccine allocation at different points in time during an epidemic. PLoS One. 2010, 5: e13767.
              50. Halloran M, Ferguson N, Eubank S, Longini I, Cummings D, Lewis B, Xu S, Fraser C, Vullikanti A, Germann T, et al: Modeling targeted layered containment of an influenza pandemic in the United States. Proc Natl Acad Sci U S A. 2008, 105: 4639-4644.
              51. Mossong J, Hens N, Jit M, Beutels P, Auranen K, Mikolajczyk R, Massari M, Salmaso S, Scalia Tomba G, Wallinga J, et al: Social contacts and mixing patterns relevant to the spread of infectious diseases. PLoS Med. 2008, 5: 381-391.
              52. Lee PM: Bayesian Statistics: An Introduction. 3rd edition. London: Arnold; 2004.
              53. Thomas A, O'Hara B, Ligges U, Sturtz S: Making BUGS open. R News. 2006, 6: 12-17.
              54. Plummer M: JAGS: a program for analysis of Bayesian graphical models using Gibbs sampling. In Proceedings of the 3rd International Workshop on Distributed Statistical Computing (DSC 2003), March 20-22. Edited by: Hornik K, Leisch F, Zeileis A. Vienna, Austria: Technische Universität Wien; 2003. ISSN 1609-395X.
              55. Doucet A, Godsill S, Andrieu C: On sequential Monte Carlo sampling methods for Bayesian filtering. Stat Comput. 2000, 10: 197-208.
              56. King AA, Ionides EL, Bretó CM, Ellner SP, Kendall BE, Wearing H, Ferrari MJ, Lavine M, Reuman DC: pomp: Statistical inference for partially observed Markov processes (R package). 2010. http://pomp.r-forge.r-project.org
              57. Colizza V, Barrat A, Barthelemy M, Valleron A, Vespignani A: Modeling the worldwide spread of pandemic influenza: baseline case and containment interventions. PLoS Med. 2007, 4: e13.
              58. Debarre F, Bonhoeffer S, Regoes R: The effect of population structure on the emergence of drug-resistance during pandemic influenza. J R Soc Interface. 2007, 4: 893-906.
              59. Alonso PL, Brown G, Arevalo-Herrera M, Binka F, Chitnis C, Collins F, Doumbo OK, Greenwood B, Hall BF, Levine MM, et al: A research agenda to underpin malaria eradication. PLoS Med. 2011, 8: e1000406.
              60. Drummond MF, Barbieri M, Wong J: Analytic choices in economic models of treatments for rheumatoid arthritis: what makes a difference?. Med Dec Making. 2005, 25: 520-533.
              61. Eaton JW, Johnson LF, Salomon JA, Bärnighausen T, Bendavid E, Bershteyn A, Bloom DE, Cambiano V, Fraser C, Hontelez JAC, et al: HIV treatment as prevention: systematic comparison of mathematical models of the potential impact of antiretroviral therapy on HIV incidence in South Africa. PLoS Med. 2012, 9: e1001245.
              62. Sargent RG: A tutorial on validation and verification of simulation models. In Proceedings of the 1988 Winter Simulation Conference. San Diego, USA; 1988.
              63. Lugnér AK, Mylius SD, Wallinga J: Dynamic versus static models in cost-effectiveness analyses of anti-viral drug therapy to mitigate an influenza pandemic. Health Econ. 2009, 19: 518-531.
              64. Ferguson N, Cummings D, Fraser C, Cajka J, Cooley P, Burke D: Strategies for mitigating an influenza pandemic. Nature. 2006, 442: 448-452.View ArticlePubMed
              65. Germann T, Kadau K, Longini I, Macken C: Mitigation strategies for pandemic influenza in the United States. Proc Natl Acad Sci U S A. 2006, 103: 5935-5940.PubMed CentralView ArticlePubMed
              66. Keogh-Brown MR, Smith RD, Edmunds JW, Beutels P: The macroeconomic impact of pandemic influenza: estimates from models of the United Kingdom, France, Belgium and The Netherlands. Eur J Heal Econ. 2010, 11: 543-554.View Article
              67. Szucs T: The socio-economic burden of influenza. J Antimicrob Chemother. 1999, 44: 11-15.View ArticlePubMed
              68. Jung E, Iwami S, Takeuchi Y, Jo T-C: Optimal control strategy for prevention of avian influenza pandemic. J Theor Biol. 2009, 260: 220-229.View ArticlePubMed
              69. Lin F, Muthuraman K, Lawley M: An optimal control theory approach to non-pharmaceutical interventions. BMC Infect Dis. 2010, 10: 32.PubMed CentralView ArticlePubMed
              70. Tanner MW, Sattenspiel L, Ntaimo L: Finding optimal vaccination strategies under parameter uncertainty using stochastic programming. Math Biosci. 2008, 215: 144-151.View ArticlePubMed
              71. Prosper O, Saucedo O, Thompson D, Torres-Garcia G, Wang XH, Castillo-Chavez C: Modeling control strategies for concurrent epidemics of seasonal and pandemic H1N1 influenza. Math Biosci Eng. 2011, 8: 141-170.View ArticlePubMed
              72. Khazeni N, Hutton DW, Garber AM, Owens DK: Effectiveness and cost-effectiveness of expanded antiviral prophylaxis and adjuvanted vaccination strategies for an influenza A (H5N1) pandemic. Ann Intern Med. 2009, 151: 840-853.PubMed CentralView ArticlePubMed
              73. Carrasco LR, Lee VJ, Chen MI, Matchar DB, Thompson JP, Cook AR: Strategies for antiviral stockpiling for future influenza pandemics: a global epidemic-economic perspective. J R Soc Interface. 2011, 8: 1307-1313.PubMed CentralView ArticlePubMed
              74. Barrett C, Bisset K, Leidig J, Marathe A, Marathe M: Economic and social impact of influenza mitigation strategies by demographic class. Epidemics. 2011, 3: 19-31.PubMed CentralView ArticlePubMed
              75. Lee BY, Brown ST, Korch GW, Cooley PC, Zimmerman RK, Wheaton WD, Zimmer SM, Grefenstette JJ, Bailey RR, Assi T-M, Burke DS: A computer simulation of vaccine prioritization, allocation, and rationing during the 2009 H1N1 influenza pandemic. Vaccine. 2010, 28: 4875-4879.PubMed CentralView ArticlePubMed
              76. Epstein J, Goedecke D, Yu F, Morris R, Wagener D, Bobashev G: Controlling pandemic flu: the value of international air travel restrictions. PLoS One. 2007, 2: e401.PubMed CentralView ArticlePubMed
              77. Sander B, Nizam A, Garrison L, Postma M, Halloran M, Longini I: Economic evaluation of influenza pandemic mitigation strategies in the US using a stochastic microsimulation transmission model. Value Health. 2008, 12: 226-233.PubMed CentralView ArticlePubMed
              78. Dixon PB, Lee B, Muehlenbeck T, Rimmer MT, Rose A, Verikios G: Effects on the U.S. of an H1N1 epidemic: analysis with a quarterly CGE model. J Homel Secur Emerg Manag. 2010, 7: article75.
              79. Smith RD, Keogh-Brown MR, Barnett T: Estimating the economic impact of pandemic influenza: an application of the computable general equilibrium model to the UK. Soc Sci Med. 2011, 73: 235-244.View ArticlePubMed
              80. Shim E, Meyers LA, Galvani AP: Optimal H1N1 vaccination strategies based on self-interest versus group interest. BMC Publ Health. 2011, 11 (Suppl 1): S4.View Article
              81. Poletti P, Ajelli M, Merler S: The effect of risk perception on the 2009 H1N1 pandemic influenza dynamics. PLoS One. 2011, 6 (2): e16460.PubMed CentralView ArticlePubMed
              82. Morimoto T, Ishikawa H: Assessment of intervention strategies against a novel influenza epidemic using an individual-based model. Environ Health Prev Med. 2010, 15: 151-161.PubMed CentralView ArticlePubMed
              83. Aleman DM, Wibisono TG, Schwartz B: A nonhomogeneous agent-based simulation approach to modeling the spread of disease in a pandemic outbreak. Interfaces. 2011, 41: 301-315.View Article
              84. Kelso J, Milne G, Kelly H: Simulation suggests that rapid activation of social distancing can arrest epidemic development due to a novel strain of influenza. BMC Publ Health. 2009, 9: 117.View Article
              85. Loganathan P, Sundaramoorthy S, Lakshminarayanan S: Modeling information feedback during H1N1 outbreak using stochastic agent-based models. Asia Pac J Chem Eng. 2011, 6: 391-397.View Article
              86. Lau J, Yang X, Pang E, Tsui H, Wong E, Wing Y: SARS-related perceptions in Hong Kong. Emerg Infect Dis. 2005, 11: 417-424.PubMed CentralPubMed
              87. Jørgensen SE, Fath BD, Grant W, Nielsen SN: The editorial policy of ecological modelling. Ecol Model. 2006, 199: 1-3.View Article
              88. Grimm V, Berger U, DeAngelis DL, Polhill JG, Giske J, Railsback SF: The ODD protocol: a review and first update. Ecol Model. 2010, 221: 2760-2768.View Article
              89. Birch S, Gafni A: On being NICE in the UK: guidelines for technology appraisal for the NHS in England and Wales. Heal Econ. 2002, 11: 185-191.View Article
              90. Drummond MF, Jefferson TO: Guidelines for authors and peer reviewers of economic submissions to the BMJ. BMJ. 1996, 313: 275-283.PubMed CentralView ArticlePubMed
              91. Murray C, Evans DB, Acharya A, Baltussen R: Development of WHO guidelines on generalized cost-effectiveness analysis. Heal Econ. 2000, 9: 235-251.View Article
              92. Weinstein MC, O'Brien B, Hornberger J, Jackson J, Johannesson M, McCabe C, Luce BR: Principles of good practice for decision analytic modeling in health-care evaluation: report of the ISPOR Task Force on Good Research Practices—Modeling Studies. Value Health. 2003, 6: 9-17.View ArticlePubMed

              This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.