Value of Information Analytical Methods: Report 2 of the ISPOR Value of Information Analysis Emerging Good Practices Task Force

      Highlights

      • Value of information (VOI) analysis provides a framework for quantifying the value of acquiring additional information to reduce uncertainty in decision making. Quantifying the expected improvement with new information requires an assessment of the scale and consequences of uncertainty in terms of payoffs. Acquiring information, however, can be costly. Therefore, the value of new information is compared with the cost of acquiring the information to determine whether it is worthwhile.
      • This report provides practical guidance on the methods and reporting of VOI analysis. The methods are presented in generic form to allow them to be adapted to any specific decision-making context. This means that even in healthcare systems in which economic considerations are not explicitly incorporated into decision making, the same methods can be applied.
      • This report provides 8 recommendations for good practice when planning, undertaking, or reviewing VOI analyses. The primary audience for the report is methodologists or analysts who are responsible for undertaking VOI analysis to inform decision making.

      Abstract

      The allocation of healthcare resources among competing priorities requires an assessment of the expected costs and health effects of investing resources in the activities and of the opportunity cost of the expenditure. To date, much effort has been devoted to assessing the expected costs and health effects, but there remains an important need to also reflect the consequences of uncertainty in resource allocation decisions and the value of further research to reduce uncertainty. Decisions made with uncertainty may turn out to be suboptimal, resulting in health loss. Consequently, there may be value in reducing uncertainty, through the collection of new evidence, to better inform resource decisions. This value can be quantified using value of information (VOI) analysis. This report from the ISPOR VOI Task Force describes methods for computing 4 VOI measures: the expected value of perfect information, expected value of partial perfect information (EVPPI), expected value of sample information (EVSI), and expected net benefit of sampling (ENBS). Several methods exist for computing EVPPI and EVSI, and this report provides guidance on selecting the most appropriate method based on the features of the decision problem. The report provides a number of recommendations for good practice when planning, undertaking, or reviewing VOI analyses. The software needed to compute VOI is discussed, and areas for future research are highlighted.

      Introduction

      Healthcare resource allocation decisions are made with uncertainty. Decision makers, tasked with selecting among competing alternative options, need to determine the payoffs associated with each option before making a choice, but these payoffs are based on imperfect knowledge. This inevitably means that decisions based on the available information may turn out to be suboptimal. Suboptimal decisions can lead to unintended effects such as adverse health consequences to individuals, when expected benefits of an activity are not realized, and to the population, when the resources committed to the activity are transferred away from other activities. Acquiring more information could reduce uncertainty and the associated consequences of suboptimal decision making.
      Value of information (VOI) analysis provides a framework for quantifying the value of acquiring additional information to reduce uncertainty in decision making. Quantifying the expected improvement with new information requires an assessment of uncertainty and the scale of the consequences of that uncertainty in terms of payoffs. Acquiring information, however, can be costly. Therefore, the value of new information is compared with the cost of acquiring the information to determine whether it is worthwhile.
      This report is the second report of the ISPOR Value of Information Analysis Emerging Good Practices Task Force (Box 1). It provides details of the various methods used to assess the value of research and practical guidance for selecting the appropriate method for the decision problem of interest. These methods are presented in generic form to allow them to be adapted to any specific decision-making context. The primary audience for this report is methodologists or analysts who are responsible for undertaking VOI analysis to inform decision making. It complements the first report of the ISPOR VOI Task Force,
      • Fenwick E.
      • Steuten L.
      • Knies S.
      • et al.
      Value of information analysis for research decisions—an introduction: report 1 of the ISPOR Value of Information Analysis Emerging Good Practices Task Force.
      which introduced the concept of VOI analysis, outlined the role of VOI for supporting different types of research decisions, and provided an overview of the steps for conducting and reporting VOI analysis.
      Background on the Task Force Process
      The proposal to initiate an ISPOR Value of Information Good Practices Task Force was evaluated by the ISPOR Health Science Policy Council and then recommended to the ISPOR Board of Directors for approval. The task force was composed of international subject matter experts representing a diverse range of stakeholder perspectives (academia, research organizations, government, regulatory agencies, and commercial entities). The task force met approximately every 5 weeks by teleconference and in person at ISPOR conferences. All task force members reviewed many drafts of the report and provided frequent feedback in both oral and written comments. To ensure that ISPOR Good Practices Task Force Reports are consensus reports, findings and recommendations were presented and discussed at ISPOR conferences. In addition, the first and final draft reports were circulated to the task force’s review group for a formal review. All reviewer comments were considered and addressed as appropriate in subsequent versions of the report. Most comments were substantive and constructive, improving the report.

      Characterization of Uncertainty

      The outcomes of VOI analysis are always conditional on the characterization of the decision problem and the specification of judgments about the relevant uncertainties. This means that the extent to which VOI analysis is sufficient to quantify the value of further research depends critically on how well the uncertainties have been characterized. With this in mind, this report first characterizes the sources of uncertainty.
      The starting point for VOI analysis is typically a decision-analytic model that represents judgments about the relationship between outputs that are relevant for decision making (eg, costs and health outcomes) and input parameters derived from clinical, epidemiological, registry, or economic studies. Uncertainty in decision-analytic models can be broadly characterized as relating to either model input parameters or model structure, although this distinction is not always meaningful because model structural choices can be parametrized.

      Parameter Uncertainty

      Decision-analytic models typically use information from a variety of sources, such as randomized controlled trials, observational studies, registries, or expert opinion. Model input parameters usually correspond to unknown “population” quantities, and finite-sized studies provide imprecise estimates of these quantities. Uncertainty about the true population parameter values is represented by probability distributions.
      • Briggs A.H.
      • Weinstein M.C.
      • Fenwick E.
      • Karnon J.
      • Sculpher M.
      • Paltiel D.
      Model parameter estimation and uncertainty analysis: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-6.
      Probability distributions should be assigned to all uncertain parameters (including those with little or no information from which to estimate the parameter); otherwise, the parameter value is assumed to be known with certainty. When a model has more than 1 input parameter, careful consideration should be given to any dependencies between parameter values. If parameters are dependent, then judgments about the values of those parameters should be represented via a joint, correlated probability distribution. Guidelines exist to aid the selection of distributions for parameters.
      • Briggs A.H.
      • Sculpher M.J.
      • Claxton K.
      Decision Modelling for Health Economic Evaluation.
      Statistical and methodological choices can also introduce uncertainty about parameter values when it is not clear which choice of method or statistical distribution is preferred. Examples include the methods used to synthesize data from multiple sources, the type of survival distribution used to extrapolate study data, and the weighting scheme used to pool opinions elicited from multiple experts.
      • Aspinall W.P.
      • Cooke R.M.
      Quantifying scientific uncertainty from expert judgement elicitation.
      ,
      • O’Hagan A.
      • Buck C.E.
      • Daneshkhah A.
      • et al.
      Uncertain Judgements: Eliciting Expert Probabilities.
      Uncertainty in parameter values can also arise owing to missing data, poor-quality data, and study estimates that are biased or confounded.
      • Härkänen T.
      • Maljanen T.
      • Lindfors O.
      • Virtala E.
      • Knekt P.
      Confounding and missing data in cost-effectiveness analysis: comparing different methods.
      • Johnson M.L.
      • Crown W.
      • Martin B.C.
      • Dormuth C.R.
      • Siebert U.
      Good research practices for comparative effectiveness research: analytic methods to improve causal inference from nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report–Part III.
      • Welton N.J.
      • Ades A.E.
      • Carlin J.B.
      • Altman D.G.
      • Sterne J.A.C.
      Models for potentially biased evidence in meta-analysis using empirically based priors.
      • Turner R.M.
      • Spiegelhalter D.J.
      • Smith G.C.S.
      • Thompson S.G.
      Bias modelling in evidence synthesis.
      When the most appropriate technique for data analysis or synthesis is unclear and choices or assumptions are required, the choice of technique should be parametrized and uncertainty about the choice included in the VOI analysis. Guidelines exist to aid characterization of uncertainty about methodological choice.
      • Bilcke J.
      • Beutels P.
      • Brisson M.
      • Jit M.
      Accounting for methodological, structural, and parameter uncertainty in decision-analytic models: a practical guide.
      ,
      • Jackson C.H.
      • Sharples L.D.
      • Thompson S.G.
      Structural and parameter uncertainty in Bayesian cost-effectiveness models.

      Good Practice Recommendation 1

      Uncertainty in parameter input values should be characterized using probability distributions, and any dependency between parameters represented by a joint, correlated probability distribution.

      Structural uncertainty

      A model’s structure relies on scientific judgments or assumptions about the underlying decision problem. Because the model structure, or functional form, is an approximation of real-world processes and relationships, the choice of model structure gives rise to structural uncertainty as a result of uncertain model error.
      • Briggs A.H.
      • Weinstein M.C.
      • Fenwick E.
      • Karnon J.
      • Sculpher M.
      • Paltiel D.
      Model parameter estimation and uncertainty analysis: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-6.
      ,
      • Strong M.
      • Oakley J.E.
      When is a model good enough? Deriving the expected value of model improvement via specifying internal model discrepancies.
      Quantifying structural uncertainty is difficult and is therefore often ignored; ignoring it is equivalent to assuming that the model structure is perfect.
      Where possible, structural uncertainty should be characterized. Several methods for handling structural uncertainty have been described in the literature. These include (1) scenario analysis (reporting of alternative models based on different plausible structural assumptions
      • Jones L.
      • Griffin S.
      • Palmer S.
      • Main C.
      • Orton V.
      • Sculpher M.
      • et al.
      Clinical effectiveness and cost-effectiveness of clopidogrel and modified-release dipyridamole in the secondary prevention of occlusive vascular events: a systematic review and economic evaluation.
      ), (2) model structure parametrization (adding parameters to the model that define alternative structural choices
      • Bojke L.
      • Claxton K.
      • Sculpher M.
      • Palmer S.
      Characterizing structural uncertainty in decision analytic models: a review and application of methods.
      ), (3) model averaging (weighting the outcomes from a set of plausible models based on fit to observed data or expert opinion
      • Jackson C.H.
      • Thompson S.G.
      • Sharples L.D.
      Accounting for uncertainty in health economic decision models by using model averaging.
      ,
      • Price M.J.
      • Welton N.J.
      • Briggs A.H.
      • Ades A.E.
      Model averaging in the presence of structural uncertainty about treatment effects: influence on treatment decision and expected value of information.
      ), or (4) model discrepancy analysis (the direct quantification of uncertainty about the difference between the model evaluated at its “true” input values and the true value of the output quantity, either by calibration to external data or through expert elicitation
      • Strong M.
      • Oakley J.E.
      When is a model good enough? Deriving the expected value of model improvement via specifying internal model discrepancies.
      ,
      • Strong M.
      • Oakley J.E.
      • Chilcott J.
      Managing structural uncertainty in health economic decision models: a discrepancy approach.
      ).

      Good Practice Recommendation 2

      Clearly describe any important model structural uncertainties. Where possible, structural uncertainty should be quantified and included in the VOI analysis.

      Probabilistic analysis

      Once uncertainty has been characterized, a complete assessment of uncertainty in all parameters, including parametrized structural and analytical choices, is achieved through Monte Carlo probabilistic analysis (referred to as “probabilistic sensitivity analysis” in the health economics literature). Probabilistic analysis is used to propagate the impact of uncertainty in model input parameters through to uncertainty about model outputs. This involves repeatedly sampling values at random from each of the parameter input distributions and running the model, using the selected set of values, to provide a corresponding set of model outcomes of interest for each decision option being evaluated. The results of many sampled simulations allow for estimation of the expected (mean) model outputs for each decision option and the uncertainty around these outputs.
      • Briggs A.H.
      • Sculpher M.J.
      • Claxton K.
      Decision Modelling for Health Economic Evaluation.
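      The probabilistic analysis described above can be sketched as follows. This is a minimal illustration, not a recommended model: the 2-option decision problem, the net-monetary-benefit utility function, and all distributions and values are hypothetical assumptions.

```python
import numpy as np

# Minimal probabilistic-analysis sketch for a hypothetical 2-option decision
# model; all distributions and values below are illustrative assumptions.
rng = np.random.default_rng(1)
N = 10_000    # number of Monte Carlo samples
lam = 20_000  # willingness-to-pay threshold (monetary units per QALY)

# Repeatedly sample the uncertain input parameters.
qaly_gain = rng.normal(0.10, 0.04, N)   # incremental QALYs, option B vs A
extra_cost = rng.normal(1_000, 300, N)  # incremental cost, option B vs A

# Run the model for each sample to get the utility (net monetary benefit)
# of each decision option; option A is the reference with NMB = 0.
nmb = np.column_stack([np.zeros(N), lam * qaly_gain - extra_cost])

# Summarize: expected (mean) model output per option, and the option that
# maximizes expected utility with current knowledge.
expected_nmb = nmb.mean(axis=0)
best_option = int(np.argmax(expected_nmb))
print(expected_nmb, best_option)
```

      The full N × D matrix of simulated utilities, not just its means, carries the uncertainty information that the VOI measures below operate on.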

      Good Practice Recommendation 3

      Use probabilistic analysis to provide an appropriate quantification of uncertainty in model outputs.

      VOI analysis

      Decision-making with uncertainty

      Decision making with uncertainty involves choosing between alternative decision options based on imperfect information. In decision theory, a risk-neutral decision maker would choose the option that maximizes the expected payoff.
      • Von Neumann J.
      • Morgenstern O.
      Theory of Games and Economic Behavior.
      However, any decision made with uncertainty creates the potential for adverse consequences, as the expected payoff of the chosen option may not be realized in practice. Some decision makers may be averse to this risk, preferring an option with a small guaranteed payoff to an uncertain outcome with a larger expected payoff.
      • Basu A.
      • Meltzer D.
      Decision criterion and value of information analysis: optimal aspirin dosage for secondary prevention of cardiovascular events.
      ,
      • Nadiminti R.
      • Mukhopadhyay T.
      • Kriebel C.H.
      Risk aversion and the value of information.
      Careful selection of the attitude to risk that aligns with the decision maker’s perspective is required for VOI analysis.
      • Koffijberg H.
      • Knies S.
      • Janssen M.P.
      The impact of decision makers’ constraints on the outcome of value of information analysis.
      In this report, VOI analysis is presented from the perspective of a risk-neutral decision maker. It follows that a decision based on expectation is used to establish the decision option that offers the maximum expected payoff based on current knowledge. VOI analysis is used to address the question of whether further research is needed to reduce the uncertainty in the decision.

      Key Concepts, Definitions, and Notation

      VOI starts by assuming that a decision maker is faced with a set of mutually exclusive decision options, indexed by d in the decision space D. Next, it is assumed that a decision model, denoted U(d,θ), predicts the utility for decision option d given p uncertain parameters θ = {θ_1, …, θ_p}. The uncertainty about the “true” unknown values of θ is represented by the joint probability distribution, π(θ).
      By specifying the model as a general utility function, the analysis can be tailored to any specific decision-making context by choosing an appropriate utility metric. In health technology assessment, in which decision options represent alternative treatment interventions, the utility function is often defined as net health benefit or net monetary benefit.
      The expected value of learning, with certainty, the “true” values of all model parameters θ (ie, eliminating all parameter uncertainty) is referred to as the expected value of perfect information (EVPI). The EVPI is equivalent to the expected cost of uncertainty associated with making the decision based on the current evidence.
      The expected value of acquiring new information about a subset of parameters of interest is used to identify the parameters that are important in driving the decision uncertainty. The set of parameters of interest is denoted by θ_i and the remaining complementary set of parameters by θ_c, such that together {θ_i, θ_c} = θ. The expected value of learning, with certainty, the parameters of interest θ_i is the expected value of partial perfect information (EVPPI) for θ_i (also known as the expected value of perfect parameter information).
      Perfect information about parameters is usually not achievable with a finite sample size, but it is possible to conduct a study to provide some information about the parameters. The expected value of a data collection exercise that will result in data X, where X will be informative for θ_i, is referred to as the expected value of sample information (EVSI). The EVPPI for θ_i is an upper limit on the EVSI for any study that is informative about θ_i.

      Optimum Decision Option With Current Knowledge

      With current knowledge, the best that a risk-neutral decision maker can do is to choose the decision option that gives the highest expected utility. The utility associated with this option is
      max_{d∈D} E_θ{ U(d,θ) }
      (1)


      where E_θ(·) represents the expectation (mean) taken with respect to π(θ).

      Expected Value of Perfect Information

      If all uncertainty about θ could be eliminated with perfect information, the decision maker would know the values of all parameters θ with certainty and, therefore, would choose the option that maximizes the utility, conditional on knowing θ. This has utility:
      max_{d∈D} U(d,θ)
      (2)


      However, when a decision is made about whether to conduct further research, θ is not known. Therefore, the expected value of a decision when uncertainty is resolved with perfect information is found by averaging the maximized utility over the joint distribution of θ. This is the expectation of (2), that is,
      E_θ{ max_{d∈D} U(d,θ) }
      (3)


      The EVPI is the difference between the expected value of a decision made with perfect information and the expected value of a decision made with current knowledge, that is, the difference between (3) and (1),
      EVPI = E_θ{ max_{d∈D} U(d,θ) } − max_{d∈D} E_θ{ U(d,θ) }
      (4)


      Expected Value of Partial Perfect Information

      If all uncertainty about a subset of parameters, θ_i, could be resolved with perfect information, the decision maker would know the “true” values of θ_i with certainty when choosing between the alternative decision options. However, the values of the remaining (complementary) parameters θ_c remain uncertain. Therefore, the decision maker selects the option that maximizes expected utility, conditional on the values of θ_i. This has utility:
      max_{d∈D} E_{θ_c|θ_i}{ U(d, θ_i, θ_c) }
      (5)


      where E_{θ_c|θ_i}(·) represents the expectation taken with respect to π(θ_c | θ_i). When the decision about conducting further research to provide information about these parameters is made, the values of θ_i are unknown. Therefore, the expectation of (5) is computed:
      E_{θ_i}[ max_{d∈D} E_{θ_c|θ_i}{ U(d, θ_i, θ_c) } ]
      (6)


      The EVPPI for θ_i is the difference between (6) and (1):
      EVPPI(θ_i) = E_{θ_i}[ max_{d∈D} E_{θ_c|θ_i}{ U(d, θ_i, θ_c) } ] − max_{d∈D} E_θ{ U(d,θ) }
      (7)


      EVPI and EVPPI can be multiplied by the size of the beneficiary population to give population EV(P)PI values. The population EV(P)PI provides an expected upper bound on the value of further research that would eliminate uncertainty about all (or subsets of) parameters. A population EV(P)PI that is less than the estimated costs of any research study is a sufficient condition for establishing that research is not of value. A population EV(P)PI that is greater than the estimated cost of the research study is a necessary, but not sufficient, condition for establishing that research is potentially of value. To establish a sufficient condition for further research, the costs of conducting the new study must also be considered.

      Expected Value of Sample Information

      In the absence of perfect information, if data X were to become available, the decision maker would choose the option that maximizes the utility, conditional on knowing X. This has utility:
      max_{d∈D} E_{θ|X}{ U(d,θ) }
      (8)


      However, the data X are not collected when the decision to conduct further research is made. Therefore, the expected value of a decision taken with sample information is obtained by averaging the maximized expected utility of (8):
      E_X[ max_{d∈D} E_{θ|X}{ U(d,θ) } ]
      (9)


      The EVSI for the data collection exercise that yields X is the difference between (9) and (1):
      EVSI = E_X[ max_{d∈D} E_{θ|X}{ U(d,θ) } ] − max_{d∈D} E_θ{ U(d,θ) }
      (10)


      As with EVPI and EVPPI, EVSI can be multiplied by the size of the beneficiary population to yield a population EVSI value.

      Expected Net Benefit of Sampling

      The difference between the population EVSI value and the cost of the data collection exercise is the expected net benefit of sampling (ENBS). The ENBS is a measure of the net value of any particular study. Under the assumption that the proposed study is relevant only to the decision problem at hand and has no wider value, an ENBS ≥ 0 is a necessary condition for conducting the study. The ENBS is powerful for guiding choices about study characteristics such as sample size and length of follow-up, with the optimal design being the one that maximizes the ENBS.
      • Conti S.
      • Claxton K.
      Dimensions of design space: a decision-theoretic approach to optimal research design.
      ,
      • Tuffaha H.W.
      • Gordon L.G.
      • Scuffham P.A.
      Value of information analysis informing adoption and research decisions in a portfolio of health care interventions.
      The costs of research not only include the costs of the study itself but also the opportunity costs to individuals while the research is underway; for example, some participants will receive a nonoptimal intervention during the study.
      • McKenna C.
      • Claxton K.
      Addressing adoption and research design decisions simultaneously: the role of value of sample information analysis.
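      The arithmetic behind the ENBS decision rule can be illustrated with a small sketch; every figure below is a hypothetical assumption chosen only to show the calculation.

```python
# Illustrative ENBS calculation; every number below is a hypothetical
# assumption, not an estimate from any real study.
evsi_per_person = 25.0      # EVSI per beneficiary, in monetary units
population = 100_000        # beneficiaries over the decision relevance horizon
study_cost = 1_500_000      # direct cost of the proposed study
opportunity_cost = 400_000  # value forgone by participants during the study

# Population EVSI, then ENBS = population EVSI minus all research costs.
population_evsi = evsi_per_person * population
enbs = population_evsi - study_cost - opportunity_cost
print(enbs)  # 600000.0 -> ENBS > 0, so this necessary condition is met
```

      Repeating this calculation across candidate sample sizes or follow-up lengths, and selecting the design with the largest ENBS, is how the optimal study design is identified.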

      Estimation of VOI Measures

      EVPI Computation

      In the simplest case of a 2-decision option problem in which the difference in utility between options is assumed to be normally distributed, an exact analytic expression for EVPI exists.
      • Eckermann S.
      • Willan A.R.
      Expected value of information and decision making in HTA.
      ,
      • Willan A.R.
      • Pinto E.M.
      The value of information and optimal clinical trial design.
      However, for most problems, an analytic solution cannot easily be derived, and sampling-based methods are required. For models that generate a nonlinear relationship between inputs and outputs, such as those for which E_θ{ U(d,θ) } ≠ U{ d, E_θ(θ) }, a deterministic analysis, in which the model is evaluated at the mean values of its parameters, will generate an incorrect estimate of expected utility. Monte Carlo probabilistic analysis is used, which approximates E_θ{ U(d,θ) } by
      (1/N) Σ_{n=1}^{N} U(d, θ^(n))
      (11)


      where θ^(n), n = 1, …, N, are samples drawn from the joint distribution π(θ). Monte Carlo simulation is also used to approximate the first term in the EVPI expression, E_θ{ max_{d∈D} U(d,θ) }, via
      (1/N) Σ_{n=1}^{N} max_{d∈D} U(d, θ^(n))
      (12)


      Expression (12) can be computed using the single set of N samples from π(θ) that are used to approximate the baseline expected utility of (11). Therefore, the computation of EVPI is a single-loop Monte Carlo scheme and does not require additional sampling beyond that required for a probabilistic analysis (here, “loop” refers to the for-loop programming construct used to execute a set of instructions repeatedly). Algorithm 1 describes the single-loop scheme for computing EVPI.
      Single-Loop Monte Carlo Scheme for Computing EVPI
      • 1.
        Sample a value from the distribution of the uncertain parameters.
      • 2.
        Evaluate the utility function for each decision option using the parameter values generated in step 1. Store the values.
      • 3.
        Repeat steps 1 to 2 for N samples (eg, 10 000). This is the probabilistic analysis sample.
      • 4.
        Calculate the expected (mean) utility value of the N samples for each decision option.
      • 5.
        Choose the maximum of the expected utility values in step 4 and store. This is the expected utility with current knowledge.
      • 6.
        Calculate the maximum utility of the decision options for each of the N samples generated in step 3.
      • 7.
        Calculate the mean of the N maximum utilities generated in step 6. This is the expected utility when uncertainty is resolved with perfect information.
      • 8.
        Calculate the EVPI as the difference between the expected utility when uncertainty is resolved with perfect information (step 7) and the expected utility with current knowledge (step 5).
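      Algorithm 1 can be sketched in a few lines once a probabilistic-analysis sample is available; the `evpi` function below works on any N × D matrix of simulated utilities, and the sample used to exercise it is hypothetical.

```python
import numpy as np

def evpi(nmb):
    """EVPI from an N x D matrix of simulated utilities (column per option)."""
    current = nmb.mean(axis=0).max()  # steps 4-5: best expected utility now
    perfect = nmb.max(axis=1).mean()  # steps 6-7: expected utility, perfect info
    return perfect - current          # step 8: non-negative by construction

# Hypothetical probabilistic-analysis sample for two options
# (option A is a zero-NMB reference; option B is uncertain).
rng = np.random.default_rng(42)
N = 10_000
nmb = np.column_stack([np.zeros(N), rng.normal(500, 4_000, N)])
print(evpi(nmb))
```

      If one option has the higher utility in every simulation, `evpi` returns 0: there is no decision uncertainty left for perfect information to resolve.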

      EVPPI Computation

      An analytic solution for EVPPI rarely exists, and sampling-based methods are required. The first term in the EVPPI expression (7) contains a nested expectation, which means that the Monte Carlo approach requires a nested double-loop solution:
      E_{θ_i}[ max_{d∈D} E_{θ_c|θ_i}{ U(d, θ_i, θ_c) } ] ≈ (1/K) Σ_{k=1}^{K} [ max_{d∈D} (1/J) Σ_{j=1}^{J} U(d, θ_i^(k), θ_c^(j,k)) ]
      (13)


      For the parameters of interest, k = 1, …, K samples θ_i^(k) are drawn from the distribution π(θ_i) in the outer loop of simulation. An inner loop of simulation is then used to sample from the complementary parameters, conditional on the value of θ_i^(k). For the complementary parameters, j = 1, …, J samples θ_c^(j,k) are drawn from the conditional distribution π(θ_c | θ_i^(k)). If θ_i and θ_c are independent, then sampling from the conditional distribution π(θ_c | θ_i^(k)) reduces to sampling from the marginal distribution π(θ_c). Algorithm 2 describes the double-loop scheme for estimating EVPPI.
      Double-Loop Monte Carlo Scheme for Computing EVPPI
      • 1.
        Sample a value from the distribution(s) of the target parameter(s) of interest.
      • 2.
        Sample a value from the distributions of the remaining (complementary) uncertain parameters, conditional on the value of the target parameter(s) sampled in step 1. If the target and complementary parameters are independent, the sample for this step can be drawn from the prior distribution of the complementary parameters.
      • 3.
        Evaluate the utility function for each decision option using the parameter values generated in steps 1 and 2, and store the resulting utility values.
      • 4.
        While holding the parameter value from step 1 constant, repeat steps 2 and 3 for J samples. This represents the inner loop of simulation.
      • 5.
        Calculate the mean of the utility values across all J samples for each decision option and store.
      • 6.
        Repeat steps 1 to 5 for K values from the distribution of the target parameter(s) (step 1) and store the outputs from step 5. This represents the outer loop of simulation.
      • 7.
        Calculate the mean utility for each decision option across all K samples of the outer loop stored in step 6.
      • 8.
        Choose the maximum of the mean utilities calculated in step 7 and store. This is the expected utility with current knowledge about the target parameter(s) of interest.
      • 9.
        Calculate the maximum utility of the decision options (ie, the maximum of the inner loop means) for each of the K outer-loop samples stored in step 6.
      • 10.
        Calculate the mean of the K maximum utility values generated in step 9. This yields the expected utility when uncertainty is resolved with perfect information about the target parameter(s) of interest.
      • 11.
        Calculate the EVPPI as the difference between the expected utility when uncertainty is resolved with perfect information about the parameter(s) of interest (step 10) and the expected utility with current knowledge (step 8).
      Note that the selection of the sample size of the inner loop (J) is crucial, as the double-loop EVPPI computation can provide biased estimates when the sample size is small.
      • Oakley J.E.
      • Brennan A.
      • Tappenden P.
      • Chilcott J.
      Simulation sample sizes for Monte Carlo partial EVPI calculations.
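      Algorithm 2 can be sketched as follows for a hypothetical model in which the parameter of interest (a treatment effect) is independent of the complementary parameter (a cost), so the inner loop samples from the marginal distribution; the model, distributions, and loop sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 20_000
K, J = 500, 500  # outer/inner loop sizes; a small J biases the estimate

def utility(theta_i, theta_c):
    # Hypothetical 2-option model: NMB of option B vs a zero-NMB reference A.
    return np.column_stack([np.zeros_like(theta_c), lam * theta_i - theta_c])

# Expected utility with current knowledge, from one large probabilistic sample.
ti = rng.normal(0.05, 0.02, 100_000)
tc = rng.normal(1_000, 300, 100_000)
current = utility(ti, tc).mean(axis=0).max()

# Double loop: outer over theta_i, inner over theta_c (independent here,
# so the inner loop draws from the marginal distribution of theta_c).
outer_means = np.empty((K, 2))
for k in range(K):
    ti_k = rng.normal(0.05, 0.02)     # step 1: outer draw of theta_i
    tc_j = rng.normal(1_000, 300, J)  # step 2: inner draws of theta_c
    outer_means[k] = utility(np.full(J, ti_k), tc_j).mean(axis=0)  # steps 3-5

perfect = outer_means.max(axis=1).mean()  # steps 9-10
evppi = perfect - current                 # step 11
print(evppi)
```

      Each outer iteration re-runs the model J times, so the total model evaluations scale as K × J, which is why the single-loop shortcut below matters for expensive models.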
      Nested double-loop sampling schemes can be computationally expensive. One of the key determinations for reducing the computational burden is whether the model is linear or multilinear in the complementary parameters θ_c. A model is linear in complementary parameters θ_c1 and θ_c2 if it can be written as a sum of terms in which each complementary parameter enters linearly, for example, U(θ) = θ_c1·θ_i1² + θ_c2·θ_i2, where θ_i1 and θ_i2 are parameters of interest. A model is multilinear in the complementary parameters if it can be written in sum-product form of the complementary parameters, for example, U(θ) = θ_c1·θ_c2·θ_i1² + θ_c3·θ_i1·θ_i2. If these conditions hold (and there is no correlation between the complementary parameters that are multiplied together), the double-loop sampling scheme can be replaced by a single loop, in which the mean values of the complementary parameters are used to avoid the need for the inner loop of simulation.
      The general model forms for which a single-loop approach is justified are described elsewhere.
      • Madan J.
      • Ades A.E.
      • Price M.
      • et al.
      Strategies for efficient computation of the expected value of partial perfect information.
      Where applicable, single-loop methods are to be preferred to reduce Monte Carlo error.
      • Oakley J.E.
      • Brennan A.
      • Tappenden P.
      • Chilcott J.
      Simulation sample sizes for Monte Carlo partial EVPI calculations.
      ,
      • Strong M.
      • Oakley J.E.
      • Brennan A.
      Estimating multiparameter partial expected value of perfect information from a probabilistic sensitivity analysis sample: a nonparametric regression approach.
      ,
      • Strong M.
      • Oakley J.E.
      • Brennan A.
      • Breeze P.
      Estimating the expected value of sample information using the probabilistic sensitivity analysis sample: a fast, nonparametric regression-based method.
      Algorithm 3 describes the single-loop Monte Carlo scheme for estimating EVPPI.
      Single-Loop Monte Carlo Scheme for Computing EVPPI
      • 1.
        Sample a value from the distribution of the target parameter(s) of interest.
      • 2.
        Evaluate the utility function for each decision option using the value for the target parameter(s) from step 1 and the mean values of the remaining uncertain parameters (or functions of them
        • Madan J.
        • Ades A.E.
        • Price M.
        • et al.
        Strategies for efficient computation of the expected value of partial perfect information.
        ). Store the values.
      • 3.
        Repeat steps 1 and 2 for N samples.
      • 4.
        Calculate the mean of the N utility values for each decision option.
      • 5.
        Follow steps 5 to 8 of the algorithm for computing EVPI (algorithm 1).
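Algorithm 3 can be sketched as follows for a hypothetical two-option model that is linear in the complementary parameter, so that plugging in its mean is valid; all names, distributions, and values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

def net_benefit(theta_i, theta_c):
    # Two decision options; linear in the complementary parameter theta_c.
    return np.column_stack([np.zeros_like(theta_i),
                            1000 * theta_i - 200 * theta_c])

N = 100_000
theta_i = rng.normal(0.1, 0.2, N)      # steps 1 and 3: sample the target parameter
theta_c_mean = 0.5                     # step 2: mean of the complementary parameter

u = net_benefit(theta_i, np.full(N, theta_c_mean))   # step 2, vectorized
eu_current = u.mean(axis=0).max()      # step 4: max of the mean utilities
eu_perfect = u.max(axis=1).mean()      # step 5: mean of the maximum utilities
evppi = eu_perfect - eu_current
```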
      EVPPI can also be computed using a regression-based method that uses a nonparametric, or other flexible regression, method to estimate the inner expectation of expression (6).
      • Strong M.
      • Oakley J.E.
      • Brennan A.
      Estimating multiparameter partial expected value of perfect information from a probabilistic sensitivity analysis sample: a nonparametric regression approach.
      The regression-based method requires only the single set of samples that is generated by the probabilistic analysis. Algorithm 4 describes the single-loop regression-based scheme for estimating EVPPI.
      Single-Loop Regression-Based Scheme for Computing EVPPI
      • 1.
        Generate the probabilistic analysis sample using steps 1-3 of the algorithm for computing EVPI (algorithm 1).
      • 2.
        For each of the decision options, regress the estimates of utility on the parameter values of the target parameter(s) of interest.
      • 3.
        Calculate the regression fitted values for each decision option.
      • 4.
        Follow steps 5-8 of the algorithm for computing EVPI (algorithm 1).
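A minimal sketch of algorithm 4 using the same hypothetical two-option model; a cubic polynomial stands in for the flexible nonparametric regression (in practice a generalized additive model or Gaussian process smoother is typically used), and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Step 1: the probabilistic analysis sample (both parameters uncertain).
N = 20_000
theta_i = rng.normal(0.1, 0.2, N)          # target parameter of interest
theta_c = rng.normal(0.5, 0.1, N)          # complementary parameter
u = np.column_stack([np.zeros(N), 1000 * theta_i - 200 * theta_c])

# Steps 2-3: regress each option's utility on theta_i; take the fitted values.
fitted = np.empty_like(u)
for d in range(u.shape[1]):
    coef = np.polynomial.polynomial.polyfit(theta_i, u[:, d], 3)
    fitted[:, d] = np.polynomial.polynomial.polyval(theta_i, coef)

# Step 4: EVPI-style calculation applied to the fitted values.
evppi = fitted.max(axis=1).mean() - fitted.mean(axis=0).max()
```

Note that only the single probabilistic analysis sample is used; no inner loop is required.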
      A review of alternative methods for computing EVPPI is available elsewhere,
      • Heath A.
      • Manolopoulou I.
      • Baio G.
      A review of methods for analysis of the expected value of information.
      while Appendix Figure 1 (In Supplemental Materials found at https://doi.org/10.1016/j.jval.2020.01.004) provides guidance on the choice of computation method based on model features.

      Good Practice Recommendation 4

      When Using the Nested Double-Loop Method to Compute EVPPI, Choose Inner- and Outer-Loop Simulation Sizes to Ensure Acceptable Bias and Precision

      Good Practice Recommendation 5

      When Using the Single-Loop Methods to Compute EVPPI, Ensure the Underlying Assumptions of the Method Hold

      EVSI computation

      EVSI can be computed analytically if the difference in utility between decision options is assumed to be normally distributed and the proposed data collection exercise is expected to lead to a known reduction in the variance of the incremental utility.
      • Willan A.R.
      • Pinto E.M.
      The value of information and optimal clinical trial design.
      ,
      • Wilson E.C.
      A practical guide to value of information analysis.
      However, an analytic solution cannot easily be derived for most problems, and sampling-based methods are usually required.
      For sampling-based methods, EVSI relies on the generation of plausible data sets from a proposed new study. The parameters θ can usually be partitioned into 2 sets: a set θi for which judgments will be informed by the newly collected data, X, and a complementary set θc such that {θi, θc} = θ. Plausible data sets can be obtained by first sampling values θi(k), k = 1, …, K, from the prior distribution of the model parameters, π(θi). Then, conditional on each value θi(k), a sample from the likelihood function (ie, the probability distribution for new data, conditional on the parameters), X(k) ~ π(X | θi(k)), is generated. The 2 sources of information are then combined to form a posterior distribution for the model parameters given the new sample data and the prior knowledge about the model parameters.
      When defining the likelihood for data generation, consideration should be given to how the data from the study would actually be analyzed to inform the parameters θi. For example, the likelihood that is expected to be used in the statistical analysis of the study data is a natural candidate for the likelihood used to generate plausible data sets. The analyst should also consider any mechanisms that may result in corrupted, biased, or missing (eg, censored) data.
      When the likelihood is chosen such that the updated posterior distribution is in the same family as the prior (eg, a beta prior updated by binomially distributed data results in a beta posterior), the prior is called a conjugate prior for the likelihood function. Conjugacy has computational advantages because it results in a known posterior distribution that is easy to sample from. The likelihood function that results in conjugacy is often (but not always) the natural choice for the data-generating mechanism.
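For example, a beta prior on a response probability combined with a binomial likelihood gives a beta posterior in closed form. The prior parameters and study size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

a, b = 4, 6                          # Beta(4, 6) prior: mean 0.4
n = 50                               # proposed study sample size

p = rng.beta(a, b)                   # draw a response probability from the prior
x = rng.binomial(n, p)               # plausible data set: x successes out of n

# Conjugate update: beta prior + binomial likelihood -> beta posterior.
a_post, b_post = a + x, b + n - x
posterior_mean = a_post / (a_post + b_post)
```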
      The first term in the EVSI expression contains a nested expectation, which means that the basic Monte Carlo approach to EVSI requires a nested double-loop solution:
      EX[ max d∈D Eθ|X{U(d, θ)} ] ≈ (1/K) Σ k=1…K [ max d∈D (1/J) Σ j=1…J U(d, θ(j,k)) ]
      (14)


      where parameters θ(j,k), j = 1, …, J, are sampled from the posterior distribution π(θ | X(k)) in an inner loop, conditional on samples X(k), k = 1, …, K, in an outer loop.
      Algorithm 5 describes the double-loop Monte Carlo scheme for estimating EVSI.
      Double-Loop Monte Carlo Scheme for Computing EVSI
      • 1.
        Define the proposed study design (sample size, length of follow-up etc). Determine the data-generating distribution (the likelihood) under this design.
      • 2.
        Sample a value from the prior distribution of the parameter(s) that will be informed by new data.
      • 3.
        Sample a plausible data set from the distribution defined in step 1, conditional on the value of the target parameter(s) sampled in step 2.
      • 4.
        Update the prior distribution of the target parameter(s) with the plausible data set from step 3 to form the posterior distribution for the target parameter(s). Sample a value from this posterior distribution, which may require Markov chain Monte Carlo sampling if the prior and likelihood are not conjugate.
      • 5.
        Sample a value from the prior distribution of the remaining uncertain parameters.
      • 6.
        Evaluate the utility function for each decision option using the parameter values from steps 4 and 5 and store the results.
      • 7.
        Repeat steps 4 to 6 J times. This represents the inner loop of simulation.
      • 8.
        Calculate the mean of the utility values across all J samples for each decision option in step 7 and store.
      • 9.
        Repeat steps 2 to 8 for K values from the prior distribution of the parameters. This represents the outer loop of simulation.
      • 10.
        Calculate the mean utility values for each decision option across all K samples of the output stored in step 9.
      • 11.
        Choose the maximum of the expected utility values in step 10 and store. This is the expected utility with current knowledge.
      • 12.
        Calculate the maximum utility of the decision options (ie, the maximum of the inner loop means) for each of the K samples of the output stored in step 9.
      • 13.
        Calculate the mean of the K maximum utility values generated in step 12. This is the expected utility with new sample information about the target parameter(s) of interest.
      • 14.
        Calculate the EVSI as the difference between the expected utility with new sample information (step 13) and the expected utility with current knowledge (step 11).
      • 15.
        Repeat steps 1 to 14 to calculate EVSI for different study designs (eg, studies with different sample sizes or lengths of follow-up).
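Algorithm 5 can be sketched in a conjugate beta-binomial setting; the two-option net-benefit function, prior, study size, and loop sizes are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

a, b, n = 4, 6, 50                   # beta prior and proposed binomial study
K, J = 400, 400                      # outer (data sets) / inner (posterior) sizes

def net_benefit(p):
    # Option 1 is worthwhile only if the response probability exceeds 0.4.
    return np.column_stack([np.zeros_like(p), 1000 * (p - 0.4)])

inner_means_max = np.empty(K)
utilities = []
for k in range(K):
    p_k = rng.beta(a, b)                             # step 2: prior draw
    x_k = rng.binomial(n, p_k)                       # step 3: plausible data set
    p_post = rng.beta(a + x_k, b + n - x_k, J)       # step 4: conjugate posterior
    u = net_benefit(p_post)                          # steps 5-7 (no other params)
    utilities.append(u)
    inner_means_max[k] = u.mean(axis=0).max()        # steps 8 and 12

eu_sample = inner_means_max.mean()                   # step 13
eu_current = np.vstack(utilities).mean(axis=0).max() # steps 10-11
evsi = eu_sample - eu_current                        # step 14
```

Step 15 would repeat the whole calculation over a grid of sample sizes n.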
      As with EVPPI, a key consideration for reducing the computational burden of EVSI is whether the model is linear or multilinear in either θi or θc (or both). For EVSI, the computation can also be reduced if an analytic expression exists for the posterior mean Eθi|X(θi) given the new data. If these conditions hold, the double-loop scheme can be replaced with a single loop in which the mean values of the posterior distribution for the parameter(s) of interest are used.
      • Ades A.E.
      • Lu G.
      • Claxton K.
      Expected value of sample information calculations in medical decision modeling.
      ,
      • Welton N.J.
      • Madan J.J.
      • Caldwell D.M.
      • Peters T.J.
      • Ades A.E.
      Expected value of sample information for multi-arm cluster randomized trials with binary outcomes.
      Algorithm 6 describes the single-loop Monte Carlo scheme for estimating EVSI.
      Single-Loop Monte Carlo Scheme for Computing EVSI
      • 1.
        Define the proposed study design (sample size, length of follow-up, etc). Determine the data-generating distribution (the likelihood) under this design.
      • 2.
        Sample a value from the prior distribution of the parameter(s) that will be informed by new data.
      • 3.
        Sample a plausible data set from the distribution defined in step 1, conditional on the value of the parameter(s) sampled in step 2.
      • 4.
        Update the prior distribution of the target parameter(s) of interest with the new data in step 3 to form the posterior distribution. Analytically compute the expectation (mean value) of this posterior distribution. This will be possible if the prior and likelihood distributions are conjugate.
      • 5.
        Evaluate the utility function for each decision option using the posterior mean estimate of the target parameter(s) and the mean values of the remaining uncertain parameters. Store the values.
      • 6.
        Repeat steps 2 to 5 for N samples from the prior distribution of the target parameter(s) of interest.
      • 7.
        Calculate the mean utility values for each decision option across all N samples of the output stored in step 5.
      • 8.
        Choose the maximum of the expected utility in step 7 and store. This is the expected utility with current knowledge about the target parameter(s) of interest.
      • 9.
        Calculate the maximum utility of the decision options for each of the N samples of the output stored in step 5.
      • 10.
        Calculate the mean of the N maximum utility values generated in step 9. This is the expected utility with new sample information about the target parameter(s) of interest.
      • 11.
        Calculate the EVSI as the difference between the expected utility with new sample information (step 10) and the expected utility with current knowledge (step 8).
      • 12.
        Repeat steps 1 to 11 to calculate EVSI for different study designs (eg, studies with different sample sizes or lengths of follow-up).
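Because the beta-binomial posterior mean is available analytically, algorithm 6 reduces to a single loop in that setting; the model and numbers below remain purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

a, b, n = 4, 6, 50                   # beta prior and proposed binomial study
N = 200_000

def net_benefit(p):
    return np.column_stack([np.zeros_like(p), 1000 * (p - 0.4)])

p = rng.beta(a, b, N)                # step 2: prior draws
x = rng.binomial(n, p)               # step 3: plausible data sets
post_mean = (a + x) / (a + b + n)    # step 4: analytic posterior mean

u = net_benefit(post_mean)           # step 5 (no other uncertain parameters)
eu_current = u.mean(axis=0).max()    # steps 7-8
eu_sample = u.max(axis=1).mean()     # steps 9-10
evsi = eu_sample - eu_current        # step 11
```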
      Several other methods for computing EVSI exist. As with EVPPI, EVSI can be computed directly from the probabilistic analysis sample using regression-based methods.
      • Strong M.
      • Oakley J.E.
      • Brennan A.
      • Breeze P.
      Estimating the expected value of sample information using the probabilistic sensitivity analysis sample: a fast, nonparametric regression-based method.
      ,
      • Tuffaha H.W.
      • Strong M.
      • Gordon L.G.
      • Scuffham P.A.
      Efficient value of information calculation using a nonparametric regression approach: an applied perspective.
      A nonparametric regression is used to estimate the inner expectation of the first term of the EVSI expression (10), and the method becomes a single loop. The method relies on there being a low-dimensional summary statistic for the new data s(X), a good choice being the summary statistic that would be reported if the study were actually conducted. The method makes the assumption that the relationship between s(X) and the conditional expectation Eθ|s(X){U(d,θ)} is smooth, which is likely to be a reasonable assumption in most models.
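A sketch of the regression-based EVSI scheme in the same hypothetical beta-binomial setting: the summary statistic s(X) is the number of successes, and a cubic polynomial again stands in for the nonparametric regression. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

a, b, n = 4, 6, 50
N = 50_000

p = rng.beta(a, b, N)                                  # probabilistic analysis sample
u = np.column_stack([np.zeros(N), 1000 * (p - 0.4)])   # utilities per option

x = rng.binomial(n, p)       # one simulated data set per sample; s(X) = x

# Regress each option's utility on the summary statistic; take fitted values.
fitted = np.empty_like(u)
for d in range(u.shape[1]):
    coef = np.polynomial.polynomial.polyfit(x, u[:, d], 3)
    fitted[:, d] = np.polynomial.polynomial.polyval(x, coef)

evsi = fitted.max(axis=1).mean() - fitted.mean(axis=0).max()
```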
      EVSI can also be approximated using importance sampling, with only a single set of prior parameter samples and the corresponding probabilistic analysis sample.
      • Menzies N.A.
      An efficient estimator for the expected value of sample information.
      This requires repeated evaluation of the likelihood function, and the scheme is expected to be most useful when the utility function is computationally expensive compared with the likelihood function. More recently, a Gaussian approximation method, which has similarities to the regression-based scheme, and a moment-matching method have been proposed.
      • Jalal H.
      • Alarid-Escudero F.
      A Gaussian approximation approach for value of information analysis.
      ,
      • Heath A.
      • Manolopoulou I.
      • Baio G.
      Efficient Monte Carlo estimation of the expected value of sample information using moment matching.
      These methods have the advantage that, once the EVSI has been computed for a single proposed study, the EVSI values for a range of different study sample sizes can be easily computed. Given the different methods available for computing EVSI, Appendix Figure 2 (in the Supplemental Materials found at https://doi.org/10.1016/j.jval.2020.01.004) provides guidance on the choice of EVSI computation method based on model features.

      Good Practice Recommendation 6

      Choose the Data-Generating Distribution for the EVSI Computation to Reflect How the Data Would Be Analyzed if the Proposed New Study Were Conducted

      Good Practice Recommendation 7

      When Simulating Data Sets, Model the Processes Expected to Result in Censoring, Missing Data, and Measurement Bias to Mimic the True Data-Generating Process

      Reporting of results

      Information generated by research is used to inform decisions for the population of individuals who could potentially benefit from the information. The population value of information therefore depends on the size of the beneficiary population whose decision choice will be informed by the additional research (eg, the prevalent cohort with the disease or the future incident cohort) and on the time horizon over which the information generated by research is useful. The population VOI estimate is determined by multiplying the per-person VOI estimate by the discounted size of the beneficiary population over the anticipated time horizon:
      Population VOI = VOI per person × Σ t=0…T It / (1 + d)^t
      (15)


      where It is the incidence in time period t, T is the time horizon, and d is the discount rate for a single time period.
      • Philips Z.
      • Claxton K.
      • Palmer S.
      The half-life of truth: what are appropriate time horizons for research decisions?.
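Expression (15) amounts to multiplying the per-person VOI by a discounted count of the beneficiary population; the figures below are purely illustrative.

```python
voi_per_person = 120.0      # per-person VOI (eg, net monetary benefit units)
incidence = 10_000          # I_t: new beneficiaries per period, assumed constant
T = 10                      # time horizon in periods
d = 0.035                   # per-period discount rate

# Discounted beneficiary population summed over t = 0, ..., T.
discounted_population = sum(incidence / (1 + d) ** t for t in range(T + 1))
population_voi = voi_per_person * discounted_population
```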
      An estimate of the size of the beneficiary population is typically derived from epidemiological data. The benefits of future research are realized only when the study findings are reported.
      • Willan A.R.
      • Pinto E.M.
      The value of information and optimal clinical trial design.
      However, some study participants who are enrolled in the optimal arm of a research study will also receive the benefits of the optimal intervention while the study is conducted.
      • McKenna C.
      • Claxton K.
      Addressing adoption and research design decisions simultaneously: the role of value of sample information analysis.
      The size of the beneficiary population also depends on the perspective of the study and whether information might be generalizable to multiple jurisdictions.
      • Eckermann S.
      • Willan A.R.
      Globally optimal trial design for local decision making.
      Gradual uptake or implementation of research findings should also be considered when determining the size of the relevant population.
      • Grimm S.E.
      • Dixon S.
      • Stevens J.W.
      Assessing the expected value of research studies in reducing uncertainty and improving implementation dynamics.
      Estimating the time horizon, T, over which the additional evidence remains informative is more challenging. Information generated by research is not valuable indefinitely because future changes are expected to occur over time that affect the VOI.
      • Philips Z.
      • Claxton K.
      • Palmer S.
      The half-life of truth: what are appropriate time horizons for research decisions?.
      ,
      • Claxton K.
      • Palmer S.
      • Longworth L.
      • et al.
      Informing a decision framework for when NICE should recommend the use of health technologies only in the context of an appropriately designed programme of evidence development.
      The impact of these complex and uncertain processes is impossible to quantify with certainty, but some assessment is possible based on historical evidence and anticipated future changes (eg, patent expiration, upcoming innovations, and other evaluative research underway). The value of research should also be discounted over this time horizon so that more weight is given to decisions that are informed by the research in the near term and less weight given to decisions informed in the more distant future.
      VOI is expressed in units of utility, which is typically net health benefit or net monetary benefit when a cost-effectiveness model has been employed. Because both net health and monetary benefit depend on the valuation of health opportunity cost (as expressed by the cost-effectiveness threshold), VOI should be reported for explicit thresholds of interest or presented in graphical form as a function of the cost-effectiveness threshold. Appendix Figure 3 (in Supplemental Materials found at https://doi.org/10.1016/j.jval.2020.01.004) illustrates the presentation of EV(P)PI.
      Population EVSI should be reported in a similar way to EV(P)PI but with the additional reporting of information governing the research design (eg, sample size, allocation of participants within the study, length of follow-up, endpoints included in the design). This includes the reporting of the parameter prior distribution and likelihood function used to estimate EVSI. The costs of collecting the sample information should be clearly reported for the calculation of ENBS. This includes the fixed cost of the proposed research, the variable costs associated with the study design, and the expected opportunity costs while the research is underway.
      • McKenna C.
      • Claxton K.
      Addressing adoption and research design decisions simultaneously: the role of value of sample information analysis.
      Appendix Figure 4 (in Supplemental Materials found at https://doi.org/10.1016/j.jval.2020.01.004) illustrates the presentation of EVSI and ENBS.

      Good Practice Recommendation 8

      When Reporting VOI Results, Clearly State All Underlying Assumptions

      Other modeling considerations

      Minimal modeling

      Most commonly, VOI analysis is applied when a decision-analytic model is available to characterize uncertainty and the need for further evaluative research. However, many organizations responsible for making research prioritization decisions lack the time and resources to undertake formal decision modeling. In these circumstances, it may be necessary to adopt a minimal modeling approach, which allows for rapid estimation of the value of further research without the need for constructing a full disease and/or decision-analytic model.
      • Meltzer D.O.
      • Hoomans T.
      • Chung J.W.
      • Basu A.
      Minimal modeling approaches to value of information analysis for health research.
      ,
      • Bennette C.S.
      • Veenstra D.L.
      • Basu A.
      • Baker L.H.
      • Ramsey S.D.
      • Carlson J.J.
      Development and evaluation of an approach to using value of information analyses for real-time prioritization decisions within SWOG, a large cancer clinical trials cooperative group.
      Minimal modeling may be used as a substitute for full modeling when a clinical study is available that directly characterizes uncertainty in the comprehensive measures of outcome that are sufficient to inform the decision maker’s utility for all relevant decision options.
      • Meltzer D.O.
      • Hoomans T.
      • Chung J.W.
      • Basu A.
      Minimal modeling approaches to value of information analysis for health research.
      This is possible when
      • the clinical study captures all important differences in outcomes between the decision options being evaluated,
      • the endpoints that are important for the decision occur during the study, and
      • no age-specific competing causes of death or other events occur after the study ends.
      Clinical studies that report intermediate endpoints are also amenable to minimal modeling if intermediate outcomes can be mapped to comprehensive outcome measures using a simple model with a few parameters.
      Minimal modeling offers a practical means for estimating the value of further research quickly and offers a transparent and efficient method for setting research priorities.
      • Meltzer D.O.
      • Hoomans T.
      • Chung J.W.
      • Basu A.
      Minimal modeling approaches to value of information analysis for health research.
      ,
      • Bennette C.S.
      • Veenstra D.L.
      • Basu A.
      • Baker L.H.
      • Ramsey S.D.
      • Carlson J.J.
      Development and evaluation of an approach to using value of information analyses for real-time prioritization decisions within SWOG, a large cancer clinical trials cooperative group.
      Nevertheless, it has a number of notable limitations. First, minimal modeling may involve an oversimplification of complex clinical processes. The extent to which the approach adequately addresses the decision problem is important, and the analyst should make clear all the assumptions underpinning the analysis. Second, the EVPPI cannot be computed for quantities that are not parametrized within the model. Third, it is difficult to adapt a minimal model that is based on a specific study to address a different but related decision problem.
      • Meltzer D.O.
      • Hoomans T.
      • Chung J.W.
      • Basu A.
      Minimal modeling approaches to value of information analysis for health research.

      VOI for Endpoints Other Than Cost-Effectiveness

      Some decision-making bodies exclude economic considerations from their decision-making process and, instead, use a utility function based on health outcomes alone. VOI analysis may be applied directly to the results of standard meta-analysis (or a single study) on a specific outcome measure.
      • Claxton K.
      • Griffin S.
      • Koffijberg H.
      • McKenna C.
      How to estimate the health benefits of additional research and changing clinical practice.
      ,
      • McKenna C.
      • Griffin S.
      • Koffijberg H.
      • Claxton K.
      Methods to place a value on additional evidence are illustrated using a case study of corticosteroids after traumatic brain injury.
      This approach places the focus on an endpoint of interest (eg, the distribution of values describing uncertainty about the relative effect of an intervention on mortality). The VOI is then estimated in terms of that endpoint (eg, number of deaths avoided). Nevertheless, it makes VOI results difficult to compare across diverse decision problems.
      Importantly, VOI analysis is relevant to different types of healthcare systems and decision-making contexts. It should not be regarded as restricted to situations in which decision-analytic models or estimates of cost-effectiveness are available.

      Software Resources

      Decision-analytic models are implemented in a range of software, including spreadsheets, modeling programs such as TreeAge (TreeAge Software, Inc, Williamstown, MA) or SIMUL8 (SIMUL8 Corporation, Boston, MA), statistical environments such as R or Stata (StataCorp LLC, College Station, TX), or general purpose programming languages such as Python or C++. Whether the VOI analysis can be conducted in the same software used to implement the decision-analytic model depends on the choice of VOI computation method.
      Compared with spreadsheets (which are noted for their perceived transparency), programming languages provide faster execution times and vastly increased flexibility. The analyst must write code, but many programming languages have specialist libraries that can reduce this burden (eg, the BCEA
      • Baio G.
      • Berardi A.
      • Heath A.
      BCEA: Bayesian cost effectiveness analysis.
      and heemod
      • Filipović-Pierucci A.
      • Zarca K.
      • Durand-Zaleski I.
      Markov models for health economic evaluation: the R Package heemod. ArXiv e-prints.
      packages in R). Analysts can also use web tools such as the Sheffield Accelerated Value of Information app
      • Strong M.
      • Oakley J.E.
      • Brennan A.
      SAVI—Sheffield Accelerated Value of Information.
      and BCEAweb,
      • Baio G.
      • Berardi A.
      • Heath A.
      BCEA: Bayesian cost effectiveness analysis.
      ,
      • Baio G.
      • Berardi A.
      • Heath A.
      Bayesian cost-effectiveness analysis with the R package BCEA.
      an online version of the BCEA R package. These software solutions allow VOI measures to be computed quickly; however, the analyst should always ensure that the underlying assumptions of the methods hold when applying them and interpreting the results.

      Future Research Directions

      The following areas have been identified where future research in VOI is warranted.

      Optimizing the Value of Research to Reduce Structural Uncertainties

      Structural uncertainty is rarely quantified in model-based analysis. Not quantifying structural uncertainty implicitly assumes that the model structure is a correct representation of the real-world processes and relationships. VOI analysis for structural uncertainty has been explored previously by Strong and Oakley
      • Strong M.
      • Oakley J.E.
      When is a model good enough? Deriving the expected value of model improvement via specifying internal model discrepancies.
      and Bojke et al,
      • Bojke L.
      • Claxton K.
      • Sculpher M.
      • Palmer S.
      Characterizing structural uncertainty in decision analytic models: a review and application of methods.
      but methods in this area are underdeveloped.

      Optimizing Study Design

      The set of potential study designs for a given research problem may be large. The design space may contain a range of sample sizes, allocations across treatment arms, follow-up duration, stopping rules, and so forth.
      • Conti S.
      • Claxton K.
      Dimensions of design space: a decision-theoretic approach to optimal research design.
      Calculating EVSI for every combination of designs is likely to be computationally demanding,
      • Welton N.J.
      • Thom H.H.Z.
      Value of information: we’ve got speed, what more do we need?.
      and methods are needed to increase computational efficiency. A related challenge is EVSI computation for trials with adaptive designs, in which aspects of the trial design itself are conditional on the data simulated in the EVSI calculation. The sequence in which different types of research studies should be conducted also represents an area that has received little attention to date.
      • Griffin S.
      • Welton N.J.
      • Claxton K.P.
      Exploring the research decision space: the expected value of information for sequential research designs.

      Computation of EVSI in Complex Modeling Settings

      When evidence from a new research study informs functions of model parameters, more complex situations arise that increase the computational burden; dynamic transmission models are one example. EVSI computation also relies on the ability to generate plausible data sets from a distribution that reflects the data-generating process. This can be difficult if the process is complex (eg, when there is bias, censoring, missingness, data corruption, or measurement error).

      Identifying the Appropriate Time Horizon for VOI

      The “correct” time horizon for research decisions (expression 15) is unknown because it is a proxy for uncertain future changes.
      • Philips Z.
      • Claxton K.
      • Palmer S.
      The half-life of truth: what are appropriate time horizons for research decisions?.
      ,
      • Claxton K.
      • Palmer S.
      • Longworth L.
      • et al.
      Informing a decision framework for when NICE should recommend the use of health technologies only in the context of an appropriately designed programme of evidence development.
      Identifying the appropriate time horizon for research decisions and incorporating uncertainty in the time horizon is an area that has received little attention to date.

      Conclusions

      This second report of the ISPOR VOI Task Force provides good practice guidance in the form of detailed algorithms for estimating EVPI, EVPPI, and EVSI. It also provides information about efficient approaches and software available to support the implementation of VOI. Box 2 summarizes the good practice recommendations, for conducting and reviewing VOI analyses, presented throughout this report. The task force report also includes a glossary of 26 terms to assist readers (in Supplemental Materials found at https://doi.org/10.1016/j.jval.2020.01.004).
      ISPOR Value of Information Analysis Task Force Report’s Good Practice Recommendations for Conducting and Reporting a VOI Analysis
      • 1.
        Uncertainty in parameter input values should be characterized using probability distributions and any dependency between parameters represented by a joint, correlated probability distribution.
      • 2.
        Clearly describe any important model structural uncertainties. Where possible, structural uncertainty should be quantified and included in the VOI analysis.
      • 3.
        Use probabilistic analysis to provide an appropriate quantification of uncertainty in model outputs.
      • 4.
        When using the nested double-loop method to compute EVPPI, choose inner- and outer-loop simulation sizes to ensure acceptable bias and precision.
      • 5.
        When using the single-loop methods to compute EVPPI, ensure the underlying assumptions of the method hold.
      • 6.
        Choose the data-generating distribution for the EVSI computation to reflect how the data would be analyzed if the proposed new study were conducted.
      • 7.
        When simulating data sets, model the processes that are expected to result in censoring, missing data, and measurement bias in order to mimic the true data-generating process.
      • 8.
        When reporting VOI results, clearly state all underlying assumptions.

      Acknowledgments

      The co-authors thank all those who commented orally during four task force workshop and forum presentations at ISPOR conferences in the US and Europe in 2017 and 2018. The co-authors gratefully acknowledge the following 22 reviewers who generously contributed their time and expertise through submission of written comments on the task force reports: Renee Allard, Gianluca Baio, Alan Brennan, Talitha Feenstra, Aline Gauthier, Sean Gavin, Amer Hayat, Anna Heath, Chris Jackson, Jennifer Kibicho, Joanna Leśniowska, Ka Keat Lim, Amr Makady, Brett McQueen, Celine Pribil, Bram Ramaekers, Stephane Regnier, Josh Roth, Haitham Tuffaha, Remon van den Broek, Rick Vreman, Nicky Welton. Many thanks to Elizabeth Molsen-David at ISPOR for her continuous support from start to finish.

      Supplemental Material

      References

        • Fenwick E.
        • Steuten L.
        • Knies S.
        • et al.
        Value of information analysis for research decisions—an introduction: report 1 of the ISPOR Value of Information Analysis Emerging Good Practices Task Force.
        Value Health. 2020;
        • Briggs A.H.
        • Weinstein M.C.
        • Fenwick E.
        • Karnon J.
        • Sculpher M.
        • Paltiel D.
        Model parameter estimation and uncertainty analysis: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-6.
        Value Health. 2012; 15: 835-842
        • Briggs A.H.
        • Sculpher M.J.
        • Claxton K.
        Decision Modelling for Health Economic Evaluation.
        Oxford University Press, Oxford, UK; 2006
        • Aspinall W.P.
        • Cooke R.M.
        Quantifying scientific uncertainty from expert judgement elicitation.
        in: Rougier J. Sparks S. Hill L.J. Risk and Uncertainty Assessment for Natural Hazards. Cambridge University Press, Cambridge, UK; 2013: 64-99
        • O’Hagan A.
        • Buck C.E.
        • Daneshkhah A.
        • et al.
        Uncertain Judgements: Eliciting Expert Probabilities.
        John Wiley and Sons, Chichester, UK; 2006
        • Härkänen T.
        • Maljanen T.
        • Lindfors O.
        • Virtala E.
        • Knekt P.
        Confounding and missing data in cost-effectiveness analysis: comparing different methods.
        Health Econ Rev. 2013; 3: 8
        • Johnson M.L.
        • Crown W.
        • Martin B.C.
        • Dormuth C.R.
        • Siebert U.
        Good research practices for comparative effectiveness research: analytic methods to improve causal inference from nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report–Part III.
        Value Health. 2009; 12: 1062-1073
        • Welton N.J.
        • Ades A.E.
        • Carlin J.B.
        • Altman D.G.
        • Sterne J.A.C.
        Models for potentially biased evidence in meta-analysis using empirically based priors.
        J R Stat Soc Ser A. 2009; 172: 119-136
        • Turner R.M.
        • Spiegelhalter D.J.
        • Smith G.C.S.
        • Thompson S.G.
        Bias modelling in evidence synthesis.
        J R Stat Soc Ser A. 2009; 172: 21-47
        • Bilcke J.
        • Beutels P.
        • Brisson M.
        • Jit M.
        Accounting for methodological, structural, and parameter uncertainty in decision-analytic models: a practical guide.
        Med Decis Making. 2011; 31: 675-692
        • Jackson C.H.
        • Sharples L.D.
        • Thompson S.G.
        Structural and parameter uncertainty in Bayesian cost-effectiveness models.
        J R Stat Soc Ser C Appl Stat. 2010; 59: 233-253
        • Strong M.
        • Oakley J.E.
        When is a model good enough? Deriving the expected value of model improvement via specifying internal model discrepancies.
        SIAM/ASA Journal on Uncertainty Quantification. 2014; 2: 106-125
        • Jones L.
        • Griffin S.
        • Palmer S.
        • Main C.
        • Orton V.
        • Sculpher M.
        • et al.
        Clinical effectiveness and cost-effectiveness of clopidogrel and modified-release dipyridamole in the secondary prevention of occlusive vascular events: a systematic review and economic evaluation.
        Health Technol Assess. 2004; 8: 1-196
        • Bojke L.
        • Claxton K.
        • Sculpher M.
        • Palmer S.
        Characterizing structural uncertainty in decision analytic models: a review and application of methods.
        Value Health. 2009; 12: 739-749
        • Jackson C.H.
        • Thompson S.G.
        • Sharples L.D.
        Accounting for uncertainty in health economic decision models by using model averaging.
        J R Stat Soc Ser A Stat Soc. 2009; 172: 383-404
        • Price M.J.
        • Welton N.J.
        • Briggs A.H.
        • Ades A.E.
        Model averaging in the presence of structural uncertainty about treatment effects: influence on treatment decision and expected value of information.
        Value Health. 2011; 14: 205-218
        • Strong M.
        • Oakley J.E.
        • Chilcott J.
        Managing structural uncertainty in health economic decision models: a discrepancy approach.
        J R Stat Soc Ser C. 2011; 61: 25-45
        • Von Neumann J.
        • Morgenstern O.
        Theory of Games and Economic Behavior.
        Princeton University Press, Princeton, NJ; 1944
        • Basu A.
        • Meltzer D.
        Decision criterion and value of information analysis: optimal aspirin dosage for secondary prevention of cardiovascular events.
        Med Decis Making. 2018; 38: 427-438
        • Nadiminti R.
        • Mukhopadhyay T.
        • Kriebel C.H.
        Risk aversion and the value of information.
        Decis Support Syst. 1996; 16: 241-254
        • Koffijberg H.
        • Knies S.
        • Janssen M.P.
        The impact of decision makers’ constraints on the outcome of value of information analysis.
        Value Health. 2018; 21: 203-209
        • Conti S.
        • Claxton K.
        Dimensions of design space: a decision-theoretic approach to optimal research design.
        Med Decis Making. 2009; 29: 643-660
        • Tuffaha H.W.
        • Gordon L.G.
        • Scuffham P.A.
        Value of information analysis informing adoption and research decisions in a portfolio of health care interventions.
        Med Decis Making Policy Pract. 2016; 1: 1-11
        • McKenna C.
        • Claxton K.
        Addressing adoption and research design decisions simultaneously: the role of value of sample information analysis.
        Med Decis Making. 2011; 31: 853-865
        • Eckermann S.
        • Willan A.R.
        Expected value of information and decision making in HTA.
        Health Econ. 2007; 16: 195-209
        • Willan A.R.
        • Pinto E.M.
        The value of information and optimal clinical trial design.
        Stat Med. 2005; 24: 1791-1806
        • Oakley J.E.
        • Brennan A.
        • Tappenden P.
        • Chilcott J.
        Simulation sample sizes for Monte Carlo partial EVPI calculations.
        J Health Econ. 2010; 29: 468-477
        • Madan J.
        • Ades A.E.
        • Price M.
        • et al.
        Strategies for efficient computation of the expected value of partial perfect information.
        Med Decis Making. 2014; 34: 327-342
        • Strong M.
        • Oakley J.E.
        • Brennan A.
        Estimating multiparameter partial expected value of perfect information from a probabilistic sensitivity analysis sample: a nonparametric regression approach.
        Med Decis Making. 2014; 34: 311-326
        • Strong M.
        • Oakley J.E.
        • Brennan A.
        • Breeze P.
        Estimating the expected value of sample information using the probabilistic sensitivity analysis sample: a fast, nonparametric regression-based method.
        Med Decis Making. 2015; 35: 570-583
        • Heath A.
        • Manolopoulou I.
        • Baio G.
        A review of methods for analysis of the expected value of information.
        Med Decis Making. 2017; 37: 747-758
        • Wilson E.C.
        A practical guide to value of information analysis.
        Pharmacoeconomics. 2015; 33: 105-121
        • Ades A.E.
        • Lu G.
        • Claxton K.
        Expected value of sample information calculations in medical decision modeling.
        Med Decis Making. 2004; 24: 207-227
        • Welton N.J.
        • Madan J.J.
        • Caldwell D.M.
        • Peters T.J.
        • Ades A.E.
        Expected value of sample information for multi-arm cluster randomized trials with binary outcomes.
        Med Decis Making. 2014; 34: 352-365
        • Tuffaha H.W.
        • Strong M.
        • Gordon L.G.
        • Scuffham P.A.
        Efficient value of information calculation using a nonparametric regression approach: an applied perspective.
        Value Health. 2016; 19: 505-509
        • Menzies N.A.
        An efficient estimator for the expected value of sample information.
        Med Decis Making. 2016; 36: 308-320
        • Jalal H.
        • Alarid-Escudero F.
        A Gaussian approximation approach for value of information analysis.
        Med Decis Making. 2018; 38: 174-188
        • Heath A.
        • Manolopoulou I.
        • Baio G.
        Efficient Monte Carlo estimation of the expected value of sample information using moment matching.
        Med Decis Making. 2018; 38: 163-173
        • Philips Z.
        • Claxton K.
        • Palmer S.
        The half-life of truth: what are appropriate time horizons for research decisions?.
        Med Decis Making. 2008; 28: 287-299
        • Eckermann S.
        • Willan A.R.
        Globally optimal trial design for local decision making.
        Health Econ. 2009; 18: 203-216
        • Grimm S.E.
        • Dixon S.
        • Stevens J.W.
        Assessing the expected value of research studies in reducing uncertainty and improving implementation dynamics.
        Med Decis Making. 2017; 37: 523-533
        • Claxton K.
        • Palmer S.
        • Longworth L.
        • et al.
        Informing a decision framework for when NICE should recommend the use of health technologies only in the context of an appropriately designed programme of evidence development.
        Health Technol Assess. 2012; 16: 1-323
        • Meltzer D.O.
        • Hoomans T.
        • Chung J.W.
        • Basu A.
        Minimal modeling approaches to value of information analysis for health research.
        Med Decis Making. 2011; 31: E1-E22
        • Bennette C.S.
        • Veenstra D.L.
        • Basu A.
        • Baker L.H.
        • Ramsey S.D.
        • Carlson J.J.
        Development and evaluation of an approach to using value of information analyses for real-time prioritization decisions within SWOG, a large cancer clinical trials cooperative group.
        Med Decis Making. 2016; 36: 641-651
        • Claxton K.
        • Griffin S.
        • Koffijberg H.
        • McKenna C.
        How to estimate the health benefits of additional research and changing clinical practice.
        BMJ. 2015; 351: h5987
        • McKenna C.
        • Griffin S.
        • Koffijberg H.
        • Claxton K.
        Methods to place a value on additional evidence are illustrated using a case study of corticosteroids after traumatic brain injury.
        J Clin Epidemiol. 2016; 70: 183-190
        • Baio G.
        • Berardi A.
        • Heath A.
        BCEA: Bayesian cost effectiveness analysis.
        R package version 2.2-6; 2018. Accessed October 2017.
        • Filipović-Pierucci A.
        • Zarca K.
        • Durand-Zaleski I.
        Markov models for health economic evaluation: the R Package heemod. ArXiv e-prints.
        R package version 0.9.2; 2017. Accessed October 2017.
        • Strong M.
        • Oakley J.E.
        • Brennan A.
        SAVI—Sheffield Accelerated Value of Information.
        Accessed October 2017.
        • Baio G.
        • Berardi A.
        • Heath A.
        Bayesian cost-effectiveness analysis with the R package BCEA.
        Springer International Publishing, New York, NY; 2017
        • Welton N.J.
        • Thom H.H.Z.
        Value of information: we’ve got speed, what more do we need?.
        Med Decis Making. 2015; 35: 564-566
        • Griffin S.
        • Welton N.J.
        • Claxton K.P.
        Exploring the research decision space: the expected value of information for sequential research designs.
        Med Decis Making. 2010; 30: 155-162