## Highlights

- Value of information (VOI) analysis provides a framework for quantifying the value of acquiring additional information to reduce uncertainty in decision making. Quantifying the expected improvement from new information requires assessing the scale and consequences of uncertainty in terms of payoffs. Acquiring information, however, can be costly, so the value of new information is compared with the cost of acquiring it to determine whether further research is worthwhile.
- This report provides practical guidance on the methods and reporting of VOI analysis. The methods are presented in generic form so that they can be adapted to any specific decision-making context; even in healthcare systems in which economic considerations are not explicitly incorporated into decision making, the same methods can be applied.
- This report provides 8 recommendations for good practice when planning, undertaking, or reviewing VOI analyses. The primary audience is methodologists and analysts responsible for undertaking VOI analysis to inform decision making.

## Abstract

## Keywords

## Introduction


**Background on the Task Force Process**

### Characterization of Uncertainty

### Parameter Uncertainty


### Good Practice Recommendation 1

**Uncertainty in parameter input values should be characterized using probability distributions, and any dependency between parameters represented by a joint, correlated probability distribution.**

#### Structural uncertainty


Quantifying structural uncertainty is difficult and often ignored, which is equivalent to assuming that the model is perfect.


### Good Practice Recommendation 2

**Clearly describe any important model structural uncertainties. Where possible, structural uncertainty should be quantified and included in the VOI analysis.**

#### Probabilistic analysis

### Good Practice Recommendation 3

**Use probabilistic analysis to provide an appropriate quantification of uncertainty in model outputs.**

#### VOI analysis

#### Decision-making with uncertainty


### Key Concepts, Definitions, and Notation

Consider a decision option *d* in the decision space $\mathcal{D}$. It is assumed that a decision model, denoted $\mathcal{U}\left(d,\mathit{\theta}\right)$, predicts the utility for decision option *d* given *p* uncertain parameters $\mathit{\theta}=\left\{{\theta}_{1},\dots ,{\theta}_{p}\right\}$. The uncertainty about the “true” unknown values of $\mathit{\theta}$ is represented by the joint probability distribution $\pi \left(\mathit{\theta}\right)$.

### Optimum Decision Option With Current Knowledge

The optimum decision option with current knowledge is the one that maximizes expected utility:

$${d}^{\ast}=\underset{d\in \mathcal{D}}{\text{argmax}}\phantom{\rule{0.25em}{0ex}}{\mathbb{E}}_{\mathit{\theta}}\left\{\mathcal{U}\left(d,\mathit{\theta}\right)\right\}$$

where ${\mathbb{E}}_{\mathit{\theta}}\left(\cdot \right)$ represents the expectation (mean) taken with respect to $\pi \left(\mathit{\theta}\right)$.

### Expected Value of Perfect Information
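In the notation above, the EVPI is the expected utility when all parameter uncertainty is resolved minus the expected utility of the optimum decision with current knowledge. This expression is a reconstruction consistent with the terms referenced in the Estimation of VOI Measures section:

```latex
\mathrm{EVPI}
  = \mathbb{E}_{\theta}\left\{ \max_{d \in \mathcal{D}} \mathcal{U}(d,\theta) \right\}
  - \max_{d \in \mathcal{D}} \mathbb{E}_{\theta}\left\{ \mathcal{U}(d,\theta) \right\}
```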

### Expected Value of Partial Perfect Information

where ${\mathbb{E}}_{{\mathit{\theta}}_{c}|{\mathit{\theta}}_{i}^{\ast}}\left(\cdot \right)$ represents expectation taken with respect to $\pi \left({\mathit{\theta}}_{c}|{\mathit{\theta}}_{i}^{\ast}\right)$. When the decision about conducting further research to provide information about these parameters is made, the values of ${\mathit{\theta}}_{i}^{\ast}$ are unknown. Therefore, the expectation of (5) is computed:

### Expected Value of Sample Information

### Expected Net Benefit of Sampling


## Estimation of VOI Measures

### EVPI Computation


where ${\mathit{\theta}}^{\left(n\right)},\phantom{\rule{0.25em}{0ex}}n=1,\dots ,N$ are samples drawn from the joint distribution $\pi \left(\mathit{\theta}\right)$. Monte Carlo simulation is also used to approximate the first term in the EVPI expression, ${\mathbb{E}}_{\mathit{\theta}}\left\{{\text{max}}_{d\in \mathcal{D}}\mathcal{U}\left(d,\mathit{\theta}\right)\right\}$, via

$$\frac{1}{N}\sum_{n=1}^{N}{\text{max}}_{d\in \mathcal{D}}\mathcal{U}\left(d,{\mathit{\theta}}^{\left(n\right)}\right)$$

using the same *N* samples from $\pi \left(\mathit{\theta}\right)$ that are used to approximate the baseline expected utility of (11). Therefore, the computation of EVPI is a single-loop Monte Carlo scheme and does not require additional sampling beyond that required for a probabilistic analysis (note that *loop* here refers to the for-loop programming construct used to execute a set of instructions repeatedly). Algorithm 1 describes the single-loop scheme for computing EVPI.

**Single-Loop Monte Carlo Scheme for Computing EVPI**

1. Sample a value from the distribution of the uncertain parameters.
2. Evaluate the utility function for each decision option using the parameter values generated in step 1. Store the values.
3. Repeat steps 1 to 2 for *N* samples (eg, 10 000). This is the probabilistic analysis sample.
4. Calculate the expected (mean) utility value of the *N* samples for each decision option.
5. Choose the maximum of the expected utility values in step 4 and store. This is the expected utility with current knowledge.
6. Calculate the maximum utility of the decision options for each of the *N* samples generated in step 3.
7. Calculate the mean of the *N* maximum utilities generated in step 6. This is the expected utility when uncertainty is resolved with perfect information.
8. Calculate the EVPI as the difference between the expected utility when uncertainty is resolved with perfect information (step 7) and the expected utility with current knowledge (step 5).
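The single-loop scheme can be sketched in a few lines of vectorized code. The two-option net-benefit model below is purely hypothetical (the effectiveness parameters, scaling factor, and cost offset are invented for illustration); only the EVPI arithmetic follows the algorithm's steps.

```python
import numpy as np

rng = np.random.default_rng(1)

# Steps 1-3: probabilistic analysis sample of N draws from the
# (hypothetical) parameter distributions.
N = 10_000
theta1 = rng.normal(0.7, 0.1, N)  # e.g. effectiveness of option B
theta2 = rng.normal(0.5, 0.1, N)  # e.g. effectiveness of option A

# Utility (net benefit) for each decision option, per simulated draw.
nb = np.column_stack([
    20_000 * theta2,           # option A
    20_000 * theta1 - 3_000,   # option B: more effective, extra cost
])

# Steps 4-5: expected utility with current knowledge.
eu_current = nb.mean(axis=0).max()
# Steps 6-7: expected utility with perfect information.
eu_perfect = nb.max(axis=1).mean()
# Step 8: EVPI is the difference.
evpi = eu_perfect - eu_current
```

Because the per-sample maximum is never below any single option's utility, `eu_perfect` cannot fall below `eu_current`, so the EVPI estimate is nonnegative by construction.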

### EVPPI Computation

**Double-Loop Monte Carlo Scheme for Computing EVPPI**

1. Sample a value from the distribution(s) of the target parameter(s) of interest.
2. Sample a value from the distributions of the remaining (complementary) uncertain parameters, conditional on the value of the target parameter(s) sampled in step 1. If the target and complementary parameters are independent, the sample for this step can be drawn from the prior distribution of the complementary parameters.
3. Evaluate the utility function for each decision option using the parameter values generated in steps 1 and 2, and store the resulting utility values.
4. While holding the parameter value from step 1 constant, repeat steps 2 and 3 for *J* samples. This represents the inner loop of simulation.
5. Calculate the mean of the utility values across all *J* samples for each decision option and store.
6. Repeat steps 1 to 5 for *K* values from the distribution of the target parameter(s) (step 1) and store the outputs from step 5. This represents the outer loop of simulation.
7. Calculate the mean utility for each decision option across all *K* samples of the outer loop stored in step 6.
8. Choose the maximum of the mean utilities calculated in step 7 and store. This is the expected utility with current knowledge about the target parameter(s) of interest.
9. Calculate the maximum utility of the decision options (ie, the maximum of the inner-loop means) for each of the *K* samples stored in step 6.
10. Calculate the mean of the *K* maximum utility values generated in step 9. This yields the expected utility when uncertainty is resolved with perfect information about the target parameter(s) of interest.
11. Calculate the EVPPI as the difference between the expected utility when uncertainty is resolved with perfect information about the parameter(s) of interest (step 10) and the expected utility with current knowledge (step 8).

The choice of the inner-loop simulation size (*J*) is crucial, as the double-loop EVPPI computation can provide biased estimates when the sample size is small.
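The double-loop scheme can be illustrated with a hypothetical two-option model in which the target parameter `theta1` drives option B and an independent complementary parameter `theta2` drives option A (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-option model: net benefits are linear in the
# parameters; target and complementary parameters are independent.
def utility(theta1, theta2):
    nb_a = 20_000 * theta2                              # option A
    nb_b = 20_000 * theta1 - 3_000 + np.zeros_like(theta2)  # option B
    return np.column_stack([nb_a, nb_b])

K, J = 500, 500                 # outer- and inner-loop simulation sizes
inner_means = np.empty((K, 2))
for k in range(K):                                  # outer loop (step 6)
    t1 = rng.normal(0.7, 0.1)                       # step 1: target draw
    t2 = rng.normal(0.5, 0.1, J)                    # step 2: complements
    inner_means[k] = utility(t1, t2).mean(axis=0)   # steps 3-5

eu_current = inner_means.mean(axis=0).max()   # steps 7-8
eu_ppi = inner_means.max(axis=1).mean()       # steps 9-10
evppi = eu_ppi - eu_current                   # step 11
```

Here the independence between the parameters lets step 2 draw directly from the complementary prior; with correlated parameters the inner draw would need to condition on `t1`.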


**Single-Loop Monte Carlo Scheme for Computing EVPPI**

1. Sample a value from the distribution of the target parameter(s) of interest.
2. Evaluate the utility function for each decision option using the value for the target parameter(s) from step 1 and the mean values of the remaining uncertain parameters (or functions of them). Store the values.
3. Repeat steps 1 and 2 for *N* samples.
4. Calculate the mean of the *N* utility values for each decision option.
5. Follow steps 5 to 8 of the algorithm for computing EVPI (algorithm 1).
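A minimal sketch of this mean-plug-in shortcut, using the same hypothetical two-option model as before (all numbers invented). Note the shortcut is valid here only because the utility is linear in the complementary parameter, which is fixed at its mean:

```python
import numpy as np

rng = np.random.default_rng(5)

N = 10_000
theta1 = rng.normal(0.7, 0.1, N)   # step 1: target parameter draws
theta2_mean = 0.5                  # complementary parameter at its mean

# Step 2: utilities with the complementary parameter held at its mean.
nb = np.column_stack([
    np.full(N, 20_000 * theta2_mean),  # option A
    20_000 * theta1 - 3_000,           # option B
])

# Steps 3-5: apply steps 5-8 of the EVPI algorithm to these utilities.
evppi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
```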


**Single-Loop Regression-Based Scheme for Computing EVPPI**

1. Generate the probabilistic analysis sample using steps 1-3 of the algorithm for computing EVPI (algorithm 1).
2. For each of the decision options, regress the estimates of utility on the parameter values of the target parameter(s) of interest.
3. Calculate the regression fitted values for each decision option.
4. Follow steps 5-8 of the algorithm for computing EVPI (algorithm 1).
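The regression-based scheme reuses the probabilistic analysis sample with no further model runs. In the sketch below (same hypothetical linear model, numbers invented) an ordinary least-squares fit suffices because the utility is linear in the target parameter; in general a flexible nonparametric smoother would be used in step 2:

```python
import numpy as np

rng = np.random.default_rng(3)

# Step 1: probabilistic analysis sample (hypothetical two-option model).
N = 5_000
theta1 = rng.normal(0.7, 0.1, N)   # target parameter
theta2 = rng.normal(0.5, 0.1, N)   # complementary parameter
nb = np.column_stack([20_000 * theta2,
                      20_000 * theta1 - 3_000])

# Step 2: regress each option's utility on the target parameter
# (least squares with an intercept; lstsq handles both columns at once).
X = np.column_stack([np.ones(N), theta1])
coef = np.linalg.lstsq(X, nb, rcond=None)[0]

# Step 3: fitted values estimate the conditional expected utilities.
fitted = X @ coef

# Step 4: apply steps 5-8 of the EVPI algorithm to the fitted values.
evppi = fitted.max(axis=1).mean() - fitted.mean(axis=0).max()
```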


### Good Practice Recommendation 4

**When using the nested double-loop method to compute EVPPI, choose inner- and outer-loop simulation sizes to ensure acceptable bias and precision.**

### Good Practice Recommendation 5

**When using the single-loop methods to compute EVPPI, ensure the underlying assumptions of the method hold.**

### EVSI Computation


However, an analytic solution cannot easily be derived for most problems, and sampling-based methods are usually required.

In some cases, the prior distribution is a *conjugate prior* for the likelihood function. Conjugacy has computational advantages because it results in a known posterior distribution that is easy to sample from. The likelihood function that results in conjugacy is often (but not always) the natural choice for the data-generating mechanism.

where parameters ${\mathit{\theta}}^{\left(j,k\right)},j=1,\dots ,J$ are sampled from the posterior distribution $\pi \left(\mathit{\theta}|{\mathbf{X}}^{\left(k\right)}\right)$ in an inner loop, conditional on samples ${\mathbf{X}}^{\left(k\right)},k=1,\dots ,K$ in an outer loop.

**Double-Loop Monte Carlo Scheme for Computing EVSI**

1. Define the proposed study design (sample size, length of follow-up, etc). Determine the data-generating distribution (the likelihood) under this design.
2. Sample a value from the prior distribution of the parameter(s) that will be informed by new data.
3. Sample a plausible data set from the distribution defined in step 1, conditional on the value of the target parameter(s) sampled in step 2.
4. Update the prior distribution of the target parameter(s) with the plausible data set from step 3 to form the posterior distribution for the target parameter(s). Sample a value from this posterior distribution, which may require Markov chain Monte Carlo sampling if the prior and likelihood are not conjugate.
5. Sample a value from the prior distribution of the remaining uncertain parameters.
6. Evaluate the utility function for each decision option using the parameter values from steps 4 and 5 and store the results.
7. Repeat steps 4 to 6 *J* times. This represents the inner loop of simulation.
8. Calculate the mean of the utility values across all *J* samples for each decision option in step 7 and store.
9. Repeat steps 2 to 8 for *K* values from the prior distribution of the parameters. This represents the outer loop of simulation.
10. Calculate the mean utility values for each decision option across all *K* samples of the outputs stored in step 9.
11. Choose the maximum of the expected utility values in step 10 and store. This is the expected utility with current knowledge.
12. Calculate the maximum utility of the decision options (ie, the maximum of the inner-loop means) for each of the *K* samples stored in step 9.
13. Calculate the mean of the *K* maximum utility values generated in step 12. This is the expected utility with new sample information about the target parameter(s) of interest.
14. Calculate the EVSI as the difference between the expected utility with new sample information (step 13) and the expected utility with current knowledge (step 11).
15. Repeat steps 1 to 14 to calculate EVSI for different study designs (eg, studies with different sample sizes or lengths of follow-up).
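A compact conjugate example of the double-loop scheme: suppose (hypothetically) the target parameter is a response probability *p* with a Beta(7, 3) prior, and the proposed study observes *x* ~ Binomial(*n*, *p*), so the Beta(7 + *x*, 3 + *n* − *x*) posterior in step 4 can be sampled directly without MCMC. The prior, net-benefit values, and study size are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

a, b, n = 7.0, 3.0, 50   # step 1: prior and proposed study size
K, J = 400, 400          # outer- and inner-loop simulation sizes

inner_means = np.empty((K, 2))
for k in range(K):
    p_true = rng.beta(a, b)                  # step 2: prior draw
    x = rng.binomial(n, p_true)              # step 3: plausible data set
    p_post = rng.beta(a + x, b + n - x, J)   # step 4: conjugate posterior
    nb = np.column_stack([
        np.full(J, 12_000.0),                # option A (standard care)
        20_000 * p_post - 3_000,             # option B (new treatment)
    ])                                       # steps 5-6
    inner_means[k] = nb.mean(axis=0)         # steps 7-8

eu_current = inner_means.mean(axis=0).max()  # steps 10-11
eu_sample = inner_means.max(axis=1).mean()   # steps 12-13
evsi = eu_sample - eu_current                # step 14
```

Repeating the whole computation over a grid of `n` values (step 15) traces out how EVSI grows with the proposed sample size.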


**Single-Loop Monte Carlo Scheme for Computing EVSI**

1. Define the proposed study design (sample size, length of follow-up, etc). Determine the data-generating distribution (the likelihood) under this design.
2. Sample a value from the prior distribution of the parameter(s) that will be informed by new data.
3. Sample a plausible data set from the distribution defined in step 1, conditional on the value of the parameter(s) sampled in step 2.
4. Update the prior distribution of the target parameter(s) of interest with the new data from step 3 to form the posterior distribution. Analytically compute the expectation (mean value) of this posterior distribution. This will be possible if the prior and likelihood distributions are conjugate.
5. Evaluate the utility function for each decision option using the posterior mean estimate of the target parameter(s) and the mean values of the remaining uncertain parameters. Store the values.
6. Repeat steps 2 to 5 for *N* samples from the prior distribution of the target parameter(s) of interest.
7. Calculate the mean utility values for each decision option across all *N* samples of the outputs stored in step 5.
8. Choose the maximum of the expected utilities in step 7 and store. This is the expected utility with current knowledge about the target parameter(s) of interest.
9. Calculate the maximum utility of the decision options for each of the *N* samples of the outputs stored in step 5.
10. Calculate the mean of the *N* maximum utility values generated in step 9. This is the expected utility with new sample information about the target parameter(s) of interest.
11. Calculate the EVSI as the difference between the expected utility with new sample information (step 10) and the expected utility with current knowledge (step 8).
12. Repeat steps 1 to 11 to calculate EVSI for different study designs (eg, studies with different sample sizes or lengths of follow-up).
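With the same hypothetical beta-binomial setup as above, the single-loop scheme replaces the inner posterior sampling with the analytic posterior mean (*a* + *x*) / (*a* + *b* + *n*) from step 4, which is available because the prior and likelihood are conjugate:

```python
import numpy as np

rng = np.random.default_rng(4)

a, b, n = 7.0, 3.0, 50   # step 1: hypothetical prior and study size
N = 10_000

p = rng.beta(a, b, N)                 # step 2: prior draws
x = rng.binomial(n, p)                # step 3: plausible data sets
post_mean = (a + x) / (a + b + n)     # step 4: analytic posterior means

# Step 5: utilities at the posterior mean of the target parameter, with
# the remaining (hypothetical) parameters fixed at their mean values.
nb = np.column_stack([
    np.full(N, 12_000.0),             # option A (standard care)
    20_000 * post_mean - 3_000,       # option B (new treatment)
])

eu_current = nb.mean(axis=0).max()    # steps 7-8
eu_sample = nb.max(axis=1).mean()     # steps 9-10
evsi = eu_sample - eu_current         # step 11
```

Vectorizing across the *N* prior draws makes this orders of magnitude cheaper than the nested scheme, at the cost of the additional assumptions noted above.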


### Good Practice Recommendation 6

**Choose the data-generating distribution for the EVSI computation to reflect how the data would be analyzed if the proposed new study were conducted.**

### Good Practice Recommendation 7

**When simulating data sets, model the processes expected to result in censoring, missing data, and measurement bias to mimic the true data-generating process.**

### Reporting of results

The per-decision VOI is scaled to the population affected by the decision over the relevant time horizon, with future periods discounted:

$$\text{Population VOI}=\text{VOI}\times \sum_{t=1}^{T}\frac{{I}_{t}}{{\left(1+d\right)}^{t}}$$

where ${I}_{t}$ is the incidence in time period *t*, *T* is the time horizon, and *d* is the discount rate for a single time period.
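As a minimal numeric sketch of this population scaling (the per-decision VOI, incidence, time horizon, and discount rate below are hypothetical placeholders, not values from the report):

```python
# Discounted population scaling of a per-decision VOI estimate,
# assuming a constant incidence I_t in every period (hypothetical).
evpi_per_decision = 150.0   # per-decision VOI, in net-benefit units
incidence = 20_000          # I_t: decisions affected in each period
T, d = 10, 0.035            # time horizon and per-period discount rate

discounted_decisions = sum(incidence / (1 + d) ** t
                           for t in range(1, T + 1))
population_evpi = evpi_per_decision * discounted_decisions
```

Discounting ensures that `discounted_decisions` is strictly less than the undiscounted total `incidence * T`, giving less weight to decisions informed in the distant future.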


Determining the appropriate time horizon, *T*, over which the additional evidence remains informative is more challenging. Information generated by research is not valuable indefinitely because future changes that affect the VOI are expected to occur over time.


The impact of these complex and uncertain processes is impossible to quantify with certainty, but some assessment is possible based on historical evidence and anticipated future changes (eg, patent expiration, upcoming innovations, and other evaluative research underway). The value of research should also be discounted over this time horizon so that more weight is given to decisions that are informed by the research in the near term and less weight given to decisions informed in the more distant future.


### Good Practice Recommendation 8

**When reporting VOI results, clearly state all underlying assumptions.**

### Other modeling considerations

#### Minimal modeling


The minimal modeling approach is appropriate when the following conditions hold:

- the clinical study captures all important differences in outcomes between the decision options being evaluated,
- the endpoints that are important for the decision occur during the study, and
- no age-specific competing causes of death or other events occur after the study ends.


## VOI for Endpoints Other Than Cost-Effectiveness


This approach places the focus on an endpoint of interest (eg, the distribution of values describing uncertainty about the relative effect of an intervention on mortality). The VOI is then estimated in terms of that endpoint (eg, the number of deaths avoided). Nevertheless, it does lead to difficulty in interpreting VOI outcomes across diverse decision problems.

## Software Resources

Several software resources implement the VOI computations described in this report, including the BCEA R package (Baio, Berardi, and Heath), the heemod R package (Filipović-Pierucci, Zarca, and Durand-Zaleski), the Sheffield Accelerated Value of Information (SAVI) web application, and an online version of the BCEA R package. The introduction of these software solutions has allowed VOI analysis to be computed quickly; however, the analyst should always ensure that the underlying assumptions of the methods hold when using and interpreting the results.

## Future Research Directions

### Optimizing the Value of Research to Reduce Structural Uncertainties


### Optimizing Study Design


### Computation of EVSI in Complex Modeling Settings

### Identifying the Appropriate Time Horizon for VOI


Identifying the appropriate time horizon for research decisions and incorporating uncertainty in the time horizon is an area that has received little attention to date.

## Conclusions

**ISPOR Value of Information Analysis Task Force Report’s Good Practice Recommendations for Conducting and Reporting a VOI Analysis**

1. Uncertainty in parameter input values should be characterized using probability distributions, and any dependency between parameters represented by a joint, correlated probability distribution.
2. Clearly describe any important model structural uncertainties. Where possible, structural uncertainty should be quantified and included in the VOI analysis.
3. Use probabilistic analysis to provide an appropriate quantification of uncertainty in model outputs.
4. When using the nested double-loop method to compute EVPPI, choose inner- and outer-loop simulation sizes to ensure acceptable bias and precision.
5. When using the single-loop methods to compute EVPPI, ensure the underlying assumptions of the method hold.
6. Choose the data-generating distribution for the EVSI computation to reflect how the data would be analyzed if the proposed new study were conducted.
7. When simulating data sets, model the processes that are expected to result in censoring, missing data, and measurement bias in order to mimic the true data-generating process.
8. When reporting VOI results, clearly state all underlying assumptions.

## Acknowledgments

## Supplemental Material

- Appendix Figures 1-4

- Glossary

## References

1. Value of information analysis for research decisions—an introduction: report 1 of the ISPOR Value of Information Analysis Emerging Good Practices Task Force. *Value Health.* 2020;
2. Model parameter estimation and uncertainty analysis: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-6. *Value Health.* 2012; 15: 835-842
3. Decision Modelling for Health Economic Evaluation. Oxford University Press, Oxford, UK; 2006
4. Quantifying scientific uncertainty from expert judgement elicitation. In: Rougier J, Sparks S, Hill LJ, eds. Risk and Uncertainty Assessment for Natural Hazards. Cambridge University Press, Cambridge, UK; 2013: 64-99
5. Uncertain Judgements: Eliciting Expert Probabilities. John Wiley and Sons, Chichester, UK; 2006
6. Confounding and missing data in cost-effectiveness analysis: comparing different methods. *Health Econ Rev.* 2013; 3: 8
7. Good research practices for comparative effectiveness research: analytic methods to improve causal inference from nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report–Part III. *Value Health.* 2009; 12: 1062-1073
8. Models for potentially biased evidence in meta-analysis using empirically based priors. *J R Stat Soc Ser A.* 2009; 172: 119-136
9. Bias modelling in evidence synthesis. *J R Stat Soc Ser A.* 2009; 172: 21-47
10. Accounting for methodological, structural, and parameter uncertainty in decision-analytic models: a practical guide. *Med Decis Making.* 2011; 31: 675-692
11. Structural and parameter uncertainty in Bayesian cost-effectiveness models. *J R Stat Soc Ser C Appl Stat.* 2010; 59: 233-253
12. When is a model good enough? Deriving the expected value of model improvement via specifying internal model discrepancies. *SIAM/ASA Journal on Uncertainty Quantification.* 2014; 2: 106-125
13. Clinical effectiveness and cost-effectiveness of clopidogrel and modified-release dipyridamole in the secondary prevention of occlusive vascular events: a systematic review and economic evaluation. *Health Technology Assessment.* 2004; 8: 1-196
14. Characterizing structural uncertainty in decision analytic models: a review and application of methods. *Value Health.* 2009; 12: 739-749
15. Accounting for uncertainty in health economic decision models by using model averaging. *J R Stat Soc Ser A Stat Soc.* 2009; 172: 383-404
16. Model averaging in the presence of structural uncertainty about treatment effects: influence on treatment decision and expected value of information. *Value Health.* 2011; 14: 205-218
17. Managing structural uncertainty in health economic decision models: a discrepancy approach. *J R Stat Soc Ser C.* 2011; 61: 25-45
18. Theory of Games and Economic Behavior. Princeton University Press, Princeton, NJ; 1944
19. Decision criterion and value of information analysis: optimal aspirin dosage for secondary prevention of cardiovascular events. *Med Decis Making.* 2018; 38: 427-438
20. Risk aversion and the value of information. *Decis Support Syst.* 1996; 16: 241-254
21. The impact of decision makers’ constraints on the outcome of value of information analysis. *Value Health.* 2018; 21: 203-209
22. Dimensions of design space: a decision-theoretic approach to optimal research design. *Med Decis Making.* 2009; 29: 643-660
23. Value of information analysis informing adoption and research decisions in a portfolio of health care interventions. *Med Decis Making Policy Pract.* 2016; 1: 1-11
24. Addressing adoption and research design decisions simultaneously: the role of value of sample information analysis. *Med Decis Making.* 2011; 31: 853-865
25. Expected value of information and decision making in HTA. *Health Econ.* 2007; 16: 195-209
26. The value of information and optimal clinical trial design. *Stat Med.* 2005; 24: 1791-1806
27. Simulation sample sizes for Monte Carlo partial EVPI calculations. *J Health Econ.* 2010; 29: 468-477
28. Strategies for efficient computation of the expected value of partial perfect information. *Med Decis Making.* 2014; 34: 327-342
29. Estimating multiparameter partial expected value of perfect information from a probabilistic sensitivity analysis sample: a nonparametric regression approach. *Med Decis Making.* 2014; 34: 311-326
30. Estimating the expected value of sample information using the probabilistic sensitivity analysis sample: a fast, nonparametric regression-based method. *Med Decis Making.* 2015; 35: 570-583
31. A review of methods for analysis of the expected value of information. *Med Decis Making.* 2017; 37: 747-758
32. A practical guide to value of information analysis. *Pharmacoeconomics.* 2015; 33: 105-121
33. Expected value of sample information calculations in medical decision modeling. *Med Decis Making.* 2004; 24: 207-227
34. Expected value of sample information for multi-arm cluster randomized trials with binary outcomes. *Med Decis Making.* 2014; 34: 352-365
35. Efficient value of information calculation using a nonparametric regression approach: an applied perspective. *Value Health.* 2016; 19: 505-509
36. An efficient estimator for the expected value of sample information. *Med Decis Making.* 2016; 36: 308-320
37. A Gaussian approximation approach for value of information analysis. *Med Decis Making.* 2018; 38: 174-188
38. Efficient Monte Carlo estimation of the expected value of sample information using moment matching. *Med Decis Making.* 2018; 38: 163-173
39. The half-life of truth: what are appropriate time horizons for research decisions? *Med Decis Making.* 2008; 28: 287-299
40. Globally optimal trial design for local decision making. *Health Econ.* 2009; 18: 203-216
41. Assessing the expected value of research studies in reducing uncertainty and improving implementation dynamics. *Med Decis Making.* 2017; 37: 523-533
42. Informing a decision framework for when NICE should recommend the use of health technologies only in the context of an appropriately designed programme of evidence development. *Health Technol Assess.* 2012; 16: 1-323
43. Minimal modeling approaches to value of information analysis for health research. *Med Decis Making.* 2011; 31: E1-E22
44. Development and evaluation of an approach to using value of information analyses for real-time prioritization decisions within SWOG, a large cancer clinical trials cooperative group. *Med Decis Making.* 2016; 36: 641-651
45. How to estimate the health benefits of additional research and changing clinical practice. *BMJ.* 2015; 351: h5987
46. Methods to place a value on additional evidence are illustrated using a case study of corticosteroids after traumatic brain injury. *J Clin Epidemiol.* 2016; 70: 183-190
47. BCEA: Bayesian cost effectiveness analysis. R package version 2.2-6. https://CRAN.R-project.org/package=BCEA. 2018. Accessed October 2017
48. Markov models for health economic evaluation: the R package heemod. ArXiv e-prints. R package version 0.9.2. https://pierucci.org/heemod. 2017. Accessed October 2017
49. SAVI—Sheffield Accelerated Value of Information. Accessed October 2017
50. Bayesian cost-effectiveness analysis with the R package BCEA. Springer International Publishing, New York, NY; 2017
51. Value of information: we’ve got speed, what more do we need? *Med Decis Making.* 2015; 35: 564-566
52. Exploring the research decision space: the expected value of information for sequential research designs. *Med Decis Making.* 2010; 30: 155-162

## Article info

### Footnotes

Conflict of interest: All authors volunteered their time for participation in this task force. AB would like to acknowledge support from NIH-NHLBI research grant (R01 HL126804) for his time. No other authors received financial support for their participation.

Please note that the opinions expressed in this article represent those of the authors and do not necessarily reflect those of their employers.
