
Does the Institute for Clinical and Economic Review Revise Its Findings in Response to Industry Comments?

  • Joshua T. Cohen
  • Madison C. Silver
  • Daniel A. Ollendorf
  • Peter J. Neumann

Author affiliations: Center for the Evaluation of Value and Risk in Health, Institute for Clinical Research and Health Policy Studies, Tufts Medical Center, Boston, MA, USA

Address correspondence to: Joshua T. Cohen, PhD, Deputy Director, Center for the Evaluation of Value and Risk in Health, Institute for Clinical Research and Health Policy Studies, Tufts Medical Center, 800 Washington Street, Box #063, Boston, MA 02111.
Open Archive. Published: October 07, 2019. DOI: https://doi.org/10.1016/j.jval.2019.08.003

      Highlights

      • 1.
        What is already known about the topic?
        • Cost-effectiveness analysis (CEA) features heavily in pharmaceutical value assessments conducted by the Institute for Clinical and Economic Review (ICER).
      • 2.
        What does the article add to existing knowledge?
        • We found that comments submitted by the pharmaceutical industry recommending that ICER revise its CEAs have little influence on ICER’s findings.
      • 3.
        What insights does the article provide for informing healthcare-related decision making?
        • Industry can engage ICER more effectively by ensuring that their comments clearly identify problematic assumptions or estimates, offer specific alternatives, and support a case for the quantitative impact of making recommended revisions.

      Abstract

      Background

      The Institute for Clinical and Economic Review (ICER) has gained prominence through its work conducting health technology assessments of pharmaceuticals in the United States.

      Objective

      To understand the influence of industry comments on pharmaceutical value assessments conducted by ICER.

      Methods

      We reviewed 15 ICER reports issued from 2017 through 2019. We quantified ICER’s revisions to its cost-effectiveness analysis (CEA) estimates between release of its draft and revised evidence reports and whether ratios shifted across ICER-specified categories of high, intermediate, or low value. We also reviewed industry-submitted comments recommending revision to ICER’s CEAs, noting ICER’s response as no change, text revised, assumption(s) revised, or conclusion revised. We evaluated each comment in terms of clarity, whether it offered an alternative to ICER’s approach, and whether it characterized the expected impact of revision on ICER’s analysis.

      Results

      We identified 53 ICER-reported ratios. Of these, 45 (84.9%) changed between the draft and revised report, but 26 changes (57.8%) were small (<10%). Six ratios shifted across value categories. We identified 256 industry comments recommending that ICER revise its CEA. Of these, 159 (62%) lacked clarity, 145 (57%) offered no alternative, and 243 (95%) did not characterize their impact on ICER’s estimated ratio. Ninety-one comments (35.5%) caused ICER to revise its assumptions, but only 5 (2.0%) caused ICER to revise its conclusions. Four of these 5 comments characterized their impact on ICER’s findings.

      Conclusions

      Changes in ICER’s estimates of cost-effectiveness between its draft and revised evidence reports are generally modest. Greater precision in industry comments could increase the influence of industry critiques, thus enhancing the dialogue around pharmaceutical value.


      Introduction

      Between 2017 and mid-2019, the Institute for Clinical and Economic Review (ICER) released a series of reports on drug therapies, in the process thoroughly stirring the US debate about drug pricing and value (Cohen 2019).
      Although it is a private, nonprofit group with no formal regulatory or reimbursement authority, ICER has gained national prominence through its work assessing and communicating the clinical benefit and value of pharmaceuticals as well as other technologies. Its reports have garnered national press attention (Butcher 2019).
      Surveys of payers note its influence in informing pricing and utilization management policies (White et al 2018).
      ICER’s evaluation framework, most recently updated in July 2017, aims to secure “sustainable access to high-value care for all patients” by ensuring “long-term value for money” and “short-term affordability” (an acceptable potential budget impact) (ICER 2017, pp 3, 5). Its assessment of long-term value for money classifies a therapy as “low,” “intermediate,” or “high” value as follows: ICER automatically designates as “high-value” therapies with cost-effectiveness ratios below (more favorable than) $50 000 per quality-adjusted life-year (QALY) gained and as “low-value” therapies with cost-effectiveness ratios exceeding (less favorable than) $175 000 per QALY gained. For therapies with cost-effectiveness ratios between these 2 benchmarks, which ICER identified based on its interpretation of the literature (ICER 2017), an independent committee appointed by ICER and consisting of clinicians, patient/consumer representatives, methodologists, and policy makers votes, with each member casting a ballot for low, intermediate, or high value. For therapies treating “ultra-rare” conditions (ie, conditions with a US prevalence below 10 000 individuals), ICER sets aside the automatic value designations and instead holds a committee vote regardless of the therapy’s estimated cost-effectiveness. Before its 2017 update, ICER committees voted on value for all therapies, regardless of their cost-effectiveness or the prevalence of their targeted condition.
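      The categorization rules just described can be summarized in a short sketch (our own illustration, not ICER code; the function name and the `ultra_rare` flag are ours):

```python
def value_category(ratio_per_qaly, ultra_rare=False):
    """Apply ICER's 2017 framework rules as described in the text.

    ratio_per_qaly: estimated cost-effectiveness ratio ($/QALY gained).
    ultra_rare: True if the condition's US prevalence is below 10 000,
    in which case ICER holds a committee vote regardless of the ratio.
    """
    if ultra_rare:
        return "committee vote"
    if ratio_per_qaly < 50_000:
        return "high value"      # automatic designation
    if ratio_per_qaly > 175_000:
        return "low value"       # automatic designation
    return "committee vote"      # intermediate range: committee decides

# Example ratios spanning the three ranges
print(value_category(46_000))    # -> "high value"
print(value_category(100_000))   # -> "committee vote"
print(value_category(265_000))   # -> "low value"
```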
      We have shown a strong association between committee value votes and ICER’s estimated cost-effectiveness (Neumann et al 2018), an association that persists in updated tabulations of ICER committee votes (Fig. 1). ICER details its therapy assessment, including cost-effectiveness, in an evidence report, the draft version of which is released 21 weeks after ICER first announces the topic area. During the subsequent 4 weeks, ICER seeks and collects public comments on its draft evidence report. Several weeks after the comment period concludes, having considered the public comments it received, ICER releases its revised evidence report. Alongside the revised evidence report, ICER also publishes a document detailing how it has addressed each comment it received.
      Figure 1Distribution of Institute for Clinical and Economic Review (ICER) committee votes by treatment cost-effectiveness ratios, June 2011 through April 2019. “Dominant” means an intervention saves money and gains QALYs (improves health). From June 2011 to April 2019, ICER convened 52 committees to review 191 treatments. Among the 191 treatments, ICER reported a cost–per–quality-adjusted life-year estimate and held a committee vote on value in 59 cases. We analyzed the 706 individual committee member votes (for “high,” “intermediate,” or “low” value) on these 59 therapies. For cases in which ICER reported cost-effectiveness as a range, we used the midpoint. For conditions such as obesity, for which ICER separately analyzed 4 types of bariatric surgery but held 1 collective vote for the 4 interventions, we averaged the individual cost-effectiveness estimates. Source: Authors’ analysis of publicly available ICER committee votes (https://icer-review.org/). An earlier version of this figure appeared in Neumann PJ, Silver MC, Cohen JT. Should a drug’s value depend on the disease or population it treats? Insights from ICER’s value assessments. Health Affairs Blog. November 6, 2018.
      This article finds that ICER infrequently makes substantial changes to its cost-effectiveness estimates between releases of its draft and revised evidence reports and that, in particular, pharmaceutical industry public comments have a very limited impact on ICER’s cost-effectiveness analyses. We explore how industry might increase the impact of its comments.

      Methods

       Cost-Effectiveness Ratio Changes From ICER’s Draft to Revised Evidence Reports

      We reviewed the 16 evidence reports ICER issued after adopting its 2017 framework, retaining the 15 that evaluated pharmaceutical therapies (https://icer-review.org/topics/#past-topics): amyloidosis, asthma, chimeric antigen receptor T-cell (CAR-T) therapies, cystic fibrosis, endometriosis, hemophilia A, hereditary angioedema, inherited blindness, migraine, opioid use disorder, ovarian cancer, prostate cancer, psoriasis, spinal muscular atrophy, and tardive dyskinesia. The report we omitted, which evaluated treatments for lower back pain, focused only on nonpharmacological therapies. From the 15 retained reports, we identified 53 cost-effectiveness ratios for which ICER reported an estimate in both its draft and revised evidence reports.
      For each retained ratio, we recorded ICER’s cost-effectiveness estimate from the draft report, the corresponding value from the revised report, the change in the estimate (if any), and whether the change caused the estimate to shift categories (less than $50 000 per QALY, from $50 000 to $175 000 per QALY, and greater than $175 000 per QALY). We report descriptive statistics for the proportional change in cost-effectiveness, along with the number of cost-effectiveness ratios that moved from one value category to another.
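      The tabulation just described can be sketched as follows (hypothetical numbers, not ICER data; the function names are ours, and the sketch handles only numeric ratios, not “dominant” or “dominated” results):

```python
def category(ratio):
    """Value categories used in the text: <$50 000, $50 000-$175 000,
    and >$175 000 per QALY."""
    if ratio < 50_000:
        return "<50k"
    if ratio <= 175_000:
        return "50k-175k"
    return ">175k"

def describe_change(draft, revised):
    """Proportional change (percent) and whether the ratio shifted
    value categories between the draft and revised reports."""
    pct_change = round((revised - draft) / draft * 100, 1)
    shifted = category(draft) != category(revised)
    return pct_change, shifted

# A hypothetical 5% decrease that stays within its category
print(describe_change(200_000, 190_000))  # -> (-5.0, False)
```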

       Industry Comments in Response to ICER Reports and Their Impact

      One of us (J.T.C.) reviewed all comments (N = 583) submitted by any pharmaceutical company (n = 34) in response to the 15 ICER reports we included in our sample (see the previous section). We restricted attention to the 317 comments (54%) pertaining to ICER’s cost-effectiveness analyses, disregarding those addressing ICER’s clinical assessment, budget impact analysis, or qualitative comments and comments on other aspects of the evidence report. We further restricted attention to those comments recommending that ICER revise its analysis (n = 256), in contrast to other comments that, for example, sought clarification of methods, assumptions, or findings.
      We categorized each comment in our final sample based on ICER’s response: (1) ICER made no change to its report, (2) ICER revised text in its report but did not revise the analysis, (3) ICER revised an assumption but not a conclusion, or (4) ICER revised a conclusion. To count as a change to a conclusion, the change had to be in response to the comment and have a large enough impact on the cost-effectiveness ratio to either (1) shift the ratio from one value category to another (see category definitions in the prior section) or (2) cause ICER to add or remove a ratio from its base-case assessment. To reduce the risk of missing conclusion-changing comments, we conducted a secondary review of comments for which our initial review of the draft and revised reports, described in the previous section, identified ratios that changed value categories.
      Next, we evaluated each industry comment in terms of 3 attributes. First, we judged a comment to be “clear” if it specified a problematic assumption in ICER’s analysis and the purported reason why it should be changed. Second, we noted whether the comment offered an alternative assumption or methodology to replace the assumption in question. Finally, we judged whether the comment characterized the expected impact on ICER’s analysis of revising the assumption in question.
      We report descriptive statistics characterizing the proportion of comments with various attributes and the association between comment attributes and how ICER responded to comments, that is, the proportion of responses resulting in no change to ICER’s report, a change to the text only, a revised assumption only, or a revised conclusion.

      Results

      Table 1 summarizes the information we extracted from the revised evidence reports published by ICER. Therapy prices ranged widely, from $6900 annually for calcitonin gene-related peptide inhibitors to treat migraine to $2 million for administration of gene therapy for spinal muscular atrophy. Cost-effectiveness estimates also varied, ranging from cost-saving (emicizumab in hemophilia A) to $1.7 million per QALY gained (inotersen for amyloidosis).
      Table 1. Cost-effectiveness findings from ICER reviews.*

      Final report date | Indication | Intervention | Comparator | Annual WAC ($)† | Cost-effectiveness ratio ($/QALY)‡
      10/4/2018 | hATTR amyloidosis | Inotersen | Best supportive care | 450 000 | 1.7m
       | | Patisiran | Best supportive care | 450 000 | 850 000
      12/20/2018 | Asthma (severe) | Biologic agents (4) | Best supportive care | 31 000-39 000 | 325 000-391 000
      03/23/2018 | CAR-T therapies: pediatric leukemia | Tisagenlecleucel | Chemotherapy | 475 000 | 46 000
       | CAR-T therapies: adult lymphoma | Axicabtagene ciloleucel | Chemotherapy | 373 000 | 136 000
      06/07/2018 | Cystic fibrosis | CFTR modulators (3) | Best supportive care | 292 000-312 000 | 841 000-974 000
      08/03/2018 | Endometriosis: short-run (trial) | Elagolix | Placebo | 10 138 | 126 800
       | Endometriosis: lifetime | | | | 81 000
      04/20/2018 | Hemophilia A | Emicizumab | Bypassing agent prophylaxis | 482 000 | Cost-saving
      11/15/2018 | Hereditary angioedema | Lanadelumab (prophylaxis) | On-demand treatment | 566 000 | 1.1m
       | | C1 inhibitors (2) (prophylaxis) | On-demand treatment | 510 000-539 000 | 328 000-5.6m
      07/05/2018 | Migraine: chronic | CGRP inhibitors (2) | Placebo | 6900 | 90 000-120 000
       | Migraine: episodic | | | | 150 000
      12/03/2018 | Opioid use disorder | Buprenorphine implant | Oral buprenorphine | 9900 | 265 000
      09/28/2017 | Ovarian cancer: recurrent disease | PARP inhibitors (3) | Chemotherapy | 178 000-195 000 | 146 000-295 000
       | Ovarian cancer: maintenance | | Observation | | 291 000-369 000
      10/04/2018 | Prostate cancer | Early antiandrogen therapy (2) | Later antiandrogen therapy | 135 000-142 000 | 68 000-84 000
      08/02/2018 | Psoriasis | Biologic agents (10) | Placebo | 38 000-79 000 | 131 000-188 000
      02/14/2018 | RPE65 eye disease: treat at age 15 | Voretigene | Best supportive care | 850 000 | 643 800
       | RPE65 eye disease: treat at age 3 | | | | 287 915
      04/03/2019 | Spinal muscular atrophy | Nusinersen | Best supportive care | 750 000 | 1.1m
       | | Onasemnogene | Best supportive care | 2 000 000§ | 243 000
      12/21/2017 | Tardive dyskinesia | Deutetrabenazine | Placebo | 90 071 | 1.1m
       | | Valbenazine | Placebo | 75 789 | 752 000

      CAR-T indicates chimeric antigen receptor T-cell; CFTR, cystic fibrosis transmembrane conductance regulator; CGRP, calcitonin gene-related peptide; hATTR, hereditary ATTR; ICER, Institute for Clinical and Economic Review; PARP, poly ADP ribose polymerase; QALY, quality-adjusted life-year; WAC, wholesale acquisition cost.
      * Includes the 15 reports covered in this analysis (see the “Methods” section).
      † WAC values are from the Value-Based Price Benchmarks section of the most recent ICER report (often designated “final”). These reports are publicly available online at https://icer-review.org/.
      ‡ Bolded cost-effectiveness ratios in the published table fall below ICER’s $150 000 maximum acceptable cost-effectiveness ratio benchmark.
      § ICER used a placeholder price for Onasemnogene (Zolgensma) because the manufacturer had not yet announced the actual price.

       Cost-Effectiveness Ratio Changes From ICER’s Draft to Revised Evidence Reports

      Figure 2 summarizes the cost-effectiveness ratio changes between the draft and revised evidence reports. Of the 53 ratios we identified, 20 (37.7%) increased (became less favorable), 25 (47.2%) decreased (became more favorable), and 8 (15.1%) remained unchanged, including 4 that were “dominant” (more effective and less costly than the comparator) in both the draft and revised reports and 1 that was “dominated” (less effective and more expensive than the comparator) in both reports. (Fig. 2 does not display ratios that remained dominated or dominant in the draft and final evidence reports.) Of the 45 ratios that changed between the draft and final reports, 26 (57.8%) changed by less than 10%.
      Figure 2Cost-effectiveness ratio changes from draft to revised report (2017-2019). The middle data point reported for hereditary angioedema represents a therapy that the Institute for Clinical and Economic Review (ICER) reported as saving money and improving health in its draft report and having a cost-effectiveness ratio of $328 000 per quality-adjusted life-year in its revised report. The right data point reported for spinal muscular atrophy represents a therapy that ICER reported as equally effective and more expensive than the comparator (best supportive care) in its draft report and having a cost-effectiveness ratio of $8.16 million in its revised report. In addition to the data points displayed, ICER reported that 4 hemophilia A therapies were both more effective and less costly than their comparators (ie, “dominant”) in both the draft and revised evidence reports; ICER reported that 1 therapy (vivitrol, for opioid use disorder) was both less effective and more costly in both the draft and final reports. For endometriosis, ICER’s revised evidence report combined results for 2 endpoints (dysmenorrhea and nonmenstrual pain) reported on separately in its draft evidence report. We combined ICER’s draft report results (using the revised report weights) for these 2 endpoints so that we could plot the draft-to-revised report cost-effectiveness ratio changes.
      Six ratios shifted from one value category to another. Four of these ratios shifted in a favorable direction, including the ratios for 3 migraine therapies, which shifted from the low-value category (>$175 000 per QALY) to the intermediate-value category ($50 000 to $175 000 per QALY). These changes reflect ICER’s replacement of its draft evidence report “placeholder” prices with actual prices that became available later. The fourth ratio, for a CAR-T therapy, shifted from the intermediate-value category to the high-value category (<$50 000 per QALY). In that case, a comment suggested that ICER include the cost of hospitalization (22.5 days, amounting to an increment of approximately $91 000) for administration of the comparator therapy. With an incremental health gain of 7.18 QALYs (unchanged between the draft and final reports), this cost revision reduced the draft cost-effectiveness ratio of $57 093 per QALY by approximately $12 700. The other 2 ratios that changed categories shifted in an unfavorable direction, including a ratio for a hereditary angioedema therapy, which shifted from cost saving to the low-value category, and a ratio for a therapy for psoriasis, which shifted from the intermediate- to low-value category. Neither change appears to reflect an industry comment.
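      The CAR-T revision described above can be checked with back-of-the-envelope arithmetic using the figures in the text:

```python
# Figures from the text (approximate)
draft_ratio = 57_093             # $/QALY in the draft report
added_comparator_cost = 91_000   # ~22.5 days of hospitalization added to the comparator
incremental_qalys = 7.18         # unchanged between draft and revised reports

# Adding cost to the comparator lowers the intervention's incremental cost,
# so the ratio falls by (added comparator cost) / (incremental QALYs)
reduction = added_comparator_cost / incremental_qalys
revised_ratio = draft_ratio - reduction

print(round(reduction))          # ~12 674, ie, "approximately $12 700"
print(revised_ratio < 50_000)    # True: the ratio crosses into the high-value category
```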

       Industry Comments in Response to ICER Reports and Their Impact

      We identified 256 industry comments recommending that ICER revise its cost-effectiveness analysis. Of these, we judged that 18 (7.0%) caused ICER to alter the report text but not the analysis, 91 (35.5%) caused ICER to alter its assumptions, and 5 (2.0%) caused ICER to alter its conclusions. Of the comments that caused ICER to alter its conclusions, one, described in the previous section, pertained to ICER’s CAR-T report. Four comments pertained to ICER’s opioid use disorder report. In response to those comments, ICER concluded that it did not have sufficient evidence to compare Sublocade to the generic comparator (sublingual buprenorphine/naloxone). Whereas the draft evidence report (Table 4.14) concluded that Sublocade increased costs and was less effective than the comparator, the revised evidence report’s base case did not reach a conclusion regarding the cost-effectiveness of Sublocade.
      Figure 3 illustrates the proportion of comments we judged to be clear (Fig. 3, left: n = 97, 37.9% of all comments) or unclear (Fig. 3, right: n = 159, 62.1% of all comments). Comments judged to be clear influenced ICER’s analysis assumptions and/or conclusions more often (54.6% of all clear comments) than did comments we judged to be unclear (27.0% of all unclear comments). Only comments judged to be clear influenced ICER’s conclusions, although the vast majority of clear comments (92 of 97, or 94.8%) did not influence ICER’s conclusions.
      Figure 3. Pharmaceutical industry comments and Institute for Clinical and Economic Review response (2017-2019: clarity). The size of each pie chart is proportional to the number of responses it represents.
      Comments that provided an alternative to what the commenter considered to be a problematic assumption (n = 111, 43.4%) likewise influenced ICER modestly more than comments that did not (n = 145, 56.6%). Among the former, 58 comments (52.3%) resulted in a revised assumption or conclusion. Among the latter, 38 comments (26.2%) resulted in a revised assumption, and none resulted in a revised conclusion.
      Finally, 243 of 256 comments (94.9%) did not characterize the impact of making the suggested revision. One of these comments (0.4%) resulted in a revised conclusion. Four of the 13 comments (30.8%) that did characterize the impact resulted in a revised conclusion.

      Discussion

      Our results indicate that cost-effectiveness findings developed by ICER at the draft evidence report stage do not change substantially in the revised report. Only 6 of the 53 cost-effectiveness ratios we examined crossed a value category benchmark (ie, $50 000 or $175 000 per QALY). Nor did most change substantially in terms of their magnitude. Even those cost-effectiveness ratios that did shift category did not, in many cases, do so in response to evidence advanced by pharmaceutical companies as part of their comments. For example, in 3 of the 6 cases in which ratios crossed a value category benchmark, the revision reflected ICER’s use of newly available drug price information and not a revision to a scientific assumption.
      We limited attention to comments explicitly addressing ICER’s cost-effectiveness estimates. For comments submitted to ICER by the pharmaceutical industry, however, that category includes most comments (54%) and is substantially larger than any other comment category. For example, comments pertaining to ICER’s clinical rating (including “A,” “B,” etc) represent 15% of all industry comments in our sample.
      Our analysis of industry comments and ICER’s responses depends on a number of subjective judgments, including in particular what constitutes a report revision that amounts to a conclusionary change. Nonetheless, our findings suggest that industry comments have limited influence. Only 5 of 256 industry comments caused ICER to shift its conclusion. In most cases, ICER did not even revise its report text in response to these comments. There could be a number of reasons for this lack of influence. ICER, in its capacity as both the author of these reports and referee in adjudicating comments, may be reluctant to revise its reports. Alternatively, ICER’s draft reports may reflect solid analyses not in need of revision. A third possibility is that industry comments have simply not advanced many sound arguments for revision.
      We suspect that industry comments could be made more influential. For example, our analysis suggests that clearer comments have a greater impact on ICER. Offering an alternative to a problematic assumption also seems to increase a comment’s influence. We observed that when commenters provide no alternative assumption, ICER sometimes responds by acknowledging the complaint as a limitation, adding that it lacks the data or methods to overcome the identified problem. For example, one commenter suggested that “using short-term efficacy to evaluate the long-term comparative effectiveness . . . does not take into account the sustainability of long-term efficacy” but did not suggest an alternative to ICER’s approach. ICER responded that because the study referenced did not have extensive follow-up for the outcome of interest, “we could only confidently compare the comparative efficacy . . . at the end of the induction period,” adding, “we noted this as a general limitation” (ICER 2018).
      Our most striking finding is that only 5% of the industry comments explain why they matter to ICER’s conclusions. In many cases, that leaves ICER the option of acknowledging the validity of a comment from a scientific perspective but then dismissing it as having no influence on ICER’s findings. For example, one commenter explained that “ICER’s use of a 72% discontinuation/switch rate . . . is not credible considering available real-world evidence.” ICER considered the recommended alternative assumption and noted that “the overall conclusions of the study are relatively unchanged” (ICER 2018).
      Just as strikingly, although admittedly based on a small sample, we found that ICER altered its conclusion in response to 4 of 13 of the comments that did describe their impact. That finding could indicate the importance of flagging a suggested revision’s impact. It is also possible that ICER would have revised its conclusions in response to these 4 comments even if their impact had not been flagged.
      Perhaps because industry commenters use their 5 allocated pages (the maximum submission length ICER allows) to identify a wide range of perceived scientific inaccuracies, they do not develop solid, in-depth cases for the few problems that could alter ICER’s findings. To improve decision making that relies on ICER’s reports, pharmaceutical companies (and probably other commenters) would be better off focusing their attention on a small number of scientific issues most likely to influence ICER’s conclusions.
      Focusing on assumptions that substantially influence the results in ICER’s own sensitivity analyses would be a good place to start. Because other assumptions may not be addressed by ICER’s sensitivity analyses (eg, model structural assumptions and parameters not included in ICER’s assessment of uncertainty), we believe it is crucial for ICER to make its simulation models publicly available. Only then, as we have argued elsewhere (Cohen et al 2017; Cohen and Wong 2017), will external stakeholders have the means to identify all the important assumptions that should be central to the scientific debate over ICER’s reports. Without this information, ICER’s influence on the allocation of healthcare resources will be less beneficial than it otherwise could be.
      In conclusion, our findings suggest that changes in ICER’s estimates of cost-effectiveness between its draft and revised evidence reports are generally modest and infrequently result in major shifts in ICER’s characterization of product value. The dialogue around value in the context of ICER reviews would be enhanced by both greater precision in stakeholder critiques of ICER’s assessments and increased transparency by ICER, industry, and others with respect to the economic models that have become increasingly important to healthcare decision making in the United States.

      Acknowledgments

      The authors have no other financial relationships to disclose.

      References

        • Cohen JP. ICER’s growing impact on drug pricing and reimbursement. Forbes. April 17, 2019.
        • Butcher L. Can ICER bring cost-effectiveness to drug prices? Managed Care. 2019;28:30-33.
        • White NP, Sidhu M, Johns A, Pace M. Industry Perceptions and Expectations: The Role of ICER as an Independent HTA. Dublin, Ireland: ICON; 2018.
        • Institute for Clinical and Economic Review. Overview of the ICER Value Assessment Framework and Update for 2017-2019. Boston, MA: Institute for Clinical and Economic Review; 2017.
        • Neumann PJ, Silver M, Cohen JT. Should a drug’s value depend on the disease or population it treats? Insights from ICER’s value assessments. Health Affairs Blog. November 6, 2018.
        • Institute for Clinical and Economic Review. Targeted Immunomodulators for the Treatment of Moderate-to-Severe Plaque Psoriasis: Response to Public Comments on Draft Evidence Report. Boston, MA: Institute for Clinical and Economic Review; 2018.
        • Cohen JT, Neumann PJ, Wong JB. A call for open-source cost-effectiveness analysis. Ann Intern Med. 2017;167:432-433.
        • Cohen JT, Wong JB. Can economic model transparency improve provider interpretation of cost-effectiveness analysis? A response. Med Care. 2017;55:912-914.