Address correspondence to: Joshua T. Cohen, PhD, Deputy Director, Center for the Evaluation of Value and Risk in Health, Institute for Clinical Research and Health Policy Studies, Tufts Medical Center, 800 Washington Street, Box #063, Boston, MA 02111.
Cost-effectiveness analysis (CEA) features heavily in pharmaceutical value assessments conducted by the Institute for Clinical and Economic Review (ICER).
What does the article add to existing knowledge?
We found that comments submitted by the pharmaceutical industry recommending that ICER revise its CEAs have little influence on ICER’s findings.
What insights does the article provide for informing healthcare-related decision making?
Industry can engage ICER more effectively by ensuring that its comments clearly identify problematic assumptions or estimates, offer specific alternatives, and support a case for the quantitative impact of making recommended revisions.
The Institute for Clinical and Economic Review (ICER) has gained prominence through its work conducting health technology assessments of pharmaceuticals in the United States.
To understand the influence of industry comments on pharmaceutical value assessments conducted by ICER.
We reviewed 15 ICER reports issued from 2017 through 2019. We quantified ICER’s revisions to its cost-effectiveness analysis (CEA) estimates between release of its draft and revised evidence reports and whether ratios shifted across ICER-specified categories of high, intermediate, or low value. We also reviewed industry-submitted comments recommending revision to ICER’s CEAs, noting ICER’s response as no change, text revised, assumption(s) revised, or conclusion revised. We evaluated each comment in terms of clarity, whether it offered an alternative to ICER’s approach, and whether it characterized the expected impact of revision on ICER’s analysis.
We identified 53 ICER-reported ratios. Of these, 45 (84.9%) changed between the draft and revised report, but 26 changes (57.8%) were small (<10%). Six ratios shifted across value categories. We identified 256 industry comments recommending that ICER revise its CEA. Of these, 159 (62%) lacked clarity, 145 (57%) offered no alternative, and 243 (95%) did not characterize their impact on ICER’s estimated ratio. Ninety-one comments (35.5%) caused ICER to revise its assumptions, but only 5 (2.0%) caused ICER to revise its conclusions. Four of these 5 comments characterized their impact on ICER’s findings.
Changes in ICER’s estimates of cost-effectiveness between its draft and revised evidence reports are generally modest. Greater precision in industry comments could increase the influence of industry critiques, thus enhancing the dialogue around pharmaceutical value.
Between 2017 and mid-2019, the Institute for Clinical and Economic Review (ICER) released a series of reports on drug therapies, in the process thoroughly stirring the US debate about drug pricing and value.
Although it is a private, nonprofit group with no formal regulatory or reimbursement authority, ICER has gained national prominence through its work assessing and communicating the clinical benefit and value of pharmaceuticals as well as other technologies. Its reports have garnered national press attention.
ICER’s evaluation framework, most recently updated in July 2017, aims to secure “sustainable access to high-value care for all patients” by ensuring “long-term value for money” and “short-term affordability” (an acceptable potential budget impact) (p3,5).
Its assessment of long-term value for money classifies a therapy as “low,” “intermediate,” or “high” value as follows: ICER automatically designates as “high-value” therapies with cost-effectiveness ratios below (more favorable than) $50 000 per quality-adjusted life-year (QALY) gained and as “low-value” therapies with cost-effectiveness ratios exceeding (less favorable than) $175 000 per QALY gained. For therapies with cost-effectiveness ratios between these 2 benchmarks, which ICER identified based on its interpretation of the literature,
an independent committee consisting of clinicians, patient/consumer representatives, methodologists, and policy makers and appointed by ICER votes, with each member casting a ballot for low, intermediate, or high value. For therapies treating “ultra-rare” conditions (ie, conditions with a US prevalence below 10 000 individuals), ICER tables the automatic value designations and instead holds a committee vote regardless of the therapy’s estimated cost-effectiveness. Before its 2017 update, ICER committees voted on value for all therapies, regardless of their cost-effectiveness or the prevalence of their targeted condition.
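Using our own notation (not ICER’s), the automatic designation described above can be summarized as a decision rule, where $r$ denotes the estimated cost-effectiveness ratio in dollars per QALY gained:

```latex
V(r) =
\begin{cases}
\text{high value}, & r < \$50\,000 \\
\text{committee vote}, & \$50\,000 \le r \le \$175\,000 \\
\text{low value}, & r > \$175\,000
\end{cases}
```

For ultra-rare conditions, the committee vote applies for all values of $r$.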
We have shown a strong association between committee value votes and ICER’s estimated cost-effectiveness,
an association that persists in updated tabulations of ICER committee votes (Fig. 1). ICER details its therapy assessment, including cost-effectiveness, in an evidence report, the draft version of which is released 21 weeks after ICER first announces the topic area. During the subsequent 4 weeks, ICER seeks and collects public comments on its draft evidence report. Several weeks after the comment period concludes, having considered the public comments it received, ICER releases its revised evidence report. Alongside the revised evidence report, ICER also publishes a document detailing how it has addressed each comment it received.
This article finds that ICER infrequently makes substantial changes to its cost-effectiveness estimates between releases of its draft and revised evidence reports and that, in particular, pharmaceutical industry public comments have a very limited impact on ICER’s cost-effectiveness analyses. We explore how industry might increase the impact of its comments.
Cost-Effectiveness Ratio Changes From ICER’s Draft to Revised Evidence Reports
We reviewed the 15 evidence reports evaluating pharmaceutical therapies that ICER issued after adopting its 2017 framework (https://icer-review.org/topics/#past-topics): amyloidosis, asthma, chimeric antigen receptor T-cell (CAR-T) therapies, cystic fibrosis, endometriosis, hemophilia A, hereditary angioedema, inherited blindness, migraine, opioid use disorder, ovarian cancer, prostate cancer, psoriasis, spinal muscular atrophy, and tardive dyskinesia. We omitted one report, an evaluation of treatments for lower back pain, because it focused only on nonpharmacological therapies. From the 15 reports we retained, we identified 53 cost-effectiveness ratios for which ICER reported an estimate in both its draft and revised evidence reports.
For each retained ratio, we recorded ICER’s cost-effectiveness estimate from the draft report, the corresponding value from the revised report, the change in the estimate (if any), and whether the change caused the estimate to shift categories (less than $50 000 per QALY, from $50 000 to $175 000 per QALY, and greater than $175 000 per QALY). We report descriptive statistics for the proportional change in cost-effectiveness, along with the number of cost-effectiveness ratios that moved from one value category to another.
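In our notation, the proportional change we report for ratio $i$, with $r_i^{\text{draft}}$ and $r_i^{\text{revised}}$ denoting the draft and revised estimates, is

```latex
\Delta_i = \frac{r_i^{\text{revised}} - r_i^{\text{draft}}}{r_i^{\text{draft}}},
```

with a category shift recorded whenever a ratio crosses the $50 000 or $175 000 per QALY benchmark. (This quantity is defined only when both estimates are finite ratios; dominant and dominated results are tracked separately.)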
Industry Comments in Response to ICER Reports and Their Impact
One of us (J.T.C.) reviewed all comments (N = 583) submitted by any pharmaceutical company (n = 34) in response to the 15 ICER reports in our sample (see the previous section). We restricted attention to the 317 comments (54%) pertaining to ICER’s cost-effectiveness analyses, disregarding comments that addressed ICER’s clinical assessment or budget impact analysis, qualitative comments, and comments on other aspects of the evidence report. We further restricted attention to those comments recommending that ICER revise its analysis (n = 256), in contrast to other comments that, for example, sought clarification of methods, assumptions, or findings.
We categorized each comment in our final sample based on ICER’s response: (1) ICER made no change to its report, (2) ICER revised text in its report but did not revise the analysis, (3) ICER revised an assumption but not a conclusion, or (4) ICER revised a conclusion. To count as a change to a conclusion, the change had to be in response to the comment and have a large enough impact on the cost-effectiveness ratio to either (1) shift the ratio from one value category to another (see category definitions in the prior section) or (2) cause ICER to add or remove a ratio from its base-case assessment. To reduce the risk of missing conclusion-changing comments, we conducted a secondary review of comments for which our initial review of the draft and revised reports, described in the previous section, identified ratios that changed value categories.
Next, we evaluated each industry comment in terms of 3 attributes. First, we judged a comment to be “clear” if it specified a problematic assumption in ICER’s analysis and the purported reason why it should be changed. Second, we noted whether the comment offered an alternative assumption or methodology to replace the assumption in question. Finally, we judged whether the comment characterized the expected impact on ICER’s analysis of revising the assumption in question.
We report descriptive statistics characterizing the proportion of comments with various attributes and the association between comment attributes and how ICER responded to comments, that is, the proportion of responses resulting in no change to ICER’s report, a change to the text only, a revised assumption only, or a revised conclusion.
Table 1 summarizes the information we extracted from the revised evidence reports published by ICER. Therapy prices ranged widely, from $6900 annually for calcitonin gene-related peptide inhibitors to treat migraine to $2 million for administration of gene therapy for spinal muscular atrophy. Cost-effectiveness estimates also varied, ranging from cost-saving (emicizumab in hemophilia A) to $1.7 million per QALY gained (inotersen for amyloidosis).
Table 1. Cost-effectiveness findings from ICER reviews.
Cost-Effectiveness Ratio Changes From ICER’s Draft to Revised Evidence Reports
Figure 2 summarizes the cost-effectiveness ratio changes between the draft and revised evidence reports. Of the 53 ratios we identified, 20 (37.7%) increased (became less favorable), 25 (47.2%) decreased (became more favorable), and 8 (15.1%) remained unchanged, including 4 that were “dominant” (more effective and less costly than the comparator) in both the draft and revised reports and 1 that was “dominated” (less effective and more expensive than the comparator) in both reports. (Fig. 2 does not display ratios that remained dominated or dominant in the draft and final evidence reports.) Of the 45 ratios that changed between the draft and final reports, 26 (57.8%) changed by less than 10%.
Six ratios shifted from one value category to another. Four of these ratios shifted in a favorable direction, including the ratios for 3 migraine therapies, which shifted from the low-value category (>$175 000 per QALY) to the intermediate-value category ($50 000 to $175 000 per QALY). These changes reflect ICER’s replacement of its draft evidence report “placeholder” prices with actual prices that became available later. The fourth ratio, for a CAR-T therapy, shifted from the intermediate-value category to the high-value category (<$50 000 per QALY). In that case, a comment suggested that ICER include the cost of hospitalization (22.5 days, amounting to an increment of approximately $91 000) for administration of the comparator therapy. With an incremental health gain of 7.18 QALYs (unchanged between the draft and final reports), this cost revision reduced the draft cost-effectiveness ratio of $57 093 per QALY by approximately $12 700. The other 2 ratios that changed categories shifted in an unfavorable direction, including a ratio for a hereditary angioedema therapy, which shifted from cost saving to the low-value category, and a ratio for a therapy for psoriasis, which shifted from the intermediate- to low-value category. Neither change appears to reflect an industry comment.
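The arithmetic behind the CAR-T category shift can be reconstructed from the figures above (rounded): adding roughly $91 000 of hospitalization cost to the comparator arm lowers the incremental cost by that amount, and dividing by the unchanged QALY gain gives the reduction in the ratio:

```latex
\frac{\$91\,000}{7.18\ \text{QALYs}} \approx \$12\,700\ \text{per QALY},
\qquad
\$57\,093 - \$12\,700 \approx \$44\,400,
```

placing the revised ratio below the $50 000 per QALY high-value benchmark.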
Industry Comments in Response to ICER Reports and Their Impact
We identified 256 industry comments recommending that ICER revise its cost-effectiveness analysis. Of these, we judged that 18 (7.0%) caused ICER to alter the report text but not the analysis, 91 (35.5%) caused ICER to alter its assumptions, and 5 (2.0%) caused ICER to alter its conclusions. Of the comments that caused ICER to alter its conclusions, one, described in the previous section, pertained to ICER’s CAR-T report. The other 4 pertained to ICER’s opioid use disorder report. In response to those comments, ICER concluded that it did not have sufficient evidence to compare Sublocade to the generic comparator (sublingual buprenorphine/naloxone). Whereas the draft evidence report (table 4.14) concluded that Sublocade increased costs and was less effective than the comparator, the revised evidence report’s base case did not reach a conclusion regarding the cost-effectiveness of Sublocade.
Figure 3 illustrates the proportion of comments we judged to be clear (Fig. 3, left: n = 97, 37.9% of all comments) or unclear (Fig. 3, right: n = 159, 62.1% of all comments). Comments judged to be clear influenced ICER’s analysis assumptions and/or conclusions more often (54.6% of all clear comments) than did comments we judged to be unclear (27.0% of all unclear comments). Only comments judged to be clear influenced ICER’s conclusions, although the vast majority of clear comments (92 of 97, or 94.8%) did not influence ICER’s conclusions.
Comments that provided an alternative to what the commenter considered to be a problematic assumption (n = 111, 43.4%) likewise influenced ICER modestly more than comments that did not (n = 145, 56.6%). Among the former, 58 comments (52.3%) resulted in a revised assumption or conclusion. Among the latter, 38 comments (26.2%) resulted in a revised assumption, and none resulted in a revised conclusion.
Finally, 243 of 256 comments (94.9%) did not characterize the impact of making the suggested revision. One of these comments (0.4%) resulted in a revised conclusion. Four of the 13 comments (30.8%) that did characterize the impact resulted in a revised conclusion.
Our results indicate that cost-effectiveness findings developed by ICER at the draft evidence report stage do not change substantially in the revised report. Only 6 of the 53 cost-effectiveness ratios we examined crossed a value category benchmark (ie, $50 000 or $175 000 per QALY). Nor did most change substantially in terms of their magnitude. Even those cost-effectiveness ratios that did shift category did not, in many cases, do so in response to evidence advanced by pharmaceutical companies as part of their comments. For example, in 3 of the 6 cases in which ratios crossed a value category benchmark, the revision reflected ICER’s use of newly available drug price information and not a revision to a scientific assumption.
We limited attention to comments explicitly addressing ICER’s cost-effectiveness estimates. For comments submitted to ICER by the pharmaceutical industry, however, that category includes most comments (54%) and is substantially larger than any other comment category. For example, comments pertaining to ICER’s clinical rating (including “A,” “B,” etc) represent 15% of all industry comments in our sample.
Our analysis of industry comments and ICER’s responses depends on a number of subjective judgments, including, in particular, what constitutes a report revision substantial enough to count as a change in conclusions. Nonetheless, our findings suggest that industry comments have limited influence. Only 5 of 256 industry comments caused ICER to shift its conclusion. In most cases, ICER did not even revise its report text in response to these comments. There could be a number of reasons for this lack of influence. ICER, in its capacity as both the author of these reports and the referee adjudicating comments, may be reluctant to revise its reports. Alternatively, ICER’s draft reports may reflect solid analyses not in need of revision. A third possibility is that industry comments have simply not advanced many sound arguments for revision.
We suspect that industry comments could be made more influential. For example, our analysis suggests that clearer comments have a greater impact on ICER. Offering an alternative to a problematic assumption also seems to increase a comment’s influence. We observed that when commenters provide no alternative assumption, ICER sometimes responds by acknowledging the complaint as a limitation, adding that it lacks data or methods to overcome the identified problem. For example, one commenter suggested that “using short-term efficacy to evaluate the long-term comparative effectiveness . . . does not take into account the sustainability of long-term efficacy” but did not suggest an alternative to ICER’s approach. ICER responded that because the study referenced did not have extensive follow-up for the outcome of interest, “we could only confidently compare the comparative efficacy . . . at the end of the induction period,” adding, “we noted this as a general limitation.”
Our most striking finding is that only 5% of the industry comments explain why they matter to ICER’s conclusion. In many cases, that leaves ICER the option of acknowledging the validity of the comment from a scientific perspective but then dismissing it by saying that it does not influence ICER’s findings. For example, one commenter explained that “ICER’s use of a 72% discontinuation/switch rate . . . is not credible considering available real-world evidence.” ICER considered the alternative assumption recommended and noted that “the overall conclusions of the study are relatively unchanged.”
Just as strikingly, although admittedly based on a small sample, we found that ICER altered its conclusion in response to 4 of the 13 comments that did describe their impact. That finding could indicate the importance of flagging a suggested revision’s impact. It is also possible that ICER would have revised its conclusions in response to these 4 comments even if their impact had not been flagged.
Possibly, because industry commenters are using their 5 allocated pages (the maximum submission length allowed by ICER) to identify a wide range of scientific inaccuracies, they do not develop solid, in-depth cases for the limited number of problems that would alter ICER’s findings. To improve decision making that relies on ICER’s reports, pharmaceutical companies (and probably other commenters) would be better off focusing their attention on a small number of scientific issues that will influence ICER’s conclusions.
Focusing on assumptions that substantially influence the results in ICER’s own sensitivity analyses would be a good place to start. Because other assumptions may not be addressed by ICER’s sensitivity analyses (eg, model structural assumptions and parameters not included in ICER’s assessment of uncertainty), we believe that it is crucial for ICER to make their simulation models publicly available. Only then, as we have argued elsewhere,
will external stakeholders have the means to identify all the important assumptions that should be central to the scientific debate over ICER’s reports. Without this information, ICER’s influence on the allocation of healthcare resources will be less beneficial than it otherwise could be.
In conclusion, our findings suggest that changes in ICER’s estimates of cost-effectiveness between its draft and revised evidence reports are generally modest and infrequently result in major shifts in ICER’s characterization of product value. The dialogue around value in the context of ICER reviews would be enhanced by both greater precision in stakeholder critiques of ICER’s assessments and increased transparency by ICER, industry, and others with respect to the economic models that have become increasingly important to healthcare decision making in the United States.
The authors have no other financial relationships to disclose.