
Real-World Evidence: Useful in the Real World of US Payer Decision Making? How? When? And What Studies?

Open Archive | Published: October 18, 2017 | DOI: https://doi.org/10.1016/j.jval.2017.08.3013

      Abstract

      Objectives

      To examine how real-world evidence (RWE) is currently perceived and used in managed care environments, especially to inform pharmacy and therapeutics (P&T) committee decisions; to assess which study factors (e.g., data, design, and funding source) contribute to RWE utility in decisions; and to identify barriers to consideration of RWE studies in P&T decision making.

      Methods

      We conducted focus groups/telephone-based interviews and surveys to understand perceptions of RWE and assess awareness, quality, and relevance of two high-profile examples of published RWE studies. A purposive sample comprised 4 physicians, 15 pharmacists, and 1 researcher representing 18 US health plans and health system organizations.

      Results

      Participants reported that RWE was generally used, or useful, to inform safety monitoring, utilization management, and cost analysis, but less so to guide P&T decisions. Participants were not aware of the two sample RWE studies but considered both studies to be valuable. Relevant research questions and outcomes, transparent methods, study quality, and timely results contribute to the utility of published RWE. Perceived organizational barriers to the use of published RWE included lack of skill, training, and timely study results.

      Conclusions

      Payers recognize the value of RWE, but use of such studies to inform P&T decisions varies from organization to organization and is limited. Relevance to payers, timeliness, and transparent methods were key concerns with RWE. Participants recognized the need for continuing education on evaluating and using RWE to better understand the study methods, findings, and applicability to their organizations.


      Introduction

      The use of administrative data, electronic health records, and other data sets to evaluate health care technologies and programs has exploded over the past decade. These real-world data from patient experiences can inform decisions on how best to use available and emerging health care technologies. Nevertheless, little is known about how managed care organizations (MCOs) use real-world evidence (RWE) in their formulary, utilization management, drug monographs, and other decision-making processes. For purposes of this article we adopt the definition of RWE proposed by the Food and Drug Administration: “… information on health care that is derived from multiple sources outside typical clinical research settings, including electronic health records (EHRs), claims and billing data, product and disease registries, and data gathered through personal devices and health applications” [1].
      Because RWE does not involve random assignment of subjects to treatments, advanced matching and statistical techniques are often needed to control for potential bias, especially confounding by indication. Nevertheless, these state-of-the-art approaches to controlling for bias are likely unfamiliar to health care decision makers [2]. A lack of understanding may lead decision makers to mistrust and place lower importance on information from such studies, limiting their use in the decision-making process. Decision makers may instead over-rely on familiar sources of evidence, such as randomized controlled trials (RCTs), or on expert opinion [3,4].
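      As a minimal, hypothetical illustration of the kind of matching technique referred to above (a sketch in Python, not drawn from this study; the column names treated, ps, and the covariates are assumed for demonstration), 1:1 propensity score matching on a claims extract might look like this:

          # Minimal, hypothetical sketch (not the authors' method): 1:1 propensity score
          # matching to reduce confounding by indication when comparing two treatments.
          import pandas as pd
          from sklearn.linear_model import LogisticRegression
          from sklearn.neighbors import NearestNeighbors

          def match_on_propensity(df: pd.DataFrame, covariates: list) -> pd.DataFrame:
              # 1) Estimate each patient's probability of receiving the treatment
              #    (the propensity score) from measured covariates.
              model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
              df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])

              # 2) Match each treated patient to the untreated patient with the closest
              #    propensity score (1:1 nearest-neighbor matching, with replacement).
              treated = df[df["treated"] == 1]
              control = df[df["treated"] == 0]
              nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
              _, idx = nn.kneighbors(treated[["ps"]])
              matched_controls = control.iloc[idx.ravel()]

              # 3) Outcomes (e.g., adverse events) can then be compared in the matched cohort.
              return pd.concat([treated, matched_controls])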
      Observational research has evolved over the past 5 to 10 years, with large-scale investment in new and enhanced real-world data sources, numerous recommendations on how to conduct and evaluate such research, and a proliferation of RWE publications. For example, public investment in data infrastructure has been made by many groups (e.g., the Patient-Centered Outcomes Research Institute [PCORI] National Patient-Centered Clinical Research Network [5], the National Institutes of Health Precision Medicine Cohort, and state investment in all-payer claims databases) to inform health care delivery and policy [6,7]. Professional societies and others, including the International Society for Pharmacoeconomics and Outcomes Research (ISPOR), PCORI, the Agency for Healthcare Research and Quality, and the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance, have issued guidelines for conducting RWE studies [2,8-17]. Other efforts, such as the Comparative Effectiveness Research (CER) Collaborative between the Academy of Managed Care Pharmacy (AMCP), ISPOR, and the National Pharmaceutical Council (NPC), sought to help decision makers evaluate and apply this evidence [16]. These efforts parallel a sevenfold increase in RWE publications over the past decade.
      In brief, the field of RWE has matured to the point where evidence beyond clinical trials is of sufficient quantity and quality to assist decision makers in a complex and dynamic health care environment. Nevertheless, little is known about the use of RWE by payer decision makers and their organizations.
      This study examined three issues. First, we examined how RWE is perceived and used in managed care environments, including in pharmacy and therapeutics (P&T) committee decisions. Second, we identified the features of RWE studies, such as study design (e.g., prospective vs. retrospective cohorts), analytic methods, population, outcomes (e.g., safety vs. efficacy), and data sources (e.g., claims vs. EHRs), that make certain studies more useful to payers. Finally, we assessed barriers to the use of RWE studies at the research study, individual evaluator, and payer organization levels.

      Methods

      A purposive sample of pharmacists and physicians employed by MCOs, pharmacy benefit managers (PBMs), health care systems, and government agencies was invited to participate in the study using up to three outreach attempts via email or phone. The study sample comprised 20 physicians, pharmacists, or researchers from MCOs, PBMs, health insurers, health technology assessment organizations, or health systems. The sample was designed to include a mixture of large, midsize, and smaller organizations. The study plan was to conduct three focus group sessions as adjunct events to professional meetings. Nevertheless, because of cancellation by one meeting planner and by participants for personal reasons, we conducted two focus groups in person, two via conference call, and three individual interviews separately by phone. The University of Arizona Institutional Review Board approved the study and participants provided informed consent before data collection. Participants received a modest compensation for their time.
      To determine payers’ perceptions about RWE in managed care, we conducted a two-phase investigation: 1) an online survey that evaluated the potential use in payer decision making of sample RWE studies published in high-tier journals and 2) focus groups or telephone interviews, accompanied by a focus group/interview questionnaire on attitudes toward and use of RWE in general (see Fig. 1 for the overall study design). The advantage of using sequential quantitative and qualitative methods is that they capture uncontaminated individual perceptions while also allowing participants to provide insight into their responses. In addition, because RWE can encompass various study designs and data sources, the multiphase approach allowed us to gain insights on attitudes in general and on specific RWE case studies. All quantitative data were derived from the self-administered measures (i.e., the focus group questionnaire on attitudes and use of RWE in general and the online survey evaluating awareness and potential use of specific RWE studies), and all qualitative data were derived from the focus groups and individual interviews. Not all participants chose to participate in every aspect of this study or answer every question; thus, the number of participants for each question and topic area varies.
      Fig. 1. Study design. RWE, real-world evidence.
      To clarify the definition of RWE, we defined observational studies to participants as “(a) analysis of real-world data—published or unpublished—from patient registries, electronic health records, administrative data, claims databases, health surveys and patient-reported outcomes (PROs); (b) non-experimental/non-interventional/non-randomized studies or pragmatic clinical trials; and (c) descriptive analysis conducted for internal purposes.”

      Awareness of and Attitudes toward Specific RWE Studies

      To obtain a granular understanding of the study features that make RWE more useful to payers, participants received two sample RWE studies published in high-impact journals to assess the actual or potential use of these studies in decision making. The two studies used common real-world data sources and assessed medications from therapeutic classes likely to be reimbursed/covered. Study A, conducted by the Food and Drug Administration’s Mini-Sentinel program and published as a viewpoint in the New England Journal of Medicine, examined bleeding events with dabigatran [18]. Study B, published in the Journal of the American Medical Association, evaluated the use of long-acting beta agonists with corticosteroids versus long-acting beta agonists alone in older adults with chronic obstructive pulmonary disease (COPD) [19].
      Participants completed a 47-item online survey (see Appendix A in Supplemental Materials found at doi:10.1016/j.jval.2017.08.3013) that asked whether they or their organizations 1) had conducted a review of these therapeutic classes (i.e., novel anticoagulants and long-acting beta agonists), 2) were aware of the studies, and 3) had used, or would use, the articles for decision making in their organizations. Additional questions concerned the studies’ relevance and quality and whether their organizations did or could use the studies in any way (e.g., drug monographs, physician education, formulary placement, utilization management, or other clinical programs or policy). Participants received links to the articles and the online survey and were asked to complete the survey before the focus group or interview.

      Attitudes and Use of RWE

      To understand the attitudes toward RWE, focus groups and interviews were conducted that lasted between 1.5 and 2 hours depending on the number of participants. At the start of each focus group or interview, participants independently completed an additional eight-item questionnaire to provide attitudes on discussion topics without influence from group interaction (see Appendix B in Supplemental Materials found at doi:10.1016/j.jval.2017.08.3013). During the facilitated discussion, the moderator used a semistructured discussion guide focusing on 11 questions (see Appendix C in Supplemental Materials found at doi:10.1016/j.jval.2017.08.3013). Questions concerned the following topics: the extent that published observational studies and internal real-world data are used to inform decision making in their organizations; the advantages and disadvantages of using observational studies; their attitudes toward observational studies; their individual and organizational skills to evaluate, conduct, and use RWE studies; gaps in knowledge and skills and preferred methods to build those competencies; and how RWE could be improved so that studies would be used more readily.
      All qualitative data derived from the focus group and interview discussions were digitally recorded, transcribed verbatim, and de-identified. Three investigators individually and then collectively sorted and analyzed discussion data to ensure agreement and consistency among thematic categories. Investigators used a thematic content analysis approach [20] to identify, categorize, and code themes derived from the data. Themes were intentionally undefined before data collection and allowed to emerge from the data.

      Results

      A total of 51 individuals were recruited to reach the intended sample of 20 participants. Of the 20 individuals who participated in the focus groups or individual interviews, 19 completed the focus group questionnaire and 17 completed the online survey assessing knowledge and attitudes concerning the two published studies. Participants were employed by various health care organizations and could select more than one type of employer: 11 (65%) were employed by MCOs, 6 (35%) by PBMs, and 4 (24%) by health maintenance organizations/health plan entities; one person was associated with a hospital health system, one with a medical clinic, and one with a government payer; one was a consultant to a health plan with Medicare, Medicaid, and commercial lines of business; one was employed by a health information technology firm (formerly with a health plan); and one was employed by a nonprofit technology assessment group. The group had a mean of 28 ± 13.5 years of health care professional experience (median 32 years; range 2–49 years). Fourteen participants were pharmacists, four were physicians, and one identified as a pharmacologist/health economics and outcomes researcher (one individual did not answer the survey).
      Most participants (65%) indicated that their organizations provide services at the national level, with varying levels of participation at regional (29%), state (41%), and local (18%) levels. Most participants reported that from 2 to 10 full-time equivalent individuals in their organizations were responsible for creating P&T monographs. Three participants, however, were unsure how many individuals created such documents, largely because the production of monographs was contracted with another organization (e.g., a PBM).

      Use of RWE

      When asked whether observational studies play a role in their organization, 2 (11%) indicated “almost never,” 15 (79%) “sometimes,” 1 (5%) “often,” and 1 (5%) “almost always” (see Table 1). In the interviews, participants explained that their use of RWE varied from “use it all the time” to “we don’t use this in any of our considerations [for P&T].” Another participant stated “… our use of real-world observational data has really been minimal.”
      Table 1. Payers’ perspectives on use of RWE in their organizations and advantages compared with RCT data

      Focus group questionnaire item | N | Almost never | Sometimes | Often | Almost always | Don’t know
      1. Do observational studies play any role in your organization’s medication use policies? | 19 | 2 (11%) | 15 (79%) | 1 (5%) | 1 (5%) | 0 (0%)
      2. How often do you consider observational evidence when setting the following types of medical/pharmacy policy?
       a. Formulary placement | 19 | 6 (32%) | 11 (58%) | 1 (5%) | 1 (5%) | 0 (0%)
       b. Utilization management | 19 | 2 (11%) | 14 (74%) | 2 (11%) | 1 (5%) | 0 (0%)
       c. Tiering of pharmaceutical treatments | 19 | 6 (32%) | 12 (63%) | 0 (0%) | 1 (5%) | 0 (0%)
       d. Determination of medical necessity for individual appeals | 18 | 5 (28%) | 9 (50%) | 2 (11%) | 1 (6%) | 1 (6%)
       e. Other types of pharmacy or medical benefits management | 12 | 1 (8%) | 4 (33%) | 2 (17%) | 1 (8%) | 4 (33%)
      3. In your opinion, do observational studies have the following advantages as compared with RCT evidence?
       a. Provide information about diverse populations | 19 | 0 (0%) | 12 (63%) | 7 (37%) | 0 (0%) | 0 (0%)
       b. Provide information about subpopulations of patients | 19 | 0 (0%) | 11 (58%) | 8 (42%) | 0 (0%) | 0 (0%)
       c. Improve understanding of patient-centered outcomes | 19 | 1 (5%) | 13 (68%) | 4 (21%) | 0 (0%) | 1 (6%)
       d. Provide information about how treatments work in real-world settings | 19 | 0 (0%) | 4 (21%) | 12 (63%) | 3 (16%) | 0 (0%)
       e. Provide timely results | 18 | 7 (39%) | 6 (33%) | 3 (17%) | 0 (0%) | 2 (11%)
      4. In your opinion, are observational studies considered one source of valuable evidence in your organization? | 19 | 2 (11%) | 11 (58%) | 6 (32%) | 0 (0%) | 0 (0%)
      RCT, randomized controlled trial; RWE, real-world evidence.
      Participants frequently cited “safety” as a reason for using RWE in their organizations. Other uses included filling evidence gaps not covered by randomized controlled studies, determining effectiveness, and examining comparative effectiveness. The uses highlighted in the interviews were similar to the perceived advantages and disadvantages of RWE rated in the focus group questionnaire. Most participants (89%) reported that organizations should be open to using data from observational studies as well as RCTs in making health decisions (Table 2). For example, the capability to provide information about how treatments work in the real world, about subpopulations of patients, and about diverse populations was rated as an advantage. Participants were least likely to perceive observational studies as providing timely results or understanding of patient-centered outcomes (Table 1).
      Table 2. Payers’ perceptions and attitudes concerning observational studies

      Focus group questionnaire item | N | Strongly disagree | Disagree | Neither agree/disagree | Agree | Strongly agree
      a. Organizations should be open to using data from observational studies as well as RCTs in making health care decisions. | 18 | 0 (0%) | 0 (0%) | 2 (11%) | 16 (89%) | 0 (0%)
      b. Observational studies provide valuable data for health care decision making. | 18 | 0 (0%) | 0 (0%) | 6 (33%) | 12 (67%) | 0 (0%)
      c. Observational studies should be used in making health care decisions. | 18 | 0 (0%) | 0 (0%) | 8 (44%) | 10 (56%) | 0 (0%)
      d. Most of my peers use data from observational studies in their decision making. | 17 | 0 (0%) | 6 (35%) | 7 (41%) | 3 (18%) | 1 (6%)
      e. Others in my organization use data from observational studies in their decision making. | 17 | 0 (0%) | 2 (12%) | 9 (53%) | 6 (35%) | 0 (0%)
      f. I am adequately prepared to use observational studies. | 18 | 1 (6%) | 6 (33%) | 7 (39%) | 4 (22%) | 0 (0%)
      g. I am confident that I could use observational studies in my organization. | 18 | 1 (6%) | 6 (33%) | 2 (11%) | 9 (50%) | 0 (0%)
      h. I am able to identify problems with observational studies. | 17 | 0 (0%) | 2 (12%) | 8 (47%) | 7 (41%) | 0 (0%)
      i. I have a good understanding of observational study approaches. | 18 | 0 (0%) | 5 (28%) | 8 (44%) | 5 (28%) | 0 (0%)
      j. I feel confident in my ability to interpret observational study results. | 17 | 0 (0%) | 2 (12%) | 11 (65%) | 4 (24%) | 0 (0%)
      k. I intend to use data from observational studies in my decision-making role. | 18 | 0 (0%) | 1 (6%) | 6 (33%) | 11 (61%) | 0 (0%)
      l. My organization has skilled support to conduct observational studies. | 17 | 4 (24%) | 4 (24%) | 2 (12%) | 6 (35%) | 1 (6%)
      m. My organization has skilled support to evaluate observational studies. | 18 | 0 (0%) | 3 (17%) | 4 (22%) | 9 (50%) | 2 (11%)
      n. I am interested in learning more about evaluating the quality of observational study data. | 18 | 0 (0%) | 0 (0%) | 4 (22%) | 10 (56%) | 4 (22%)
      RCT, randomized controlled trial; RWE, real-world evidence.
      The use and frequency of organizations’ internally generated RWE varied and typically drew on pharmacy claims and medical claims; EHRs, consumer surveys, and patient registries were used less frequently (Fig. 2). Participants indicated that they used RWE mostly for utilization management, tiering consumer cost sharing for treatments, and formulary placement (reported as “sometimes” by 74%, 63%, and 58% of participants, respectively). One person noted that their organization used this information frequently: “We have access to medical claims as well as pharmacy claims for about 15 million lives. So within our organization we have the capability to marry that data to identify adverse events, for predictive modeling, we use it for pipeline so we can identify disease states that would be impacted by new drugs, and assist our clients in financial projections as to what that might cost and, what is the current number of individuals that we identified that would be eligible for this drug. So yeah, it’s very very helpful.”
      Fig. 2. Proportions of survey participants reporting on the types of data used for internal analyses to inform coverage decisions (N = 15).
      More than half of the participants (n = 9 [53%]) who completed the online survey indicated that their organizations had individuals with expertise in interpreting observational studies who commonly prepare the P&T documents. Of the 17 survey participants, 5 (29%) indicated that they used an outside vendor for their P&T monographs. Most participants’ organizations (15 of 17 [88%]) conducted analyses with their own (internal) data to inform coverage decisions. Fifteen participants (83%) who completed the focus group questionnaire indicated that their organizations have done so in the past year and are likely to continue doing so, whereas three (17%) indicated that their organizations have not done so and do not intend to do so in the foreseeable future.

      Factors Contributing to Value of RWE

      Participants noted many features that make RWE studies more useful to payers. The most frequent response was the importance of research that addresses a current and relevant question for health care decision makers. One participant sought RWE that delineates the value of therapies once they are used in practice, such as: “Am I stopping relapses, am I keeping people out of the hospital, is their quality of life better?”
      Another advised researchers to “reach out to the healthcare decision-maker and share that ‘here’s a study that we’re embarking on: this is the research question that we think is most relevant.’ Do you agree that this is the most relevant and will be helpful for your decision-making?”
      Participants had no clear preference for any one particular data source (i.e., administrative claims, EHRs, and registries) for conducting RWE studies. “Good evidence is available through several means,” said one participant. Another stated that data sources need to be “fit for purpose.” The recognition that some data sets may not be adequate to answer some questions was reflected in this quote: “Some of the evidence is sometimes administrative claims data that can be useful, but there are so many flaws that when you see that kind of evidence it’s so easy to find holes in it as a clinician. You’re sifting garbage.”
      Similarly, participants had no clear preference for any one particular study design, although payers did note a slight preference for prospective studies and saw the need for more granular information. Participants identified numerous concerns with individual observational studies regarding study quality and potential bias. Some noted concerns with potential conflicts of interest. Overall, participants wanted and expected conflicts of interest to be disclosed, as reflected by this comment: “So I think barriers include attitudes that some people bring in [and] when they look at them [studies] … they’re done by reputable researchers and done with rigor and they try and honestly point out the things they’ve been able to account for and point out the things they haven’t been able to account for, then they’re definitely worthwhile.”
      Another participant offered a contrasting view, indicating that they held nothing against high-quality work performed by industry, but that industry funding does affect a study’s credibility.
      Similarly, where a study was published affected how some payers perceived study quality. For example, one person noted, “we consider the reputation, the rigor of their peer review process for the typical trials that we’re evaluating for decision-making so certainly with observational trials also.” In contrast, another participant stated, “No, a good study is a good study.” Others noted that publications, especially studies conducted by the pharmaceutical industry, should appear in credible peer-reviewed journals rather than be simply “data on file.”
      To identify study features associated with greater use for decision making, participants answered a series of questions about the awareness, uses, and perceptions of study relevance (including the comparison group, outcomes of interest, study duration, and study setting) and the quality of the two sample studies (Fig. 3). Nearly all participants (94%) indicated that their organizations covered medications within these therapeutic classes. Nevertheless, only five (29%) participants were aware of the studies, and fewer were aware of any use of the studies by their organizations. When presented with the information, however, most participants (71% for study A and 65% for study B) indicated that they would use the studies for decision making. Across both studies, most participants found the populations to be representative of those they serve, the study setting relevant, the outcomes meaningful to their organizations, and the comparison group relevant. Nevertheless, 29% of the participants (n = 5) viewed study methods as unclear. Participants cited general concerns with study A regarding its use of claims data, lack of assessment to examine confounding, apprehension about the potential for new-user bias for patients receiving dabigatran, the article not being presented as “original research,” and lack of medical records to validate findings from the claims analysis. About half of the participants (n = 8 [47%]) wanted to see additional information that was unreported, such as details about the patient population, other therapies the patients were receiving, how authors addressed confounding, and the need to include other novel oral anticoagulant agents besides dabigatran. These methodological concerns were not expressed about study B. Only two participants (12%) reported a lack of clarity in its methods, and those concerns related to the outcomes chosen, as one participant explained: “the main outcome measure was a composite outcome of death and COPD hospitalization. Well unfortunately that alone makes the study less than useful because it’s a composite outcome … and it’s not COPD hospitalizations, which are important to me, more so than outcomes of death.”
      Fig. 3. Proportions of participants agreeing to statements about specific published RWE studies (N = 17). RWE, real-world evidence.

      Barriers to Using RWE

      Participants identified barriers to using observational studies in both the interviews and the surveys. For example, timeliness of study results was frequently mentioned in the interviews and was rated as one of the greatest personal and organizational barriers to using RWE (Fig. 4). Most health plans have compressed evaluation periods because of the 90-day review mandated for Medicare Part D and Medicare Advantage programs, whereas RWE studies are typically generated after product launch, once decisions for coverage and reimbursement have already been made. Another obstacle many participants expressed was difficulty using RWE in the P&T committee process, although they saw a role for RWE in other drug benefits management activities. For example, participants suggested that members of the P&T committee are less familiar with the methods used in RWE and would, therefore, have a higher degree of skepticism about study findings. Some participants noted that traditional literature searches often exclude observational studies. Finally, others mentioned the difficulty of allocating time or financial resources to enable broader evaluation of evidence, given competing priorities.
      Fig. 4. Perceived barriers to use of observational studies in decision making (N = 19).

      Organizations’ Capacity to Evaluate Observational Studies

      Participants discussed how they perceived their organizations’ capacity, familiarity, and experience with evaluating observational studies. Comments ranged widely, from having no capacity (relying instead on outsourcing to other organizations) to having the requisite skills or a specific group devoted to evaluation of RWE. In between were 1) organizations with individuals capable of evaluating observational studies who did not do so and 2) organizations with individuals at varying levels of ability or experience that lacked coordinated or consistent efforts, faced time constraints, or had an insufficient level of collective skill. Nevertheless, participants suggested that their organizations are open to, and interested in, evaluating observational studies if given the right guidance (e.g., mentors) or tools (e.g., evaluation tools or checklists that can be applied consistently and objectively). More than half of the representatives cited a lack of experience interpreting results (53%) or conducting their own analyses (69%) as a barrier for themselves and/or their organizations (Fig. 4).

      Competencies Needed to Improve Use of RWE Studies

      Participants felt more comfortable identifying problems with observational studies but were less confident in their personal ability to interpret observational study results and understand observational study approaches (Table 2). In the focus groups, participants reflected on the skills or knowledge they believed would enhance their ability to understand and use information from RWE studies in decision making. Many suggestions focused on the need to understand methodological concepts such as regression, approaches to handling confounders (e.g., propensity score matching), risk ratios and adjustment, research design, the basics of comparative effectiveness research, the limitations of RCTs, and overall data interpretation and statistics.
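      To make one of these concepts concrete, the short Python sketch below works through an unadjusted risk ratio and its large-sample 95% confidence interval; the counts are invented for illustration only and are not from either sample study.

          # Worked example with invented counts: an unadjusted risk ratio
          # and its large-sample 95% confidence interval.
          import math

          events_a, n_a = 30, 400   # e.g., hospitalizations among 400 patients on drug A
          events_b, n_b = 20, 400   # e.g., hospitalizations among 400 patients on drug B

          risk_a = events_a / n_a                  # 0.075
          risk_b = events_b / n_b                  # 0.050
          risk_ratio = risk_a / risk_b             # 1.5

          # Standard error of log(RR), then back-transform to get the interval.
          se_log_rr = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
          ci_low = math.exp(math.log(risk_ratio) - 1.96 * se_log_rr)
          ci_high = math.exp(math.log(risk_ratio) + 1.96 * se_log_rr)
          print(f"RR = {risk_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")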

      Discussion

      Payers participating in this study reported using RWE to inform decision making, but the frequency and purpose varied between and within organizations. Nearly all participants indicated that RWE was useful for monitoring safety, conducting utilization management, and examining costs, but was less likely to be considered in P&T decision making, principally because of timeliness. This finding is supported by other studies reporting that payers rely heavily on RCTs for evaluation of comparative effectiveness [3,4]. In this study, however, similar to other studies, use of specific RWE studies was not based on common areas of agreement such as specific designs (e.g., prospective vs. retrospective studies), data sources (e.g., claims and EHRs), or the tier of the journal in which the results were published, but rather on whether the study answered relevant questions, transparently described the methods and results, and addressed potential biases (e.g., related to design, funding, and authorship). Finally, the interviews and questionnaires identified several barriers, as well as opportunities to improve the quality of RWE studies and organizations’ ability to use RWE studies in their decision making.

      Relevance of Study End Points

      Asking the “right” research question was a key concern for many participants. For example, one payer noted the importance of assessing economic end points and reporting individual end points in addition to composite end points (e.g., hospitalizations, emergency room visits, and intensive care unit stays vs. a composite end point of COPD hospitalizations and death). The lack of relevant research findings is not unique to RWE. Chalmers and Glasziou [21] found that 85% of biomedical research was wasted, largely because of research questions that are not relevant to clinicians and patients. In our focus groups, several participants advocated for manufacturers to seek stakeholders’ opinions about what the relevant study questions should be. These comments resonate with other studies that have found that payers are willing to be involved in the prioritization of research topics and the initial research design to improve study utility [22,23]. Despite the increase in recent years in stakeholder-driven research questions and in research collaborations between payer organizations and biopharmaceutical companies to address high-priority clinical and business questions, the impact of these initiatives on study relevance has not been assessed.

      Timeliness of Study Results and Resources Needed to Evaluate Studies

      Timeliness of study results and resource limitations in an organization were cited as barriers to evaluating RWE for P&T decisions. Typically, RWE studies are initiated after product launch and the initial formulary coverage decision. Nevertheless, increased investments by PCORI [24] in more pragmatically designed studies and a willingness by the Food and Drug Administration to consider such studies may increase the volume of timely and relevant studies [25].
      The time and resources required for an organization to search and evaluate additional sources of evidence beyond RCTs were frequently cited as barriers by the participants. Other groups such as clinical practice guideline groups have cited the time and resources needed to evaluate additional studies as barriers to broader use (Rangarao et al., unpublished data). Nevertheless, the evolving US payment landscape may encourage, rather than discourage, investment in RWE evaluation. For example, predictive analytics using real-world data that identify patients at greatest risk for poor prognosis, adverse events, or treatment nonresponse can potentially reduce costs. Similarly, incentives in new bundled care and value-based payment reimbursements are based on end points often best assessed in RWE settings. Therefore, additional resources and time required to evaluate these studies may be worth an organization’s additional investment in staff capabilities and expertise.
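      As a purely illustrative sketch of such predictive analytics (assuming a hypothetical claims extract; the file, feature names, and outcome flag are placeholders, not data from this study), a payer analyst might fit a simple risk model and rank members for outreach:

          # Hypothetical sketch only: all names below are placeholders.
          import pandas as pd
          from sklearn.linear_model import LogisticRegression
          from sklearn.metrics import roc_auc_score
          from sklearn.model_selection import train_test_split

          claims = pd.read_csv("member_claims_features.csv")  # one row per member (placeholder)
          features = ["age", "prior_admissions", "chronic_condition_count", "rx_count"]

          X_train, X_test, y_train, y_test = train_test_split(
              claims[features], claims["hospitalized_next_year"],
              test_size=0.3, random_state=0)

          # Fit a simple risk model and check discrimination on held-out members.
          model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
          print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

          # Rank members by predicted risk; the highest-risk members could be
          # prioritized for case management or outreach.
          claims["risk_score"] = model.predict_proba(claims[features])[:, 1]
          top_100 = claims.sort_values("risk_score", ascending=False).head(100)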

      Transparency of Study Methods

      Lack of transparency in the research methods and analyses was frequently mentioned as a barrier to greater use of RWE. For example, participants in this research distrusted data on file in part because of the lack of peer review. Even among published studies, participants sought information on how patients’ backgrounds and care settings may have differed from their own populations, how authors addressed confounding and selection bias, and what potential bias existed related to authors and funding.
      In an effort to improve study transparency, some researchers submit supplemental data tables and statistical analyses to journals, which, in turn, may make these available as online appendices [26]. Other experts have recommended the registration of observational studies in public repositories to address selective reporting and data dredging [27,28]. The ISPOR and the International Society for Pharmacoepidemiology have also joined forces to improve the transparency and reproducibility of study analyses [29]. Finally, multistakeholder groups are recommending broader exchange of health care economic information under Section 114 of the Food and Drug Administration Modernization Act such that research methods should be “transparent, disclosed, reproducible, accurate, and valid” [30].

      Awareness of Tools to Aid Study Evaluation

      Payers and payer organizations noted several internal barriers, such as the need to build skilled support staff to evaluate observational studies and to provide tools and training to guide the interpretation and quality assessment of RWE studies. Interestingly, existing tools and training programs designed for payers to guide the interpretation and evaluation of RWE (e.g., the AMCP-ISPOR-NPC CER Collaborative and the Good Research for Comparative Effectiveness [GRACE] checklist) were not mentioned in the focus groups or interviews. Nevertheless, training programs such as the CER certificate program that uses the AMCP-ISPOR-NPC CER Collaborative tools have been shown to improve learners’ confidence in their ability to evaluate RWE studies and incorporate these studies in decision making [2].

      Study Limitations

      Several limitations should be considered when interpreting the findings of this research. Study participants were recruited from a stratified convenience sample of managed care medical and pharmacy managers and directors; a different sample of participants may have yielded different answers. Responses, however, often reached saturation for many of the questions across the focus groups and interviews.
      In this study, our definition of RWE included pragmatic clinical trials. Nevertheless, participants did not allude to such designs, and neither of the two sample articles used them. Therefore, extending the findings of this study to prospective pragmatic clinical trials is not appropriate.
      Finally, the two articles selected as case examples of RWE are not the only studies that may be of interest to payers. These studies were selected on the basis of their topics and journals in the hope that payers would be familiar with them. This was not the case; approximately 70% of participants indicated that they had not seen the studies, although other colleagues in their organizations may have seen and acted on them. The volume of research being produced is tremendous, making it unlikely that participants would recall specific studies a year after publication. In addition, these studies are not representative of all the factors (e.g., design, data source, methods used, and types of questions addressed) that determine study utility.

      Conclusions

      Payers recognized the value of RWE, but the use of such studies is limited and varies from organization to organization. Although RWE was infrequently used in the P&T process, it was often used elsewhere in the organization, such as for utilization management, financial analyses, and safety evaluations. To improve the quality of RWE research, payers suggested that researchers should address questions that are important to payers, be transparent and complete when reporting methods and results, and address potential study biases. To improve the quantity of research considered and used, payers highlighted the need for additional training on evaluating and using RWE in their organizations. Despite the maturation of RWE over the past decade, further efforts are needed if RWE is to play a significant role in informing payer decisions.

      Acknowledgment

      We gratefully acknowledge the representatives from the 20 organizations who participated in this research phase.
      Source of financial support: This project was funded by the National Pharmaceutical Council.

      Supplementary material

      References

      1. Sherman RE, Anderson SA, Dal Pan GJ, et al. Real-world evidence—what is it and what can it tell us? N Engl J Med. 2016;375:2293-2297.
      2. Perfetto EM, Anyanwu C, Pickering MK, et al. Got CER? Educating pharmacists for practice in the future: new tools for new challenges. J Manag Care Spec Pharm. 2016;22:609-616.
      3. Leung MY, Halpern MT, West ND. Pharmaceutical technology assessment: perspectives from payers. J Manag Care Pharm. 2012;18:256-264.
      4. Moloney R, Mohr P, Hawe E, et al. Payer perspective on future acceptability of comparative effectiveness and relative effectiveness research. Int J Technol Assess Health Care. 2015;31:90-98.
      5. Patient-Centered Outcomes Research Institute. PCORnet, the National Patient-Centered Clinical Research Network. Available from: http://www.pcornet.org/. [Accessed June 1, 2017].
      6. Collins FS, Varmus H. A new initiative on precision medicine. N Engl J Med. 2015;372:793-795.
      7. Doshi JA, Hendrick FB, Graff JS, et al. Data, data everywhere, but access remains a big issue for researchers: a review of access policies for publicly-funded patient-level health care data in the United States. EGEMS (Wash DC). 2016;4:1204.
      8. Patient-Centered Outcomes Research Institute Methodology Committee. The PCORI methodology report. 2013. Available from: http://www.pcori.org/research-results/research-methodology. [Accessed July 19, 2016].
      9. Agency for Healthcare Research and Quality. Methods guide for effectiveness and comparative effectiveness reviews. AHRQ Publication No. 10(14)-EHC063-EF. 2014. Available from: www.effectivehealthcare.ahrq.gov. [Accessed May 31, 2017].
      10. The European Network of Centres for Pharmacoepidemiology and Pharmacovigilance. Guide on methodological standards in pharmacoepidemiology (revision 5). EMA/95098/2010. Available from: http://www.encepp.eu/standards_and_guidances. [Accessed May 31, 2017].
      11. Garrison LP, Neumann PJ, Erickson P, et al. Using real-world data for coverage and payment decisions: the ISPOR Real-World Data Task Force Report. Value Health. 2007;10:326-335.
      12. Motheral B, Brooks J, Clark MA, et al. A checklist for retrospective database studies—report of the ISPOR Task Force on Retrospective Databases. Value Health. 2003;6:90-97.
      13. Berger ML, Mamdani M, Atkins D, et al. Good research practices for comparative effectiveness research: defining, reporting and interpreting nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report—part I. Value Health. 2009;12:1044-1052.
      14. Cox E, Martin BC, Van Staa T, et al. Good research practices for comparative effectiveness research: approaches to mitigate bias and confounding in the design of nonrandomized studies of treatment effects using secondary data sources: the International Society for Pharmacoeconomics and Outcomes Research Good Research Practices for Retrospective Database Analysis Task Force Report—part II. Value Health. 2009;12:1053-1061.
      15. Johnson ML, Crown W, Martin BC, et al. Good research practices for comparative effectiveness research: analytic methods to improve causal inference from nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report—part III. Value Health. 2009;12:1062-1073.
      16. Berger ML, Martin BC, Husereau D, et al. A questionnaire to assess the relevance and credibility of observational studies to inform health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force Report. Value Health. 2014;17:143-156.
      17. Dreyer NA, Velentgas P, Westrich K, et al. The GRACE checklist for rating the quality of observational studies of comparative effectiveness: a tale of hope and caution. J Manag Care Spec Pharm. 2014;20:301-308.
      18. Southworth MR, Reichman ME, Unger EF. Dabigatran and postmarketing reports of bleeding. N Engl J Med. 2013;368:1272-1274.
      19. Gershon AS, Campitelli MA, Croxford R, et al. Combination long-acting β-agonists and inhaled corticosteroids compared with long-acting β-agonists alone in older adults with chronic obstructive pulmonary disease. JAMA. 2014;312:1114-1121.
      20. Green J, Thorogood N. Qualitative Methods for Health Research. London: Sage; 2004.
      21. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374:86-89.
      22. Wang A, Halbert RJ, Baerwaldt T, Nordyke RJ. US payer perspectives on evidence for formulary decision making. Am J Manag Care. 2012;18:SP71-SP76.
      23. Concannon TW, Khodyakov D, Kotzias V, et al. Employer, insurer, and industry perspectives on patient-centered comparative effectiveness research: final report. Rand Health Q. 2016;6:3.
      24. Patient-Centered Outcomes Research Institute. Pragmatic clinical studies. Available from: http://www.pcori.org/research-results/pragmatic-clinical-studies. [Accessed November 23, 2016].
      25. Lim D. Califf: nothing prohibits FDA from using real-world evidence in decisions. InsideHealthPolicy. October 21, 2016.
      26. Thomas L, Peterson ED. The value of statistical analysis plans in observational research: defining high-quality research from the start. JAMA. 2012;308:773-774.
      27. Loder E, Groves T, Macauley D. Registration of observational studies. BMJ. 2010;340:c950.
      28. Should protocols for observational research be registered? Lancet. 2010;375:348.
      29. International Society for Pharmacoeconomics and Outcomes Research. ISPOR and ISPE collaborate to advance good practices for use of real-world evidence. ISPOR News & Press. May 22, 2017.
      30. AMCP partnership forum: FDAMA Section 114—improving the exchange of health care economic data. J Manag Care Spec Pharm. 2016;22:826-831.