
Background Materials for REMS Standardization and Evaluation Public Meeting: REMS Evaluation

 


 

4   REMS Evaluation

 

4.1   Current Methods for Assessing REMS

FDAAA requires that REMS assessments be completed to determine whether a REMS is meeting its goals or whether the goals or elements should be modified. These assessments must be completed, at a minimum, at 18 months, 3 years, and 7 years after REMS approval. The REMS goals that are assessed focus on the risks that were identified when determining the need for a REMS. The specific REMS goals vary with the drug, but almost all REMS include a goal to inform prescribers, and usually patients, about the relevant risks. REMS with ETASU may have additional goals that focus on minimizing certain risks (e.g., teratogenicity, myocardial infarction), limiting use to certified prescribers or certain patients, or ensuring compliance with certain testing.

At the time a REMS is approved, the Agency also includes an assessment plan for the sponsor to follow. The complexity of the assessment plan depends on the complexity of the REMS. REMS with a Communication Plan (CP) and/or Medication Guide (MG) require Knowledge, Attitude and Behavior (KAB) surveys that measure 1) prescriber and patient knowledge and understanding of serious risks and safe use conditions, and/or 2) prescriber knowledge of proper patient selection. The methodology for the KAB surveys used to assess REMS has not been standardized and was the focus of a workshop in June 2012.10 That workshop included discussion of the validity and salience of KAB surveys and of alternatives to surveys for assessing knowledge.

REMS with ETASU generally have additional metrics included in the assessment plan, capturing information about various processes. The assessment plan can include data on compliance with REMS implementation requirements, such as the number of enrolled/certified prescribers, patients, and pharmacies; the number of prescriptions written by non-enrolled prescribers; the number of Dear Healthcare Provider letters mailed; and any corrective actions taken to address non-compliance. Data summarizing compliance with certain safe use conditions may also be collected, such as the number of times patients have not completed required laboratory testing; the number of pre-infusion patient checklists received that suggest a patient should not be treated; and the findings from any Root Cause Analyses (RCA) (e.g., reasons for pregnancy).

Other metrics included in many REMS assessments relate to 1) utilization patterns – demographics of patients and prescribers, use in “at risk” populations (e.g., females of reproductive potential), and prescribing behaviors; and 2) patient outcomes – the number or rate of adverse events that the REMS is attempting to either mitigate (e.g., the number of pregnancies) or detect (e.g., progressive multifocal leukoencephalopathy (PML)).

At this time, many REMS assessment metrics focus on the processes rather than the outcomes of the REMS. Outcome-related metrics are challenging because there are usually no pre-REMS data or other good comparator data; outcomes (the numerator) are often rare events; and drug use (the denominator) may be limited. Measures of behaviors that might indicate success or failure, such as use of contraception while taking a teratogen or whether patients were counseled, can also be difficult to obtain. Proxy measures, such as KAB survey findings, have been used to help determine whether certain REMS goals have been met, but identifying valid proxy measures is challenging.
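
To illustrate these numerator/denominator challenges, the sketch below computes an outcome rate with an exact (Garwood) Poisson confidence interval for a rare event observed over limited exposure. The counts are hypothetical; the point is that the resulting interval is wide precisely because both the numerator and the denominator are small.

```python
from scipy.stats import chi2

def poisson_rate_ci(events: int, exposure: float, alpha: float = 0.05):
    """Exact (Garwood) Poisson confidence interval for an event rate.

    events   -- observed outcome count (the numerator), e.g., pregnancies
    exposure -- person-time at risk (the denominator), e.g., patient-years
    """
    lower = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return events / exposure, lower / exposure, upper / exposure

# Hypothetical: 3 pregnancies observed over 2,500 patient-years of use of a teratogen.
rate, lo, hi = poisson_rate_ci(events=3, exposure=2500)
print(f"Rate: {rate * 1000:.2f} per 1,000 patient-years "
      f"(95% CI {lo * 1000:.2f} to {hi * 1000:.2f})")
```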

As required under FDAAA, the Agency has convened three Drug Safety and Risk Management (DSaRM) Advisory Committee (AC) meetings since 2011 to discuss whether REMS with ETASU: (1) are assuring safe use of the drug, (2) are not unduly burdensome on patient access to the drug, and (3) minimize, to the extent practicable, the burden on the healthcare delivery system. The REMS discussed include isotretinoin (2011), the REMS for teratogens with ETASU (2012), and alosetron (Lotronex) (2013). These meetings have included discussions about the challenges of assessing the effectiveness of REMS. The AC members have acknowledged the difficulties in assessing REMS but have emphasized the importance of developing better metrics.

In addition to a discussion of suggested metrics that might be used to address the challenges mentioned above, Section III-E of the Federal Register announcement11 lists three important questions on REMS-related issues that the Agency would like the public to address. First, the assessment of patient and prescriber burden and of access to a drug with a REMS is an important part of the overall evaluation of a REMS, but a methodology for obtaining an unbiased determination of these concerns has not been developed; the Agency is looking for feedback. Second, for many REMS that are implemented, other risk management activities often occur in parallel (e.g., advisory committee meetings, media coverage). The Agency would like feedback on how to separate the impact of a REMS program from these related activities. Last, determining the evidence needed to modify or release a REMS while still ensuring the safe use of a drug is an important discussion, one that would help guide the Agency as it moves forward.
 

4.2   FDA’s Approach to Building a Future REMS Assessment Framework and Guidance

The methods historically used to assess the impact of pharmaceutical risk management programs have been the subject of both scrutiny and quality improvement efforts among legislators, regulators, stakeholders, and auditors. While some efforts have been made to better understand the limitations of existing methods and to implement incremental improvements upon them (e.g., improving the design and implementation of knowledge surveys), a more comprehensive analysis of alternative methodologies to produce more meaningful information about the impact of risk management programs has only recently been initiated. The factors driving the need for improvements to REMS assessments, the principles for evolving an improved methodology, a potential REMS assessment framework, and, ultimately, the evolution of industry guidance are described below.

4.2.1   Factors Driving the Need for Improved REMS Assessment Methodologies

Four key factors underlie the impetus to improve upon previous methods for assessing REMS programs: legislation, agreements between industry and FDA, feedback received from various stakeholders, and consistency with the efforts of other regulatory authorities.

FDAAA12 gave FDA the authority to require and enforce the assessment of the effectiveness of Risk Evaluation and Mitigation Strategies (REMS). A minimum requirement for a REMS program is a timetable for assessments which, as noted above, must occur at least at 18 months, 3 years, and 7 years after approval. These assessments have most often comprised surveys of knowledge directed to prescribers, patients, and/or pharmacists, and/or measures of manufacturer and stakeholder compliance with the requirements of ETASU. Less often, assessments of the frequency and severity of the clinical safety outcome(s) of interest and of the root causes of program underperformance have been conducted. These “domains” of assessment are determined based on the type(s) of REMS element(s) that comprise the risk management program.
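
The correspondence between REMS element types and assessment domains described above can be pictured as a simple lookup. The sketch below is illustrative only; the element names and domain labels are drawn from this section and are not an official FDA taxonomy.

```python
# Illustrative mapping of REMS element types to typical assessment domains,
# based on the description above; not an official FDA taxonomy.
ASSESSMENT_DOMAINS = {
    "Medication Guide":   ["patient knowledge survey (KAB)"],
    "Communication Plan": ["prescriber knowledge survey (KAB)"],
    "ETASU":              ["stakeholder compliance metrics",
                           "safety outcome frequency/severity",
                           "root cause analysis of underperformance"],
}

def domains_for(elements):
    """Collect the assessment domains implied by a given REMS design."""
    return sorted({d for e in elements for d in ASSESSMENT_DOMAINS[e]})

print(domains_for(["Medication Guide", "ETASU"]))
```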

In 2012, the FDA Safety and Innovation Act (FDASIA)13 was signed into law. FDA’s authority regarding REMS assessments was maintained, and the importance of assessing both the benefit and the burden of a REMS program on the healthcare delivery system when considering a REMS modification was added. Hence, there is a legislative imperative to include additional assessment domains.

The fifth authorization of the Prescription Drug User Fee Act (PDUFA V), enacted as part of FDASIA in 2012, included a set of goals mutually agreed upon by the pharmaceutical industry and FDA.14 Among them was a goal for improving how assessments of REMS programs are conducted. It states:

“Measure the Effectiveness of REMS and Standardize and Better Integrate REMS into the Healthcare System”, with two specific milestones:

  1. One or more public workshops on methodologies for assessing REMS, including effect on patient access, individual practitioners and overall burden on the healthcare delivery system
  2. Guidance on methods for determining whether a REMS with ETASU is commensurate with the risks and not unduly burdensome on patient access

As part of the Agency’s efforts toward continuous quality improvement, and in anticipation of forthcoming legislation, FDA established three working groups in 2011, overseen by the REMS Integration Steering Committee, to better clarify and issue guidance on the criteria for requiring a REMS, the standardization of REMS tools, and the evaluation of REMS program effectiveness. The last of these working groups has undertaken an effort to better understand alternative methodologies for REMS assessment, including developing a REMS assessment framework and, based upon that framework, developing and publishing draft guidance. (Please see Section 1.2 for a complete discussion of the REMS Integration Initiative.)

In 2012, FDA sought stakeholder feedback on assessing knowledge of risks using social science methodologies such as surveys. At that meeting, “Social Science Methodologies to Assess Goals Related to Knowledge,” pharmaceutical industry presentations included a number of specific recommendations for assessing outcomes other than knowledge using other methods.15 Presenters cited the need for consensus on the key outcomes to be measured as part of REMS assessments, such as exposure, usefulness/acceptability of information, navigability, comprehension, knowledge, self-efficacy, behavioral intent, and actual behavior. Industry presenters also suggested employing additional data collection options: drug utilization studies, patient registries, secondary data sources, and patient web-based communities.

Additional feedback about REMS assessments came in the form of a February 2013 report, “FDA Lacks Comprehensive Data to Determine Whether REMS Improve Drug Safety,” issued by the Office of Inspector General (OIG) and based on an audit completed in 2012.16 Among the seven major recommendations in that report, one was that FDA “develop and implement a plan to identify, develop, validate and assess REMS components.” Also directly relevant to REMS assessment were recommendations that FDA should:

  • Identify and implement reliable methods to assess the effectiveness of REMS.
  • Decrease its reliance on survey data in sponsors’ assessments and work with sponsors and health care providers to develop more accurate evaluation methods.
  • Continue to hold discussions with stakeholders…about the issues and challenges associated with assessing the effectiveness of REMS components.

In the context of these factors, FDA has been seeking feedback from stakeholders and continues to do so in this public meeting.17

Recently, the European Medicines Agency (EMA) published its draft guidelines on Good Pharmacovigilance Practices. Module XVI of that guidance discusses proposed assessment domains for risk management programs implemented in the EU. The EMA proposed extending the domains of assessment of such programs to include:

  1. Process measures – the extent to which the program has been executed and the intended impacts on behavior achieved, such as reaching the target population, assessing clinical knowledge, and assessing clinical actions (drug utilization studies)
  2. Outcome measures – the level of risk control achieved, such as the frequency and severity of the risk (pre-post or observed-versus-expected epidemiological studies)

Additionally, the EMA suggested measuring unintended outcomes. 

4.2.2   Principles Guiding the Development of a REMS Assessment Framework

As FDA researches and reviews alternative methodologies to identify a more robust REMS assessment framework and to use it as the basis for developing guidance, the Agency needs to consider guiding principles that help prioritize and ensure that the methods chosen will both address the aforementioned factors and generate more meaningful, actionable information.

Three guiding principles are considered vital in this regard, although others may also need to be considered:

  1. Learn from and retain best practices from how REMS assessments have been conducted in the past. The use of knowledge surveys has been refined over time and, although they remain flawed, they do provide information about how well stakeholders understand serious risks and their role in achieving the goals of a REMS.
  2. Select a robust, comprehensive, and evidence-based approach, optimally one that has a basis in science, with supportive literature about its effectiveness. A methodology consistent with the scientific method, comprehensive in nature, and with the potential to evolve into a science on par with that used in pharmacoepidemiology would be ideal in this regard.
  3. Consider the practical feasibility and utility of any methods selected. There is little value in defining solutions that end up being too difficult or costly to implement and/or that will not provide actionable information. Pre-testing selected methods for their utility in enhancing existing REMS assessment plans will help validate the selected method.

4.2.3   REMS Assessment Framework Options and Feasibility

In seeking a framework for measuring the impact of healthcare interventions that addresses the aforementioned factors and that also retains best practices, has a basis in science, and is practical to implement, a number of diverse frameworks could be considered. 

One framework for measuring the impact of a learning program is the Kirkpatrick Four Level Evaluation Model.18 The model’s four levels of evaluation are:

  1. Reaction – how well did the learners like the process?
  2. Learning – what did they learn (gain knowledge and skills)?
  3. Behavior – what changes resulted from the learning process?
  4. Results – what are the tangible results of the learning process?

Another framework, RE-AIM (an acronym for the framework’s functional elements), comes from the implementation sciences. It was developed in 199919 to assess the public health impact of an intervention as a function of five factors:

  1. Reach – the proportion of the target population who participate
  2. Effectiveness – success rate (positive minus negative outcomes)
  3. Adoption – proportion of settings that adopt the intervention
  4. Implementation – extent to which intervention is implemented as intended
  5. Maintenance – extent to which intervention is sustained over time 

The product of these five dimensions is the public health impact score.
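
Taken literally, the score is simply the product of the five dimensions, each expressed as a proportion. A minimal sketch, with assumed values chosen purely for illustration:

```python
# RE-AIM public health impact as the product of five proportions.
# All values below are assumed for illustration only.
reaim = {
    "reach": 0.60,           # proportion of the target population participating
    "effectiveness": 0.80,   # success rate (positive minus negative outcomes)
    "adoption": 0.70,        # proportion of settings adopting the intervention
    "implementation": 0.90,  # fidelity to the intended intervention
    "maintenance": 0.75,     # extent the intervention is sustained over time
}

impact = 1.0
for dimension, proportion in reaim.items():
    impact *= proportion

print(f"Public health impact score: {impact:.3f}")  # 0.227
```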

Various industries have also employed failure analysis methods20 to predict and/or retrospectively assess the behavioral causes of process failures and the impact of programs designed to mitigate them. While not as comprehensive as the aforementioned options, an assessment framework that could systematically identify and measure the behavioral causes of process failures would help improve our understanding of the impact of healthcare intervention programs, as well as of how to improve their design.
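
As one concrete example of such a method, failure mode and effects analysis (FMEA) scores each failure mode on severity, occurrence, and detectability (conventionally 1 to 10 each) and ranks the modes by the product of the three scores, the risk priority number (RPN). The failure modes and scores below are hypothetical, sketched only to show the mechanics.

```python
# FMEA-style ranking: risk priority number (RPN) = severity * occurrence * detection.
# Failure modes and scores (each 1-10) are hypothetical illustrations.
failure_modes = [
    # (description, severity, occurrence, detection)
    ("Patient skips required laboratory test", 8, 5, 4),
    ("Pharmacy dispenses without verifying enrollment", 9, 2, 3),
    ("Prescriber omits contraception counseling", 9, 3, 6),
]

ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for description, sev, occ, det in ranked:
    print(f"RPN {sev * occ * det:4d}  {description}")
```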

To determine the feasibility of using an existing framework to improve REMS assessment methods, FDA envisioned a broad spectrum of possible REMS assessment domains, ranging from program implementation processes and distribution metrics through knowledge and behavior adoption/compliance to clinical outcomes and underlying causes of program failure. The additional measurement domains of particular legislative and industry interest, program burden and impact on patient access, were also incorporated.

As an example, these various domains were aligned with the RE-AIM categories of reach, effectiveness, adoption, implementation and maintenance. RE-AIM was selected as it appeared to fit best with the spectrum of domains envisioned, had extensive evidence of application to public health intervention research,21 and was readily adaptable. Remarkable alignment was achieved between the RE-AIM categories and the spectrum of possible REMS assessment domains, as depicted below.

Category         Possible REMS Assessment Domains
Reach            Distribution/Availability/Receipt; Participation; Medication access
Effectiveness    Knowledge (awareness/comprehension/understanding); Outcomes (REMS goal, clinical, patient-reported); Unintended effects
Adoption         Application of knowledge; Attitude/intention; Behaviors (adoption, actions, compliance)
Implementation   Process (pretesting, functionality/navigability, sponsor and stakeholder workflow, integration); Consistency; Burden
Maintenance      Persistency; Failures

Extending this framework to consider the standardized REMS tools (the overall program, communication plan (CP), and ETASUs A through E), as well as future ones, there is an opportunity to specify a standard set of assessment domains specific to each type of REMS tool. For each domain, the numerator and denominator used to calculate its value will also need to be defined, along with, possibly, threshold values.

Category         Possible REMS Assessment Domains                                      Metrics (overall REMS)     Metrics (specific tools)
Reach            Distribution/Availability/Receipt; Participation; Medication access   Numerators/Denominators    Numerators/Denominators
Effectiveness    Knowledge; Outcomes; Unintended effects                               Numerators/Denominators    Numerators/Denominators
Adoption         Application of knowledge; Attitude/intention; Behaviors               Numerators/Denominators    Numerators/Denominators
Implementation   Process; Consistency; Burden                                          Numerators/Denominators    Numerators/Denominators
Maintenance      Persistency; Failures                                                 Numerators/Denominators    Numerators/Denominators
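
One way to make the defined numerators, denominators, and possible thresholds concrete is a small record per domain metric. The structure below is a sketch with illustrative field names and values, not a proposed standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DomainMetric:
    """Illustrative definition of a single REMS assessment metric (not a standard)."""
    category: str                      # framework category, e.g., "Reach"
    domain: str                        # assessment domain, e.g., "Participation"
    numerator: str                     # what is counted
    denominator: str                   # the population or exposure base
    data_source: str                   # e.g., "REMS program data", "drug utilization data"
    threshold: Optional[float] = None  # optional performance threshold (a proportion)

    def value(self, num: int, den: int) -> float:
        return num / den if den else 0.0

metric = DomainMetric(
    category="Reach",
    domain="Participation",
    numerator="enrolled prescribers",
    denominator="prescribers writing at least one prescription for the drug",
    data_source="REMS program data",
    threshold=0.90,
)
print(metric.value(4350, 5000))  # 0.87, below the 0.90 threshold
```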

Finally, the relevant data collection system or data source for each assessment domain needs to be identified, thereby helping to confirm the feasibility of generating the desired information for each domain. REMS assessments may incorporate a spectrum of data systems/sources, including REMS program data, epidemiological studies, drug utilization data, patient registries, surveys, market research, audits, enhanced pharmacovigilance, failure mode and effects analysis (FMEA), root cause analysis (RCA), ethnographic research, and more.

4.2.4   REMS Assessment Guidance Development

Although yet to be validated, building a REMS assessment framework that addresses the identified factors and follows the predefined principles appears to be feasible. It will ideally be based on an existing healthcare intervention assessment framework such as RE-AIM. The framework should address all possible assessment domains, specify a standard set of metrics for each REMS tool, and define the relevant data systems/sources of the information for each. As such, it creates a rational basis for evolving guidance for industry on the assessment of REMS programs.

The process of guidance development is underway. The guidance will also need to specify the composition of assessment plans, study protocols and analytical methodologies, the viability of establishing performance thresholds, and the limitations of the selected methodologies.


References:

10. Social Science Methodology Workshop, June 7, 2012: http://www.fda.gov/Drugs/NewsEvents/ucm292337.htm

11. Supra, note 5

12. Supra, note 1

13. Supra, note 7

14. PDUFA Reauthorization Performance Goals and Procedures FY 2013-2017: http://www.fda.gov/downloads/forindustry/userfees/prescriptiondruguserfee/ucm270412.pdf

15. Industry Experience in Using Surveys to Assess REMS Impact on Knowledge, presentation, June 7, 2012: http://www.fda.gov/downloads/Drugs/NewsEvents/UCM307706.pdf

16. DHHS OIG Report: FDA Lacks Comprehensive Data to Determine Whether Risk Evaluation and Mitigation Strategies Improve Drug Safety, February 2013, https://oig.hhs.gov/oei/reports/oei-04-11-00510.pdf.

17. Standardizing and Evaluating Risk Evaluation and Mitigation Strategies; Notice of Public Meeting; Request for Comments. Federal Register, May 22, 2013, Notices: www.gpo.gov/fdsys/pkg/FR-2013-05-22/pdf/2013-12124.pdf

18. Kirkpatrick DL. Techniques for evaluating training programs. Journal of the American Society of Training Directors. 1959;13(3):21–26.

19. Glasgow RE, et al. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9).

20. DeRosier J, Stalhandske E, Bagian JP, Nudell T. Using health care failure mode and effect analysis: the VA National Center for Patient Safety’s prospective risk analysis system. Jt Comm J Qual Improv. 2002 May;28(5):248–67, 209. http://www.patientsafety.va.gov/SafetyTopics/HFMEA/HFMEA_JQI.pdf

21. www.re-aim.org

 

 