Minutes from Negotiation Meeting on MDUFA III Reauthorization, March 7, 2011
FDA - Industry MDUFA III Reauthorization Meeting
March 7, 2011, 10:00 am - 5:00 pm
FDA Switzer Building, Washington, DC
Purpose: To provide an assessment of FDA’s current 510(k) program.
| FDA Participants | Affiliation |
| --- | --- |
| Ashley Boam | Center for Devices and Radiological Health (CDRH) |
| Malcolm Bertoni | Office of the Commissioner (OC) |
| Nathan Brown | Office of Chief Counsel (OCC) |
| Kate Cook | Center for Biologics Evaluation and Research (CBER) |
| William Hubbard | FDA Consultant |
| Donna Lenahan | Office of Legislation (OL) |
| Don St. Pierre | CDRH |
| Industry Participants | Affiliation |
| --- | --- |
| Susan Alpert | Medtronic (representing AdvaMed) |
| Hans Beinke | Siemens (representing MITA) |
| David Fisher | Medical Imaging Technology Alliance (MITA) |
| John Ford | Abbott Laboratories (representing AdvaMed) |
| Donald Horton | Laboratory Corporation of America Holdings (representing ACLA) |
| Mark Leahey | Medical Device Manufacturers Association (MDMA) |
| Joseph Levitt | Hogan Lovells US LLP (representing AdvaMed) |
| John Manthei | Latham and Watkins (representing MDMA) |
| David Mongillo | American Clinical Laboratories Association (ACLA) |
| James Ruger | Quest Diagnostics (representing ACLA) |
| Patricia Shrader | Becton Dickinson (representing AdvaMed) |
| Janet Trunzo | Advanced Medical Technology Association (AdvaMed) |
Meeting Start Time: 10:00 am
FDA Presentation of 510(k) Program Assessment
FDA provided a detailed assessment of the current 510(k) program's performance and outcomes. FDA noted that current initiatives relating to changes to the 510(k) program are aimed at making the process more predictable and transparent.
Meeting 510(k) Performance Goals
FDA provided information demonstrating that it is meeting both Tier 1 (90% in 90 FDA days) and Tier 2 (98% in 150 FDA days) 510(k) goals.
FDA also provided information showing the range between the highest and lowest annual performance at the review branch level.
In addition to the program’s performance, FDA provided observed outcomes.
FDA discussed average total time to decision, showing both FDA and industry/submitter time. FDA’s time increased from 58 days in 2006 to 77 days in 2010, while the time industry spends responding to FDA questions increased from 35* days in 2006 to 71* days in 2010. Industry asserted that companies are receiving requests for information that FDA had not requested in the past, and asked whether the slower responses to requests are due to FDA asking for more information. FDA indicated that it did not find evidence that FDA reviewers were asking for more information, and asked Industry to help pinpoint some possible explanations.
FDA also provided information on the average number of review cycles, noting a steady 0.1% increase per year. The number of Additional Information (AI) requests on the first cycle also has been increasing steadily, reaching 77% in fiscal year (FY) 2010. Industry asked what was driving the increase. FDA responded that there is not a simple answer given the complexity and diversity of the program, yet submission quality, changes in technology, and increasing complexity could all be influencing the outcomes. A discussion followed regarding the meaning of “submission quality” with Industry questioning FDA’s methodology.
Additionally, FDA discussed average time for first review cycle and average FDA days to complete a 510(k) review cycle, noting that the average FDA days to complete a 510(k) review cycle decreased during the first few years of the medical device user fee program. First cycle reviews leading to a final decision increased from 48 days in FY 2005 to 63 days in FY 2010, and first cycle reviews leading to an Additional Information request increased from 47 days in FY 2005 to 55 days in FY 2010. Overall average FDA days to complete a review cycle have remained stable, fluctuating in the range of 35 to 37 days, since FY 2005.
Discussion led to the identification of a number of possible factors leading to an increase in total time to decision, including increasing complexity, staff turnover/new hires, the quality of the submissions, the overall capacity of the review system, and FDA’s implementation of refuse to accept (RTA) letters.
FDA also provided information illustrating that in 2009 more clearances occurred within, but closer to, the performance goal date when compared to clearances in 2005.
Industry and Review Staff Perceptions
FDA presented the results of annual “industry perception” surveys that ODE and OIVD have conducted in recent years to obtain feedback from regulated industry on their experience with the device review program. FDA conducts the survey through its customer satisfaction survey program, and uses the survey results to identify areas to target for program management improvements. FDA described the random sampling methodology and telephone interview process. Industry expressed concerns about the potential bias of responses due to the fact that FDA employees conduct the interviews, and the survey instrument solicits responses to questions framed only as affirmative statements. FDA noted that the interviewers do not work in the review offices and the respondents are told that the responses are blinded to the review offices. FDA also noted that a high percentage of respondents provide detailed narrative responses, and that many of those responses reflect candid feedback.
FDA recently conducted an internal survey of review staff to assess their perceptions of factors influencing reviews and review times. Review staff perceived an increase in overall complexity and a decrease in overall quality of submissions coming to FDA. They also reported increased use of consults since FY 2005, attributed to a growing need for additional expertise to review applications. Industry and FDA discussed some of the reasons for the significant increase in consults.
Assessment of NSE decisions
FDA reviewed other outcome metrics relating to the overall state of the program, including an assessment of submissions that were ultimately determined to be not substantially equivalent (NSE). Since FY 2006, the substantially equivalent (SE) rate has decreased, with a corresponding increase in the rates of deleted and withdrawn submissions. More recently, the NSE rate doubled from 4% in FY 2009 to 8% in FY 2010, which contributed to a sharp drop in the SE rate from 80% in FY 2009 to 73% in FY 2010. Industry raised concerns that FDA may be meeting its MDUFA goals by issuing more NSEs. Industry also expressed concern that some of the NSE decisions, withdrawals, and deletions might be attributed to changing requirements not reflected in guidance. FDA noted that public health concerns sometimes require changes in the requested performance data. FDA also noted the tradeoff between predictability and the flexibility to allow new and innovative devices to come to market through the 510(k) program. To assess the safety and effectiveness profile of a device that features changes in technology and indications, FDA must often ask for data to support those changes.
In examining NSE decisions from FY 2003 to FY 2009, FDA found that few NSE decisions were due to a lack of a predicate, a new intended use, or a new technology. These results suggest to FDA that the agency is not changing review standards and is showing flexibility in allowing new technologies to be reviewed via the 510(k) program. FDA also noted that an NSE decision for a lack of a predicate, a new intended use, or a new technology is not necessarily a negative outcome, as the NSE decision would make the device potentially eligible for the de novo process. The majority of NSE decisions were due to a lack of adequate performance data. Submissions falling into this category either showed subpar performance (i.e., less safe or less effective) when compared to the predicate or provided insufficient information to demonstrate substantial equivalence. The majority of the performance-related NSE decisions resulted from a failure to adequately respond to repeated requests for data demonstrating substantial equivalence.
Industry questioned whether these results were due to lack of guidance, less informal assistance by reviewers, and/or FDA asking more questions (including requests for information that is “nice to know” rather than necessary to make a clearance decision). FDA provided examples in which sponsors did not follow available guidance or standards, or did not provide the information specifically requested. FDA noted that some existing guidance documents may be out of date and that where a company follows such guidance, FDA may ask additional questions. FDA also noted that one additional resource to obtain current information for IVD products is 510(k) decision summaries, which are posted on the web. Industry noted that this makes it difficult to know FDA’s expectations with respect to the content of a submission. In other instances, sponsors provided the requested information, but that information raised additional concerns of safety and/or effectiveness. FDA did acknowledge seeing some examples in which questions in the “nice to know” category were communicated, but those examples were in the minority and in each case, FDA had identified other requests for necessary information, such as biocompatibility testing.
Industry and FDA also discussed the number of cycles associated with each NSE decision. Traditionally, FDA follows a policy in which FDA will give a sponsor/manufacturer two additional opportunities to complete an initial submission before finding the device not substantially equivalent. FDA indicated that the data showed that submissions that are ultimately determined to be NSE are often given additional rounds (in which FDA requests missing information from the sponsor).
Analysis of Additional Information (AI) letters
FDA also analyzed two separate cohorts of AI letters.
In the first cohort, 83% had at least one quality issue as defined by FDA. This analysis involved review of submissions with at least one AI letter, where the first AI letter was issued in September 2010. Identified quality issues included inadequate device descriptions, discrepancies throughout the submission, failure to address necessary information as outlined in guidance documents, problems with the proposed indications for use, completely missing performance testing, and/or completely missing clinical data. Industry expressed concern that the measure of “inadequate device description” may be subjective, rather than objective. FDA noted it is difficult to review a submission that fails to identify the components and basic premise or use of the device. FDA reported that 50% of the submissions reviewed had inadequate device descriptions. FDA and Industry discussed potential reasons for this result, which included submission of a poorly compiled application, possible perceived incentives to provide minimal information in the original submission, or insufficient reviewer expertise.
In the second cohort, FDA found that 82% of the first round AI letters had at least one quality issue of the types described for the first cohort. The second analysis involved a review of a sample of consecutive submissions with at least two AI cycles. The cohort included first cycle AI letters issued in calendar year 2010 and second cycle AI letters issued before the analysis cut-off date of 1/20/2011. FDA examined factors that necessitated a second AI letter, thus requiring an additional cycle of review. Of this cohort, 65% of responses to the first AI letter had not addressed prior requests from FDA. In instances when an applicant provided the requested information, the new information raised new questions in 62% of submissions reviewed. FDA provided examples of responses leading to additional questions. In 4% of submissions reviewed, FDA raised new questions that should have been raised in the first AI letter.
During the discussion, Industry expressed concerns that a perceived quality issue may be due to reviewers asking for more information than necessary or due to outdated guidance documents. FDA’s analysis of AI letters did not include the level of detail necessary to identify those distinctions. FDA acknowledged that the NSE analysis identified some areas that could benefit from greater consistency, and noted several specific areas currently being addressed.
FDA provided examples of the submission quality issues raised during this portion of the discussion. FDA explained that its examples were intended to be typical or illustrative of submission quality issues routinely seen by reviewers.
Experience, Training and Oversight of Reviewers
FDA provided data regarding potential internal contributors to the observed outcomes, including reviewers’ years of experience, attrition, limitations in management oversight, and insufficient reviewer training. 50% of reviewers have six years of experience or less reviewing 510(k) submissions. CDRH also experiences higher attrition rates than other FDA Centers. The current reviewer-to-manager ratio of 14:1 and higher is greater than FDA believes appropriate for proper management oversight, and FDA described efforts to achieve an employee-to-manager ratio closer to 10:1. FDA also noted that the travel and training budget is only $1,800 per reviewer per year, and that many reviewers forgo even in-house training opportunities due to their submission review workload. FDA also mentioned steps it is taking to further ensure consistency, including hiring Regulatory Advisors to assist the Program Operations Staff (which includes the 510(k) and PMA Staffs).
Third Party Review Program
FDA provided an overview of the third party program. FDA identified problems with the program that need to be addressed, but emphasized it supports reauthorization of the program in 2012.
Submissions through the third party review program appear to have quality-of-review issues similar to those previously described. Historically, submissions reviewed by a third party reviewer were directed to a branch chief for sign-off on the submission. Because of the poor quality of some third party reviews, FDA now scrutinizes these submissions more closely. For example, in FY 2010, 132 of 226 submissions had quality issues with the review conducted by the third party, and at least four submissions required four review cycles after submission to FDA due to identified deficiencies. Of the fourteen 510(k)s found NSE over the fiscal year, only two were correctly identified by the third party reviewer as being NSE. Some of the quality-of-review issues result from inconsistent information throughout the submission and failure to follow guidance. As a result, FDA currently re-reviews the work of third parties.
FDA noted several challenges to the program, including the lack of adequate training of third party reviewers, lack of experience, and most importantly, lack of access to confidential review memoranda for predicate devices (which FDA review staff can access). FDA is currently assessing possible improvements to the third party program.
Exemption of Certain IVDs and Radiological Devices
FDA announced that it is moving forward with its plans to exempt certain in vitro diagnostics (IVDs) and radiological devices from premarket review. During the last MDUFA reauthorization, FDA and Industry set a qualitative goal to ascertain whether certain IVDs could be exempted from premarket review. FDA hopes to announce a list of products for down-classification in the near future.
Industry Feedback on Reasons for AI Letters and Major Deficiency Letters
In response to a previous FDA request, industry provided some general preliminary feedback regarding industry perceptions and concerns with respect to AI letters for 510(k)s and major deficiency letters for Pre-Market Approval (PMA) applications.
FDA Summary and Conclusions
FDA summarized its conclusions from the data and analysis presented at the meeting. FDA is meeting the MDUFA Tier 1 and Tier 2 goals for 510(k)s, yet the data also show unintended and undesirable trends in increasing average total time to decisions, and a greater portion of submissions taking almost 90 FDA days before FDA reaches a decision. FDA’s analysis identified a number of potential root causes of these trends, including submission quality; submission complexity; capacity of the review system; experience, training, and oversight of reviewers; and challenges with the third party review program.
All agreed to meet March 30, 2011.
Meeting End Time: 5:00 pm