U.S. Department of Health and Human Services



Analysis of Premarket Review Times Under the 510(k) Program

Center for Devices and Radiological Health
U.S. Food and Drug Administration

TABLE OF CONTENTS

  • Executive Summary
  • Introduction
    • Requests for Additional Information
    • Additional Information Letter Analysis
  • Study
    • Methodology
  • Results
    • Submission Quality
    • Increasing Number of Review Cycles
    • Appropriate and Inappropriate Requests for Additional Information
    • Additional Information Letters and NSE Decisions
  • Conclusion
  • Appendix I

Executive Summary

Recent reports sponsored by the medical device industry have raised concerns that there are delays in the FDA’s review of premarket applications for devices submitted under the 510(k) pathway – the most common pathway to market for medical devices. FDA data shows that total review time – the time it takes FDA to review an application and for companies to respond to questions that arise during that review – has increased primarily due to companies taking more time to respond to requests for additional information. In addition, the number of Additional Information (AI) Letters per submission that FDA sends to a company identifying questions they need to address – what are called “cycles” – has increased. And, the percentage of submissions for which an Additional Information Letter is sent has also increased. However, FDA is meeting or exceeding its goals for 510(k) review times agreed to with industry under the Medical Device User Fee Act (MDUFA).

To determine why total review time and the number of cycles have been increasing, FDA conducted an analysis of AI Letters. We found that the principal cause for sending AI Letters and for the increasing number of cycles was the poor quality of submissions – those that did not contain required information to complete a review – from companies and companies’ failure to fully address these quality issues when raised in an AI Letter.

Two separate analyses of AI Letters were conducted: one to assess incoming submission quality (Cohort 1) and one to assess the drivers of the increasing numbers of review cycles (Cohort 2).

Results indicate that 83% of the submissions in Cohort 1 and 82% of the submissions in Cohort 2 contained at least one deficiency related to quality, as defined below. The remainder of the submissions in each cohort had deficiencies that fell outside the conservative definition of quality used in this report. In Cohort 1, 52% of submissions had a quality issue involving the device description: the sponsor either did not provide sufficient information about the device to determine what it was developed to do, or described the device inconsistently throughout the submission. Like Cohort 1, the Cohort 2 analysis shows that roughly 50% of submissions that received at least one AI Letter lacked an adequate device description.

Results of the analysis of Cohort 2 showed that a second AI Letter was most often sent because the applicant failed to fully provide what was asked for in the first AI Letter, and/or provided a response that raised a new question(s), such as testing results that identified a new safety risk or changing the indication for which the device is intended to be used.

We further analyzed the AI Letters to determine how often the questions asked were appropriate or inappropriate, i.e., were the AI Letters justified, or did the reviewer ask for information or data that were not permissible as a matter of federal law or FDA policy, or unnecessary to make an SE determination. Results from Cohort 1 showed that reviewers asked for data that had not previously been requested for particular device types in 12% of letters; these requests were appropriate in 4% of letters and inappropriate in 8%. In the first-round AI Letters from Cohort 2, such previously unrequested data were asked for appropriately in 4% of letters and inappropriately in 2%.

We conclude that actions taken to improve submission quality could significantly improve total review times and time to market for many devices reviewed under the 510(k) program.

Introduction

Enabling efficient review of a premarket submission under the 510(k) pathway – the most common premarket review pathway for medical devices – is a shared responsibility. FDA is responsible for providing clarity about what information companies (or “sponsors”) must submit in their applications, requesting information that is appropriate for making a substantial equivalence determination, and making timely and consistent decisions. Sponsors are responsible for providing FDA with the information requested and for responding to FDA in a timely manner.

Recent reports sponsored by the medical device industry have raised concerns that there are delays in the FDA’s review of premarket applications for devices submitted under the 510(k) pathway. Some reports purport that the delays are primarily due to FDA “changing the game” and “raising the bar” by asking for additional information that it had not asked for in the past or that is unrelated to a clearance determination.

FDA is meeting or exceeding its goals for 510(k) review times agreed to with industry under the Medical Device User Fee Act (MDUFA). FDA reviews 90% of 510(k)s within 90 days, and 98% of those devices within 150 days. Devices submitted under a 510(k) account for 95% of the more than 4,000 submissions subject to user fee performance goals that FDA reviews each year. Despite FDA's success in meeting these performance goals, total review time – the time it takes FDA to review an application and for companies to respond to questions that arise during that review – has increased, primarily because companies are taking more time to respond to requests for additional information (see Chart 1).

Chart 1: Average Time to 510(k) Decision

Chart - line graph. X-axis is fiscal year (receipt cohort) from 1999 to 2011. Y-axis is days in increments of 20 from 0 to 160. Three lines plot the number of days attributable to the submitter, to FDA, and in total:

Fiscal Year | Submitter | FDA | Total
2000 | 21 | 75 | 96
2001 | 21 | 80 | 102
2002 | 19 | 78 | 97
2003 | 24 | 77 | 101
2004 | 28 | 64 | 92
2005 | 34 | 56 | 90
2006 | 34 | 60 | 99
2007 | 50 | 66 | 116
2008 | 52 | 66 | 119
2009 | 65 | 73 | 138
2010 | 67 | 73 | 140

* SE and NSE decisions only; averages may not sum to totals due to rounding
** For 2009 and 2010, some cohorts were still open as of July 5, 2011; data may change

Requests for Additional Information

Once a submission is received, the review clock for FDA begins. When a submission contains insufficient information and a reviewer identifies a need for additional information, the reviewer will either call the submitter (Interactive Review) or prepare a letter outlining the additional information needed (an Additional Information (AI) Letter). AI Letters include formal letters sent via U.S. mail as well as "telephone hold" memos and e-mails, and they contain a comprehensive list of the deficiencies in the incoming original 510(k) submission. Once an AI Letter is sent, the submission to which it pertains is placed on "hold" and is not considered to be under active review while the reviewer awaits a response. In other words, the clock stops during this time.

AI Letters request a response within 30 days. If additional time is needed, sponsors may request an extension of up to 180 days. The more quickly the sponsor responds to the AI Letter, the shorter the total review time will be. As Chart 1 above demonstrates, the average time industry takes to respond to these requests has increased significantly over the past few years.

When deficiencies that a reviewer believes can be quickly and easily resolved are noted, the reviewer may choose to use the Interactive Review process rather than send an AI Letter. Under this process, the reviewer calls or e-mails the sponsor to request additional information or clarification and a specific response time is agreed upon. The review clock does not stop while the reviewer awaits a response from the sponsor. The Interactive Review process may be used at any point during the review process, even before a comprehensive list of deficiencies is identified.1

Total review time to reach a 510(k) decision can include more than one review cycle if the company did not submit all the required information or the information submitted raised new questions, such as when the results of the company’s testing suggest there is a new safety risk or the company changes the device’s indications for use. A cycle ends each time the review clock is stopped while a reviewer waits to receive additional information, and a new cycle begins when the sponsor submits a response to an AI Letter.
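The clock-stop accounting described above can be sketched in a few lines. This is a hypothetical illustration with made-up event names and dates, not FDA's actual tracking system: the FDA clock runs from receipt (or from a sponsor response) until an AI Letter is sent, and the sponsor clock runs while a submission is on hold.

```python
from datetime import date

# Hypothetical event log for one 510(k) submission: (event, date).
events = [
    ("received",       date(2010, 6, 22)),
    ("ai_letter_sent", date(2010, 8, 10)),   # cycle 1 ends, hold begins
    ("response",       date(2010, 10, 5)),   # cycle 2 begins
    ("decision",       date(2010, 11, 12)),
]

fda_days = sponsor_days = 0
cycles = 0
clock_owner = None   # who the elapsed time is charged to
prev = None
for event, day in events:
    if prev is not None:
        elapsed = (day - prev).days
        if clock_owner == "fda":
            fda_days += elapsed
        else:
            sponsor_days += elapsed
    if event in ("received", "response"):
        clock_owner = "fda"   # each FDA review span is one cycle
        cycles += 1
    elif event == "ai_letter_sent":
        clock_owner = "sponsor"
    prev = day

print(cycles, fda_days, sponsor_days, fda_days + sponsor_days)
```

Under this toy timeline, the submission has two review cycles, and total review time splits into FDA days and sponsor (hold) days, mirroring the submitter/FDA/total decomposition in Chart 1.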

Additional Information Letter Analysis

Since 2002 the number of Additional Information (AI) Letters per submission that the FDA sends to sponsors identifying questions they need to address – what are called “cycles” – has increased (see Chart 2). And, the percentage of submissions for which the FDA sends an AI Letter has also increased steadily since the start of the user fee program (see Chart 3).

Chart 2: Number of Review Cycles per 510(k) Submission

Chart - line graph. X-axis is fiscal year showing receipt cohort from 2001 to 2010. Y-axis is cycles in increments of 0.5 from 0 to 2.5. One line plots the number of cycles for each year. Results for each year are: 2001, 1.4. 2002, 1.4. 2003, 1.5. 2004, 1.6. 2005, 1.6. 2006, 1.7. 2007, 1.8. 2008, 1.9. 2009, 2.0. 2010, 2.1.

Chart 3: Percent with AI Request on First Cycle

Chart - line graph. X-axis is fiscal year showing receipt cohort from 2001 to 2010. Y-axis is percent with AI Request in increments of 10 from 0 to 90. One line plots the percent for each year. Results for each year are: 2001, 38. 2002, 36. 2003, 40. 2004, 44. 2005, 50. 2006, 56. 2007, 61. 2008, 65. 2009, 72. 2010, 77.

To gain greater insight into the causes underlying these increases, the FDA undertook an analysis of selected AI Letters that were sent in 2010. Our analysis showed that, in the majority of cases, the FDA appropriately chose to send an AI Letter. These cases include, but are not limited to, circumstances where: (i) the sponsor did not submit required information without justification – such information includes supporting data required under current guidance or performance data that FDA consistently requires for certain device types; (ii) the sponsor failed to identify a predicate; or (iii) the sponsor employed different device descriptions or indications for use for the subject device throughout its submission. In all of these cases, FDA could not reach a substantial equivalence determination without the sponsor providing additional information or rectifying deficiencies in the submission. Our analysis also showed that, in some cases, the FDA sent AI Letters for inappropriate reasons, such as asking for additional testing that was outside the scope of what would be required for a 510(k) submission, or asking for supporting documentation that was already covered by a standard government form. As a result of this analysis, the FDA is taking steps to reduce the number of inappropriate AI Letters it sends.

Study

Methodology

Two separate analyses of AI Letters were conducted: one to assess incoming submission quality (Cohort 1) and one to assess the drivers of the increasing numbers of review cycles (Cohort 2). Because Cohort 2 also created an additional pool of letters for which we could analyze submission quality, the same analysis conducted on Cohort 1 was repeated on a corresponding subset of Cohort 2 to confirm results.

Cohort 1

Cohort 1 consists of AI Letters from 100 510(k)s sent by the Office of Device Evaluation (ODE) between September 13 and 24, 2010. These submissions were originally received between June 22, 2010, and September 14, 2010 (the “Cohort 1 Receipt Period”). The date range for these letters was set so as to obtain a sample of letters that were as recent as possible.

During the Cohort 1 Receipt Period a total of 727 510(k) applications were received by ODE, 79% of which received first-action AI Letters. Cohort 1 contains 17% (100/575) of the submissions for which AI Letters were sent during this period. It is considered to be a random sample with a maximum margin of error of +/-9%.
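The +/-9% figure is consistent with the standard margin-of-error formula for a sample proportion, with a finite population correction for drawing 100 of the 575 letters. This is a sketch of the arithmetic; the 1.96 multiplier assumes a 95% confidence level, which the report does not state explicitly.

```python
import math

n, N = 100, 575   # sample size; letters sent during the period
z = 1.96          # 95% confidence multiplier (assumed)
p = 0.5           # worst-case proportion, giving the maximum margin

fpc = math.sqrt((N - n) / (N - 1))           # finite population correction
margin = z * math.sqrt(p * (1 - p) / n) * fpc
print(round(margin * 100, 1))                # roughly 9 percentage points
```

The correction matters here: without it, sampling 100 letters would give a margin of about +/-9.8%, slightly above the reported figure.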

The sample of submissions in Cohort 1 was comparable to the overall distribution of 510(k) applications received during the Cohort 1 Receipt Period.

Cohort 2

Cohort 2 included 98 510(k) submissions received by ODE between October 13, 2009, and January 26, 2010, and 36 510(k) submissions received by the Office of In Vitro Diagnostics (OIVD) between November 6, 2009, and July 9, 2010. (OIVD receives fewer 510(k) submissions than ODE, resulting in a longer time frame covering a smaller number of submissions.) The date ranges for these letters were set so as to obtain the most recent AI Letters possible for submissions where review had been completed. Because Cohort 2 included only completed submissions with two or more AI Letters, the date range did not overlap with that of Cohort 1 due to the additional time needed to address two or more AI Letters.

A sample of 134 submissions was drawn from a total pool of 930 submissions with a first AI Letter sent during calendar year 2010 and a second AI Letter sent before the analysis cutoff date (January 20, 2011). Of these 930 submissions, 297 were excluded because they did not have a final decision as of the analysis cutoff date. The remaining 633 submissions were arranged in descending order by date of receipt, from most to least recent. Of these, the 98 and 36 most recent 510(k)s submitted to ODE and OIVD, respectively, that were available for analysis were selected. Files were selected only if they were accessible electronically from CDRH’s electronic premarket submission database (IMAGE).

Charts 4 A and B below show that Cohort 2 is representative of submissions that result in at least two AI Letters.

Chart 4A: Pool from which Cohort 2 Was Selected

Pie chart showing percent of submissions with an AI request. Three segments. 335, or 24%, had no AI request. 575, or 42%, had one AI request, and 476, or 34%, had 2 or more AI requests.

Chart 4B: All Submissions with AI Letters in FY2010

Pie chart showing percent of submissions in receipt cohort with an AI request as of March 4, 2011. Three segments. 910, or 23%, had no AI request. 1746, or 45%, had one AI request, and 1224, or 32%, had 2 or more AI requests.

The rates for zero, one, and two or more AI Letters in the sample period from which Cohort 2 was drawn closely match those for all 510(k) submissions received in fiscal year 2010; therefore, Cohort 2 is a representative sample of submissions that received AI Letters.
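One way to check this representativeness claim is a chi-square test of homogeneity on the two distributions reported in Charts 4A and 4B. This is an illustrative sketch using the counts from the chart descriptions above, not an analysis performed in the report:

```python
# Observed counts: [no AI request, one AI request, two or more AI requests]
pool   = [335, 575, 476]     # Chart 4A: pool from which Cohort 2 was selected
fy2010 = [910, 1746, 1224]   # Chart 4B: all FY2010 submissions

rows = [pool, fy2010]
grand = sum(map(sum, rows))
col_totals = [sum(col) for col in zip(*rows)]

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected
chi2 = 0.0
for row in rows:
    row_total = sum(row)
    for obs, col_total in zip(row, col_totals):
        exp = row_total * col_total / grand
        chi2 += (obs - exp) ** 2 / exp

# Critical value for df = (2-1)*(3-1) = 2 at alpha = 0.05 is 5.991.
print(round(chi2, 2), chi2 < 5.991)
```

The statistic falls just below the conventional 5% critical value, consistent with treating the sampled pool's distribution as similar to that of all FY2010 submissions.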

Results

Submission Quality

For Cohort 1, staff familiar with the review process but not involved in the review of any of the submissions sampled reviewed all 100 letters. A list of deficiency categories was created (see Appendix I for the full list of deficiencies). From this list, we identified a subset of deficiencies that we considered to be indicators of poor quality. We defined these categories conservatively so as to err on the side of not overestimating the proportion of AI Letters that contained quality-related deficiencies.

A poor quality submission was defined as having at least one of the following deficiency categories:

  • Inadequate device description - Every 510(k) submission is required to have a description of what the device is intended to do. Without this description, the reviewer cannot determine if the device has been evaluated properly by the sponsor. In other words, if the reviewer can’t tell from the submission what the device does, he or she cannot determine if the documentation included in the submission supports the device’s intended use. Therefore, it is essential that a thorough and clear description of the device be provided. Without it, a substantive review of the submission cannot be performed.
  • Discrepancies throughout submission - Discrepancies in this category most often related to device description or indications for use. Differences in device description can have a substantial impact on the review of a device because, under the 510(k) pathway, the intended use and technological characteristics of the new device are compared to that of a predicate device. And, when the indications for use statement is inconsistent in different parts of a submission (e.g., cover letter, indications for use form, 510(k) summary, device labeling have a different indications for use statement), we cannot determine if the device has the same indications for use as a predicate or if any differences alter the intended therapeutic/diagnostic effect of the device when compared to the predicate. Therefore, discrepancies preclude substantive review of a submission and require clarification.
  • Problems with indications for use - In order to be found substantially equivalent, a device must either have the same indications for use as a device already on the market (“predicate” device), or any differences in the indications for use between the device and the predicate must not alter the intended use (i.e., the device’s intended therapeutic/diagnostic effect). Furthermore, the type of performance data necessary to assess equivalence depends on the indications sought. Therefore, a clear indications for use statement is necessary to determine whether the methods used to evaluate the device accurately reflect its intended use. Quality issues related to indications for use include: failure to identify any predicate for the indication; an indication that requires a Premarket Approval (PMA) and for which a PMA has already been approved; and an indications for use statement for a device that uses a drug that is inconsistent with the drug’s labeling.
  • Failure to follow or otherwise address current guidance document(s) or recognized standards - FDA issues guidance documents or recognizes a national or international standard to help manufacturers determine what information to include in a 510(k) submission generally and for certain device types specifically. If a manufacturer fails to follow current guidance (i.e. that which is up-to-date) for a certain device type or a recognized standard, and offers no explanation for its failure to do so, FDA would consider that submission to be of poor quality and would issue an AI Letter that quotes current guidance to obtain the missing information. For our analysis we only determined that a submission had this deficiency if the AI Letter cited or quoted a guidance document.
  • Performance testing required for certain device types is completely missing (i.e., no performance data provided at all) - Performance testing is required for all traditional 510(k)s. Because concerns with the adequacy of the testing provided in 510(k) submissions can pertain to the adequacy of the science, for our analysis, we only determined that a submission had this deficiency if no performance testing information was provided at all. Without performance testing, we cannot evaluate whether a device’s performance is substantially equivalent to that of a predicate.
  • Clinical data required for certain device types is completely missing (i.e., no clinical data provided at all) - For some device types, FDA requires clinical performance data to demonstrate substantial equivalence. FDA considers a submission to be of poor quality when such testing is clearly outlined in a device-specific guidance document or in a pre-IDE, but is completely omitted from a 510(k) submission. We did not consider it a deficiency if some clinical data, though inadequate, was provided.

First round AI Letters from the ODE subset of Cohort 2 were analyzed according to these deficiency criteria as well.

Results indicate that 83% of the submissions in Cohort 1 and 82% of the submissions in Cohort 2 contained at least one deficiency related to quality, as defined above.

Chart 5: Number of quality issues observed per submission – Cohort 1

Chart - bar graph. X-axis is number of quality issues observed from 0 to 4. Y-axis is percent of submissions evaluated in increments of 10 from 0 to 90. One bar plots the percent for each number of quality issues. Results are: 0, 17/100 or 17%. 1, 39/100 or 39%. 2, 29/100 or 29%. 3, 11/100 or 11%. 4, 4/100 or 4%. (1 or more: 83/100 or 83%.)

Chart 6: Number of quality issues observed per submission – Cohort 2

Chart - bar graph. X-axis is number of quality issues observed from 0 to 4. Y-axis is percent of submissions evaluated in increments of 10 from 0 to 90. One bar plots percent for each number of quality issues. Results are: 0, 18/98 or 18%. 1, 22/98 or 22%. 2, 33/98 or 34%. 3, 20/98 or 20%. 4, 5/98 or 5%.

As Charts 5 and 6 above show, the majority of submissions had at least one quality deficiency. The remaining submissions in each cohort – fewer than twenty percent – had deficiencies that did not fall within our definition of quality, but they still raised issues best addressed through an AI Letter, either because the submission contained multiple deficiencies or because new testing information was requested, which requires stopping the review clock while the sponsor gathers that information. These deficiencies include problems such as a missing or inadequate comparison of the device with its predicate; problems with the performance testing provided (including inadequate methods, documentation, or results); unsupported claims in the labeling; missing test reports; problems with the instructions for use; missing software documentation; and inadequate biocompatibility information.

Data were further analyzed to determine the frequency of each type of quality deficiency in these submissions. The specific distributions of the deficiencies in each Cohort are set out in Charts 7 and 8 below:

Chart 7: Type of deficiency observed per submission – Cohort 1

Chart - bar graph. X-axis is type of deficiency observed. Y-axis is percent of submissions evaluated in increments of 10 from 0 to 90. One bar plots percent for each deficiency observed. Results are: Inadequate device description, 52/100 or 52%. Discrepancies throughout submission, 22/100 or 22%. Failure to follow current guidance, 24/100 or 24%. Indications for use, 26/100 or 26%. Performance testing completely missing, 8/100 or 8%. Clinical data completely missing, 14/100 or 14%.

As Chart 7 shows, 52% of submissions had an inadequate device description, 22% contained discrepancies in the indications for use or device description throughout the submission, and 24% failed to follow a current guidance document.

Chart 8: Type of deficiency observed per submission – Cohort 2

Chart - bar graph. X-axis is type of deficiency observed. Y-axis is percent of submissions evaluated in increments of 10 from 0 to 90. One bar plots percent for each deficiency observed. Results are: Inadequate device description, 47/98 or 48%. Discrepancies throughout submission, 9/98 or 9%. Failure to follow guidance document or recognized standard, 58/98 or 59%. Indications for use, 40/98 or 41%. Performance testing completely missing, 4/98 or 4%. Clinical data completely missing, 10/98 or 10%.

Like Cohort 1, the Cohort 2 analysis shows that roughly 50% of submissions that received at least one AI Letter lacked an adequate device description. The Cohort 2 analysis includes more than double the instances of failure to address guidance document(s) and/or standards and of inconsistent indications for use. The higher rate of deficiencies in Cohort 2 may be due to the fact that Cohort 2 included only submissions that had at least two AI Letters, indicating more problematic submissions.

Although these cohorts are unrelated, the distribution of deficiencies among them is strikingly similar. Problems with submission quality are contributing to an overall increase in review times. During 2009 and 2010, FDA received 4,103 and 3,880 510(k)s, respectively. Of those, 72% and 77%, respectively, received at least one AI Letter. As our analysis shows, the majority of AI Letters are sent due to poor quality submissions, including an inadequate or inconsistent device description and failure to follow current guidance or a national or international standard recognized by FDA. FDA develops guidance documents and recognizes standards established by national and international standards development organizations to provide greater predictability, consistency and transparency in our premarket review programs.

Increasing Number of Review Cycles

Staff familiar with the review process but not involved in the premarket review of the sampled submissions analyzed the deficiencies in all 134 AI Letters for each cycle. Second round AI Letters were analyzed according to the criteria listed in Appendix I.

It was determined that second round AI Letters were sent for three primary reasons:

  • The sponsor did not address the requests in the first round AI Letter - This category includes cases where the sponsor provides an inadequate rationale for not providing the scientific or regulatory information requested in the first AI Letter, the sponsor provides different information in lieu of what was asked for without an adequate rationale for the substitution, the sponsor simply states that they will do the testing requested (but have not yet done so), the sponsor fails to provide any written reply to a given deficiency, or the sponsor objects to the deficiencies identified and requests justification for FDA’s requests. In all such cases, FDA is not provided with the information needed to complete the review despite the initial written request; therefore, a second request for the same information is sent.
  • The sponsor’s response to the first round AI Letter raised issues for follow-up by FDA - This category includes cases where, upon review of the sponsor’s response to the initial AI Letter, new questions were raised by newly submitted information. For example, once a new test report is provided, FDA may have questions regarding why a test was conducted a certain way, why particular sizes or models of the device were or were not evaluated, and/or unexpected results may raise new concerns that require further investigation, such as test results that suggest the device will not perform as intended. In other words, newly submitted data indicated that the device would function differently than initially described, the device had different technological characteristics than the predicate, or there were safety issues that needed to be addressed before the device could be cleared. If questions were asked about “new” issues under these circumstances it was to address an issue relevant to a 510(k) review that arose directly from the newly-provided data.
  • The FDA reviewer raised new questions that should have been raised previously - Industry has asserted that FDA raises new questions late in the review process. Therefore, we prospectively looked for cases in which a new question was raised in the second AI Letter that the reviewer should have identified earlier in the process. Results showed the incidence of this occurrence to be low, which is consistent with FDA's policy to conduct a comprehensive review of the entire submission prior to sending the first AI Letter. However, in a small number of cases (discussed below) FDA did raise a question in a second AI Letter that we should have raised in the first letter.

Results of the analysis of Cohort 2 showed that a second AI Letter was most often sent (66% of the time) because the sponsor did not address the deficiencies raised in the first AI Letter at all in its response or addressed the deficiencies only in part and did not justify its failure to provide a full response. Almost as often (63% of the time), the sponsor provided information in response to an initial AI Letter that raised another issue for follow-up by the reviewer, such as a new safety risk. In a small number of cases (4%), FDA asked a new question(s) in the second AI Letter that should have been raised previously and was erroneously omitted from the first AI Letter.

Chart 9: Reasons for Sending a Second AI Letter

Chart - bar graph. X-axis is reason for sending a second AI Letter. Y-axis is percent of submissions evaluated in increments of 10 from 0 to 90. One bar plots percent for each reason. Results are: Sponsor didn't address prior request from FDA, 88/134 or 66%. Sponsor's response raised issues for follow-up by FDA, 85/134 or 63%. FDA raised new questions that should have been raised previously, 6/134 or 4%.

We further analyzed the 85 submissions (63%) in which the sponsor’s response raised issues for follow-up by FDA to determine why questions were asked at that time. Results showed that in most cases (68%), the information submitted in response to the initial AI Letter also did not completely address the original deficiency. Complete results are outlined in the table below:

Type of New Issue | Percentage (N = 85)
Information submitted did not address all deficiencies completely | 68% (58/85)
Information submitted did not support labeling claims | 28% (24/85)
Test results submitted in response to first AI Letter indicated problems with device design or function | 21% (18/85)
Testing was not performed correctly | 19% (16/85)
Justification for omission of response to first AI Letter raised additional questions or was insufficient | 16% (14/85)
Information submitted to address deficiencies, but corresponding changes to update the 510(k) summary, labeling, or other aspects of submission were still needed | 16% (14/85)
Predicates used were not suitable | 8% (7/85)
Sponsor did not agree with request in first AI Letter | 2% (2/85)
Reviewer unable to determine device design, or new issues regarding device design raised by response to first AI Letter | 2% (2/85)

We also analyzed the 6 submissions (4%) in which FDA raised a new question in the second AI Letter that should have been raised in the first. Results showed that in most cases (67%), the reviewer found additional issues with the submitted labeling that needed clarification or correction, such as inaccurate, contradictory or unsubstantiated claims in the 510(k) summary, Indications for Use, Intended Use or Instructions for Use. These kinds of changes are usually caught and resolved during the first review cycle, but in these cases were not identified until later in the review process. Complete results are outlined in the table below:

Type of New Issue | Percentage (N = 6)
Reviewer found additional labeling or other issues that needed clarification or correction, such as inaccurate, contradictory or unsubstantiated claims in the 510(k) summary, Indications for Use, Intended Use or Instructions for Use | 67% (4/6)
Consult received after 1st round completed and deficiency sent to sponsor in round 2 | 17% (1/6)
Inadequate device description | 17% (1/6)

The analysis of the second AI Letters from Cohort 2 gives us additional insight into how an insufficient response from a sponsor can further delay a submission. In 93% (125/134) of the submissions analyzed, the sponsor had either failed to address questions raised in the initial AI Letter or provided information in response to the deficiencies that did not support a determination that the device was substantially equivalent. In these cases, FDA sent a second AI Letter to give the sponsor a second opportunity to address these problems rather than determining that the device is not substantially equivalent.

Appropriate and Inappropriate Requests for Additional Information

For Cohorts 1 and 2, we further analyzed the AI Letters to determine how often the questions asked were appropriate or inappropriate, i.e., whether FDA's requests for additional information were justified, or whether we asked for information or data that were not permissible as a matter of federal law or FDA policy, or were unnecessary to make a substantial equivalence determination.

Cohort 1

In 12 of the 100 first-round AI Letters in Cohort 1 sent by ODE (12%), reviewers asked for data that had not previously been requested for the particular device type. Of those, questions in 4 of the 100 AI Letters (4%) were appropriate, and questions in 8 of the 100 AI Letters (8%) were inappropriate.

Appropriate requests included questions to address issues raised by data included in the original submission that may not have been asked for that device type in the past, but were scientifically justified and permissible as a matter of law to make a substantial equivalence determination and were consistent with the least burdensome principle. This category includes requests that were scientifically justified, but were not reflected in current guidance because the need for additional data for that device type arose after the publication of the guidance and the guidance had not yet been updated.

Inappropriate requests included: (i) requests for data beyond that which had been asked for in the past and which were not scientifically justifiable; (ii) requests for data that had not been consistently required across the Center; (iii) requests for information that were inconsistent with current guidance; (iv) requests for unnecessary additional testing; and (v) requests for data to support a standard government form, which was specifically created to avoid the submission of the related underlying data as part of a 510(k). Although these requests were inappropriate, the AI Letters in which these requests were made also identified many other deficiencies that were appropriate. However, the FDA does not condone inappropriate requests for additional information, and, as part of our increased reviewer training program, we are taking steps to prevent and diminish inappropriate requests for additional information.

Cohort 2

In 6 of the 134 second-round AI Letters in Cohort 2 sent by ODE and OIVD (4%), reviewers asked for data that had not previously been requested for the particular device type. Of those, questions in 3 of the 134 AI Letters (2%) were appropriate, and questions in 3 of the 134 AI Letters (2%) were inappropriate. The lower rate of inappropriate questions here is most likely because second-round AI Letters address issues raised by information the sponsor provided in response to a first-round AI Letter, not issues in the original submission; the letters are therefore more targeted and contain fewer deficiencies than first-round AI Letters.

Appropriate requests in Cohort 2 included additional targeted requests for performance data based on new data submitted in response to the initial AI Letter, such as cybersecurity performance testing, performance testing in conformance with current recognized standards versus significantly outdated standards, and clinical data to address potential safety concerns.

Inappropriate requests included asking for theoretical data instead of performance data to demonstrate substantial equivalence; asking the sponsor to provide information about quality systems (manufacturing) activities when the statutory standard to request such data had not been met; and asking for unnecessary information such as lot numbers of predicate devices used in the performance testing.

Additional Information Letters and NSE Decisions

In some cases, insufficient responses to AI Letters eventually lead to a finding of "not substantially equivalent" (NSE). Companies that receive an NSE decision after multiple rounds of AI Letters often believe it is due to FDA "raising the bar" or changing its standards mid-review. To determine the most frequent reasons for reaching an NSE determination, FDA analyzed submissions that resulted in an NSE decision. The results of this analysis are available on our website. Through this analysis, we determined that the vast majority of NSE decisions are due to the manufacturer failing to provide adequate performance data to support a determination of substantial equivalence.

Conclusion

Although the FDA is meeting its 510(k) performance goals under MDUFA, overall review time for 510(k) submissions is steadily increasing due primarily to an increase in the number of review cycles and the amount of time companies take to respond to requests for additional information.

Our analysis shows that poor submission quality and sponsors' failure to address deficiencies identified in first-round AI Letters are major contributors to the increase in total review times. For example, 65% of the time FDA sent a second-round AI Letter because the sponsor failed to submit information requested in the first AI Letter. However, the FDA has also contributed to the increase by making inappropriate requests for additional information.

The FDA will continue to work with industry to identify additional actions to reduce the average number of review cycles and the percentage of 510(k) submissions for which an AI Letter is sent. The FDA has already taken steps to address some of the issues identified in this analysis. We are working to provide greater predictability for industry by communicating justified changes in data requirements more quickly and transparently. We recently issued draft Standard Operating Procedures for Notice to Industry Letters, which were primarily created for this purpose. The FDA is also enhancing training for FDA staff and industry, aimed at reducing inappropriate requests for additional information and helping sponsors understand when they are required to submit data.

Through these and other steps we are taking to address weaknesses in the 510(k) program, we aim to reduce the amount of time to clearance for 510(k) devices, while assuring that we maintain the same levels of safety and effectiveness. It is our hope that taking actions to increase submission quality and avoid inappropriate requests for additional information will prevent avoidable delays and reduce review times, which will, in turn, get safe and effective devices to market faster.


Appendix I

Deficiency Categories: for each category, the number and percent† of submissions with the deficiency in Cohort 1 (n=100) and Cohort 2 (n=98).

Problems with indications for use*; includes:
  • not provided at all,
  • no predicate identified for indication sought,
  • indication requires a PMA (i.e., PMA devices approved for given indication),
  • inconsistent throughout submission,
  • not supported by data,
  • missing important qualifying information, model numbers, prescription vs over-the-counter, or
  • inconsistent with drug labeling
  Cohort 1: 26 (26%); Cohort 2: 40 (41%)

Inadequate device description*
  Cohort 1: 52 (52%); Cohort 2: 47 (48%)

Discrepancies in device description or indications for use*
  Cohort 1: 22 (22%); Cohort 2: 9 (9%)

Failure to follow or otherwise address guidance document(s) [Cohort 2: or standard(s)]*
  Cohort 1: 24 (24%); Cohort 2: 58 (59%)

Performance testing completely missing (i.e., no performance data provided at all)*
  Cohort 1: 8 (8%); Cohort 2: 4 (4%)

Performance testing inadequate; includes:
  • missing tests,
  • missing test methods,
  • missing description of samples tested, and/or
  • concerns with results
  Cohort 1: 52 (52%); Cohort 2: 63 (64%)

Clinical data that is required for that type of device is completely missing (i.e., no clinical data provided at all)*
  Cohort 1: 14 (14%); Cohort 2: 10 (10%)

Predicate comparison missing or inadequate
  Cohort 1: 32 (32%); Cohort 2: 44 (45%)

Instructions for use inadequate; includes:
  • not provided at all;
  • illegible, not in English, or otherwise unclear;
  • missing device description, instructions for using the device, magnetic resonance compatibility information, resterilization instructions, warnings, and/or precautions; and/or
  • containing discrepancies
  Cohort 1: 26 (26%); Cohort 2: 57 (58%)

Software documentation inadequate; includes:
  • not provided at all,
  • missing description,
  • missing traceability matrix,
  • missing list of anomalies, and/or
  • missing validation
  Cohort 1: 20 (20%); Cohort 2: 14 (14%)

Missing Form 3654 (required to document use of recognized standards)
  Cohort 1: 30 (30%); Cohort 2: 23 (24%)

Biocompatibility information completely missing
  Cohort 1: 8 (8%); Cohort 2: 10 (10%)

Biocompatibility information inadequate; includes:
  • missing testing of some components or accessories,
  • inappropriate test methods,
  • missing some tests, but not all,
  • complete test reports not provided, and/or
  • concerns with results
  Cohort 1: 20 (20%); Cohort 2: 25 (26%)

Missing sterilization validation
  Cohort 1: 13 (13%); Cohort 2: 10 (10%)

* Deficiency related to submission quality.
† Note, most submissions had multiple deficiencies, so this percentage reflects the percent of total submissions with this deficiency.
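As a quick cross-check of the Appendix I figures, the short Python sketch below recomputes several of the percentages from their raw counts and cohort sizes (n=100 and n=98, as given in the table header). The rows shown are a hand-transcribed subset of the table, chosen for illustration. Because most submissions had multiple deficiencies, the percentages within a cohort sum to well over 100%.

```python
# Cross-check: recompute Appendix I percentages from the raw counts.
# Cohort sizes come from the table header; row data is a subset of the table.
COHORT_1_N = 100
COHORT_2_N = 98

# (deficiency category, Cohort 1 count, Cohort 2 count)
ROWS = [
    ("Problems with indications for use", 26, 40),
    ("Inadequate device description", 52, 47),
    ("Failure to follow or address guidance/standard(s)", 24, 58),
    ("Performance testing inadequate", 52, 63),
]

def percent(count, n):
    """Percent of the n submissions in a cohort showing a deficiency,
    rounded to the nearest whole percent as in the table."""
    return round(100 * count / n)

for name, c1, c2 in ROWS:
    print(f"{name}: Cohort 1 {percent(c1, COHORT_1_N)}%, "
          f"Cohort 2 {percent(c2, COHORT_2_N)}%")
```

For these rows the recomputed values match the table (e.g., 40 of 98 submissions rounds to 41%), confirming that the percentages are per-cohort rates rather than shares of a common total.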
