Clinical Outcome Assessment (COA): Frequently Asked Questions


New! Change in Process for Qualification of Drug Development Tools

The process for qualification of drug development tools is changing under new FD&C Act Section 507. FDA is posting information about these updates to the DDT submission processes.

Check this page for details, documents, and information consistent with new section 507(c).

1.  What is the standard of evidence for COA qualification?

The measurement principles of content validity, reliability, construct validity, and ability to detect change apply to all types of COAs. The PRO guidance, while developed for patient-reported outcomes, provides many recommendations that are applicable to the development of all COAs, including clinician-reported outcome (ClinRO) assessments, observer-reported outcome (ObsRO) assessments, and performance outcome (PerfO) assessments. In addition, we often refer instrument developers to the ISPOR Task Force publications on content validity.

The COA Wheel and Spokes diagram (PDF - 1MB) identifies the key components of the various stages of instrument development and the points at which qualification may occur.

2.   What quantitative information is useful to provide for Agency review to support a clinical outcome assessment (COA) drug development tool (DDT) for qualification for exploratory use? (Spoke III)

This stage of instrument development typically involves cross-sectional quantitative (psychometric) analysis. The primary objective of these analyses, in conjunction with the qualitative data, is to select items and refine the conceptual framework of the instrument for further confirmatory evaluation. Each planned quantitative analysis should provide evidence that the items perform well psychometrically and that together they assess what the instrument is intended to assess (i.e., the concept(s) described in the COA conceptual framework). The analysis results should inform the retention or removal of items, refinement of the conceptual framework, and development of provisional scoring algorithm(s). The Agency encourages the submitter to focus on basic analyses and to build the evidence methodically using a systematic approach. The quantitative evidence described below should be gathered in a sample of patients whose characteristics are consistent with the targeted patient population expected in trials and with the targeted context of use:

  1. Item descriptive statistics, including frequency distributions of item responses and overall scores, floor and ceiling effects, and percentage of missing responses
  2. Inter-item relationships and dimensionality analysis (e.g., factor analysis or principal component analysis and evaluation of the conceptual framework)
  3. Item inclusion and reduction decisions, identification of subscales (if any), and modification of the conceptual framework
  4. Preliminary scoring algorithm (e.g., include information about evaluation of measurement model assumptions and applicable goodness-of-fit statistics). The scoring algorithm should also specify how missing data will be handled.
  5. Reliability
    1. Test-retest (e.g., intra-class correlation coefficient)
    2. Internal consistency (e.g., Cronbach’s alpha)
    3. Inter-rater (e.g., kappa coefficient)
  6. Construct validity
    1. Convergent and discriminant validity (e.g., association with other instruments assessing similar concepts)
    2. Known-groups validity (e.g., differences in scores between subgroups of subjects with known status)
  7. Score reliability in the presence of missing item-level and, if applicable, scale-level data
  8. Final instrument, conceptual framework, provisional scoring algorithm for exploratory use, and plans for further revision and refinement

Many of these steps may be iterative (e.g., after item reduction, earlier analyses may need to be repeated with the revised set of items). Additional analyses may be useful or expected (e.g., inter-rater reliability for some ClinROs or ObsROs), and at times the analyses listed above may not be feasible in their traditional sense (e.g., test-retest evaluation of a rapidly changing characteristic of an acute disease).
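To make a few of the checks above concrete, the sketch below is an illustrative computation only (the response data, scale range, and function names are hypothetical, not part of any FDA submission template). It computes Cronbach's alpha for internal consistency and floor/ceiling percentages for total scores on a small item-response matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency estimate for items scored in the same
    direction (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def floor_ceiling(totals: np.ndarray, lo: float, hi: float):
    """Percentage of respondents at the minimum (floor) and maximum
    (ceiling) possible total score."""
    n = len(totals)
    return 100 * (totals == lo).sum() / n, 100 * (totals == hi).sum() / n

# Hypothetical 0-4 responses from 6 respondents on a 3-item scale
data = np.array([
    [0, 1, 0],
    [2, 2, 1],
    [3, 3, 3],
    [4, 4, 4],
    [1, 2, 1],
    [3, 4, 3],
])
totals = data.sum(axis=1)
print(f"Cronbach's alpha: {cronbach_alpha(data):.2f}")   # ≈ 0.98
floor_pct, ceiling_pct = floor_ceiling(totals, lo=0, hi=12)
print(f"floor: {floor_pct:.1f}%, ceiling: {ceiling_pct:.1f}%")
```

In a real submission these summaries would be run on the full development sample, reported per item and per (sub)scale, and repeated after any item-reduction step, consistent with the iterative process described above.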

3.  What is FDA’s position on use of modern psychometric methods (e.g., Rasch model, Item Response Theory, mixed methods, and other modeling) in instrument development?

FDA recognizes that different approaches to instrument development may be appropriate and will consider approaches other than those described in the FDA PRO guidance.

FDA does not require the use of modern psychometric methods in instrument development. Whichever method a submitter chooses to use, the Agency recommends first testing and documenting that the necessary assumptions associated with that method have been met.
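As one illustration of testing a method's assumptions: Rasch and most IRT models assume the item set is essentially unidimensional. The sketch below (an illustrative check, not an FDA-specified procedure; the simulated data and the heuristic it relies on are assumptions) examines the ratio of the first to second eigenvalue of the inter-item correlation matrix, where a large ratio suggests one dominant dimension:

```python
import numpy as np

def eigenvalue_ratio(items: np.ndarray) -> float:
    """Ratio of first to second eigenvalue of the inter-item
    correlation matrix (rows = respondents, columns = items)."""
    corr = np.corrcoef(items, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    return eigvals[0] / eigvals[1]

# Simulated item scores driven by a single latent trait plus noise
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
items = trait + 0.5 * rng.normal(size=(200, 5))

ratio = eigenvalue_ratio(items)
print(f"first/second eigenvalue ratio: {ratio:.1f}")
```

Heuristic cutoffs for this ratio vary across the literature, so any threshold used should itself be justified and documented alongside the other assumption checks for the chosen model.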

4. Is it necessary for an instrument to be qualified in order to use that instrument as the basis for a primary or secondary endpoint in a clinical trial?

No. A tool that is not formally qualified may still be acceptable for use and should be discussed with the review division within an IND. We recommend discussing outcome assessments and endpoints with the FDA as early as possible.

5. Are drug sponsors (IND/NDA/BLA holders) required to use qualified instruments when they exist?

No. While we believe there are benefits of using a qualified tool, drug sponsors may select any well-defined and reliable tool(s) they believe will be best suited for their clinical trial(s). We encourage drug sponsors to discuss those decisions with the appropriate review division.

6. An instrument has been used to support claims in labeling. Does this mean that tool is qualified?

No. Only tools that have been reviewed through the formal Drug Development Tool (DDT) qualification process, and for which a positive qualification decision has been made, are considered qualified. Tools that have not been formally qualified may still be acceptable for use and could support labeling claims.

7. Who participates in a COA Qualification Review Team (QRT)?

The COA QRT comprises representatives from across CDER and always includes representatives from CDER’s Clinical Outcome Assessments Staff, the appropriate review division(s), and the Office of Biostatistics. All consultation and advice, as well as final qualification decisions, are made jointly with input from all QRT members. Representatives from other Centers within the Agency may also participate in QRT meetings when appropriate.

8. How do FDA and EMA work together on COA qualification?

A confidentiality agreement between the FDA and the EMA enables submitters of DDTs to engage with each agency in parallel. The FDA and EMA may discuss DDT submissions during regular conference calls, or on an ad hoc basis. In addition, FDA and EMA may participate in joint discussions with the submitters. While each agency makes its own qualification decisions and offers its own advice letters to submitters, we try to coordinate to the fullest extent possible.

9. Is the qualification route the only way a drug sponsor can interact with CDER’s Clinical Outcome Assessments Staff?

The Clinical Outcome Assessments Staff works on a consultative basis with the Review Divisions within CDER’s Office of New Drugs as well as with CBER and CDRH. For individual medical product development programs, the Clinical Outcome Assessments Staff is consulted on a case-by-case basis and provides advice to the primary review team, which issues the final comments/agreements to the drug sponsor.

The Clinical Outcome Assessments Staff manages the DDT qualification program and serves as the primary point of contact for the qualification program. As stated above, a drug sponsor may submit an instrument they have developed within their drug development program for CDER review under the DDT qualification program with the understanding that, if qualified, it will be available for public use.

10. What is the relationship between CDER’s and CDRH’s qualification programs?

CDRH has its own qualification process for Medical Device Development Tools, including COAs (Medical Device Development Tools Draft Guidance for Industry, Tool Developers, and Food and Drug Administration Staff). CDRH’s qualification process is similar to but operates independently from CDER’s. We encourage submitters to consider whether their COA may have applicability in medical device studies and, when appropriate, submit for qualification by CDRH.

11. Are there any other means of seeking Agency input on clinical outcome assessments before submission of an IND?

Yes. The Critical Path Innovations Meeting (CPIM) is another route for seeking early Agency input on clinical outcome assessments for a particular context of use outside of a specific drug development program. CPIMs do not result in any formal agreements. For more information, please refer to the CPIMs website: http://www.fda.gov/Drugs/DevelopmentApprovalProcess/DrugInnovation/ucm395888.htm

12. When submitters enter the qualification process, they agree that the qualified DDT will be made publicly available. Does this mean that the DDT needs to be free of charge for public use?

No. CDER DDT qualification does not displace any intellectual property, copyrights, or ownership rights. Although qualified COA instruments must be made publicly available, this does not prevent the DDT owner from charging a reasonable fee for its use.

13. Are only newly developed COA instruments eligible for qualification?

No. The FDA will consider COA qualification program letters of intent for both new and existing measures.

14. How many development projects are currently in the COA DDT Qualification program?

Please see the COA current projects.

15. How does CDER prioritize what COA qualification projects to accept into the COA qualification program?

CDER is committed to the strategic growth of the COA DDT Qualification program. Given the growing interest in the program among many stakeholders, we must carefully prioritize acceptance of each proposal, and we have developed a framework for deciding whether to accept a COA DDT project into the program or to defer that decision. In general, the public health benefit and scientific merit of the proposed COA DDT qualification effort are weighed, taking into consideration available CDER staff resources. Specifically, the following will be considered:

  • Does the COA DDT fill a critical measurement gap (i.e., is drug development stalled or slowed)?
  • Does the proposed COA DDT represent significant improvement over currently available, acceptable COA DDTs?
  • Is the COA patient centric (i.e., measures something of relevance and importance to patients in their daily lives that is not being evaluated in that clinical context due to lack of acceptable assessments)?
  • Are there other efforts already underway in the proposed disease area either under the COA DDT qualification program or another mechanism?
  • If the proposed tool is already deemed acceptable, are there other mechanisms for communicating its acceptability (e.g., indication-specific guidance) without the need for a formal qualification process?
  • Is the submitter willing to develop a new tool, or to modify an existing tool by making improvements as needed, based on existing or new research findings?
  • Are Review Division and Clinical Outcome Assessments Staff resources sufficient to collaboratively engage in consultation and advice during the qualification process in a reasonable timeframe?


Page Last Updated: 06/08/2017