Nonclinical Studies Subcommittee
Advisory Committee for Pharmaceutical Science
This document defines the objectives and proposed activities of the Nonclinical Studies Subcommittee of the Advisory Committee for Pharmaceutical Science. Included is a discussion of the scientific opportunities for improving the pharmaceutical development process and identification of focus areas in which the Subcommittee could provide guidance on implementation of collaborative approaches to improve this process.
The principal goals of this Subcommittee are to recommend approaches and mechanisms that: 1) improve the predictivity of clinical outcome by nonclinical studies, and 2) improve the interface between nonclinical and clinical studies so that mechanistic data are used optimally in clinical study design and interpretation.
Scope of Activities
The Nonclinical Studies Subcommittee will focus on the improvement of the design and application of laboratory-based studies for safety and efficacy assessments. The major emphasis will be on nonclinical studies that support candidate selection, nonclinical safety assessments, and clinical product development, and that either provide mechanistic support for clinical studies or serve as the basis for safety decisions related to effects that are not easily evaluated in clinical trials. Important objectives are to increase the efficiency of development and enhance the safety and efficacy of pharmaceutical products.
Pharmaceutical development is a lengthy and costly process, with an average development time for a unique new entity of fifteen years and an average cost in excess of $400M. Recent advances in the discovery phase of this process have greatly improved the efficiency of selecting agents with high-affinity receptor binding, but the overall process remains relatively inefficient and failure-prone: eighty percent of investigational new drugs that reach the clinic still fail prior to marketing. Nonclinical studies play a major role in this development process. They are the basis for the selection of candidates that will advance to clinical trials; they are a principal line of defense against the introduction of human pharmaceuticals with the potential to cause delayed and low-incidence health effects that are prevalent in the population but difficult to monitor clinically; and they are a source of mechanistic information that provides an interface between laboratory findings and clinical studies. Improvements in the predictivity of nonclinical studies for clinical outcome, and in the efficiency of the interface between mechanistic nonclinical findings and clinical endpoints, hold major promise for improving health protection while at the same time increasing the overall efficiency of the development process.
An important focus of the Nonclinical Studies Subcommittee is the achievement of these two principal goals: 1) improved predictivity of clinical outcome by nonclinical studies, and 2) an improved interface between nonclinical and clinical studies that optimizes the use of mechanistic data in clinical study design and interpretation, and thus contributes to improved developmental success rates, decreased development timelines, and better and safer products.
Current nonclinical safety practices in the United States have evolved from early guidelines (Goldenthal: FDA Papers 2, 1968) to a series of guidance documents (ICH, 1997) that seek uniformity in chemistry, pharmacology, toxicokinetic, and toxicology submissions prior to marketing approval. CDER, CBER, and sponsors have made considerable effort to work toward more scientifically innovative and robust methods for establishing the needed safety database, but significant areas for improvement still exist and are in need of investigation.
Scientific advances achieved over the past decade provide a number of opportunities to dramatically increase the predictivity of nonclinical studies for human outcome, and to provide "bridging biomarkers" that can couple nonclinical findings directly to clinical observations, thereby reducing the uncertainty of extrapolation of nonclinical study results to the human. Using these scientific advances to improve the drug development process holds the promise to reduce resource requirements for the development of optimal safety and efficacy data, to shorten the current development time dramatically, to reduce the current failure rate of developmental pharmaceuticals, and to improve health protection through the use of more efficient and lower-cost technologies. Achieving these goals is a common interest of the FDA, pharmaceutical companies, and the public.
Decisions at all stages of the discovery and development process influence the selection and eventual marketing of products. Thus, each of the foregoing groups has an interest in the entire discovery and development process. However, the FDA traditionally has not become involved in the process directly until phase 1 clinical development. Thus, although it is useful to separate the discussion of opportunities into the two traditional major segments of the development process, 1) discovery and 2) development, significant opportunities for improvement may arise at the transitional interface of these two segments. It should be noted that technologies applied to each phase often have common scientific elements that are applicable to the other stages of development. Further, studies conducted at the interface between these two process stages are often pivotal in enabling clinical testing of a product and in the strategic selection of long-term development candidates.
The major development cost occurs during the later phases of clinical testing, the stage of development in which 80% of lead products entering clinical testing fail. It is clear, therefore, that more effective selection prior to clinical development and earlier identification of problems in the clinical phase have the potential to greatly lower the overall cost and increase the efficiency of the development process. Thus, a major focus should be on improved predictivity of nonclinical studies, as well as more effective integration of nonclinical data with clinical measurements so as to identify problems with safety, efficacy, or bioavailability as early as possible in the clinical phase.
During the past decade, major advances have been made in the discovery process. These include: 1) development of robotized high-throughput screening techniques for receptor-ligand interactions, 2) development of combinatorial chemistry techniques that allow the efficient synthesis and de-convolution of molecular variants, 3) computer-assisted molecular design, and 4) an improved understanding of the genetic basis of disease processes that has led to dramatic improvements in rational targeting and the incorporation of genomic technologies into drug design and discovery. The combination of these techniques has resulted in a great improvement in the efficiency of identifying compounds with high-affinity receptor binding and potential efficacy. Indeed, these new technologies have created a basic change in the strategy of drug discovery: from a heavy reliance on the screening of natural products to the screening of synthetic combinatorial libraries and molecular design of therapeutics targeted to known receptors, as well as direct gene therapy involving replacement of genetic components defective in diseased individuals. However, to be effective as a drug, favorable bioavailability characteristics and low toxicity at therapeutic doses (i.e., a high therapeutic index) are required in addition to efficient receptor binding and in vivo targeting. Evaluation of safety and bioavailability involves relatively expensive and time-consuming studies, and has become an even more severe bottleneck in the development process because high-throughput technologies for the early identification of low toxicity and high bioavailability have not been developed in parallel with the effective screening procedures for efficacy and receptor binding.
Computational toxicology, incorporating advances in computer technology, toxicology databases, and the application of quantitative structure activity relationship (QSAR) software offers a rapid and cost-effective means for screening large numbers of compounds to eliminate those with a potentially unfavorable toxicity profile. Development of mechanisms to use the existing databases upon which new drug applications are based, without compromising propriety interests, has the potential to greatly improve existing predictive software.
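As a hedged illustration of this kind of pre-screen, the workflow can be sketched as a simple rule-based filter. Real QSAR software applies trained statistical models over molecular descriptors; the alert list, compound names, and substructure labels below are invented for illustration only.

```python
# Toy sketch of a structure-based toxicity pre-screen (NOT a real QSAR
# model): flag compounds whose hypothetical substructure features match
# a list of toxicity alerts, and keep only the compounds with no alerts.

TOXICITY_ALERTS = {"nitroso", "aromatic_amine", "epoxide"}  # illustrative alerts

def passes_prescreen(substructures):
    """Return True if none of the compound's substructure features
    match a known toxicity alert."""
    return TOXICITY_ALERTS.isdisjoint(substructures)

# Hypothetical compound library: name -> set of substructure features
library = {
    "cmpd-A": {"amide", "phenol"},
    "cmpd-B": {"nitroso", "amide"},   # carries a toxicity alert
    "cmpd-C": {"epoxide"},            # carries a toxicity alert
}

survivors = [name for name, subs in library.items() if passes_prescreen(subs)]
print(survivors)  # only cmpd-A survives the alert filter
```

The point of the sketch is the economics, not the chemistry: a cheap in silico pass eliminates clearly unfavorable candidates before any expensive laboratory work is committed.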
Another major paradigm shift in drug development is the introduction of biotechnology-derived therapeutics. This includes gene replacement therapies, protein and peptide therapeutics derived from genetically engineered biofactories, antisense therapeutics, novel vaccines such as recombinant DNA plasmids that express antigens in tissue, and the increased utilization of cellular signaling molecules as therapeutic agents. The advent of biotechnology has created special needs in pre-clinical testing, because many of the conventional approaches are not appropriate models for the evaluation of these new classes of therapeutics. For example, the cellular targets toward which these newer human therapeutics are directed may be different or absent in laboratory animal models used for pre-clinical evaluation, making it difficult to fully evaluate potential side effects and toxicities.
A major need in the discovery phase of drug development is the development of high-throughput technologies whereby prediction of relative efficacy and toxicity (or pseudo-therapeutic index) and bioavailability can match the current methods for receptor- and target-binding capability. By incorporating cellular biomarkers of toxic damage into the high-throughput screening phase of discovery, for example, promising agents could be selected on the basis of a pseudo-therapeutic index rather than simply on the basis of high-affinity binding. This approach would be expected to significantly increase the efficiency of selecting the most promising candidates from among the many thousands, hundreds of thousands, or even millions of candidates that are evaluated during high-throughput screening.
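The selection logic described above can be sketched as ranking screening hits by the ratio of the concentration that triggers a toxicity biomarker to the concentration that produces the efficacy response. This is a minimal sketch under assumed inputs: the compound names, EC50/TC50 values, and the simple ratio are all hypothetical illustrations, not a validated screening protocol.

```python
# Hypothetical sketch: rank high-throughput screening hits by a
# pseudo-therapeutic index (toxicity threshold / efficacy threshold).
# All compound names and concentration values are invented.

def pseudo_therapeutic_index(ec50_um, tc50_um):
    """Ratio of the concentration producing a cellular-damage biomarker
    response (TC50) to the concentration producing the efficacy
    response (EC50); larger values suggest a wider safety margin."""
    return tc50_um / ec50_um

# (compound, EC50 in uM from binding assay, TC50 in uM from damage-biomarker readout)
candidates = [
    ("cmpd-001", 0.05, 1.0),    # potent, but narrow margin (index 20)
    ("cmpd-002", 0.20, 50.0),   # less potent, wide margin (index 250)
    ("cmpd-003", 0.01, 0.1),    # very potent, very narrow margin (index 10)
]

# Rank by index, widest margin first, rather than by potency alone
ranked = sorted(candidates,
                key=lambda c: pseudo_therapeutic_index(c[1], c[2]),
                reverse=True)
for name, ec50, tc50 in ranked:
    print(name, pseudo_therapeutic_index(ec50, tc50))
```

Note how the ranking differs from a potency-only ranking: the most potent binder (cmpd-003) falls to the bottom once its toxicity margin is taken into account, which is exactly the early de-selection the text argues for.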
The increased knowledge about cellular responses to general classes of toxic damage gained in the last decade allows this approach to be generalized. Biomarkers of damage class, such as structural alterations of proteins (induction of molecular chaperones), DNA damage (induction of repair- and cell replication-response genes), and intracellular reactive free radical generation (induction of free radical defense genes), are examples of generalized biomarkers that could be built into a high-throughput format and used in this way during discovery (MacGregor, et al., Fund. Appl. Toxicol. 26, 156-173, 1995). Such an approach has the added advantage that, in addition to identifying toxic damage and allowing the ratio of potentially toxic to efficacious dose to be determined during the high-throughput screening phase, mechanistic data are obtained about the spectrum of damage induced by various candidates within the class being screened. These mechanistic data would permit very early choices that could minimize safety problems later in the development phase, problems that often lead to clinical delays and additional testing. Likewise, high-throughput screening to predict bioavailability, using techniques such as cellular or artificial lipid membrane barriers to penetration in combination with sensor chips or other high-throughput technology, has the potential to assess bioavailability and absorption characteristics very early in the screening process.
FDA, industry, and the public would all benefit from the development and evaluation of such technology. It should be noted that the general approaches and biomarkers that can be built into a high-throughput screening program are in many cases the same as, or analogous to, those that would be effectively used during the development phases that are subject to regulatory evaluation. However, the specific technologies employed may in some cases be built into assay systems that are most useful in one phase of development or the other. This is important from the point of view of the interest and responsibility of the partner members of collaborative undertakings, as FDA and other regulatory agencies' responsibilities begin in the early development phase that follows discovery. For example, induction of stress genes would be a useful biomarker that could be employed in a high-throughput mode in discovery, as a biomarker in nonclinical animal studies during early development, and as a biomarker for clinical monitoring during clinical evaluations. However, specific assays that might be most efficiently used in discovery, such as the construction of cell lines with convenient reporter genes linked to promoter elements, would not be useful in these latter models, because specific cell constructs employed in a high-throughput reporter mode would not be applicable in the same format to animal and human studies. The measurement of common damage-response elements using different reporter systems throughout the discovery, development, and post-marketing periods would allow the mechanistic and dose-response information obtained at each stage to be related to findings in the subsequent stages, providing a major enhancement in our ability to assess risk based on data from the various laboratory and non-laboratory models.
The Discovery/Development Interface
Late in the discovery stage, preclinical studies are conducted to enable the conduct of early clinical trials. These studies are used in the final selection of candidate drugs to enter clinical study, provide the preclinical information upon which the starting dose for clinical trials is based, and provide the initial guidance for clinical monitoring of potential toxicities. The dependent early clinical development studies provide important human data used in the final selection of lead drug candidates and drive the resultant extensive investment in long-term toxicology testing and clinical effectiveness trials. Traditionally, FDA becomes involved only after the enabling toxicology studies have been completed, with the submission of the Investigational New Drug (IND) application. The enabling data are currently generated in large part through the use of animal models that, relative to discovery costs, are time-consuming and expensive. Thus, although large numbers of compounds can be screened for potential efficacy, few (perhaps two or three) are generally brought into the early preclinical safety studies prior to choosing the compound for which an IND is submitted. These enabling studies are therefore often a "bottleneck" in the development process. Clearly, processes and approaches that could improve enabling study design and predictivity for human effects, and those that could generate more useful human data early in the process, could greatly improve the overall efficiency and safety of drug development.
Following the discovery stage, the FDA traditionally becomes involved and remains involved throughout the development and post-marketing process. During these later stages, nonclinical safety studies play two major roles: 1) the development of mechanistic information, such as target organ specificities and likely mechanisms of toxicities expected to be observed should toxic doses be approached in the human, and 2) the provision of a basis for protection against delayed and low-incidence, but severe, health effects that are difficult to monitor and evaluate in clinical studies (e.g., cancer, adverse reproductive outcome, heritable genetic damage, and certain neurological deficits).
The increased use of macromolecular therapeutics such as recombinant proteins has also created the problem that animal models may develop neutralizing antibodies against the therapeutic (which are unlikely to occur in the human), frequently making the conduct of longer-term nonclinical studies with these types of agents meaningless. Thus, nonclinical approaches for many agents in this class of therapeutics require a more thoughtful, research-based design than the usual reliance on conventional standard rodent and non-rodent nonclinical safety studies of various durations. This requires the definition of new testing paradigms appropriate to the various new classes of agents that have arisen through the new understanding and use of biotechnology.
As is the case in the discovery phase, the advances in our knowledge of cellular response to damage and mechanisms of toxicity provide a major opportunity to improve the value and efficiency of nonclinical studies regardless of whether they are used in the enabling or later development stages. Mechanistic biomarkers of tissue pathology and cellular damage hold the promise of improving the basic safety testing paradigm used in nonclinical practice, including the incorporation of better biomarkers for identifying toxic effects in nonclinical studies and the incorporation of selected subsets of the same markers into clinical studies to allow a greatly improved quantitative extrapolation of the meaning of mechanistic data obtained in nonclinical studies to quantitative risk in human populations.
A major problem with nonclinical studies is that quantitative extrapolation of adverse health effects observed in animals to the human is still uncertain. Many factors, including metabolic differences, differences in binding affinity for cellular targets, and kinetic differences between animal models and the human, contribute to this uncertainty. New biomarkers, such as those described above, have the potential to greatly improve the product development paradigm. In addition to the improvement in health protection afforded by the ability to link laboratory findings directly to human risk assessment, the ability to identify specific problems earlier in development should have a major impact on the efficiency of product development. An integrated scheme of biomarkers that allows the very early identification of compounds with the potential to induce problems, coupled with improved biomarkers for early identification of cellular and tissue toxicity and the ability to directly link findings from nonclinical and clinical studies, should allow a very significant improvement in the ultimate success rate of agents entering clinical trials (the phase that is now responsible for the greatest cost and time delay in the development process). Examples of such potential biomarkers include chemokine signals produced by tissues undergoing pathologic damage, general damage-response elements such as defensive gene induction in response to generalized classes of damage, better markers of conventional pathology (such as biochemical methods for identifying apoptotic and necrotic cells), and markers of cell proliferation and cellular infiltration into tissues. In addition, specific biomarkers for important classes of damage, such as mutational events known to be associated with oncogene induction and tumor development, provide the opportunity to consider the use of "intermediate biomarkers" as early identifiers for more time-consuming disease models.
When biomarkers and other responses of the cell/tissue/organism are compared across species, there is an underlying assumption of similarity between the exposure patterns for animals in safety studies and the projected human use of the drug. To improve the cross-species relationships, attention must be focused upon the relative concentrations of parent drug and biologically active metabolites. Having such information aids in determining whether the pharmacologic properties of certain metabolites should be explored further. Substantial progress has been made in this area. For example, the possibility of unfavorable metabolic profiles is no longer first explored in early clinical stages, but prior to human testing and, in some cases, prior to animal testing. It is neither practical nor desirable to conduct all metabolic/interaction studies in vivo. Studies in vitro, which are inexpensive and readily carried out, generally serve as an adequate screening mechanism that can rule out the importance of a metabolic pathway, making in vivo testing unnecessary, and facilitate the interpretation of cross-species results. Even more efficient screens for metabolism, i.e., less labor- and time-intensive ones, will be required as the impact of high-throughput screening pushes more candidates toward the clinic. Knowledge of metabolic pathways helps to identify the implications and importance of certain drug-drug interactions. Even if a new drug is not metabolized itself, it may substantially alter the metabolism of other drugs.
Induction of drug metabolism remains the area in which our in vitro tools and technology are the weakest. Although it is encouraging to see the number of groups that are standardizing their approaches and the kinds of results that are emerging, studies in vivo remain our primary source of information.
Another area in which scientific advances have created new opportunities for improved linkage of nonclinical and clinical studies is that of noninvasive and minimally invasive technologies. These technologies provide a means to link nonclinical and clinical studies by allowing common endpoints to be studied in the laboratory and in clinical trials.
Two general areas of recommended focus have been identified, based on potential favorable impact on the drug development process and perceived probability of success. The rationale for selection of these focus areas is given below. In each case, the recommended approach is to form an expert working group (EWG) to develop a specific project implementation plan.
1. New biomarkers for improved predictivity of nonclinical studies and an interface between nonclinical and clinical studies
Historically, laboratory toxicology studies have been based on the assumption that biochemical similarities among different species make possible the use of laboratory animals and cellular models for toxicity testing and the prediction of human risk. Unfortunately, many factors make it difficult to extrapolate from laboratory data to human risk, including differences between cells and organisms in uptake and distribution, metabolism, affinity of toxicants for cellular targets, cellular levels of key metabolic factors, cell-cell and receptor interactions, and other factors. In addition, the traditional principal endpoint in classical toxicology, histopathology, is laborious and insensitive and does not reflect certain key biological endpoints, such as mutations, stable chromosomal aberrations, and aneuploidy. Indeed, these latter have until recently been difficult or impossible to measure in vivo.
Recently, new knowledge and technologies have become available that will provide information about molecular damage underlying in vivo and in vitro toxic effects and improve the reliability and efficiency of laboratory toxicology studies. Among these emerging technologies are:
Among the array of cellular responses to toxic stimuli, the activation or induction of specific genes provides a means of characterizing the nature of the toxic insult. Genes that respond in a characteristic manner to toxic damage have been termed stress genes. Many of these gene products play a role in countering effects of a given toxicant, either by detoxifying it, by transporting it out of the cell, by repairing the damage it causes to cell components, or by intercepting toxic intermediates. Examples of cellular damage that induce such stress genes are lipid oxidation, DNA damage, osmotic imbalance, protein misfolding, disruption of electron transport, and membrane permeabilization. A large number of damage-inducible genes have now been isolated and characterized. By using simple techniques to measure the induced response of damage-inducible genes, the presence of classes of toxicants or types of toxic damage caused by cellular exposure to particular compounds can be monitored and characterized very efficiently.
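The monitoring logic described above can be sketched as mapping induced stress-gene reporters to the class of damage they indicate. This is a simplified illustration: the gene symbols below are real stress-response genes, but the one-to-one gene-to-class mapping and the 2-fold induction threshold are assumptions chosen for clarity, not an established assay design.

```python
# Illustrative sketch: infer the class of toxic damage from which
# stress-gene reporters exceed an induction threshold. The mapping
# and threshold are simplifying assumptions for illustration.

DAMAGE_CLASS_BY_GENE = {
    "HSP70": "protein misfolding",    # molecular chaperone induction
    "GADD45": "DNA damage",           # DNA damage/repair response
    "HMOX1": "oxidative stress",      # free-radical defense gene
    "MT2A": "heavy-metal exposure",   # metallothionein induction
}

def classify_damage(fold_induction, threshold=2.0):
    """Return the set of damage classes whose reporter genes show
    fold-induction at or above the threshold."""
    return {DAMAGE_CLASS_BY_GENE[gene]
            for gene, fold in fold_induction.items()
            if gene in DAMAGE_CLASS_BY_GENE and fold >= threshold}

# Hypothetical reporter readout for one test compound
readout = {"HSP70": 1.2, "GADD45": 4.8, "HMOX1": 3.1, "MT2A": 0.9}
print(classify_damage(readout))  # DNA damage and oxidative stress flagged
```

Because each reporter indicates a damage class rather than a specific lesion, the same readout provides both a screening verdict (any class flagged) and mechanistic information (which classes), which is the dual benefit the text describes.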
An expert working group should be assembled to assess the potential for new classes of biomarkers to be integrated into the development process, with emphasis on those with the potential to be used in both clinical trials and laboratory models.
Development of efficient high-throughput methodologies based on families of response elements sensitive to important classes of toxic damage has the potential to break the "bottleneck" that currently exists in predicting the toxicity expected from the large number of potential development candidates generated by high-throughput efficacy screening. Development of class-specific biomarkers that can be measured both in laboratory models and in humans during early clinical trials has the potential to provide a "bridge" between mechanistic laboratory findings and human relevance, allowing clinical problems to be identified much earlier in development and facilitating solutions through mechanistic studies that link human and laboratory findings.
2. Noninvasive Technologies
Noninvasive technologies provide a means to link nonclinical and clinical studies by allowing common endpoints to be studied in the laboratory and in clinical trials. Two approaches appear particularly promising: PET imaging and high-resolution magnetic resonance imaging. In particular, PET imaging should provide the capability to efficiently monitor molecular biomarkers in vivo. For example, monitoring of cell surface markers associated with cell death and proliferation should, in principle, be possible, and initial demonstrations of feasibility have already been accomplished.
Improvements in magnetic imaging technology, using small-bore Magnetic Resonance Imaging instruments and high-field-strength magnetic gradients, have made magnetic resonance microscopy (MRM) feasible. This technique shows great promise in the ex vivo analysis of tissue samples from animals used in toxicity studies, in addition to the obvious advantage of being able to monitor tissue damage noninvasively in both humans and animals in vivo. It has many significant advantages relative to standard histopathology: the tissue need not be sectioned or stained, the data collected are intrinsically three-dimensional, the images can be viewed in any plane, and the approach is applicable to subsequent clinical studies. The ability to detect toxicologic pathologies has been demonstrated in the case of neurotoxic lesions induced by excitotoxins in rat brain, where lesions were detected with this technique that were not observed using classical histology.
As these technologies are new, it will be important to determine the limits of resolution and sensitivity of the methods for various classes of lesions and damage responses. This could ultimately lead to the establishment of standardized approaches for using MRM and PET as screens for toxicity in preclinical ex vivo and in vivo specimens, with the opportunity to extend appropriate measurements into human clinical trials.
An expert working group should be assembled to assess the potential for PET and MR imaging to be integrated into the development process, with emphasis on the potential to monitor cell and tissue damage, or damage response, in both clinical trials and laboratory animal models.
The ability to monitor cell and tissue damage noninvasively would provide a much-needed bridge between laboratory animal and human studies. It would extend the information obtainable from conventional laboratory animal studies, allow repeated measurements of developing pathologies in individual animals, and allow direct confirmation in the human of the relevance of mechanistic information about sub-pathological damage obtained in animal models.
History and Current Status
8/31/99 Subcommittee (NCSS) organizational meeting: defined objectives, operating procedures & broad focus areas
9/24/99 ACPS endorsed concept and provided mandate to proceed
12/14/99 NCSS identified two broad initial focus areas: 1) biomarkers of toxicity, and 2) noninvasive technologies to link nonclinical and clinical studies
3/9/00 NCSS identified three specific focus areas for initial Expert Working Groups (EWGs): 1) biomarkers of cardiac toxicity, 2) biomarkers of vasculitis, 3) PET imaging in nonclinical studies [FDA regulatory staff subsequently requested that the PET group be deferred to a later date]
7/26/00 Federal Register notice of request for nominations for two expert working groups (biomarkers of cardiotoxicity and biomarkers of vasculitis) published, with a closing date of 9/29/00. Requests for nominations for these working groups were subsequently sent to scientific societies specializing in toxicology, cardiology, and immunology, and to participants from FDA, PhRMA, BIO, NIH, and academia. A public docket was opened to receive nominations and supporting materials.
1/01 EWG members selected
5/3-4/01 Expert Working Groups on Biomarkers of Cardiotoxicity and Vasculitis meet
Current NCSS Membership
(Brenda Gomez is Exec. Secretary)