FOOD AND DRUG ADMINISTRATION
ADVISORY COMMITTEE FOR PHARMACEUTICAL SCIENCE
Thursday, July 19, 2001
CDER Advisory Committee Conference Room
5630 Fishers Lane
Food and Drug Administration
Rockville, Maryland 20857
STEPHEN R. BYRN, PH.D.
Charles B. Jordan Professor
Head, Department of Industrial & Physical Pharmacy
1336 Robert E. Heine Pharmacy Building
West Lafayette, Indiana 47907
NANCY CHAMBERLIN, PHARM.D., Executive Secretary
Advisors and Consultants Staff
Center for Drug Evaluation and Research
Food and Drug Administration (HFD-21)
5600 Fishers Lane
Rockville, Maryland 20857
GLORIA L. ANDERSON, PH.D., Consumer Representative
Fuller F. Callaway Professor of Chemistry
Morris Brown College
643 Martin Luther King Jr. Drive, N.W.
Atlanta, Georgia 30314-4140
JOSEPH BLOOM, PH.D.
University of Puerto Rico
School of Pharmacy
4th Floor, Office 416
P.O. Box 365067
San Juan, Puerto Rico 00935-5067
JUDY BOEHLERT, PH.D.
PRESIDENT, Boehlert Associates, Inc.
102 Oak Avenue
Park Ridge, New Jersey 07656-1325
JOHN DOULL, M.D., PH.D.
Professor Emeritus of Pharmacology and
Toxicology and Therapeutics
University of Kansas Medical Center
3901 Rainbow Boulevard
Kansas City, Kansas 66160-7471
WILLIAM J. JUSKO, PH.D.
Professor of Pharmaceutics
Department of Pharmaceutics
School of Pharmacy
State University of New York at Buffalo
Buffalo, New York 14260
COMMITTEE MEMBERS: (Continued)
VINCENT H.L. LEE, PH.D.
Department of Pharmaceutical Sciences
School of Pharmacy
University of Southern California
1985 Zonal Avenue
Los Angeles, California 90033
NAIR RODRIGUEZ-HORNEDO, PH.D.
Associate Professor of Pharmaceutical Sciences
College of Pharmacy
The University of Michigan
Ann Arbor, Michigan 48109
JURGEN VENITZ, M.D., PH.D.
Department of Pharmaceutics
School of Pharmacy
Medical College of Virginia Campus
Virginia Commonwealth University
Box 980533, MCV Station
Room 450B, R.B. Smith Building
410 North 12th Street
Richmond, Virginia 23298-0533
WILLIAM H. BARR, PHARM.D., PH.D.
Executive Director, Center for Drug Studies
Medical College of Virginia
MCV West Hospital
1200 East Broad Street
Virginia Commonwealth University
Richmond, Virginia 23298
(ROBERT) GARY HOLLENBECK, PH.D.
Associate Professor of Pharmaceutical Science
University of Maryland School of Pharmacy
20 North Pine Street
Baltimore, Maryland 21201
GUEST PARTICIPANTS: (Continued)
WILLIAM KERNS, D.V.M., M.S., A.C.V.P.
Pharma Consulting, Inc.
P.O. Box 322
112 Bolton Road
Harvard, Massachusetts 01451
LEON LACHMAN, PH.D.
Lachman Consultant Services, Inc.
1600 Stewart Avenue
Westbury, New York 11590
MARVIN C. MEYER, PH.D.
Professor, Chair and Associate Dean
for Research and Graduate Programs
Department of Pharmaceutical Sciences
College of Pharmacy, Health Science Center
University of Tennessee
847 Union Avenue, Room 5
Memphis, Tennessee 38163
JEANNE MOLDENHAUER, PH.D.
Vectech Pharmaceutical Consultants, Inc.
24543 Indoplex Circle
Farmington Hills, Michigan 48335
KENNETH H. MUHVICH, M.S., PH.D.
Senior VP Regulatory Compliance
QSA Consulting Services
The Validation Group, Inc.
1818 Circle Road
Ruxton, Maryland 21204
G.K. RAJU, PH.D.
MIT Pharmaceutical Manufacturing Initiative (PHARMI)
MIT Program on the Pharmaceutical Industry
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, Massachusetts 02139
INDUSTRY GUEST PARTICIPANT:
LEON SHARGEL, PH.D., R.PH.
Vice President, Biopharmaceutics
Eon Labs Manufacturing, Inc.
227-15 North Conduit Avenue
Laurelton, New York 11413
INDUSTRY GUEST SPEAKER:
GORDON HOLT, PH.D.
4 Sparrow Valley Court
Montgomery Village, Maryland 20886-1265
WALLACE ADAMS, PH.D.
Office of Pharmaceutical Science
YUAN-YUAN CHIU, PH.D.
Office of New Drug Chemistry
BADRUL CHOWDHURY, M.D.
Medical Officer Team Leader
Division of Pulmonary Allergy Drug Products
ERIC P. DUFFY, PH.D.
Office of Pharmaceutical Science, DNDCII
AJAZ S. HUSSAIN, PH.D.
Division of Pulmonary Drug Products
CAPT. DAVID HUSSONG, PH.D.
Director Regulatory Scientist Officer
Office of Pharmaceutical Science/Microbiology
ROBERT J. MEYER, M.D.
Division of Pulmonary Allergy Drug Products
FDA PARTICIPANTS: (Continued)
BRYAN S. RILEY, PH.D.
Office of Pharmaceutical Science
VILAYAT A. SAYEED, PH.D.
Office of Pharmaceutical Science, DCII
HELEN N. WINKLE
Office of Pharmaceutical Science
JAMES D. BLANCHARD, PH.D.
3929 Point Eden Way
Hayward, California 94545
CAROLE EVANS, PH.D.
P.O. Box 13341
Research Triangle Park, North Carolina 27709
DAVID RADSPINNER, PH.D.
Analytical Technology/Dry Powder Technology
Cheshire CW4 88E, UK
JOEL SEQUEIRA, PH.D.
Senior Associate Director
Schering-Plough Research Institute
2000 Galloping Hill Road
Kenilworth, New Jersey 07033
C O N T E N T S
AGENDA ITEM PAGE
CONFLICT OF INTEREST STATEMENT
by Dr. Nancy Chamberlin 10
INTRODUCTION TO THE MEETING
by Ms. Helen Winkle 15
REPORT FROM THE ORALLY INHALED AND NASAL DRUG PRODUCTS SUBCOMMITTEE:
Introduction to the Issues -
by Dr. Vincent Lee 24
Difficulties with Showing a Dose Response
with Locally Acting Nasal Sprays and
Aerosols for Allergic Rhinitis -
by Dr. Badrul Chowdhury 27
Clinical Study Options for Locally Acting
Nasal Suspension Products: Clinical
Studies and Pharmacodynamic Studies -
by Dr. Robert Meyer 46
Recommendations of the OINDP Subcommittee -
by Dr. Wallace Adams 61
Committee Discussion 72
REPORT FROM THE NONCLINICAL STUDIES SUBCOMMITTEE:
Introduction to the Issues -
by Dr. John Doull 92
Working Group Progress -
by Dr. William Kerns 95
by Dr. Gordon Holt 107
Future of Subcommittee -
by Ms. Helen Winkle 118
C O N T E N T S (Continued)
AGENDA ITEM PAGE
CHEMISTRY, MANUFACTURING, AND CONTROLS:
Introduction and Overview of Proposal -
by Dr. Yuan-Yuan Chiu 125
Results from AAPS Workshop -
Drug Substance - by Dr. Eric Duffy 127
Drug Product - by Dr. Vilayat Sayeed 142
Microbiology - by Dr. David Hussong 152
GMP - by Dr. Eric Duffy 155
Proposed Next Steps
by Dr. Yuan-Yuan Chiu 161
Committee Discussion 164
OPEN PUBLIC HEARING PRESENTATIONS:
by Dr. David Radspinner 174
by Dr. Carole Evans 178
by Dr. James Blanchard 181
by Dr. Joel Sequeira 187
OPTIMAL APPLICATIONS OF AT-LINE PROCESS
CONTROLS ON PHARMACEUTICAL PRODUCTION:
Introduction and Overview -
by Dr. Ajaz Hussain 198
Case Study -
by Dr. G.K. Raju 217
Committee Discussion 243
C O N T E N T S (Continued)
AGENDA ITEM PAGE
Introduction to the Issues -
by Dr. David Hussong 252
Overview of Technology -
by Dr. Bryan Riley 255
Validation Issues -
by Dr. Kenneth Muhvich 260
Industry Perspective -
by Dr. Jeanne Moldenhauer 265
Committee Discussion 273
P R O C E E D I N G S
DR. BYRN: Good morning, everyone. I'd like to welcome you to the Advisory Committee for Pharmaceutical Science meeting on July 19th.
First I'd like to ask Nancy Chamberlin to read the conflict of interest statement.
MS. CHAMBERLIN: Good morning.
The following announcement addresses conflict of interest with regard to this meeting and is made part of the record to preclude even the appearance of such at this meeting.
Since the issues to be discussed by the committee at this meeting will not have a unique impact on any particular firm or product, but rather may have widespread implications with respect to entire classes of products, in accordance with 18 U.S.C. 208(b), all required committee participants have been granted a general matters waiver which permits them to participate in today's discussions.
A copy of these waiver statements may be obtained by submitting a written request to the agency's Freedom of Information Office, room 12A-30 Parklawn Building.
With respect to FDA's invited guests, Dr. Robert G. Hollenbeck, Dr. Jeanne Moldenhauer, Dr. G.K. Raju, Dr. William Kerns, Dr. Gordon Holt, Dr. Leon Shargel, Dr. Roger Dabbah, and Dr. Leon Lachman have reported interests which we believe should be made public to allow the participants to objectively evaluate their comments.
Dr. Hollenbeck would like to disclose ownership of stock in Aerogen, Inc. and University Pharmaceuticals of Maryland, Inc. He is also Vice President and serves as a scientific advisor to University Pharmaceuticals of Maryland, which is a contract research and clinical studies manufacturer. Additionally, he consults with various companies in the pharmaceutical industry.
Dr. Moldenhauer would like to disclose that she is employed by Vectech Pharmaceutical Consultants. Currently, she has a paper being prepared for publication with David Jones regarding feasibility of Scan RDI Technology for biological indicators, based upon original research performed at Jordan Pharmaceuticals. She also is editing a book on lab validations which includes some chapters on rapid microbiology methods. However, she has no financial interest in the chapters of the book. Additionally, Dr. Moldenhauer receives honoraria from Parenteral Drug Association for teaching a college course on aseptic processing.
Dr. Raju would like to disclose that some of his past research has been funded by Purdue University as part of a project funded by the Camp Consortium, a non-profit consortium of Pharma Companies. Currently, he is serving as the principal investigator on a project funded by the Camp Consortium. He consults for a number of other pharmaceutical companies. Additionally, he has other fiduciary relationships with Light Pharma, a consulting company.
Dr. Kerns would like to disclose that he is a scientific advisor to Canfite Biopharma, Elsai Co., Ltd., Biocentra, and Omniviral Therapeutics.
Dr. Holt would like to disclose his employment with Oxford GlycoSciences, a toxicology biomarker company.
Dr. Shargel would like to disclose that he is employed by Eon Labs Manufacturing Company.
Dr. Dabbah would like to disclose that he is employed by U.S. Pharmacopeia.
We would also like to disclose that Dr. Leon Lachman is President of Lachman Consultant Services, Inc., a firm that performs consulting services to the pharmaceutical and allied industries.
In the event that the discussions involve any other products or firms not already on the agenda for which an FDA participant has a financial interest, the participants are aware of the need to exclude themselves from such involvement and their exclusion will be noted for the record.
With respect to all other participants, we ask in the interest of fairness that they address any current or previous financial involvement with any firm whose products they may wish to comment upon.
DR. BYRN: Thank you very much.
Now let's go around and introduce some members that are seated at the panel, and also we'll test the microphones, and we can start over here to the left. What you do is press the talk button. It should light, and then you can introduce yourself.
DR. KERNS: Good morning. My name is Bill Kerns. I'm representing the expert working group on vasculitis in a presentation later this morning.
DR. HOLT: I'm Gordon Holt. I represent the cardiotoxicity expert working group this morning.
DR. SHARGEL: I'm Leon Shargel. I'm Vice President, Eon Laboratories, a generic pharmaceutical manufacturer.
DR. MEYER: I'm Marvin Meyer. Two weeks ago I retired from the University of Tennessee. At that time I was chair, professor, and associate dean for research in the College of Pharmacy.
DR. LEE: I'm Vincent Lee, professor and chair at the University of Southern California.
DR. BLOOM: Joseph Bloom, from the University of Puerto Rico.
DR. BOEHLERT: Judy Boehlert, and I have my own pharmaceutical consulting business.
DR. JUSKO: William Jusko, professor at the University of Buffalo.
DR. RODRIGUEZ-HORNEDO: Nair Rodriguez, associate professor of pharmaceutical sciences, University of Michigan.
MS. CHAMBERLIN: Nancy Chamberlin, Exec. Sec.
DR. BYRN: Steve Byrn, chair and professor at Purdue and chair of the committee.
DR. ANDERSON: Gloria Anderson, Callaway Professor of Chemistry and chair at Morris Brown College in Atlanta.
DR. VENITZ: Jurgen Venitz, associate professor, Department of Pharmaceutics, Virginia Commonwealth University.
DR. DOULL: John Doull, clinical toxicologist, University of Kansas Medical Center.
DR. BARR: William Barr. I'm professor and director of the Center for Drug Studies at Medical College of Virginia, Virginia Commonwealth University.
DR. CHOWDHURY: I'm Badrul Chowdhury. I am a medical team leader, the Center for Drugs, U.S. Food and Drug Administration, Division of Pulmonary and Allergy Drugs.
DR. ADAMS: Good morning. I'm Wallace Adams, Office of Pharmaceutical Science in CDER, and involved with the nasal BA/BE guidance that we'll be discussing this morning.
DR. BYRN: I'd like to introduce Helen Winkle, who will give an introduction to the meeting. Helen is acting director of the Office of Pharmaceutical Science.
MS. WINKLE: Well, first of all, I want to say good morning to the committee. They spent a long day with us yesterday. We went through some different training sessions on what we actually do in the Office of Pharmaceutical Science. They already had a long day yesterday, so hopefully today we can really get into the science and talk more about the things that they probably have a real interest in. So, I want to, first of all, welcome them. I appreciate their time and effort in participating with us on this advisory committee.
I also want to welcome the prospective new members. We're still processing the paperwork for these members, but this is Dr. Meyer, who's already introduced himself. And also Art Kibbe, who will be joining us at the next advisory committee meeting.
Also for the first time at this advisory committee meeting we have some industry members on the advisory committee. They will not be voting members, but they will represent the industry in discussions that we have. Dr. Shargel has joined us for this purpose, and also Dr. Shek will join us in the future. Dr. Shargel represents the generic side of the industry, and Dr. Shek represents the innovator side.
I also want to welcome our distinguished guests who are here to participate with us today in discussions on various issues that we're bringing before the committee, as well as the audience.
Before I start, I want to go quickly through the agenda so everyone will understand what we're going to talk about today, the issues we're going to address. But before I do that, I sort of want to make three points about the importance of this advisory committee. We talked yesterday at the training session some about this importance, but I want to emphasize that today again.
Basically the three points I want to bring out is the importance of this committee in enhancing the science base in CDER. Secondly, I want to talk a little bit about how this advisory committee fits into developing our standards and regulations and guidances, and also the fact that this advisory committee is very important to us in CDER and in the Office of Pharmaceutical Science in fostering communication.
First of all, enhancing the science base. We talked about this yesterday as well, but I want to make this point again. This is a very unique committee. Most of the advisory committees in FDA and in CDER especially are looking at product-specific areas. They will discuss products that are before us for approval and get into the various aspects of the process and science that affect those products.
However, this committee, again as I said, is unique. It really is dealing with all types of issues that affect us in the Office of Pharmaceutical Science, issues that cross the borders. I already mentioned we have generic and innovator representatives here. We also look at a variety of disciplines and support those disciplines through some of the recommendations that come from this committee. So, it's very unique. It's dealing with a variety of issues. These issues are very important to us in the Office of Pharmaceutical Science in making regulatory decisions. Without their scientific expertise, we really cannot make the decisions that are necessary to help us in developing standards and guidelines, et cetera.
The development of these standards and guidelines is important in how we do business in CDER. The standards are used externally by companies. They're very important to go out to companies and let them know how we expect business to be done in order to enhance the regulatory process. But they are also very important to us internally within FDA. We use these guidances and these policies to help our own reviewers in doing their day-to-day processing so that we can ensure appropriate and consistent decision-making. So, it's very important; the decisions we make here will have an effect, again, not only on industry but on us here as well.
Fostering communication. This is difficult, I think, from the committee's standpoint to understand this concept, but these are public meetings and the public is available to hear what we have to say and the types of issues that we're grappling with and the information and recommendations that we get from outside organizations, from outside scientists and get to hear their expertise and how that expertise helps influence what we do. So, it's very important to us at the FDA to have this process go on.
We're very appreciative of the people who give up their time to come in here. Each one of you has, I'm sure, other things that you're very busily involved with and we appreciate the time that you take to come in here and spend with us to help us in these really important aspects of the regulatory process.
As I said, I want to go through the agenda. I'll try to do that quickly, but I want to give you a feel for the next two days. I'm hoping that on all of these topics we can have a lively discussion. Some of the topics are more heads up than actual topics to obtain recommendations, but others are important issues where we really want the input of the members of the advisory committee.
You all have the agenda in front of you. The first two agenda items are focused on updates from the two subcommittees of the advisory committee. The first subcommittee, the Orally Inhaled and Nasal Drug Products Subcommittee actually met this week on Tuesday to address the issue of dose response for nasal sprays. This subcommittee is chaired by Dr. Lee, and the discussion on the dose response was generated by a need to address the issues on the current draft guidance on BA and BE. The subcommittee representatives, who have already introduced themselves, will provide you with background on the issues and will provide you also with the recommendations of the subcommittee.
It was a very interesting subcommittee meeting. I think that we feel that this is an issue now, as far as the guidance, that we can move forward with, after we are able to address these recommendations to you and get your concurrence on it.
The second subcommittee that will present today is the Nonclinical Studies Subcommittee, which is chaired by Dr. Doull. This subcommittee met in May, and at the subcommittee meeting the two working groups that are under this subcommittee met to determine future direction, and Dr. Holt and Dr. Kerns are here today to talk about the issues that were discussed at these two expert working groups, and to talk about the process of these working groups. Later on in that presentation I will also talk a little bit about the future of this subcommittee. We are looking at other alternatives on how we will handle this subcommittee in the future.
The next item on the agenda is an update to the committee on what we are doing on our initiative on risk-based CMC reviews. If you all remember, we had an introduction to this particular topic at the November advisory committee last year, and today we're going to talk a little bit about the outcome of our workshop, which was held on this topic in June, and to update you on the direction that we're going as far as this particular initiative is headed.
After lunch we'll have an open public hearing, and then we will discuss the topic of optimal application of at-line process controls for pharmaceutical products. This is a new initiative that we're undertaking, and Dr. Hussain will introduce the initiative to the advisory committee, along with a case study that will be presented by Dr. Raju.
As science and technology change and are further advanced, we in FDA need to be sure that we're on top of these changes and that we can explore the ways that these changes are going to affect our regulatory process. Today we're going to look for the committee's thoughts on this initiative and to explore with you any ideas that you may have on our best way to pursue this initiative in the future. This is an area that I can assure you that we'll be talking more about in the next few years with this advisory committee.
The next item on the agenda is a similar item. It's also a new topic. And it's to solicit the committee's input on establishing acceptance limits for microbiological tests that use newly developed technologies. I think this is one of the things we have to grapple with day to day in the FDA, the changing technologies. And as we look at those changes, we have to look at how we're going to change our regulatory processes, or what we need to do differently in our regulatory process to adapt to these changes.
Tomorrow the first thing on the agenda is to discuss a clinical pharmacology issue on drug transfer into breast milk and to get some input from you as to how best to interpret data that we will be gathering. We're in the process of developing a guidance on lactation studies and would like your recommendations on moving forward with that guidance.
The second item on the agenda, after the public hearing, is to discuss some of the issues and concerns we have as we move forward regulating liposome drug products. Obviously, this form of drug delivery is expanding, and we need to make sure that we are correctly addressing all the issues. This too is a technology that we need a lot of assistance on in deciding how we will move forward in the regulatory process.
Tomorrow we will mainly present to the committee where we are in regards to this area of regulation and what issues we've identified that we still need to address. We also had a workshop on this subject in the spring, and FDA has some issues that came out of that workshop that we would like to address with you.
One of the topics you may find that I didn't mention today is dermatopharmacokinetics. This is an issue that's come up several times in this committee. We at OPS are reconsidering the direction that we want to take with this methodology for determining bioequivalence for dermatological products, and we're really not ready at this time to discuss what direction we're going. However, I want to stress the fact that we are definitely interested in the importance of alternative methods for doing bioequivalence, and we're really trying to commit to eliminating or reducing the testing requirements for these products, where possible. We are looking at exploring new alternative methods besides just DPK. So, in November of this year, when the next advisory committee meets, we will bring that topic up again.
I know this is a full agenda. I look forward to the committee's discussion and input. I think all of these topics are very important to us in the Office of Pharmaceutical Science, and I know that your input will help us in setting our regulatory direction.
Before I end this morning, I do want to recognize that this is Dr. Byrn's last meeting as chairman of this committee. I want to publicly acknowledge how much we in CDER appreciate Dr. Byrn's support and dedication to this committee. He's worked very hard. We've worked with Dr. Byrn in various different settings. He's very dedicated to the whole idea of product quality research and product quality regulation, and we really appreciate all his help. So, I want to publicly announce that FDA really will miss you, Steve. We appreciate everything, and thank you.
So, unless there are any questions, I'm going to hand it back to Dr. Byrn. Thank you.
DR. BYRN: Thanks very much, Helen. I enjoyed my participation on this committee, even the site-specific stability work.
DR. BYRN: Some of you realize what was involved in that. It was very enjoyable.
We're going to move ahead with the subcommittee, and as Helen said, we have two reports of subcommittees. The first one is the Orally Inhaled and Nasal Drug Products Subcommittee, and Dr. Vince Lee will introduce the issues, and then we'll proceed. Thanks very much, Vince.
DR. LEE: Thank you, Steve. I didn't realize that this is your last meeting. It could be a long meeting.
In any event, I'm here to report to you to set the stage for the presentation to follow. We met about two days ago in this very room to talk about issues concerning the nasal aerosols and nasal sprays. Before I begin, I would like to say that I'm so impressed with how quickly the government got this documentation printed. This was done last evening. Thanks to e-mail, we had the visual material sent here, and then I was impressed to find that it got printed by the government Kinko for us.
DR. LEE: So, the meeting was very interactive. It was meant to last until 5:30 in the afternoon. I'm pleased to report that we finished our business before 4 o'clock.
The specific issue that we were asked to address was suspension formulations. Helen already talked about that we were there to talk about dose response for these formulations. And the main issue is to see whether or not we can use it as a way to determine the comparable in vivo performance for local delivery. Those of you following the guidance must be aware of the four points above it. You have it and Wally might reiterate it in his presentation.
So, by the time we come to this point, we already know the comparability in the actives and inactives, the device, the in vitro performance, and the in vivo performance in regards to systemic exposure.
The subcommittee was asked to address two questions, and I highlighted the main points we were asked to consider. The first point was whether or not the placebo-controlled traditional 2-week rhinitis study conducted at the lowest active level would be sufficient to confirm equivalent local delivery of the suspension formulation for allergic rhinitis. So, that was the first question.
The second question was similar, except that the test would be different. There we were asked to look at the placebo-controlled park study or the EEU study conducted at the lowest active level. In order to address these two questions, a special panel was constituted, and this is the subcommittee of 10 individuals -- actually 11. Dr. Shek was not able to join us. Dr. Leon Shargel attended. I was there, and Gloria Anderson was there. Both of us were in red because we were members of this committee.
The individuals in blue were really the experts. They were the practitioners in the clinic settings and they were useful in the discussion. Dr. Hauck many of you might know.
The individuals in green are the industrial representatives.
The individuals in purple -- and this, by the way, is the Lakers' color --
DR. LEE: -- were the representatives from the agency. In fact, Dr. Chowdhury and Dr. Meyer will be giving us the background leading to the recommendations of the subcommittee.
So, this is a very busy slide. It was the work of Wally Adams. He asked me to put it up there so that you can all read it to understand the issues. So, the point that we were asked to address is that the draft guidance recommends the conduct of a clinical study for allergic rhinitis to confirm equivalent local delivery, and I would like to emphasize that point.
So, this is the background for the report this morning, and understand that we have Dr. Meyer and Dr. Chowdhury to teach us so that we can all understand the recommendations of the subcommittee. Wally is going to come up after those two presentations to tell us what the recommendations of the subcommittee were.
DR. BYRN: Thanks very much, Vince.
The next speaker will be Dr. Chowdhury, who will talk about difficulties with showing a dose response with locally acting nasal sprays and aerosols.
DR. CHOWDHURY: Good morning.
I'll be talking about the point that it is very difficult to show a dose response for locally acting drugs for allergic rhinitis. We had the same discussions in the presentations two days ago. I'll go through the same points again and make the point that for locally acting drugs, which are used for treating allergic rhinitis, it is indeed very difficult, if not impossible, to show a dose response. As you remember, dose response is one of the points that is typically asked for for showing bioequivalence.
I will use three drugs and five clinical trials as examples to make my point. Before I go into that, I would like to briefly introduce a topic about nasal sprays and aerosols, and talk a bit about allergic rhinitis, the disease that we're talking about, and the clinical trials that are done for drugs which are to be approved for allergic rhinitis, to kind of introduce the topic, the background, and then I'll go to the clinical trials themselves. I think that will make the clinical trials easier to follow.
Now, we are talking about nasal sprays, which can be either solutions or suspensions, or nasal aerosols, which means that these have some propellant in them. The point for discussion today are really the suspensions. These are some examples of drugs which are currently available for treating allergic rhinitis, which falls in these categories. Examples of solutions are an antihistamine, anticholinergic drug, which is Atrovent, sodium cromoglycate, and some steroids. The suspension nasal sprays and the suspension nasal aerosols are all steroids, and the focus of discussion today is actually on the suspensions.
Allergic rhinitis is a pretty common disease. The patients who have allergic rhinitis are sensitized to something in the environment that are called allergens, and when they get exposed to it, they have disease which is typically manifested by a constellation of symptoms, which I'll go through in my second slide as I talk about clinical studies.
The clinical studies that are used for looking at these drugs in terms of efficacy for the purpose of approval are of three general types, and they're named as natural exposure study, day-in-the-park study, environmental exposure unit study, or EEU study. I'll go through them one by one.
Natural exposure studies are typically done in a season when the patients are exposed to the allergens and are symptomatic. For example, a person who is ragweed sensitive would be studied in ragweed season, which around here is late fall. And the patients for these studies are recruited. They're usually symptomatic, and once recruited, they're taken through a couple of days when they are either given placebo or nothing, to establish the symptoms, and this period is the baseline period.
After that, they are put on the drug in question. They are treated for a couple of days, typically for seasonal allergic rhinitis, about 2 weeks. For perennial rhinitis it's about 4 weeks in a double-blinded fashion.
Again, the patients score their symptoms and they get some measure of the drug's effect. The difference between the baseline, which is the run-in, and the treatment is taken into consideration to find if the drug is better than placebo or not. They're typically parallel-group studies, double-blinded.
A day-in-the-park study is pretty similar. However, the study is done over a very short time period, typically 1, 2, or 3 days. The patients are taken in a park where there is a lot of exposure to allergens, and then they're given the drug and the symptoms are scored again. These again are typically parallel group studies.
The EEU study is really an artificial situation. The patients are put in a room -- for example, a room like this -- and are exposed to allergens in a very controlled setting. This is done out of season, so the patients are initially asymptomatic. They are given this exposure for a couple of days to make them symptomatic, and then they're brought back in and given the drug or the placebo for an efficacy assessment.
So, let's go over these. A natural exposure study is the most natural, typical outpatient design, and as you move down the list the studies become more pharmacodynamic in nature. For dose response, which is what we're talking about today, the natural exposure study and the day-in-the-park study are the designs typically used, and the ones we have more experience with. The EEU study is used mostly for pharmacodynamic questions like onset of action, offset of action, and things like that.
When I go through my examples, I'll have an example from one day-in-the-park study and the rest will be natural exposure studies.
For assessing efficacy, we typically depend on patients' rating of symptoms. They are listed here: nasal symptoms like itching, sneezing, rhinorrhea, or congestion, and also non-nasal symptoms. As I go through the examples, this will become clearer.
The symptoms are most typically scored by the patient, and various scales are used. The ones we use now are typically 0, 1, 2, and 3 scales. However, other scales can be used, as my examples will show.
Now, in addition, there are often other measures which potentially may be used. I say potentially because these are more experimental and are used mainly to study disease pathology or pathogenesis -- for example, objective measures like nasal passage patency or markers of inflammation. The point I want to make is that these are very interesting tools. However, they are not yet at the point where they can be used for assessing the drugs in a clinical setting, because they are not clinically validated and may or may not relate to the disease's activity.
Let me go through the examples one by one. Some of the examples I will be using are in the public domain; others are not. Just to be fair, I'll not name the drugs; I'll call them drugs A, B, and C. I'll have an example of a solution nasal spray. I realize this is not the question for today; however, I want to put up an example to show that the difficulty in dose response is not unique to suspensions. Then I'll have suspension nasal sprays and aerosols.
I'll be using a total of five clinical trials to make the point. For the solution it will be a day-in-the-park study. For the others there will be natural exposure, dose-ranging studies, and then two comparative studies in which the aerosol and the aqueous spray were used in the same study.
Let's go with one example at a time. The first one is a day-in-the-park study using a solution nasal spray I'll call drug A. This was a two-center U.S. study conducted about 11 years ago. It was conducted in seasonal allergic rhinitis patients, ages 12 and above, and the patients were in the park for 2 days. Three dose levels were used, which will become clear in the next transparency. Drugs were given on a b.i.d. schedule. Since the patients were in the park for 2 days, on the first day they got the drug in the morning and in the afternoon, and the next day they got it again in the morning.
Efficacy was assessed by instantaneous scoring. Instantaneous means the patients scored the symptoms as they felt at the time of scoring -- for example, how do I feel right now? The six symptoms scored are listed here, some nasal symptoms, some eye symptoms, and the scale here was 0 to 5. On the days when the patients were in the park, which were day 1 and day 2, they scored the symptoms very frequently early in the day and less frequently in the evening. All the scores were summed, and the result I am going to show is the major symptom complex, which is the summation of all the scores.
Now, this is the baseline. Throughout my entire presentation, the same format of graphs will be used, the bar graphs, and the bars from left to right will match the legend from top to bottom. Placebo will be on the left all the time and blue in color.
In this study, the three dose levels were one spray b.i.d., two sprays q.d., and two sprays b.i.d. It's a pretty large study with about 50 patients. And the baseline is here, which is pretty close but not exactly the same.
Now, this is the result of the study, and I'm expressing the result as mean percentage change from baseline to take care of the baseline differences. We note here that the placebo response was about 10 percent. The study had an active control, an antihistamine, chlorpheniramine, and the effect size there was about 40 percent, so the difference was about 30 percent in this study. And if you look at the result, this is the lowest dose, one spray b.i.d., and this is two sprays b.i.d., which is two-fold higher.
If you look at it, perhaps there is a trend of dose response. However, if you look between these two -- this is two sprays q.d. and this is two sprays b.i.d. -- this one is higher than that one, and the effect goes in the opposite direction. So, the point here is, you don't really see a dose response. It's almost like a random phenomenon. This will become clearer as I go through my other examples.
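The effect measure used throughout these examples, mean percentage change from baseline, can be sketched as follows. This is a minimal illustration only; the symptom scores below are invented for demonstration and are not data from the study.

```python
# Sketch of the "mean percentage change from baseline" effect measure.
# All scores below are invented for illustration, not study data.

def mean_pct_change_from_baseline(baseline_scores, treatment_scores):
    """Average per-patient percentage change from baseline symptom score.

    A positive value means symptoms improved (scores went down).
    """
    changes = [
        100.0 * (b - t) / b
        for b, t in zip(baseline_scores, treatment_scores)
        if b > 0  # a zero baseline score has no defined percent change
    ]
    return sum(changes) / len(changes)

# Invented major-symptom-complex scores (sum of six symptoms, each 0-5):
baseline = [18, 20, 15, 22]
placebo = [16, 18, 14, 20]   # roughly a 10 percent improvement
active = [11, 12, 9, 13]     # roughly a 40 percent improvement

print(round(mean_pct_change_from_baseline(baseline, placebo), 1))
print(round(mean_pct_change_from_baseline(baseline, active), 1))
```

With these hypothetical numbers, the placebo arm lands near 10 percent improvement and the active arm near 40 percent, mirroring the magnitudes described for this study.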
A second example will be the suspension nasal spray, and the first one will be a natural exposure study. This study was conducted in the U.S. a couple of years ago. It was again a natural exposure study, the first one which I talked about. It was conducted in ragweed sensitive seasonal allergic rhinitis patients, ages 6 and above. The study had a 1-week baseline where no drug was given, followed by 4 weeks of treatment in which the experimental drug was given. The treatment was q.d. dosing of four dose levels, and note here the doses were over an eight-fold range. The previous example was just a two-fold range.
The efficacy assessment here was 12-hour reflective. Reflective is different from the instantaneous scoring I described earlier: reflective means the patients scored their symptoms according to how they felt over the previous 12 hours or so. The symptoms scored here were three nasal symptoms: runny nose, congestion, and sneezing. The scale was the typical 0 to 3 scale. The sum of the three scores was used, and we just called it a nasal index score.
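As a concrete sketch of the nasal index score just described: each of the three nasal symptoms is rated 0 to 3, reflecting the previous 12 hours, and the ratings are summed. The patient diary entry below is hypothetical, not study data.

```python
# Sketch of the nasal index score: three nasal symptoms (runny nose,
# congestion, sneezing), each rated 0-3 by the patient reflecting the
# previous 12 hours, summed into a single 0-9 score.
# The example ratings are invented for illustration.

SYMPTOMS = ("runny_nose", "congestion", "sneezing")

def nasal_index_score(ratings):
    """Sum of the three 0-3 symptom ratings (possible range 0-9)."""
    for symptom in SYMPTOMS:
        if not 0 <= ratings[symptom] <= 3:
            raise ValueError(f"{symptom} rating must be between 0 and 3")
    return sum(ratings[symptom] for symptom in SYMPTOMS)

# One hypothetical 12-hour reflective diary entry:
entry = {"runny_nose": 2, "congestion": 3, "sneezing": 1}
print(nasal_index_score(entry))
```

Treatment effect is then assessed as the change in this score from the baseline week to the treatment weeks, averaged within each dose group.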
This is the result, the nasal index score. This is the baseline, this is the treatment, and this is the change, meaning the difference between the two. The baseline here is very similar across groups, so I'm using the raw score. If you look at the change here, this is the placebo, and the placebo actually had quite a bit of response. This point will come back later on as well: for allergic rhinitis, placebo indeed acts almost like a drug and has a good response.
The dose levels used here were between 32 and 256, an eight-fold difference, and these are the four bars showing the four dose levels. The point is that the curve is virtually flat. The lowest dose, 32, and the highest, 256, were very close. So, essentially, one cannot really detect a difference in the clinical study with this example over an eight-fold range.
This study had three symptoms. Just to look at it a bit further, I checked whether examining these three symptoms individually would make any difference, and it really did not. Perhaps for congestion there was some trend; however, the curves were almost flat, and the changes were really very small.
In the next example, a spray and an aerosol were used in the same study. The drug substance, drug B, was the same. It was a natural exposure study, a Canadian study done in seven centers in 1994. Patients were ragweed-sensitive, seasonal allergic rhinitis patients, ages 12 and above. The design was very similar: 1 week of baseline, 3 weeks of treatment, and q.d. dosing at three dose levels.
Efficacy assessment was the same again: 12-hour reflective scoring of three nasal symptoms. And these are the results. This is the primary efficacy endpoint in this study, which was the nasal symptoms, a summation of the scores for rhinorrhea, sneezing, and congestion. Eye symptoms are also here, but the point really can be made with the total symptoms. If you look at it, this is the spray at 256 and the spray at 400, these two doses. This is the lower dose and this is the higher dose; however, on efficacy they go in the opposite direction.
This is the aerosol, 200 b.i.d., the same nominal dose as this one, and the results really do not fall on top of each other. Essentially, the recurrent theme here is that the differences we see are essentially fluctuations around a baseline efficacy.
Now, the last example is drug C. It is again a suspension nasal spray. When I was preparing for this talk, I went through almost all the clinical studies which were done for getting these drugs approved, and I picked drug B as a classic example; this is what we typically see. Drug A is a solution example, and drug C is perhaps the best-case scenario for a dose response that we have, and I'll show that even for this drug we don't really see a typical dose response.
This study was a 15-center U.S. study conducted in 1992, a pretty large study. This was done on SAR patients, ages 18 and above. The study had a 1-week baseline period followed by 4 weeks of treatment. And in this study q.d. dosing was used over a 16-fold range, very large 16-fold range. Efficacy was very similar to the studies before, 12-hour reflective. And eight symptoms were measured here, some nasal and some eye symptoms. They were scored on a 0 to 6 scale.
In this particular study, the primary endpoint was physician-rated symptoms, so that's what I'm showing first. And I'm showing the results over the whole study time period: day 3, day 7, 14, 21, and 28. Any of the times can be used; I'll just use day 21 to make my point here.
The placebo response here as a change from baseline percentage was close to 30 percent. The drug response was 50 to 60 percent, so the separation between drug and placebo was not that large. If you look at it, there was perhaps a dose response, at least numerically trending. However, the separation here between the lowest and the highest, a 16-fold difference, is really less than 10 percentage points. Very tiny area to work with.
If we look at other times, like day 14, day 28, it really doesn't hold true. Indeed, on day 28 the lowest and the highest dose, 16-fold apart, you really could not pick up a difference between these two.
Now, typically we use a patient-rated score; that was also done in this study, so we looked at it to see if it would be different. The result is here, and the answer is really no. On day 21, the symptoms that the patients rated were very similar to what the physicians rated.
The last example is again the same drug substance. It was one study in which the aerosol and the spray suspension were used. The study was conducted in 32 centers in the U.S. two years ago, in seasonal allergic rhinitis patients, ages 12 and above. There was a 1-week baseline period followed by 2 weeks of treatment, and the dosing was three dose levels from two devices over an eight-fold range. Again, a pretty large separation.
Efficacy was 12-hour reflective of four nasal symptoms, scored on a 0 to 3 scale, and the sum was nasal symptom score.
The result is here. The first bar is the placebo, the second, third, and fourth bars are the aerosol, and the last three are the spray. This is mean percentage change from baseline: first week, second week, and total, which is weeks 1 and 2.
If you look at it here, again for this drug there is some trend of a dose response, at least numerically, but again we are within 5 percentage points or less. So, there is a very small difference in effect between the lowest and the highest dose, despite quite a bit of separation between the doses.
So, again, the same point here. Even with perhaps a very good example, between the lowest and the highest dose you cannot really pick up a difference. If you look at week 2, they're almost flat.
So, I made the point here with seasonal allergic rhinitis studies because those are typically the studies used for showing dose response, but if you look at patients who had perennial allergic rhinitis, the same would hold true.
The question comes up: we do not really see a dose response, which is very clear based on the examples, so why did we not see a dose response? Perhaps the clinical studies that we use for looking at efficacy, the typical outpatient kind of study, are sensitive enough to pick up a separation between drug and placebo but are not discriminative enough to pick up a dose response, if it existed. They are pretty crude measures. Unfortunately, that's what we have.
The second point may be that perhaps for some of these drugs which are already approved, they may already be at the high end of the dose-response curve, so a large separation of dose would not really mean any differences as far as efficacy is concerned.
So, that's all I have. If there are any questions.
DR. BYRN: Questions for Dr. Chowdhury? Yes, Bill?
DR. JUSKO: William Jusko. I think another reason for the lack of showing efficacy is the fact that the patients being studied in many of these studies, particularly the first one, did not present with very serious scores. For example, the baseline scores were 10 on a scale that could go up to 30. So, a major problem, if this persists for the other studies, is the fact that the patients being studied only have very modest disease symptoms.
DR. CHOWDHURY: That is true for the first study, however not always, because in some of the studies more symptomatic patients were recruited; some of the studies are designed that way. In the placebo run-in period, those who respond to placebo or those who do not meet the cut-off symptom scores are excluded. So, the patients start off being symptomatic.
However, again, we're talking about a large study with 100 patients in one treatment arm. Some may be more symptomatic than others, and when you average out, it's almost impossible to get the extremely high level of symptoms which you would ideally want.
DR. BYRN: Yes, Marvin?
DR. MARVIN MEYER: You didn't really mention anything about the variability in the results. Are there differences in the variability between the three types of studies? I'm particularly interested in the EEU. Is that less variable in the measurements?
DR. CHOWDHURY: As you go down, you are moving from a natural exposure to a controlled setting, and that would be the case. The variability would be less, and again in the EEU setting, you're taking in patients, making them symptomatic, so they'll be higher on the dose response. Not on the dose response. I take it back. Higher on the symptom scores and perhaps closer to each other.
DR. MARVIN MEYER: But I take it that's not an acceptable way to normally study these drugs because it's not actually clinical?
DR. CHOWDHURY: They're good study designs for answering pharmacodynamic questions, but again, we are moving away very much from the real life of the patients who are exposed to the pollens in a real-life environment. So, they're not very good study designs for looking at drug efficacy. For the dose response questions, it again is not perhaps a very good tool, and also we do not have a lot of experience in the EEU study for dose response questions.
DR. MARVIN MEYER: Could you argue, though, that the typical clinical trial is somewhat constrained as well, a little artificial in that the selection process and the monitoring, et cetera, as opposed to the real world person?
DR. CHOWDHURY: The clinical trials try to mimic the real world as much as possible. Again, the study itself is an artificial setting, but the closer it is to the real world -- which is the natural exposure study -- the better a handle we have on the drug itself and the disease itself.
DR. BARR: Did you do repeated measures that would give you an estimate of the intra-subject variability that you're dealing with? Do you have some estimate of that?
DR. CHOWDHURY: Not in these presentations, but the intra-subject variability is indeed high. I cannot really give numbers right here, but again, the patients who are symptomatic to begin with may change over time, yes.
DR. BARR: What was also surprising is that you are at the top of the dose-response curve, at that plateau, for compounds that have different modes of action. It is kind of surprising that you were successful in getting to that upper level in all cases, and it may indicate perhaps more a lack of methodology than the drug effect itself.
DR. CHOWDHURY: Perhaps true, because these are really pretty crude study designs, as I pointed out, based on how the patients are feeling, and that's really what we have for assessing drugs for the purpose of approval.
DR. BYRN: Do we know whether during development any of the firms were able to get dose-response curves?
DR. CHOWDHURY: The ones that I showed toward my last two slides, that's really what we see. There is a numerical dose response, but again, we are on a very flat portion of the curve.
DR. BYRN: Right. Was anybody able to get down on the curve that you know of?
DR. CHOWDHURY: No. The answer is no. If you look at that particular slide, which was the second to last slide, the placebo is very close to where the flat portion is. So, I don't think, based on the examples I'm showing, that we are on a steep dose-response curve to begin with. The curve itself is pretty flat. Between the lower dose and the placebo, the separation is not that much. The placebo response in this study is about 30 to 35 percent, and the drug response is about 50 to 60 percent. So, we do not really have much room to work with.
DR. BYRN: One more question. Marvin.
DR. MARVIN MEYER: If you have no dose response curve, and these are typical results for what people would see, why would you even want to market the higher strengths, or approve the higher strengths?
DR. CHOWDHURY: Typically one would like to approve as low a strength as possible. However, it becomes an issue. We can almost go to a placebo and still show a separation depending on sample size.
Another factor that comes in is safety. These drugs are relatively safe. So, the equation brings in both efficacy and safety: if a drug is better than placebo, and with all available tests that drug is safe, then it is safe and effective for marketing. But your question is well taken. The lower the dose that is safe and effective, the better.
DR. DOULL: I think the point is, you can't say that there is no dose response. What you can say is you haven't shown a dose response. But clearly there is a dose response there. If you were to use lower doses, you probably could in fact show that.
DR. CHOWDHURY: The answer is yes and no. I mean, perhaps there is a dose response, but the method that we have didn't show it.
DR. BYRN: Thanks very much, Dr. Chowdhury.
Our next speaker is Robert Meyer, who is going to address clinical study options for locally acting nasal suspension products.
DR. ROBERT MEYER: Thank you very much. Just let me follow up on that last point because I think the belief that we've had, and I think to some degree continue to have, is that although it may be very difficult to show dose response in these studies, clinically there has been a practice and a rationale for having a range of doses available and titrating patients up who would fail to respond to lower doses, as long as the safety profile assures us that those doses are safe. Whether that's been scientifically established or not, that's the rationale.
What I'd like to do actually then today is take you through a bit of the presentation that I gave the subcommittee, but I do want to pause on the first slide here to really just recap how we came to have this subcommittee discussion, and how we actually came to have what we had in the draft guidance, which was a recommendation for any one of three potential study designs, with a requirement to show a dose response within those study designs.
Back in about 1995, the Division of Pulmonary Drug Products at the time -- the "allergy" has been added since I became director two years ago -- had advised the Office of Generic Drugs that we felt a clinical study really wouldn't be needed for locally acting nasal sprays, on the basis that we thought the in vitro characteristics, and perhaps the added assurance of some pharmacokinetic assessment, would fully assure us of bioequivalence.
However, a letter to the center from one of the corporate sponsors raised the issue that one could not fully assess the particle size of the drug in the actual formulation, for the suspension products anyway. For the suspension products, the excipients are such that there was no accurate and validated way to assess the particle sizing of the drug substance. Furthermore, the generic manufacturers wouldn't have access to the particle sizing or the micronization characteristics of the drug substance. This sponsor pointed out that particle sizing might matter quite a bit in terms of local bioavailability, which would definitely be tied to efficacy.
While they presented no data to substantiate that concern, it was a concern we took seriously, and it actually led us, in the draft guidance for nasal suspension products intended for local activity, to ask for a clinical study -- not just any clinical study, but one that establishes a dose response. And the reason for this is to show within that clinical study the sensitivity of the study to reflect differences in local bioavailability or dose, should one exist between the test and reference products.
So, in talking today -- and again, this is a recap of what I said to the subcommittee -- I want to focus on the options for a clinical study, many of which Dr. Chowdhury has already gone through, but I'll spend a little time talking about those and then turn to really what is the question that's being put to any clinical study that might be required as a part of a bioequivalence package for a locally acting nasal suspension product. Once we focus a little bit on what the question is that's taken to that study, I think then we can get to what is the best answer and the subcommittee's advice on that. And I'll close with some observations and recommendations that we had on Tuesday for the subcommittee.
Again, as Dr. Chowdhury has pointed out, the disease in question here is allergic rhinitis, which is primarily experienced and historically assessed subjectively. The basis for approval for our drugs has come from subjective symptom scoring, such as the total nasal symptom score that Dr. Chowdhury took us through. More, if you will, pharmacodynamic type questions in terms of onset of action, appropriate dose interval and so on, are frequently addressed through differing study designs, but still most often approached through clinical symptom scoring.
So, in the draft guidance, we had proposed three potential study designs. There was, if you will, the natural clinical study, and this is essentially a 2- to 6-week study. Seasonal allergic rhinitis studies tend to be shorter, in the range of 2 weeks. Perennial allergic rhinitis involves, as one advertisement likes to say, the indoor versus outdoor allergens -- these are more like cats and dogs and other indoor, if you will, allergens -- and the perennial allergic rhinitis studies tend to be somewhat longer. They're parallel-group studies looking at comparative changes in total nasal symptom score over the treatment period.
As Dr. Chowdhury pointed out, the patients are enrolled prior to or at the start of their season and randomized when they are sufficiently symptomatic, albeit not always terrifically symptomatic. This allows for the assessment of efficacy, but also, because it is a 2-week study and the drug is used more typically the way a patient might use it in a general real-world setting, it allows for assessment of safety and tolerability over a reasonable period of use.
The EEU study takes patients out of season and exposes them to a high level of a specific pollen to which they are allergic. It takes a cohort of patients at the same time and assesses the symptoms over a short period, commonly a period of hours. These, at least in new drug applications, are often used for assessing such parameters as onset of effect, or perhaps in dose-finding, but I'll have more of a comment about that in a minute.
A day-in-the-park study is somewhat intermediate between these two. This is again a cohort of patients with a known allergy sensitivity, but typically a fairly low level of symptoms at the start of the day, and they're taken to an outdoor setting, a park if you will, as a cohort for natural exposure to an allergen. Days when the allergen exposure is high are targeted, although you won't necessarily know that prospectively. These are typically fairly short-term studies, so short-term efficacy and safety are assessed in those data. Again, in the NDA, new drug application, setting we don't necessarily consider these the best of pivotal trials, because the findings that come out of them don't generalize so well, but they are used, typically in trying to assess dose effects, duration of effect, and so on.
From the approval purpose then, as I've tried to emphasize, the Division of Pulmonary and Allergy Drug Products regards the natural clinical study to be the most informative, and we regard the EEU and the day-in-the-park studies as useful, but typically used for more pharmacodynamic-type assessments.
Other objective endpoints in any of the study designs -- really, if you will, more truly pharmacodynamic endpoints, such as nasal patency through acoustic rhinometry or other measures of air flow, or specific markers of inflammation in the nasal mucosa or nasal secretions -- are regarded as interesting, but they are not clinically validated. I would also point out that they're not validated as reliable detectors of dose response.
Let me just take a slide that Dr. Conner showed the other day, just to remind us why we're focusing on this issue of the clinical study at all for talking about the question that we're taking to it. It really stems from the fact that for a topically acting drug, a nasal suspension or a nasal spray, the therapeutic effect is coming from local delivery and local activity and is not predicted through assessment of pharmacokinetics, although there could be a small contribution of any drug that gets to the blood, either through local absorption or through systemic absorption from the GI tract or any that might come through the lung. There may be contribution to the therapeutic effect, right over here, but in general most of the therapeutic effect comes from the local delivery.
On the other hand, the assessment of what gets into the blood, only some of this is coming through nasal absorption, and even for drugs of fairly low bioavailability through the GI tract, if they have low bioavailability through the nasal mucosa, a substantial portion of what does get into the blood will be coming through these other routes, primarily the GI tract.
So, to fully get a handle on the therapeutic effect and for bioequivalence, if one really places a lot of concern over any differences in local delivery, one needs to assess more than just the pharmacokinetics. One needs to get some kind of handle on the clinical or pharmacodynamic measurements to accurately reflect the local bioavailability.
So, the question as I framed it the other day for the subcommittee is: what are we really asking the clinical study to do in the bioequivalence package for a nasal suspension spray? I need to emphasize that I don't want to use these terms in the regulatory sense, but in a more casual sense: are we regarding the clinical study as necessary but playing a confirmatory role, or are we really looking for it primarily to establish the bioequivalence? That really depends on your interpretation of the unknowns left after you've done a full in vitro assessment and, again, you have Q1 and Q2 sameness for the two products and, in fact, your pharmacokinetic assessments as well.
In a more confirmatory role, the study would be necessary to confirm that, given whatever unknowns might remain after you've shown sameness in a fairly rigorous in vitro package, in pharmacokinetics, and in Q1 and Q2 composition, there is a lack of important clinical differences as part of this larger bioequivalence package. In a more pivotal sense, if you're asking the study to establish the bioequivalence in and of itself, the clinical study would really need to be able to discern differences in dose in quite a sensitive manner, and then to show that no differences exist between the test and reference products.
So, again, in a necessary but confirmatory role, the design would be to broadly assure that no important clinical differences exist. A rigorous showing of dose response, and strict equivalence over the dose response between test and reference, is not required. The comparison therefore could be at one dose level for each of the test and reference products, such as the lowest dose level, to assure that you're not on any kind of downslope of a curve, and to show comparable efficacy, safety, and tolerability.
If you really look to the study to fully establish bioequivalence almost as a stand-alone question, the design must show sensitivity of the assay. That is, it must show the study could have detected a dose response if any difference in dose or local bioavailability were to exist, and then you must show a rigorous equivalence between the test and reference products. Of course, even in this role you could look at the comparability of safety and tolerability.
As Dr. Chowdhury has pointed out, our experience is that the standard clinical study, the 2-week to 6-week standard natural clinical study, does not typically show sensitivity to dose, and in our experience it's even very difficult to show for EEU or day-in-the-park studies. So, we feel that the clinical study could be very good in terms of assuring that there is not an important clinical difference left in a bioequivalence determination when all other points show comparability, but it would be very difficult, despite our draft guidance of 1999, to use the standard clinical study to show dose response -- it's a herculean task -- and therefore to rigorously establish bioequivalence through the clinical study.
While the more pharmacodynamic studies, if you will, the EEU and day-in-the-park studies, might be a better approach because of less variability, it has not really been established that they can firmly establish sensitivity to dose effects either.
As I pointed out when I discussed these briefly earlier, the more truly pharmacodynamic endpoints, such as markers of inflammation or measures of nasal patency, are unproven in their sensitivity to dose response, as far as the data to which we have access, nor are they clinically validated as representing important features predicting the response to allergic rhinitis drugs.
Other endpoints that might potentially be used in standard trials -- and these have been suggested in comments to the docket about our guidance -- are unproven as being superior in sensitivity to dose response. I include things like well-validated, health-related quality-of-life instruments.
So, where we came to in the guidance is that to get to the clinical study, a sponsor of a new product would first have to show equivalence in vitro by a fairly substantial package of attributes. They would have to have, even before that, the same qualitative and quantitative makeup of the product and, if not the same actuator and spray device, at least one that is very similar in attributes. And after establishing all that, they would have to show equivalence in systemic exposure or, if measurement of systemic levels is impossible, pharmacodynamic equivalence through things like HPA axis assessment for corticosteroids, for instance.
So, the main uncertainty left at that point for the nasal suspension products we're talking about is what contribution any differences in particle sizing in the formulation itself might make to clinical efficacy, everything else being the same. Clearly that's an issue, as I hope I've already conveyed, for the aqueous suspension sprays, and it's in fact more difficult for these sprays than for the aerosols perhaps, but even for the aerosols, the MDIs, we don't have a proven, validated way to measure particle size in the way we do for the orally inhaled products, for instance.
Given all the difficulties of establishing dose response, but also given a real rethinking of what questions are left at the point that we're discussing or coming to a clinical trial, the FDA presented to the subcommittee the fact that we're now contemplating shifting the question that we're asking of that study in the bioequivalence package. I must emphasize, however, that no matter what, the clinical study would not trump a lack of equivalence from the prior data set. So, it would have to be Q1/Q2 the same, the products would have to be equivalent in the in vitro characteristics and all the attributes tested, as well as in systemic bioavailability.
If you have all that, if you have that equivalence established, then we're really seeing the clinical study as perhaps a necessary part of establishing bioequivalence, but one that is doing so in a more confirmatory sense, establishing at the lowest labeled dose that there is not really an important clinical difference between the test and the reference product.
Under this paradigm, then, we put to the subcommittee the question of what would be the best study design if you took this question to the clinical study. Should it be the traditional 2-week clinical study in SAR, seasonal allergic rhinitis? Should it be an EEU study, an environmental exposure unit study, or should it be a day-in-the-park study?
I'll stop there and see if there are any questions.
DR. BYRN: Questions?
DR. BOEHLERT: Question. Judy Boehlert. I have a question with regard to the particle size. I assume that the agency would always require a meaningful test for particle size on the active ingredient, so that you know what's going into the product. The challenge, then, in a suspension where you have active ingredient and perhaps other suspending agents is determining whether there's any growth in that particle over the shelf life of the product. Is that correct?
DR. ROBERT MEYER: Well, we certainly want the particle sizing of the micronized drug substance characterized and expect that to be done. There is the question that you raise, but also the challenge for a generic drug manufacturer that, no matter how well they characterize their micronization process, they don't have access to the innovator's data for the reference product to match it.
DR. BOEHLERT: I would agree that that's the case, but the agency, when they review those submissions, would be able to evaluate whether or not they have a meaningful test for particle size.
DR. ROBERT MEYER: Oh, absolutely. Absolutely. But I think within certain bounds we also have some uncertainty as to what small differences in the micronization of the drug substance might mean in the drug product. So, it's both that uncertainty for the innovator and, to some degree, for us, but then any change in the attributes of the particle sizing within the drug formulation would also come into play.
DR. BARR: It seems to me that the basic problem is simply the inadequate bioassay that we have available to us. The global assessment is extremely insensitive, to the point where we can't measure any changes between doses except an all-or-none effect. So, in trying to go back to something more sensitive, for example, nasal patency or the measures of inflammation, you indicated they had no clinical relevance. I'm not sure how you're going to be able to show clinical relevance if, in fact, the measure of clinical relevance that you have is itself so insensitive; it's very difficult to show that. But it would seem to me that that would be some approach to get to something that is more reproducible, more sensitive. I wonder if there's an approach to that.
DR. ROBERT MEYER: Yes, when I say that they're not well clinically validated, I'm not putting that to a very, very high standard. We don't have a lot of data relating to how they perform in clinical studies compared to standard assessments, so we don't even know that a change in any specific biomarker would in any way predict clinical response. So, you have both the question of its predictive value and whether any differences seen are meaningful.
So, I think the upshot is that for nasal solutions we're really talking about getting away from even having a clinical study at all because we're assuming that the in vitro characteristics can really characterize sufficiently how a test product and a reference product relate.
Here we're talking about suspension products, which are a bit more complex, the main issue being the particle size in the drug formulation. So, given that, and given the pharmacokinetic assessments that would be part of this package, how much are we worried about a difference in an individual patient, or in a population mean, between a test and a reference, and how do we best examine that?
So, I think it remains unclear, even if we had more experience with some of the biomarkers and acoustic rhinometry and so on, what role that might play, given the question that we're really asking this study to answer in a BE package.
DR. BARR: Right, but in a way it's a little bit like our use of blood levels as a surrogate. We don't always have a very clear relationship between those, but we can measure them, and there is some measurement we have that's intermediate between an overall global assessment and something that does show differences in onset, differences in duration, differences in intensity, that gives us some measure that there may be some differences between the products. It just seems to me that some compromise ultimately would have to be found because the global assessment is just so inadequate in terms of sensitivity.
DR. BYRN: We're going to have time for discussion, so I think we should go ahead with Dr. Adams now, who's going to give us the recommendations of the subcommittee.
Thanks very much, Dr. Meyer.
DR. ADAMS: Good morning, ladies and gentlemen. I'm pleased to be here and talk about our nasal bioavailability/bioequivalence guidance.
The issue that we're bringing to the committee today is one of dose response, and I'll get into that. An outline of this would be an introduction to the two questions, what are the two questions, and then the recommendations and conclusions of the OINDP Subcommittee.
Introduction to the two questions. This slide is one that Dr. Lee had presented earlier, but I'd like to just have us read through this because it focuses the question, that to establish bioequivalence of suspension formulation nasal aerosols and nasal sprays for allergic rhinitis, the June 1999 draft guidance recommends a series of different pieces of information.
It recommends the equivalence of the formulation, both qualitatively and quantitatively. So, we're saying that a test product should be the same in terms of its qualitative composition of inactive ingredients, as well as quantitatively within plus or minus 5 percent.
That the device should be comparable, either the same device, meaning the same metering valve and pump, or one made preferably by the same manufacturer and of the same model. If that's not possible, then as close to that as can be obtained.
In vitro studies and systemic exposure or systemic absorption. The in vitro studies, however, do not assure equivalence of particle size of the suspended drug. Because particle size differences between test and reference products have the potential to alter the rate and extent of delivery of drug to local sites of action, differences in clinical effectiveness could result. For this reason, the draft guidance also recommends conduct of a clinical study for allergic rhinitis to confirm equivalent local delivery.
Now, what I'd like to do is to skip to a slide that originally I had presented at the subcommittee meeting and I think it is appropriate to present that here. It's not in the packet because I originally wasn't going to present it, but I think it's essential. It's a nice way of explaining what our predicament is.
We've indicated that the package of information for bioequivalence for solution and suspension nasal sprays and nasal aerosols is a substantial package, and it's built upon a number of items. One is that the formulation be qualitatively and quantitatively the same; that is expected of the generic or test product going into this issue. Another is that the device be comparable.
And then there's a series of six in vitro tests for which we ask for equivalence. Those in vitro tests are unit spray content, which assures the test and reference products are both delivering the same amount of drug from the actuator. Droplet size distribution. Spray pattern and plume geometry, which characterize the plume as it comes from the product and provide confidence that the drug will be distributed to the same region of the nose by both test and reference products. That is, droplet size, spray pattern, and plume geometry are the same.
Particle size distribution, however, is one that, as we've indicated earlier in our presentations today, cannot be determined by a validated method, and so consequently there's an issue about potential differences between test and reference products in terms of particle size; at least in principle, that can affect the rate and extent of delivery to sites of action. It can also affect the rate and extent of systemic absorption, and consequently distribution to sites that could cause adverse effects.
We also ask for pharmacokinetic information as a means of determining systemic exposure. As Dr. Meyer indicated, if that's not possible, then we move to a pharmacodynamic measure such as adrenal suppression for the corticosteroids.
Now, this slide is intended to illustrate the package that we're talking about, and what it says is that, first off, going into the formulation, the test and reference products will both deliver the same amount of drug from the actuator and, from our in vitro studies, will deliver the drug to the same regions of the nose. So these products are behaving the same in vitro.
In terms of the local delivery -- and of course, local delivery here is really the challenge and why this issue comes to the committee in the first place -- systemic exposure, PK levels, are not appropriate to assure the equivalence of these drugs because they act locally. So, the blood levels may be more relevant to safety than to efficacy.
We would like to conduct the clinical study for rhinitis to answer this question about equal efficacy on the steeply rising portion of the dose-response curve, but as we've heard from Dr. Badrul Chowdhury's presentation and Dr. Meyer's presentation, we essentially cannot get into this region of the curve with present available methodology. So, we believe that we're up in this region of the curve where the dose response is insensitive.
But if we were to conduct a rhinitis study and show that the test and reference products are both equally efficacious, we know then that even at a single dose, that the products would both be working. They'd both be relieving the rhinitis symptoms as long as they're both up here. They would show equivalence. In spite of the fact that a different amount of drug may be getting to the active sites, they would still be showing equivalence.
The other concern is that the drug, because of potential differences in particle size distribution, could be delivering different amounts of drug to the systemic circulation, and that could put the test and reference products down here in the region where they may differ on the pharmacodynamic or clinical dose-response curve for safety or for, let's say, adrenal axis suppression.
Well, we can control that by use of a pharmacokinetic study to show equivalence. Of course, for some of these products in which there's very little drug that reaches the systemic circulation, it may be necessary to do the pharmacodynamic study instead.
So, what we would know, then, from this package of information is that the same amount of drug is delivered from the product. The products are equally efficacious, and that they have equivalent systemic exposure or systemic absorption. So, essentially that is the package of information that would be used for these products.
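[For readers unfamiliar with the "equivalent systemic exposure" piece of this package: it is conventionally judged by FDA's average-bioequivalence criterion, under which the 90% confidence interval for the test/reference geometric mean ratio of a PK metric such as AUC or Cmax must fall within 80-125%. The sketch below uses invented numbers and a normal approximation in place of the crossover-ANOVA t-interval a real submission would use.]

```python
# Illustrative average-bioequivalence check on paired AUC data.
# Data are invented; the normal quantile stands in for the t quantile
# (adequate only for larger sample sizes than a typical BE study).
import math
from statistics import NormalDist, mean, stdev

def be_90ci(test_auc, ref_auc):
    """90% CI for the test/reference geometric mean ratio from
    paired (crossover) AUC values, computed on the log scale."""
    diffs = [math.log(t) - math.log(r) for t, r in zip(test_auc, ref_auc)]
    m = mean(diffs)
    se = stdev(diffs) / math.sqrt(len(diffs))
    z = NormalDist().inv_cdf(0.95)          # two-sided 90% interval
    return math.exp(m - z * se), math.exp(m + z * se)

def equivalent(lo, hi):
    # FDA average-BE acceptance limits: 0.80-1.25 on the ratio scale
    return lo >= 0.80 and hi <= 1.25

test = [105, 98, 110, 101, 95, 108, 99, 103]   # invented test-product AUCs
ref  = [100, 100, 104, 99, 97, 105, 101, 100]  # invented reference AUCs
lo, hi = be_90ci(test, ref)
print(f"90% CI for GMR: {lo:.3f}-{hi:.3f}, equivalent: {equivalent(lo, hi)}")
```

Note that this criterion addresses only systemic exposure; as the transcript stresses, for locally acting nasal products it bounds safety-relevant absorption but does not by itself establish equivalent local delivery.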
Now, to go back to the two questions, does the committee believe that a placebo-controlled traditional 2-week rhinitis study conducted at the lowest active dose is sufficient to confirm equivalent local delivery of these products, and two, does the committee believe that a placebo-controlled park study or EEU study conducted at the lowest active dose is an acceptable option to confirm equivalent local delivery?
As we've indicated, two days ago we held a subcommittee meeting to discuss these issues and what I'd like to do is, with four slides, present the outcome of those deliberations. What I'll do is to indicate a summary statement, and then I'd like to try and capture some of the thoughts that were expressed during the meeting on that particular issue.
The first conclusion is that based on current technology and methods, demonstration of dose response may not be possible for locally acting drug products for allergic rhinitis. Some of the comments made were that the limitations of the current study design cannot be overcome at the present time to show a good dose response. We recognize that a dose response may be seen in certain individuals: as you increase the dose, they seem to respond. But that may be due to differences in allergen levels over time, so it really may not be a true dose response seen in some subjects.
And if we were to be interested in a dose response, it was the subcommittee's feeling that it would be a major challenge to develop a model which is sensitive to dose. For instance, it could be a crossover study, possibly a nasal challenge study of some design, but it would require a substantial effort on the part of the agency to develop such a possibly more sensitive design. And in fact, the feeling of the subcommittee was that it's really not much of a clinical issue.
On the topic of Dr. Meyer's issue: is this study for bioequivalence of these locally acting nasal suspension products a pivotal study or a confirmatory study? All of the individuals participating felt this is a confirmatory study, not a pivotal study, given the other package of information needed for bioequivalence.
The second slide: a clinical study is needed in the comparison of suspension nasal products. However, we note that the subcommittee was not in consensus on this issue, though the majority agreed with the above. Now, I looked over my notes from that subcommittee meeting, and in fact almost half of our participants felt either that a rhinitis study was not needed at all in this circumstance, or that it was questionable whether it was needed. It's a blunt instrument.
However, it was felt that patients and clinicians will have increased confidence in the equivalence of the products if the study is performed. That was one of its benefits. As I say, almost half of the participants felt either that the rhinitis study isn't needed, or were ambivalent about it. The feeling was that the disease is benign, the study cannot distinguish between doses, and the rhinitis study is in fact overkill, which is the word used by some of the individuals.
However, they felt that the pharmacokinetic study is an important part of the package. In fact, the question was raised that if this is a high first-pass effect drug, or if a charcoal block study were used to prevent drug absorption through the GI tract so that all the drug comes in through the nasal route, then a PK study could be, to some extent, reflective of equivalent local deposition in the nose.
If a drug is absorbed substantially from the gut and a charcoal block study is not done, then the systemic levels would simply reflect the overall safety of the drug as it's clinically used.
I received a phone call after the subcommittee meeting, and one of the individuals who felt that the rhinitis study was not needed said that, upon further reflection, if the drug is a prodrug, it would be important to do the rhinitis study at a single dose. The reason, he indicated, was potential differences in distribution to the nose for test and reference products: there could be differences in enzyme levels in different regions of the nose, resulting in different degrees of conversion to the active moiety. So, that was his reason for that recommendation for a prodrug.
Slide three: a clinical rhinitis study would be useful to confirm that whatever unknowns remain after establishing equivalence through in vitro performance and pharmacokinetic metrics are not clinically important. The feeling of the subcommittee was: if the study is to be done, just do a simple one-dose rhinitis study. It doesn't need to be done at two different dose levels. And the only information the study provides is the opportunity to show whether large differences exist between test and reference products. That would be the only benefit of doing it.
And lastly, of the three study designs in the draft guidance, the traditional placebo-controlled 2-week rhinitis study is the most appropriate. That is, the committee felt that the park study and the EEU study, as pharmacodynamic rather than clinical studies, were not appropriate, at least at the present time, for establishing bioequivalence.
And a single dose level of test and reference products should be used, at the lowest labeled dose. Some of the comments made were that an EEU or a park study is not clinically meaningful, since there's only 1 to 3 days of exposure of the subjects to the drug, and in fact, for full efficacy of the nasal corticosteroids, it can take 2 weeks or even longer. So, therefore, the traditional 2-week study design is the appropriate one for establishing equivalent efficacy.
It was also said that pharmacodynamic endpoints are not suitable at the present time. We don't know that the onset of action, which can be measured in the EEU and park studies, is more discriminatory, more sensitive to differences between products, than the traditional 2-week study.
And the question comes up whether the study should be done at the lowest labeled dose or the lowest possible dose. The lowest possible dose would be one spray per nostril daily. If the product is marketed at two sprays per nostril daily, it would be possible to cut that dose in half in an effort to get down into a more sensitive region of the dose-response curve. But the subcommittee's recommendation was to do the study at the lowest labeled dose because that's a clinically relevant dose; people don't take it at lower doses than that.
Another thought was that, for these products, we know from experience that showing a dose response is very difficult; in fact, a dose response may not even exist, as Dr. Chowdhury has indicated. There was some thought that if products are developed in the future which can show a dose response, then this issue could be revisited in terms of the need to show one.
Lastly, no one on that subcommittee felt that either the EEU or the park study was appropriate for establishing bioequivalence. Everyone felt the traditional 2-week study design was the appropriate one.
DR. BYRN: I think we can combine now our discussion with any questions people might have for Dr. Adams. On the agenda we have two topics that we need to discuss. But first of all, let's make sure that there are not specific questions about what Dr. Adams said. Any specific questions for Dr. Adams?
DR. BYRN: Let's go to question 1, which reads, does the committee agree with the OINDP Subcommittee regarding its recommendations concerning the conduct of the local delivery study based on the lowest active dose and a traditional 2-week placebo-controlled rhinitis study? Can we have discussion on that? So, the subcommittee is recommending a 2-week placebo-controlled rhinitis study at the lowest active dose, which would be the lowest labeled dose. So, that topic is open for discussion. Does the committee agree, disagree, have concerns?
DR. JUSKO: I have a general concern about the generality of what we were presented with and these recommendations. All of the products discussed were corticosteroid suspensions, and I would presume that these recommendations should apply to drugs with other mechanisms of action as well. Yet these questions seem posed to relate specifically to steroids and not in terms of general principles.
DR. ADAMS: Dr. Jusko, the questions were posed as they were because at the present time the only marketed products of suspension formulations are corticosteroids. Should other classes of drugs, antihistamines, anticholinergic drugs or cromones be developed as suspension products, then the same issues would apply here with regard to the need for a clinical study and a PK study.
DR. JUSKO: I'm not really sure that drugs with other mechanisms would require, as indicated, the lengthy period for full onset of effect. That's my concern. If they did not require the full 2 weeks for a good effect, then these other test procedures, the 1- or 2-day pharmacodynamic assessments, could become highly relevant.
DR. ADAMS: I would agree with that. We would deal with that on a drug class basis and work with the Pulmonary Division in terms of the study designs.
DR. BYRN: Judy?
DR. BOEHLERT: I have a question, I guess, with your use of terminology. By using the term "equivalent local delivery," are we implying more than we can deliver? You might get equivalent local action or activity or efficacy, but indeed may not have equivalent local delivery, because you don't have the dose-response relationship that you would want. I think I'm being confusing, but if you don't have dose response, then you may not have equivalent delivery of the drug, yet you might have equivalent activity.
DR. ADAMS: I think what you're saying is that the way we worded these questions, one might assume that in fact we meant that there was equivalent local delivery to the sites of action. And what we really mean is that there's equivalence in therapeutic response, recognizing the fact that different amounts of drug between test and reference products could be delivered to sites of action. But because the study is done at the plateau of response, it's going to have the same therapeutic effect.
DR. BOEHLERT: That is indeed my concern.
DR. ADAMS: Yes.
DR. LEE: I just want to go back to Bill's question. Is Bill requesting that the wording be made more specific?
DR. JUSKO: Perhaps it should because everything we've seen and discussed pertains to only this one class of drugs.
DR. BYRN: Just a comment. This is more about how a guidance should be written, I guess. The issue related to all this is that, as Bill is saying, it's related to one class of drugs, yet the guidance appears general. I don't know whether we should put something in the guidance saying it's only for this class, and if there's another class of drugs, there might be a supplement or revision.
DR. ADAMS: Yes, the guidance will be very clear that the particular designs that we're proposing are for the corticosteroids. For instance, the adrenal axis suppression test would be inappropriate for the antihistamines. We would ask for a different package of information for the systemic absorption if PK could not be determined in that case. So, there's an issue about drug class specificity which will be clear in the guidance.
DR. BYRN: Could I ask a question about particle size? If, say, some analytical chemists or pharmaceutical scientists could develop a method to measure particle size in suspension and show equivalence, what would be the effect of that on the deliberations of the committee, if you could show equivalence of particle size with a validated method?
DR. ADAMS: I would say that the paradigm for the approaches that we're taking to the committee today did not include that particular issue because we don't have that situation at the present time. Should validated particle size and particle size distribution methodology become available in the future, then a question for the OINDP technical committee is, would we be content with solely in vitro comparative testing for suspension products as well as for solutions? I would say that we would cross that bridge when we come to it. It's not an option at the present time.
DR. BYRN: Is the problem in particle size that there are carriers in the suspension that the active is bound to? Is that the problem?
DR. ROBERT MEYER: The problem really comes down to the fact that there are things like methylcellulose in these suspensions, present at pretty high proportions compared to the active drug, which is generally in fairly low concentration. So, it is a matter of interference, I think, as much as any binding --
DR. BYRN: But if there were fractionation methods or other approaches developed, there may be ways to do it. As a person who's involved in analysis, I hate to hear somebody say there is no method available. It makes me interested.
DR. ROBERT MEYER: I do want to stress the no current method.
DR. BYRN: Right.
DR. BARR: An alternative approach would possibly be to go to some dissolution test, because the problem, of course, with particle size alone is that you have all the other factors that may affect the overall release of drug. So, ultimately there may be some dissolution procedure.
DR. ADAMS: That's right. Dissolution is something that has been suggested in the past as a means of addressing that issue. In fact, we've looked at that a little bit in one of our laboratories.
Fractionation alone, in the absence of specificity between different fractions, would not be adequate.
DR. BYRN: Yes. You'd have to be able to show specificity. You could do dissolution and fractionation. But this isn't the subject of our discussion.
So, let's get back to question number 1. Is there other committee input on topic number one?
DR. BYRN: I think we have about 10 minutes. We need to decide, I guess, if we're reaching a consensus. If we agree with the subcommittee, then we would be recommending that a local delivery study at the lowest active dose for 2 weeks be required, and we would be supporting that recommendation. Are there any concerns about that on the committee? Any other discussion?
DR. JUSKO: The way the question is formulated, at face value the answer seems to be no. It's not possible to confirm equivalent local delivery of two products by this type of test.
DR. BYRN: So, your thinking, Bill, is that we don't agree with this recommendation, that it's too -- well, go ahead and elaborate.
DR. JUSKO: I like the phraseology that Dr. Meyer used: this type of test, while it may be advisable for reassurance purposes, in no way provides confirmation of bioequivalence or clinical equivalence.
DR. BYRN: Okay. I guess we're talking about writing a guidance which would use a number of methods. Maybe Dr. Adams can explain what would be in the guidance. I guess this method by itself would not be in the guidance. Is that right?
DR. ADAMS: Yes. As Dr. Meyer indicated in his slide, there's a package of information with the formulation/device recommendations and the PK and the rhinitis studies. Furthermore, acceptable equivalence shown in the rhinitis study does not trump the in vitro data. The in vitro data must show equivalence. We would in no way ask just for the rhinitis study without the other information.
DR. BYRN: Does that clarify that, Bill? So, we're talking about a guidance that would have a number of components, including the 2-week study, but the issue is, do we agree that the 2-week study should be included with those components? I guess the choices are more clinical studies or no clinical studies.
DR. JUSKO: I find myself most in agreement with the statement on slide 8, a clinical rhinitis study would be useful to confirm that whatever unknowns remain after establishing equivalence through in vitro performance and pharmacokinetic metrics are not clinically important.
DR. BYRN: So, it's as a confirmatory study is what you're saying, Bill.
DR. JUSKO: Yes.
DR. BYRN: I think that's the intent of the question.
DR. VENITZ: Can I ask a follow-up question to that because I think I'm with Bill Jusko on this. Is the subcommittee proposing that this study is required? That means everybody has to do it, even if there are no unknowns left after the in vitro and the PK package has been reviewed? Is that what the subcommittee proposes? I guess I'm asking Wally.
DR. ADAMS: We're saying that for the suspension products at the present time, Dr. Venitz, that there is an unknown, which is the particle size distribution.
DR. VENITZ: So, by default, it would be required for those products to do a clinical study.
DR. ADAMS: Yes, and that would be written into the guidance. And even if a validated particle size distribution method becomes available, the issue would have to go back to our internal technical committee to discuss whether we would be happy with scientifically feeling that the in vitro data alone would support equivalence. That's a separate issue, should a validated particle size distribution method become available.
DR. VENITZ: So, right now if the in vitro package and the PK package and the clinical package all demonstrate bioequivalence, that product is approvable?
DR. ADAMS: Yes, it is.
DR. VENITZ: If the in vitro package or the PK package show bioinequivalence, regardless of the clinical study, that is not approvable?
DR. ADAMS: That's correct.
DR. VENITZ: If we had a test for particle sizing, and that was a validated test, and the in vitro package, the particle size package, and the PK package show bioequivalence, a clinical study would still be required?
DR. ADAMS: Until we take the issue to the OINDP technical committee and obtain agreement from within the committee, and at higher levels of management, that it is acceptable not to ask for the rhinitis study.
There are various routes you could take. For instance, it might be that a PK study and no rhinitis study might be appropriate as well. So, we would have to discuss what the various options are. That decision has not been made at the present time.
DR. VENITZ: Okay.
DR. BYRN: Any other comments?
DR. BYRN: I think we have consensus on topic one.
Shall we go to topic two now? Topic two really relates, I think, to the fact that there is not confidence in -- would you summarize what you think topic two relates to, Dr. Adams?
DR. ADAMS: I'd be happy to.
DR. ROBERT MEYER: I think the upshot of this is that we really brought two questions to the committee that are somewhat split out. One of them, in the way the questions were phrased to the committee, was perhaps a bit covert, or not explicit. That is, should a clinical study be done at a single dose, and if so, should it be a traditional clinical study, or are there reasons to allow for or prefer an EEU study or day-in-the-park study, given the question being put to the clinical study?
DR. BYRN: And you're recommending that neither an EEU nor a day-in-the-park study would be acceptable. Right?
DR. ROBERT MEYER: Yes. There was a consensus coming out of the subcommittee that if a study were to be done -- and again, there was not full consensus on the requirement for that -- that it should be the more naturalistic 2-week study at the lowest labeled dose.
DR. BYRN: Now, we've already said a study should be done. The whole committee has reached consensus on that.
DR. ROBERT MEYER: Right, and in fact, as long as everybody understands this as being explicit rather than implicit, if you've reached consensus on topic one, you may have already reached consensus that topic two is --
DR. BYRN: We may have, but I think we should discuss. So, what we're saying now in topic two is saying we're going to do a study. It's going to be 2 weeks. Under this category we were just discussing, we're not going to recommend, at least at this time, a placebo-controlled in-the-park study or an EEU study. We recommend a 2-week study.
So, let's have some discussion. Does anybody have a problem with that? Again, this is recommended by the committee.
DR. MARVIN MEYER: Just a quick question. Which are the pivotal studies in the NDA review? The natural study?
DR. ROBERT MEYER: Yes.
DR. BYRN: So, this would parallel an NDA, in effect.
DR. ROBERT MEYER: Yes, it would.
DR. JUSKO: My comment on this one is a little bit of a repetition. Once again, if this pertains to corticosteroid suspensions, it is entirely reasonable, but if a new class of drugs came up, that should be addressed separately.
DR. BYRN: Now, I think it's pretty clear that if a new class of drugs comes, the committee would want re-evaluation of a guidance. I think the agency would, too, from what I'm hearing.
DR. ROBERT MEYER: Yes, I think it depends a little bit on just how much of a departure it is. I would point out that the first study that Dr. Chowdhury showed was a solution product, but it was also not a corticosteroid product, and that was a day-in-the-park study. We've not seen differences to date between suspension and solution products, or between drug classes in the failure to show a good dose response, nor in the ability of these other alternative study designs to be more discriminatory of dose.
So, I think we wrote the guidance to be fairly general, but with an understanding of what the current universe is. I think if that universe were to change, we might need to take that back. But I think it would have to be a change in the universe as it is for these drugs right now.
DR. BYRN: Is that okay with you, Dr. Jusko?
DR. JUSKO: In part, but I find that first study to be the most flawed, with the baseline being so weak that it would be impossible in that type of study to see real efficacy when you're looking for an improvement from a possible range of 30, when the baseline starts at 10 and you look for a score to drop below 10. It's just awfully difficult to see changes.
DR. ROBERT MEYER: I guess my point is that if we simply saw, say, for whatever reason, a suspension antihistamine nasal spray come along, and no data from the NDA suggested that we should view it differently in terms of the sensitivity or discriminatory ability of these studies, then I don't think we'd need to rethink the guidance. I think there are a number of things that could change that would lead us to come back to you folks, and we might be talking about new methods of assessing drugs with an ability to better discriminate between doses. It might mean being able to particle size within the suspension. There are things that could change that, and we understand what you're saying, but we did write this to be general for what we know now about these drugs.
DR. BYRN: So, I think we're reaching consensus on topic two, that we would require a 2-week placebo-controlled study.
Go ahead, Wallace.
DR. ADAMS: Dr. Byrn, I just wanted to supplement what Bob said. If there were another drug, let's say a suspension antihistamine, to come along, we would intend to use the present paradigm in this guidance for that drug, in terms of what is the universe of drugs that we're talking about here. So, the present paradigm would apply not only to corticosteroids but it would apply to other products, should they be available as suspensions.
Yes, we'd have to change some aspects of it in terms of the systemic absorption study. But the basic paradigm would be the PK study and a rhinitis study conducted for multiple weeks, I would presume.
DR. MARVIN MEYER: Maybe this relates more to how a guidance works. The draft guidance says the guidance covers studies of prescription corticosteroids, antihistamines, and anticholinergic products. So, if one includes what we are talking about in the framework of that draft guidance, one would assume then that the anticholinergics and antihistamines also require a 2-week natural exposure study, without some specific statement of categorization of the drugs, in line with what Bill's concerns are.
DR. ROBERT MEYER: Yes, I think we note the concern is the best way to put it at this point. We'll consider that in the redraft.
DR. BYRN: Yes, I think it's appropriate that it's just been noted because the agency would, I'm sure, not apply a guidance unless it was appropriate, unless it had been shown to be appropriate in a submission. So, because it is still just a guidance, if it's not appropriate, some action will be taken.
DR. DOULL: I think it might be useful, in doing this guidance, to make the language that says lowest active dose a little more precise. Lowest active. You're talking clinical dose. In the dose response slide that you gave, you have a toxicity dose response and you have an efficacy dose response. So, you need to tell us which dose in fact you're looking at. In that case you're looking at --
DR. ROBERT MEYER: Yes, it will. And in fact, the subcommittee recommended it be the lowest labeled dose. We, I think somewhat purposely, chose a vague term there because it wasn't clear to us. There are various ways to define the lowest dose. There's the lowest feasible dose, there's the lowest dose that might be active, or there's the lowest labeled dose. The subcommittee felt unanimously that it would be the lowest labeled dose that would be examined for the efficacy purposes, and that a higher dose should be examined for the systemic bioavailability purposes.
DR. DOULL: That's fine, so long as you don't call it a threshold.
DR. BYRN: Any other comments? Wallace?
DR. ADAMS: Dr. Byrn, it would be helpful for us if we could have a vote on these two questions rather than simply a consensus.
DR. BYRN: Okay. When we say a vote, let's just go ahead and have an aye or nay vote on the two questions. So, question 1 would be, as topic one is stated, do we agree with topic one? We would say the committee agrees with the OINDP Subcommittee regarding its recommendations concerning the conduct of a local delivery study based on the lowest active dose and a traditional 2-week placebo-controlled rhinitis study considering the comments we've had on the lowest active dose, and all the other comments we've had.
So, we'll ask for a vote now. All that are in favor of that, please say aye.
(A chorus of ayes.)
DR. BYRN: Opposed?
DR. ADAMS: Can we have a show of hands on that so we can get a count?
DR. BYRN: Okay, all in favor? And I guess only official members are voting. I'm not sure who that is.
DR. BYRN: Raise your hand.
(A show of hands.)
DR. BYRN: Is that 10, Nancy? Eleven? Eleven in favor and none opposed.
DR. ADAMS: Eleven to zero then?
DR. BYRN: Eleven to zero.
And then the second topic, the committee agrees with the OINDP Subcommittee regarding its recommendations that the local study be based on the lowest active dose. I guess that really covers it, doesn't it? Do you want a specific consensus against a day-in-the-park and an EEU, or does that cover it?
DR. ROBERT MEYER: I think if the committee's comfortable with the one precluding the other, then --
DR. BYRN: The way it says it, a traditional 2-week placebo-controlled, I think it covers it.
DR. BARR: I would just like to ask a question, though, because I think again it comes back to this issue of duration. If you have compounds that you expect to have long duration, that seems to be the primary reason for the traditional 2-week. Is that correct? For example, the cromolyn type or the corticosteroid type, you would expect you would have to have longer exposure in order to determine efficacy. But would that be true for a sympathomimetic or an anticholinergic?
DR. CHOWDHURY: Currently the drugs which are approved and available for allergic rhinitis are antihistamines, anticholinergics, steroids, and cromolyn. And to answer the question, steroids would require a couple of days to have efficacy. The question here is the suspensions, and all the steroids are suspensions. So, for suspensions, we're talking about steroids, and therefore we would require a couple of days for the drug to be active. So, one day of dosing would not necessarily mean the drug would have its efficacy.
DR. BARR: My question related to the other compounds. Would alternative methods be appropriate if duration of activity wasn't a consideration, because they appear to be more sensitive in some ways?
DR. BYRN: Dr. Meyer.
DR. ROBERT MEYER: Yes, I was just discussing this with Dr. Adams. I think that perhaps we actually should ask for the vote on topic two. The subcommittee did recommend against giving the option of a park study or an EEU study. I think we should get a vote from the committee whether in fact the guidance should continue to include these as options, in addition to the placebo-controlled.
DR. BYRN: Okay, let's finish Dr. Barr's question, though. I think your question, Bill, really is addressed because we've sort of agreed that we're going to reevaluate the guidance if it involves a suspension other than a steroid.
DR. BARR: Right, and that's really what I was dealing with.
DR. BYRN: That's the general consensus of all of us here, I think.
DR. ROBERT MEYER: We just need to be clear. I think that it is a guidance, and should we learn something else that changes the way that's applied, we may either rethink the guidance or choose to apply it somewhat differently. But we don't want to leave here with the understanding of the committee that we absolutely will come back to the committee for these kind of changes, should something evolve.
DR. BYRN: Right. We're just giving kind of a general policy overview on this.
The second topic, then, would be that the committee would agree with the OINDP Subcommittee recommendation that a day-in-the-park study and an EEU study would not be sufficient. I guess we can just say that. Would not be an option, and maybe that's a better term.
Is there any discussion of that, any further discussion?
DR. BYRN: All in favor, please raise your right or left hand.
(A show of hands.)
DR. BYRN: Ten in favor.
(A show of hands.)
DR. BYRN: One opposed. So, that is also a consensus.
Any other discussion?
DR. BYRN: Let's take a break. We're running about 10 minutes behind. Let's cut five minutes off the break, if we could. So, we'll come back here at 10:50.
DR. BYRN: I think we'll get started. We have three new members up here at the table, but we won't introduce you, if you don't mind, until you're actually speaking because some additional members from the CMC group are coming. So, we'll go ahead and go to the Nonclinical Studies Subcommittee report, and John Doull will introduce the issues.
DR. DOULL: Well, I've been asked to introduce this issue. I'll be brief because I know we're all anxious to hear the reports of the working groups.
Those of you that have been on this committee for a while will probably recall that our subcommittee, the Nonclinical Studies Subcommittee, was created about two years ago. The charge for this committee was to evaluate the use of nonclinical studies in the development of drugs. In developing the charge to the committee, we really have pushed this a little and we're now more focused on the use of biomarkers to identify both the effects of drugs and also particularly to identify adverse effects, toxicity.
We had a second charge, and that second charge was to link nonclinical studies that could be also used in the clinical evaluation of drugs. So, those were our scientific objectives.
We were also asked to facilitate the interaction of our subcommittee and Food and Drug with industry, with academia, with other public groups, and we've tried to do that. Yesterday Helen called that leveraging, and I thought about that last night. I'm not sure it's leveraging. It's more win-win, hopefully.
I borrowed a couple of slides from Jim MacGregor from his PowerPoint, and let me turn to those. Those are the objectives. I think I've talked about those.
The next slide has the history. In order to decide which biomarkers we would focus on initially, we started out in our committee activities by bringing in a lot of experts in different areas. We had several people who came to talk to us about genomics and proteomics and the other "omics". We evaluated that and decided the drug houses are really using those techniques powerfully in the development of new drugs, but they are not quite at the stage where we felt that it would be useful to have a working committee on the use of genomics or proteomics in toxicology, or perhaps even in efficacy. So, we did not include that as a working group at the present time.
We also had a number of experts who came and talked to us about noninvasive imaging, both PET scanning and NMR. That one we were really intrigued with, and we thought perhaps that was one where we could recommend a working group to develop some of those ideas. Since then there are some difficulties with PET scanning. It isn't at the stage yet where it's available in medical school teaching, for example. Few places have the equipment, so that one also we are not making as a recommendation for a working group at the present time.
Now, the third area that we talked about is the one Dr. Collins talked about yesterday. He talked about the need for biomarkers for liver injury, for cardiac injury, and for vascular injury. We felt that there are a lot of groups that are collaborative groups that are looking at liver injury and that it would be more profitable for our subcommittee to focus more on cardiac toxicity, biomarkers for cardiac effects, and biomarkers for vasculitis. And those are in fact the two committees which we agreed on.
We did that last year. By the fall we had sent out notices to the Federal Register, to scientific societies. We asked Food and Drug for suggestions. We asked our members for suggestions, and we got a slew of them. We sorted through all those, and in January we put together two panels, one for cardiac toxicity and the other for vasculitis. I thought, well, gee, we got that all done in January. Perhaps we can meet shortly and start this process. It took an immense amount of effort, and Jim MacGregor really worked long and hard to get through that process. We're learning, and one of the things we've learned is it takes a long time to get these working groups established.
But they are now established, and as you can see from the history there, we had the meeting in May, and in that meeting we met with the designated members of those two groups and got off to a start.
The next one indicates the members of the committee, and let me just go through that. Jim MacGregor, of course, is from NCTR CDER. Well, he was with CDER. He's now the designate for NCTR. And Dave Essayan is CBER. Dr. Reynolds is PhRMA. Joy Cavagnoro represents Bio. Jack Dean. Is his term up? He was on this committee, but I think his term is up. Anyhow, he's also from Sanofi. And Gloria and I are the two members from this committee that serve on the Subcommittee for Nonclinical Studies. Jay Goodman is a toxicologist from Michigan State. Ray Tennant is from NIEHS. He actually was concerned primarily with knockout mice, but he is now in charge of the genomics program at NIEHS, which I understand will be the lead in the genomics effort of this country. Dan Casciano is the new head of NCTR. So, there are a couple of new members on the committee since we originally got it formed. So, we are doing our best, Helen, to leverage these activities.
I'd like to go ahead at this point and introduce then our speakers today, and I'd like to introduce the vasculitis speaker first. This is Dr. William Kerns.
DR. KERNS: Thank you, John, for that introduction. I'm here as a representative of my committee, and this is still work in progress. We have met only one time, and I'm here to provide just an update, a report of what we have done to date.
Our committee is composed of members that represent approximately 50 percent from industry and 50 percent from academia and the regulatory side. David Essayan is our liaison from the agency representing CBER, and myself and Lester Schwartz are co-chairs of the committee.
Following the introduction from Dr. Doull and Dr. MacGregor, we met the first time on May 3-4 of this year, and we tried to interpret our charge, as we understood it. Following that meeting, we understand our charge the following way.
One, to first develop a common understanding of exactly what the problem is that we're here to resolve. As you noted from the previous slide, our membership is composed of a wide variety of disciplines, clinical toxicologists, pathologists, pharmacologists, immunologists, and so on. It was clear from our first meeting, within the first hour, that we all did not clearly understand the issue that we were there to discuss, and we did spend a lot of time on the first day trying to zero-base the discussion so that we all understood what we were talking about.
The primary reason for that is the term "vasculitis" is confusing to many, especially clinicians. When clinicians think of vasculitis, they usually think of hypersensitivity, drug-induced vasculitis. That is not what we were here to describe within this committee. It's something quite different. So, having clinicians on our team is, A, very important but, B, created some communication problems early on that we had to sort out.
Second, we were asked by Dr. Doull to address the criticality of the issue and understand whether or not this was a question that needed to be answered, and that's item 2.
And if so, develop an initial list of biomarkers that we might pursue.
And following that, then, in the second day we surfaced three or four other issues that will become very important for us to resolve as we move forward.
In the process of developing new biomarkers and new assays, the opportunity for intellectual property development is tremendous, and this will become an issue within the committee that we have to deal with as we move forward.
Secondly, funding issues are critically important in the research that needs to be done to discover and develop the assays and validate them in the next slide, validate them to the point where they become acceptable as decision making tests within Pharma, as well as within agencies around the world.
And lastly, resolving issues of confidentiality, both within the membership and between the membership and the agency. And this is an issue that we have yet to deal with, but one that we will have to come to understand more clearly so that we can all communicate more clearly within the team.
So, if we tackle the first issue, understanding the problem, I thought I would present a few slides so that those of you in the audience could understand the problem as we do. So, is this drug-induced vasculitis as we know it in humans, or is this drug-induced vascular injury as we see it in animals?
The clinical versus preclinical impressions I've already alluded to, but in clinical medicine drug-induced vasculitis is usually associated with hypersensitivity vasculitis, a specific morphological kind of disease from which patients usually recover when the drug is removed. Sometimes it gets worse, and they redevelop disease when you rechallenge them.
Preclinically in animal models, we don't see this syndrome. We see something quite different, and I'm going to show you what that looks like. Of the seven major categories of vasculitis, some of which are drug-induced in humans, none, or rarely any, are observed in routine toxicology studies in normal animals. This then becomes a problem.
I want to go through a few slides to help educate the audience as to exactly what we're talking about. Unfortunately, we didn't have these slides when we first met, but since we've exchanged them by e-mail. I think there are four or five currently approved marketed products on the U.S. market that cause lesions as you're seeing in rodents and dogs and sometimes primates. This happens to be a mesenteric artery from a rat treated with fenoldopam mesylate, a DA-1 agonist. Fenoldopam is an approved drug for hypertension in critical care units.
The lesion is characterized macroscopically by intense medial hemorrhage in the mesenteric artery. And if you look at the artery ultrastructurally, you can see tremendous compromise of the vascular endothelium. The endothelium is swollen. There are white blood cells attached. The endothelial cells are retracted, and in some cases endothelial cells can be seen sloughing from the surface. The endothelial cells I was alluding to you can see here sloughing from the surface.
You can see down here normal medial smooth muscle. If you remember from the previous slide all the hemorrhage in the media, you can see the cavernous areas where the medial smooth muscle has disappeared, and the empty spaces are filled with red blood cells.
And if you look at it from another perspective in transmission electron microscopy, you can see that not only have red blood cells replaced the normal media, but the media is filled with platelets as well.
I show you these slides because it should bring to mind different kinds of biomarkers that we might pursue in this effort. And also for those of you that know, morphologically this syndrome is very different from what we see in humans with drug-induced vasculitis.
If you look at an arterial lesion three days after injury, you can see that unlike human disease, there are no eosinophils in this lesion, and the lesion is primarily characterized by a neutrophilic inflammatory response. There's separation of the endothelium from the internal elastic lamina. There's medial smooth muscle necrosis and hemorrhage, and there's inflammation in the periadventitial tissues that is primarily at this stage mononuclear.
Enough about morphology, but the point being that this syndrome that we're here to characterize is different than what we routinely see in humans. That doesn't make it unimportant. It makes it perhaps more important because we need to understand how to detect these kinds of changes if they occur in humans.
So, number two, confirming the criticality and validating the problem. In the 1980s and 1990s, we worked with a variety of different cardiovascular agents that at high doses caused hypotension, reflex tachycardia, myocardial necrosis that Dr. Holt will talk about, and also vascular disease. And we were quite comfortable with that for reasons that, on reflection, may not seem realistic, but quite comfortable with that and thinking that if we did not induce hypotension and reflex tachycardia in humans then we would not induce vascular disease. This is clearly true for myocardial toxicity, but unproven for vascular toxicity.
So, we now have a series of new drugs that we're working with in Pharma and within the agency that cause vascular disease but they do not cause changes in blood pressure and heart rate.
Once again, lesions that we see in humans are not observed in routine toxicity studies in normal animals. The common drug-induced lesions that we do see in animals are not known to occur in humans and have unknown relevance. There are, as I said, five marketed products on the market that cause these lesions.
But lastly and importantly, there are no methods for detecting drug-induced vascular injury, as I've described it, prospectively in animals or humans.
So, drug-induced vascular injury in animals does warrant an investment of resources to define early and predictive biomarkers of injury and possibly mechanism. The EWG then recommends proceeding to organize the funds and the process necessary to develop and validate specific markers.
The next item we took in our charge was then to develop a list of prospective biomarkers. Although the pathogenesis of vascular injury in animals is not clear, it is clear to the pathologists that have looked at these changes that the initial events appear to occur by perturbations of endothelial integrity. And secondly, it's clear to many of us who've worked in the field that the changes that we see are not a result of direct toxic action of compounds on the endothelium, but more importantly probably an effect of altered function, changes in blood flow, changes in fluid dynamics, changes in shear stress, and lastly, changes in hoop stress within the vascular wall, and that these factors are probably more important than direct toxicity.
Endothelial compromise, then, appears to play an important early role in the development of this syndrome, and therefore our biomarkers might be targeted to endothelial compromise.
So, the charge then is to develop noninvasive methods to monitor endothelial and vascular smooth muscle cell damage in a variety of preclinical animal species.
Equally important, in the inflammatory process that ensues in this disease, there are many other inflammatory cells, neutrophils and platelets, involved in the process, and we're also thinking that these platelets and neutrophils, taken ex vivo, might be able to tell us something with regard to new biomarkers, proteins that might be upregulated in these cells that we can look at ex vivo in animals and potentially in humans.
And lastly and importantly and probably most difficult, once new markers are identified, then validating the new marker both in preclinical species and transferring that to practice in phase 1.
The markers that we are targeting initially, as of our initial meeting and as a result of several e-mail discussions in the interim, would be vascular endothelial growth factor and its soluble receptor, sFlt-1; von Willebrand factor; thrombomodulin; and CD62E, E-selectin.
Circulating endothelial cells. There have been a few publications recently from Europe looking at circulating endothelial cells following angioplasty. I can tell you just briefly the baseline for circulating endothelial cells is undetectable, and post-angioplasty, you can pick up 6 to 10 cells per cubic micrometer. If we can translate that to this kind of a model, that could be a very sensitive and specific indicator of vascular injury, and we need to look at funding research in this area.
VCAM-1, soluble beta thrombomodulin, P-selectin. Endothelin 1, also an important soluble factor to look at. PECAM, ICAM-1. And lastly, soluble FAS ligand. I think there's some data that's evolving showing that the endothelial cell death that I showed you in the scanning EM is probably associated with apoptosis and not necrosis. A lot more work needs to be done in this area, but if that is true, we might be able to detect soluble FAS ligand in the plasma as an acute marker of endothelial compromise.
Additionally, with regard to biomarkers and other "omics," as Dr. Doull refers to, I think there's tremendous opportunity here to look at the cells involved in the pathogenesis of these lesions for different expression patterns of different messages, different proteins, and so on. I think there's great opportunity here to do that if we can put together the right mechanism.
Funding. Critically important to the success of our mission, and it's very early days yet in my committee. To be quite honest, we're struggling to figure out how to accomplish this, and we're looking for guidance from your committee. I have made some phone calls to NIEHS and there are potential funding mechanisms there, and I've been speaking with Ray Tennant and one of his colleagues. Yesterday I spoke with Denise Robenson at ILSI. ILSI does have a reputation of developing large projects like mouse tumors and hepatotoxicity and so on and funding them. They would be interested to see an application. That's just a beginning, unfortunately.
I think eventually we would anticipate Pharma would be interested in providing funds to support research in this area, but it's early days yet. Any advice or thoughts you may have, I would be appreciative.
With regard to funding, then, whatever the mechanism, I think we need to be looking at animal model development. As I said early on, the current animal models don't really predict what actually happens in humans, and what our current animals predict is something that we think doesn't happen in humans, but we want to prove that it doesn't by developing the right biomarkers. We need animal models that predict what really does happen in humans, and I think this is an area of research that we might look into, as well as the biomarkers.
We need novel and specific markers of endothelial and vascular injury that can be validated and reduced to practice. The monies and the research efforts will go into doing this.
Our immediate plans. We have a conference call lined up for the 31st of July to continue our discussions and expand and explore what I'm telling you today. I think we need to submit an ILSI application, if that's what the committee wants to do. I haven't mentioned this to my committee yet, so I need to review that with them. We need to look at the other funding mechanisms through NIEHS, which we're actively exploring. We're looking at setting up a workshop in collaboration with the ACT and/or the SOT meetings coming up in the fall and spring of '01 and '02. At the SOT meeting in '02, we have already organized a workshop on vascular toxicity and biomarkers. Dr. Schwartz and I are co-chairing that, and that is on the slate to be presented and we hope to organize some sidebar meetings around that for a broader participation and discussion.
There are the IP issues that I mentioned before that we are looking to understand more clearly. Maybe it isn't an issue, but we need to understand it more clearly. We need to understand the issues of confidentiality so that we can communicate more effectively with the agency, to understand clearly what the issues are, what they see if possible, and how we might help. Validation strategies are also key.
So, lastly, our recommendation then is that this particular topic does warrant the investment of further energies and monies to bring new biomarkers to the table that we can use in preclinical and clinical medicine. The methods need to be noninvasive. They need to be robust. They need to be specific. They need to be sensitive. And we need to be able to reduce them to practice so that we can translate them to phase I.
Thank you. I'm happy to answer any questions.
DR. DOULL: Thanks, Bill.
Our other working group is the cardiotox working group, and Dr. Gordon Holt is going to tell us about activities of that group.
DR. HOLT: I'm very pleased to be here to present our findings. From the moment that we constituted, it was, from a personal standpoint, a great relief really to find that we had been constituted with a good group with diverse experience from Pharma, academia, and then the governmental backgrounds to help us with all the ins and outs of things that we needed to consider, as you can well imagine.
Perhaps you're hearing between the lines right now, that frankly, to a certain extent, our work is in progress. Our particular charges are likely to change in tune as time goes on. Our particular goals are likely to change as well, too.
I wanted to emphasize, too, that Ken Wallace wasn't able to be here today to serve as chairman in talking to you about what is going on, so I get the privilege, since I live 10 miles up the street.
Major points to be considered, as Dr. Kerns has just described. In all cases, we found quite quickly, it's very much needed to make sure that we're talking the same language and that we believe we're sitting at the table for the same reason. After we did that, we were able to come up with key questions, what we thought were the real pressure points for the information we needed to gather to address our charge. We came up with some specific things that we can be doing in the very near future to address these charges, and I'll talk about each of those in time. Then we also started amassing a list of resources that we were quite clear we did not have, for which we'll be looking to the committee at large for input on how we can do these things.
Again, I emphasize this is work in progress, and if I say something that seems challenging, then I really strongly encourage everybody to bring it to our attention so we can move quickly toward some tangible outcome.
In terms of our charge -- this was given to us -- identify opportunities for collaboration, develop valid markers that effectively predict drug-induced myocardial toxicity. We quickly tuned it a bit. What we believe we're trying to do is to find a path for implementation because that, as far as we are able to identify, does not clearly exist right now. So, find markers, find a path to implement them, at the same time clarify what the benefits would be of doing this action, and then finally to identify resources that are needed to bring this to bear.
In terms of getting our language straight, one person's biomarker is another person's target, so we had to be sure that we were working in the same zone with our language. We quickly discerned that there are biomarkers we could break down into major categories of susceptibility, exposure, and effect, and then subdivide it further. It's quite obvious that it's a matter of semantics. You kind of run out of words to separate the difference between exposure and effect.
Nonetheless, we believe we're down at the bottom end of the spectrum where we believe that we should be focusing our attentions on effect, in particular effect that takes a patient or an animal from a state of integrity, wellness, homeostasis, into something that is not that, stress, and perhaps injury/damage. And injury/damage in our minds is that next step where the patient, whether it be a preclinical animal or a human, has actually had some effect that is long-lasting and adverse to the animal.
We wanted to also figure out what the characteristics of an ideal biomarker are. We discerned that we needed to have some idea of a goal in mind for what we were shooting for. I won't go into this list in detail. It's just here as a matter of record, and I emphasize ideal here. This is clearly a wish list because I think in many circumstances we and regulatory agencies will have to take what they get, what biology presents with. But generally speaking, I think there's probably going to be useful agreement that any biomarker will have to be specific to toxicity. It has to be sensitive, predictive, robust. There's no point in going through these things if all the work has to be done in a very expensive academic or very high IQ setting. That's just not going to hold true.
As you just heard from Dr. Kerns in the case of vasculitis, this is going to be a very challenging issue, whether preclinical and clinical markers will bridge both forward and backwards in the case of vasculitis. It looks like it will be less of an issue with cardiotoxicity. There are examples that do bridge already. And then ideally these would be noninvasive. In the case of cardiotoxicity, it's an important point to stress that you don't want to induce cardiac damage in trying to monitor it.
Key questions that we came up with are listed here. I'll briefly touch on each of those in turn. What cardiotoxicity markers are already accepted? Can we look to existing models and get a paradigm in place for what we should do next?
We believed that we had to split that into two zones. One is what the FDA has accepted, and then what the toxicology research, academic, and industrial community is doing right now. Those are two different commodities, we felt.
How can new biomarkers be quickly identified and validated? There is an existing committee, the ICCVAM committee, that we looked to for some guidance on paradigms for bringing new markers on board. We also looked to the toxicology community to help us with this task, and we are, in turn, addressing both of these. I'll talk about that briefly.
Then also, as you've also already heard from Dr. Kerns, we have considered what the FDA could do to enable this process, and particularly with confidentiality and some kind of funding vehicle.
So, with respect to the current cardiotoxicity biomarkers, I can just summarize a lot of work that we did in our two days of sessions in trying to identify whether or not there are existing guidelines. It looks like there are no validated biomarkers for toxicity. Again, we're talking about serum markers or something like that. QTc is not covered under our charge as a biomarker, so we didn't consider that further.
So, the FDA doesn't have an accepted guideline. How about the community? In fact, I should register there was a certain degree of surprise here, because there are some biomarkers that I'll talk about in a second. Troponins are highly regarded by most toxicologists as very good markers of toxicity, but they're really not validated. I'll put that forward. There is quite a long shopping list that we went through, which I've just listed here for your information, of proteins and changes that are well known, or at least somewhat well known, in the literature to be associated with cardiotoxicity. But we concluded quite quickly that troponins are far and away the most advanced of any of them. They are approved for some aspects of myocardial infarction in the regulatory community, but not for cardiotoxicity.
The key thing here is validation. With all these markers, how can this information be bridged into the regulatory setting? It's all about validation and some kind of consensus-reaching.
I'll also emphasize, too, that we had a strong sense -- and in fact, to a certain extent, personal knowledge -- that these "omics" are in fact in the wings and they have identified very, very compelling markers, and we want to be able to bring this information on board for us as well as to help advance that so that it's a community-wide process.
Again, we feel that while there are probably lots of statistically significant identifications that have already been made out there, even without knowing more about what's going on, they too will face a validation problem.
So, how to validate? The group is looking for models to help us to identify how validation already occurs, and also how we might suggest that things go on in the future. The ICCVAM, the Interagency Coordinating Committee on Validation of Alternative Methods, already exists and has a very important role in bringing new marker paradigms into regulatory acceptance. These tend to be investigator-driven. That is, the person comes forward and says, I'd like to get acceptance on this.
They have a very well-described path -- not so much a path but a set of attainments that they look for markers to be advanced to, both in animal testing and human testing, frankly quite an involved process. The difficulty as we perceived it is that it wasn't as clearly milestone-driven as one would have hoped, and it had a certain degree of an all-or-nothing policy to it. But nonetheless, it's an important guideline for us to look to in order to see if there's a way to help bring things to regulatory acceptance. We very much hope that the ICCVAM members will help us to explore if there's any possible interface between this group and our group to see if we can bring things forward.
We also looked for methods whereby we can get consensus findings from the toxicology community, and we already have a particular example that we propose to do this with. These may well be driven by expert working group people. Many people in the group, we came to find out, know people who know people who can basically bring some of the power of the toxicology community to bear on the kinds of things that we're interested in.
We hope to be able to establish some kind of expert consensus on specific biomarkers. This is probably not going to be a huge finding exercise, but in fact a very specific method.
We propose using toxicology conferences as forums. These are public forums with speakers and platforms, discussion, the usual sort of things that go on in these conferences, to reach some kind of a gathering of information that will eventually lead to a report. And our working hypothesis right now is that that will be akin to an NIH consensus conference. Not binding, but just a way of collecting information.
That's very effective for the kind of information that's already in the public domain. What it does not address is the information, of which we have a strong sense and, to a certain extent, personal knowledge, about new markers out there from the new technologies that have recently come online, where the discoverers and innovators are likely to require nondisclosure to ensure their market preservation, at least for a certain period of time.
How can that be dealt with? It's going to be complicated because there are clearly going to be some complexities with multi-party confidentiality. We don't have any suggestions for how to deal with that other than to say we're heartily enthusiastic to do what we can to help in any way to bring that to bear. Perhaps there is some subgroup forming whereby we can bring at least some information into a private forum so we can make sure that we're seeing the best information available.
As Dr. Kerns has already talked about, there's almost certainly going to be some need for funding resources. The idea here is that you probably need to have something to help support academic researchers to focus on specific things that the agency and the committees know they need to find more information on, and there's got to be some enablement there by some funding.
There also is likely to be some need for a clearinghouse, a warehouse of samples and standards too so that everybody can be testing to the same methods and qualities. There may come a time when there's a need to have a specific independent testing method done to make sure that everything is going along as it's supposed to be.
How is this going to be accomplished? Probably industry and PhRMA should be looked to. Even as an industry member myself, I think that industry should be footing some of this bill. It's really no different than a patent application. If industry knows what's supposed to be accomplished, and what will be accomplished with success, then they can help work that into their cost of doing business.
Certainly the existing granting agencies and the NIH universe are also a great place to do some of these things. It will require some integration.
And last but not least, the FDA hopefully can bring some resources to bear on this.
What tangible things can we do, and are we doing right now, to move things forward? We are holding a troponin workshop, internal to the expert working group although open to the public, to be held, I guess, here on the 29th. This is again focused on troponins. We will be reviewing existing data. We will be trying to identify data gaps in the validation pathway as we see it, and then we'll be drafting suggestions on how to take troponin as a particular example of a new marker that we believe can be brought on board or, at the very least, can be put through its paces to let us know whether it can be brought on board.
Secondly, we have already taken the privilege of using some contacts within the group to arrange a fall workshop at the American College of Toxicology, devoted mostly to troponins. We've already scheduled this and started looking for speakers. This, of course, will be open to conference attendees, and there will be a presentation of current biomarkers of myocardial injury, again heavily weighted toward troponins. Then we anticipate some sort of a satellite working group meeting, again open to the public, to review the status of troponins and also to get updates on novel reporters. That may well be the time when we're going to need to start addressing confidentiality.
What's the outcome of this? We really do believe that fairly quickly we can at least prioritize the markers that are out there right now for bringing them online to help with a better understanding of cardiotoxicity. We also believe that an outcome of this will be a helpful paradigm for bringing new markers on board, too.
I think I'll stop at that.
DR. BYRN: Thank you very much.
I think, because of time, are there any major questions anybody would like to ask of either of these two speakers?
DR. DOULL: I think our intent was simply to inform the committee about the kind of science that's going on and to acquaint you with some of the problems that the working groups have already brought to bear, which our committee, of course, will deal with in its future meetings.
DR. BYRN: Thanks very much, John.
Helen is now going to give a sort of overview or a what-next talk on these two issues.
MS. WINKLE: I'll try to make my talk real quick, since time is limited.
I do want to say to Dr. Doull, though, that I agree about the word "leveraging." I don't consider this leveraging either; I consider it more partnering. I've always had difficulty with that term, so I thought about it long and hard, too.
I want to thank Dr. Kerns and Dr. Holt for coming and giving us this overview of the expert working groups.
Just to remind the committee as to what these groups are responsible for, they're basically fact-finding groups for the subcommittee. They will bring the information that they come back with to the subcommittee, and the subcommittee then will, in turn, make recommendations to the full committee.
As I think most of you on the committee know, Dr. MacGregor was basically the champion of this subcommittee. He's worked very hard with Dr. Doull and others to get the subcommittee up and running. Also I think it's already been mentioned by Dr. Doull that Dr. MacGregor has left CDER and gone to NCTR.
At that time, there was some question as to what the future of this subcommittee should be. So, I want to talk a little bit about that just so you as the advisory committee will know what our thinking is in the agency. Dr. MacGregor and I talked many times with Dr. Woodcock and Dr. Casciano on this subject and have really been looking at the concept of possibly moving this subcommittee under the auspices of NCTR.
Basically the purpose of this committee, which I think Dr. Doull has already addressed, is to provide advice on improved scientific approaches to nonclinical drug development and to foster scientific collaboration or partnering.
Here are the objectives. I won't go through those. I think we've already talked about that. I basically want to talk about the future of this committee, as I said.
The committee will continue to focus on nonclinical safety assessments. We think this is very important. It's something that's very important to us at CDER. NCTR has the mandate and structure to lead in this area, so as I said, we've been having conversations within the agency as to whether to move this subcommittee under the affiliation of the NCTR Science Advisory Board, and those conversations have also included how this affiliation should be accomplished.
We've talked about the advantages of the transfer of the subcommittee. Already the subcommittee's liaison, Jim MacGregor, is part of NCTR. Also, the ICCVAM process within the agency, which has already been mentioned, resides in NCTR. NCTR is oriented toward doing toxicology research, and it has the resources to support that research. They also have a Science Advisory Board, which has experience in supporting working groups such as this.
And I may want to just back up a few minutes to talk about CDER's position on research. I think that most of you on the subcommittee know that our resources dedicated to research are limited in CDER. So, we feel that NCTR is in a much better position to support any of the research that comes out of these working groups. Basically they also have the resources to support the working groups. And NCTR -- I talked to Dr. Casciano on numerous occasions -- really has the interest of being involved more in this area.
However, should we decide to make these decisions, we feel that CDER is still going to play a very important role in the future of this subcommittee and with the recommendations that come out of this subcommittee because most of this is affecting how we make regulatory decisions on pharmaceuticals.
So, we will continue at CDER to support the NCSS, if it is moved, through participation in working groups. Based on its recommendations, we'll bring research and regulatory issues to the advisory committee so that we can have further discussion on these issues as they relate to our regulatory process. CDER will bring regulatory questions to NCTR's Science Advisory Board, as appropriate, that relate to this subject.
So, we still feel that we'll play a very active role in this committee, should it move to NCTR. We see this committee as very important in helping us set future standards, and we also see that there are important things that will come out of this subcommittee as far as our guidance development.
Basically where to from here? NCTR has not finalized a decision as to whether to adopt this committee as one of their own. They're convening a team right now to review the appropriateness of the subcommittee and make a determination whether it should, in fact, become a part of the Science Advisory Board. CDER will receive a report back from that team. Dr. Casciano said that he would hope to give this to me in the fall, which we will then in turn share with the advisory committee.
Until that time CDER will continue to take on responsibilities for this subcommittee. There are a lot of things happening with the subcommittee, including workshops, working groups, meetings, et cetera, and we'll continue to support those until a final decision has been made. So, I don't want you to think that this is sort of going to go down the tubes if we do make this transfer. In the interim, we'll continue to support it, and after that we'll be an active part.
Any questions, comments? Yes, sir.
DR. MARVIN MEYER: The focus of today's discussion seemed to be toxicology. Are there other issues that aren't toxicological that would fit within the Nonclinical Studies Subcommittee, and will they fit at NCTR?
MS. WINKLE: That's a good question. I think if we come across other issues, we'll have to make some decisions then how we want to handle them internally, if they're not toxicology issues. Right now, as you can see, all the issues that have come up are in the toxicology realm, but you're right, there are other questions that could arise.
DR. MARVIN MEYER: I'm thinking perhaps some of the issues from the bioequivalence side, with ways to determine permeability of drugs, in an in vitro setting. That wouldn't really fit necessarily with NCTR.
DR. WINKLE: Right. And we would probably bring those issues independently to the advisory committee.
Any other questions? Okay, thank you.
DR. KERNS: I just had a point for clarification. So, as I understand it, we're to do nothing different in the interim. We just proceed.
DR. WINKLE: That's right. Just proceed. And we'll continue to support you. We feel the work is very valuable, so we don't want it to sort of fall to the side while we're making this decision.
DR. KERNS: And you'll deal with the politics.
DR. WINKLE: Right. We'll deal with the politics.
DR. BYRN: It sounds like the prospects for funding at NCTR are more advantageous than at FDA. So, that could be an advantage to the investigators.
Is there any committee discussion on this issue? Any additional questions, concerns?
DR. DOULL: I might just say, Steve, that the subcommittee was, of course, very concerned about maintaining the link with CDER because we feel that what we do in this committee will have great impact for writing guidelines and regulatory approach and so on. So, we need a very strong link and a very effective link in order to make those things benefit in a two-way kind of situation, so that our feeling is that we are very concerned about this and we'll follow this very closely and, hopefully, can work out something that benefits us all.
DR. BYRN: Let's go on to the next session. I think we'll just go ahead. I had some discussions about whether we could break this up, but because of other meetings, I think we'll just go ahead until the CMC section is done. So, Dr. Chiu will start out and give us an overview of the CMC section and the AAPS workshop.
DR. CHIU: We are here to give you a progress report of this new initiative, the risk-based CMC review. We also are here to seek your advice on two questions.
Just to refresh your memory, we brought this topic to you last November, and this is a program with a three-tier process. We are actually in tier 1 of this process. Tier 1 is to establish scientific attributes and acceptance criteria for drug substance, drug products, microbiology, and CGMP, to define what is considered low risk with respect to product quality. With these attributes and acceptance criteria in place, we would be able to compile a list of low risk drugs.
Then the second tier is we would show this list to our medical colleagues in CDER and ask a determination of a safety factor, whether any of the drugs on the list should not be considered low risk from the safety perspective.
Then the third tier would be evaluation of the GMP status of individual firms, and to see whether a firm would be eligible for this program.
If a drug is under this program, then the agency will have less oversight. There are three elements.
The first one is we will minimize the types of post-approval CMC changes requiring submission of a prior approval supplement or changes-being-effected supplement.
We will reduce the amount of CMC information needed to be reported in annual reports for an approved application.
The third one is that if the drug is on the list, a generic manufacturer would like to make a copy, and this firm has a good historical GMP status, then we will reduce the amount of CMC information needed to be filed in an original ANDA. We call it a truncated ANDA, and this ANDA will mirror the amount of data required in an annual report for an approved application.
So, we have had many internal discussions. We presented this to ONDC scientific rounds, and we had brown bag meetings numerous times internally to seek comments and input. As I said, we talked about this last November in this committee. In June of this year, we presented this program at an AAPS workshop. We had a full one-day discussion with the participants, and we sought their scientific input on how to put together the attributes and the acceptance criteria so that we can start to compile the list of low risk drugs.
So, today we're going to give you four reports on what happened at this workshop. We will cover drug substance, drug product, microbiology, and GMP. The speaker for GMP, Ms. Pat Alcock, could not attend, so Dr. Eric Duffy will be her substitute.
DR. BYRN: Eric, before you go on, I would like to introduce two invited guests for this session, Dr. Leon Lachman and Dr. Gary Hollenbeck. As for our guest speakers, you just heard from Dr. Chiu; now Dr. Duffy will be speaking, and then Dr. Sayeed and Dr. Hussong. So, Eric, please proceed. Thank you very much.
DR. DUFFY: I'd just like to give a very brief overview of the discussions that took place at the AAPS workshop on drug substance issues. We had a brief presentation in the morning, to try to frame some of the issues, and then multiple breakout sessions, which were very active and really quite productive.
Overwhelmingly, the participants felt that the major criterion that would define "low risk" with respect to drug substance manufacturing was the manufacturer themselves. What are the capabilities of that particular manufacturer? Are they capable? Do they know their process? Can they reproducibly manufacture the product? These seem to be the recurring themes in most of the responses from the industry participants.
Secondly, and close behind the quality parameters of the manufacturer themselves, was having adequate specifications and the capability for adequate quality assessment. This seemed to be a recurring theme as well.
Lower down on the scale of critical issues seemed to be issues of stability, the inherent stability of the particular drug substance. What the discussions pointed out was that people felt that if you really understood the inherent stability of the product itself, that good understanding would seem to be adequate. The discussion centered around whether a drug substance which is flat-line, no degradation, would be the paradigm, or whether it would be acceptable if you had degradation that was well understood and predictable. People tended to think that the latter might be an acceptable paradigm with respect to stability.
Some of the issues that we had brought forth in the presentations at the beginning of the workshop had to do with whether one could define complexity of structure as a parameter that one might use as a measure of low risk versus otherwise. And I think people's consensus was that the degree of complexity may not necessarily be of any relevance. Furthermore, how one would define complexity seemed to be extremely difficult, and I think we have struggled with that particular issue as well in other contexts. But the degree of complexity is not relevant because primarily there are analytical capabilities, regardless of the degree of complexity, to understand the quality parameters of the particular drug substance.
Another issue that we had brought forth was whether one could use manufacturing process complexity as a parameter to define a drug substance which might be of low risk. The consensus I believe was that it really wasn't necessarily a defining criterion, but simply that the process should be well understood, that the manufacturer should understand their process. And this hearkens back to the initial point that I made, that it really depends upon the manufacturer and their degree of understanding of the process. It was considered essential that the manufacturers themselves understand exactly the complexity of the process and have it well controlled. Another reason for really not regarding this as a defining criterion would be the difficulty in defining what constitutes a complex versus simple process.
One other criterion that we had considered was the inherent reactivity of a drug substance. Is it robust, or is it susceptible to reactivity with atmospheric and environmental issues? Or would it be sensitive to various formulation excipients, et cetera? This was considered to be something that could be quite reasonably assessed in the context of the drug product itself, in terms of its stability.
Some of the other issues had to do with quality measures. Primarily the discussions focused on specification. It should be well justified. The set of specifications, the tests and procedures should be well defined and justified. And typically for drugs that have been around for a while, in most cases the specifications should be upgraded to contemporary practice and guidance.
There were, however, some concerns expressed by many of the industry participants about the notion of upgrading specifications, and maybe test methodologies, where one might observe, for example in an enhanced impurities test or assay, new impurities arising. The concern was that if one did observe these new impurities, what would you have to do? Would a new safety qualification have to be conducted? Would toxicology considerations come into play? There were some concerns based upon that, and a number of people said that a clearer definition of in-use qualification from a safety perspective would need to be put forth by the agency. So, this is something that I'm sure we will have to consider.
With respect to the set of specifications as the measure of quality, many participants considered that specifications alone may not be sufficient for assessing the impact of a change on quality, sort of in the realm of BACPAC, where one needs to assess the impact of a manufacturing change that is made, and that a set of protocols may be appropriate to establish with respect to assessment of change.
With respect to process characteristics, I've mentioned that it was considered essential that the process be well understood and controlled, and that also a set of in-process controls need to be in place, and that those controls need to be well justified. In terms of process characteristics, simple versus complex. As I had mentioned, it was considered not particularly relevant, and the definition of how one would do this is, furthermore, very difficult. Would one define it in terms of yield, number of process steps? The type of manufacturing process, very difficult to define. It was overwhelmingly considered that the process should simply be robust. Now how that's defined is another issue.
There were some concerns expressed, and I've listed a couple here that some of the manufacturers had a concern that if a drug was put on a list, would it then be mandatory that they engage in this process, upgrading the specifications and going through whatever registration process there might be. That would certainly have to be considered by the agency. Furthermore, if a drug was put on the list, would the agency promulgate kind of a monograph where there would be a universal specification? There was some concern about that.
That's really about all on drug substance. Steve, we're going to take questions afterward, or shall we do it now?
DR. BYRN: Maybe because of the number of speakers, we should do it now, right after each speaker. So, are there any questions for Eric? Gary?
DR. HOLLENBECK: Sort of three questions, Eric. First of all, this process, the streamlining process, relates to drug products. Is that not correct?
DR. DUFFY: Well, one of the issues that did come out in the discussions that wasn't necessarily specific to drug substance breakout sessions was the notion of whether or not one could have a drug substance considered to be low risk, but the drug product that it's used in is not considered so, or vice versa. That is certainly something that needs to be discussed. I'm sure Vilayat is going to mention something about that as well. But yes, we need to decide whether you can split it.
DR. HOLLENBECK: So, you are not talking about changes in the manufacturing of the active in this context?
DR. DUFFY: Oh, yes, we would be.
DR. HOLLENBECK: You are talking about that.
DR. DUFFY: Yes, and certainly the BACPAC initiative would go a long way toward addressing the issue of change in manufacturing process. I think we have to think about whether or not the BACPAC initiative would need to be enhanced in any fashion for those drugs that are on this low risk list or not. It's something we haven't fully explored.
DR. CHIU: Originally we were talking about drug product, the drug dosage form. However, because the drug substance is part of the drug product, of course if the drug product is low risk, then the drug substance must also be low risk. You cannot have a high risk drug substance and a low risk drug product. We think the two are linked.
However, we did receive comments that we should consider the case where the drug substance is stable but the drug product, the dosage form, is not stable. In that case we should not just forget it; we could have a program where the drug substance part can be low risk. So, that's something we have to discuss internally.
This program is not about post-approval changes because once it is on this program, there's no preapproval CB supplement anymore. So, therefore, the BACPAC does not apply at all. There's no need to report those changes.
DR. DUFFY: You said you had a few questions, Gary.
DR. HOLLENBECK: Yes. Just following that up, there was a presumption, at least for me, that we would always be using quality active pharmaceutical ingredients, and that the danger is that establishing new specifications for them in this context really wouldn't help streamline the process.
DR. CHIU: For the initial program, of course we will only consider stable bulk drug substances. We will not include proteins or other labile substances. However, the industry's view is that it really doesn't matter if it's unstable, as long as you know the degradants, you know the degradation process, you know how to control it, and you have a good specification to detect degradants. Therefore, they should not be out of consideration.
DR. HOLLENBECK: My other main question. When I saw this category come up, I kind of expected some consideration analogous to SUPAC, the permeability, solubility, therapeutic kind of screen for active ingredients as part of the classification system. Is that involved at all?
DR. CHIU: Of course, the BCS classification could be used as a consideration, but we think you should not be limited to class 1, because other substances may not be as soluble or as permeable, but from a quality aspect, they are probably low risk.
DR. HOLLENBECK: It kind of gets to what Yuan-Yuan had mentioned in her presentation, that the considerations presently are tier 1, which are quality attributes, and that in vivo performance attributes are a different consideration.
DR. BARR: Basically, does this group then relate just to the stability and perhaps sterility of the unit, as opposed to the release or the performance? Because I'm kind of confused. Like Gary, I think it's very difficult for me to separate what's already been done in SUPAC and the bioequivalence classification and those kinds of things to identify problem drugs and non-problem drugs. Apart from the stability, I don't see much difference between the two. Could you clarify that?
DR. DUFFY: In terms of product performance, that's the object eventually, how does the product perform in use. Now, certainly for drug products that would be subject to performance problems due to quality attributes, that would certainly be a major consideration for us. Vilayat is going to mention a bit about that in his presentation. So, ultimately that's the prime consideration, how the product actually performs.
DR. BARR: Perhaps a low risk drug would be a drug which had excellent stability based upon some set of criteria, and would meet, say, pharmaceutical classification class 1 or something like that. Is that a fair statement?
DR. DUFFY: We're not necessarily considering the BCS as tied directly to the quality attributes. We're really focusing more on manufacturing capability, whether the product can be manufactured in a consistent and predictable fashion. Is the drug product itself robust, is the drug substance itself robust, such that the degree of FDA scrutiny over manufacturing issues might be passed over to the manufacturer, provided the manufacturer has the capability to provide proper controls? It's really that approach.
DR. CHIU: I would like to add. This program is just to reduce the oversight of FDA. It does not reduce the responsibility of companies to make assessments whenever they want to make a change, whether the change will impact the product performance, product quality. They continually have to do those things, and they just do not need to provide the documentation to the FDA, paper documentation or electronic documentation.
However, we also plan to have a joint inspection. Periodically we will go to the site and inspect and make sure companies continue to do the things that they are supposed to do.
DR. DUFFY: I'm going to say a bit more about it when I talk about GMPs, but that's an integral part of this whole program, that the manufacturing capability and adherence to GMPs and having quality systems in place on the part of the manufacturer. It's a quality issue primarily.
Any further questions?
DR. RODRIGUEZ-HORNEDO: Yes. In the case of solids that are drug substances, were any specific scientific attributes considered beyond what you presented, such as solid state structure, functional groups, melting points? I wonder if there is a paradigm, similar to what has been used in the bioequivalence, biopharmaceutical classification system, for the vulnerability of a solid in meeting the expectations we have with respect to quality, beyond what you have mentioned here.
DR. DUFFY: Yes. Certainly physical attributes are very important, and some physical attributes are well understood and well controlled, while others might be less easily understood and controlled. Polymorphism, for example, is very important, but might quite easily be controlled and understood. Less well understood might be particle size distribution, where that's important for the drug product performance.
What constitutes a defined particle size distribution, and how one assesses a change in that particle size distribution, is a difficult thing, and in fact we're hoping that some of the initiatives that PQRI has underway on that score can really help the industry and the FDA come to an understanding of what constitutes a good understanding of particle size distribution.
But you bring up a very good point, that the physical attributes certainly can't be neglected in assessing whether or not a drug substance might be vulnerable to the vagaries of manufacturing problems or atmospheric problems.
DR. CHIU: I would like to add. Polymorphism and particle size, all those things were discussed in the workshop. However, the feeling of the participants was that although those are important attributes, as long as there are analytical tools to define them and to detect change, they should not be used as a barrier for defining low risk drugs.
DR. RODRIGUEZ-HORNEDO: I thought the objective was to also reduce the regulatory burden. We also have very good techniques to identify the bioequivalence, and yet the impact of the biopharmaceutical classification system is there.
DR. CHIU: Let me add. If a change affects bioequivalence, then that is a case requiring in vivo studies, and we're not removing that oversight, because under FDAMA, whenever there's a need for in vivo studies, a prior approval supplement is needed. We must comply with our law. So, your concern that this will reduce FDA oversight, that if a change affects in vivo performance we will not know, that won't happen, because a prior approval supplement would still be needed whenever in vivo bioequivalence studies are required.
DR. DUFFY: Yes, Dr. Anderson.
DR. ANDERSON: If I understand this correctly, the most crucial element of this whole thing is the manufacturer.
DR. DUFFY: That was the consensus of the participants at the conference.
DR. ANDERSON: I'm not questioning that. My question is, will you have some criteria or some standard, some kind of guidelines for deciding in this area?
DR. DUFFY: I'll be getting to that in the GMP discussion, but the short answer is yes.
DR. ANDERSON: That's good.
DR. DUFFY: You like short answers.
DR. ANDERSON: Well, my students always give me short answers.
DR. ANDERSON: Under your quality controls, with the specifications upgraded to contemporary guidance, what happens if new impurities are discovered in the drugs?
DR. DUFFY: Well, this certainly was an area of concern that the industry had expressed. It's always a safety issue. If one finds new impurities, you need to assess the impact that it may have upon the safety profile of the drug. How one does that is something we do need to work out, and the discussion of in-use qualification is one thing, but there is the standard ICH approach to qualification from a safety perspective. These issues certainly need to be addressed. There's no question about it. There is tremendous concern on the part of the industry.
DR. CHIU: Let me add. If we have a drug on the low risk list, even though we reduce the oversight, if the company makes a change and new impurities occur because of that change, it will affect the specification, because when you have a new impurity, you will have a change of specification. You need a new test or a change in the acceptance criteria. And a change in specification under FDAMA requires a prior approval supplement.
So, therefore, we will still have oversight when a new impurity is discovered. The firm needs to submit a supplement, and we would see the qualification data, the tox data as necessary. So, this program will not affect what happens when a new impurity is discovered.
DR. ANDERSON: One final thing. Under structure, it is generally known that the analytical methodology is less reliable for complex structures than it is for simple ones. Under process, where you have simple versus complex, and you said that was considered not relevant, it is usually known that the more complex the process is, that is, the more steps in a reaction, the more likely you are to encounter other problems, including additional impurities and things like that.
DR. DUFFY: Well, there is greater opportunity for things to foul up, yes.
DR. ANDERSON: I think this is under not important or something like that, but that may be something you want to look at.
DR. DUFFY: We are going to be considering that, indeed. What I maybe should stress is that my presentation and the following presentations are really an attempt to summarize what the consensus of the workshop participants was, and not necessarily the specific recommendations that FDA will have. These are considerations that we're going to take back and work on in our further deliberations.
DR. HOLLENBECK: Not to prolong this, but would it be possible for a drug that's classified as a narrow therapeutic index drug, given the comments that you've made, to be considered low risk? You've gotten into tier 2 of our considerations, which is discussions with our medical folks.
DR. CHIU: I'm sure our medical colleagues will not agree.
DR. BYRN: If I can just give you an idea of what we're going to do now, based on our agenda and so on. We're going to go until 12:45, so if we can, let's adjust the presentations and such. We've had a lot of discussion right now, and we'll try to compress the committee discussion. Then we'll break for lunch at 12:45 and we'll come back with our open hearing at 1:45. So, everybody gets about the allotted time.
Dr. Sayeed is next. He's going to talk about drug product.
DR. SAYEED: As pointed out by Eric, the workshop was like a morning presentation followed by a breakout session. So, what I'm going to do is go briefly into what was presented in the morning session, and then go into the input we got in the breakout sessions.
In the morning session, these two distinct approaches were presented to the audience. As you see, the first approach was based on developing a set of attributes or criteria for defining low risk, and using this set of attributes, once developed, to identify low risk drugs. The second approach basically deals with the knowledge and understanding we have for a given drug product: identify those drug products based on the understanding we have, and then go ahead and perform a quality risk assessment to define low risk.
Given the nature of approach one, which is basically a global approach, the determination was made to get input from the audience on only this approach. Certain questions were raised in the presentation, and based on these questions, we expected a little bit of input in the following breakout sessions. So, I'm going to go over the questions and the attributes which were presented to the audience in the morning session based on approach one.
Here I have a set of attributes which were actually presented in the morning session for the discussion in the breakout sessions. The attributes were dosage form, strength, manufacturing, specification, and stability.
On the next few slides, what I'm going to do is I'm going to go into each of these attributes and then go into the input we got from the audience for each of these attributes.
Dosage form. The question raised was, should all the dosage forms be included in this risk assessment, in this initiative? The general consensus was, yes, maybe we can consider all of them, but what that "maybe" is wasn't further defined. So, due to the time constraints and all that, the general sense was, yes, depending on the understanding, maybe all the dosage forms can be considered for this initiative.
The question for strength was, should strength be used as a factor in determining risk? Should there be a line drawn below which a product is identified as high risk, and above which it can be identified as low risk? The general consensus of the audience was that it should not be considered. Strength should not be a factor for defining risk in terms of quality.
Moving on to the manufacturing, this is where we spent most of the time. Almost all the issues relating to manufacturing were covered, including the physical attributes of the drug substance, the excipients, the interaction of the excipients with the drug substance, and the various manufacturing processes that can be used in manufacturing a given drug product.
Having discussed all of that, the input was that regardless of how complex or how difficult the process is in making a given drug product, it should not be used; it doesn't inherently contribute to defining risk. In other words, what the audience was trying to tell us was, if we understand the process, and if the process is controlled and validated, then the manufacturing should not be an issue in defining low risk.
But there is one thing which clearly came out in that session. If there's any functional packaging attached to the product that includes like a delivery system or something like that, then that product should not be considered as low risk.
On specification, the question dealt with was, is it adequate to just have the USP specs? Or, for this initiative, should the specifications be updated to the current standards? The general consensus from the audience was, yes, there is a need to update to the contemporary standard in order to adequately define or assess the risk for this initiative.
In terms of the stability of the product, again, the questions discussed in the breakout sessions were: do you need to have a profile? Do you need to have a complete understanding of the mechanism of degradation? Does the degradation have to be predictable, or should there be some sort of limit placed, so that depending on the level of the degradation, there is a way to define the product as high or low risk?
So, the general consensus was that the level should not be a determinant; regardless of what you see in the degradation, as long as you understand the degradation and the degradation is predictable, the level of the degradant should not be a criterion in determining the risk. But the consensus was, yes, there should be an understanding of the mechanism of degradation, and the behavior has to be predictable in order to adequately define risk for this initiative.
The outcome of this discussion, in summary, was that it's hard to define or identify quality attributes such that those attributes can be used for defining whether a product is low or high risk. They said approach one is a good approach, but it was difficult for the audience to actually pinpoint the attributes that could be used for defining low risk. They were telling us: give us a product, tell us what the product is and how it's being made, and then we can tell you whether it's a low or high risk product. That was the basic outcome from the breakout sessions.
DR. BYRN: Questions for Dr. Sayeed?
DR. SHARGEL: Yes. I have perhaps a need for clarification. When you're saying strength, are we really talking about dose in terms of a very low dose drug, maybe in micrograms with a large excipient concentration versus a drug that's a relatively high dose versus a very small excipient?
DR. SAYEED: Well, that was a question which was raised: when we say low dose, is it micrograms or milligrams, or something going up to 500 milligrams versus a microgram? The general consensus and the input we got from the audience was that it really doesn't matter whether it's 1 microgram or 500 milligrams, as long as they understand the process and the process is under control and validated. The strength should not be used as a determinant for defining risk.
DR. SHARGEL: May I have a follow-up? Concerning then the dose response -- and that may go back to the drug substance -- are you considering a drug in terms of being nonlinear or having a very steep dose response, versus one that's relatively flat, where small doses don't make much change?
DR. CHIU: No. The project is really only related to product quality. We are not talking about in vivo response. And if a nonlinear response becomes a safety factor, we will evaluate it in tier 2 of the process.
DR. SAYEED: That would be going back into the clinical effects, and we really don't want to get there. That's part of tier 2, and we're dealing with tier 1 only here.
DR. SHARGEL: However, if you were dealing with a nonlinear product, then small changes might affect its delivery.
DR. SAYEED: Well, that's something which will be considered, but what I'm trying to present here is what we got in the breakout session. It really doesn't mean that we're going to follow up on that but that's what we got there.
DR. BYRN: Any other questions? Leon and then Bill.
DR. LACHMAN: I think we're talking about trying to control these active ingredients and dosage forms by the measurement of the quality of the active ingredient and the product from a reproducible point of view. I think we have to consider the inherent characteristics of the active ingredients, the complexity of the synthesis and complexity of the molecule, as was indicated before, as well as the complexity of the process.
I'm sure you can control it. It doesn't mean that everybody can control it to the same degree. And I think that's where you run into a problem. I think in order to have a tier 1 set of characteristics for active ingredient products, you're going to have to somehow cut the totality of the product mix that you're talking about here. If you're looking at the outcome of the workshop, I don't think you'll ever get to that tier 1 set of compounds and products that you can use. That's just an observation.
DR. CHIU: I think you made some good comments. This is a difficult issue because most of the companies think there are no high risk drugs. There are only high risk companies. And I'm not one of them.
DR. CHIU: At the agency we have to establish objective criteria. So, we will proceed from a scientific point of view.
DR. LACHMAN: What I'm trying to say here is we're going to have to consider the basic sciences here, physical and chemical sciences, not just the practicality of coming up with a dosage form. If you do enough work, I'm sure you'll come up with it, but the amount of controls you're going to have to implement to assure the repeatability of that is going to be enormous.
DR. SAYEED: That was the intent of the workshop, to get some input like that. But unfortunately what we heard was, for a given company, if the process is under control and if it's validated, we are fine. As Yuan-yuan mentioned, there is no high risk product; there are only high risk companies.
DR. BARR: It seems to me that the ideal goal, what you're really seeking, is to try to find those few substances which may be so stable, so safe, and so easily manufactured that you can reduce the amount of work that you have to do. That would be tier 1, as I understand it. You would have to use simply physicochemical measurements and characteristics to put them into tier 1.
Most drugs I think are going to fall into a category in which to some degree they're going to be dependent on some of their pharmacologic properties and their critical manufacturing variables. There, just to comment on one point just as an illustration, the dose and the strength is very important. I know at least two companies that have had great problems manufacturing levothyroxine because of the very low dose and the difficulties of manufacturing it. That to me is an inherent difficulty, and the minute I would see a microgram dose, I would say, somebody's going to mess up.
DR. SAYEED: I totally agree with you.
DR. BARR: And next, we have to get into somehow the pharmacologic linkage to that.
And then it seems to me the next linkage is the dosage form linkage. Obviously, stability in an oral tablet is going to be different than the stability for intravenous products that maybe have to be sterilized. So, the dosage form is going to be critical.
But it seems to me that ultimately what you'll need to do is to come up with the critical manufacturing variables for that particular dosage form, maybe for that particular company, but maybe in general, and then define the stability or the range of stability about that critical manufacturing variable, whichever they are. In other words, how sharp that peak is on that variable, or how flat that is and how much area you can have on either side of those variables. I think that probably is workable.
DR. DUFFY: You mentioned probably the poster child of problem drugs in levothyroxine. Not only is the drug substance itself problematic, but also how you then formulate it. It's probably one of the more difficult examples you could come up with. So, that's the kind of consideration we certainly would be making. Is the drug substance itself inherently stable, and is it subject to problems depending upon how it's handled and how it's manufactured? That example was very well put.
DR. BYRN: Now we're really running out of time. I'm not sure how we should do this. Maybe try to do it like the next two talks in two minutes apiece or something.
DR. BYRN: If we could do that, and the committee also may need to limit their comments a little bit or we'll never get to lunch. We'll just start our hearing.
DR. HUSSONG: Good afternoon to all my hypoglycemic friends here.
DR. HUSSONG: The AAPS conference on streamlining the CMC regulatory process had two sessions on microbiology issues, one concerning post-approval changes to applications, and the other trying to define specific characteristics to qualify drug substances and drug products as low risk. The discussions focused on sterile products, but we also got some comments concerning non-sterile products.
Now, participants felt that sterile drugs could be separated into risk-based groups based on the sterilization processes used in their manufacture. For example, terminal moist heat sterilization processes were considered to have greater reliability than aseptic manufacturing processes. Although this generality was noted to have exceptions, it is universally agreed that aseptic processing offers greater challenges.
Certain changes to the processing of what might be considered low-risk products will still require supplements, however. These examples might include major changes in sterilization technology. For example, if you were switching from filtration to gamma irradiation.
Additionally, if you were deleting steps in the sterilization process. For example, if the sterilization process used aseptic filling methods, followed by a short heat process, and if you dropped one of those, that would certainly require a supplement.
Also, changing critical parameters in the specifications concerning the sterilization process. Those would be the control parameters for the sterilization.
However, many changes, about 20 of them, were noted that do not negatively affect sterility assurance, and for these it was recommended that the route of annual reports could be used. Now, some of these included minor changes to container and closure systems. Also offered as an example were equipment items used prior to the sterilization steps. Additionally, terminal sterilization autoclave loading patterns were felt to be kind of low risk concerns. And several people argued that the lyophilization cycle really didn't have that much to do with sterilization. We didn't even used to sterilize lyophilizers until recently.
Concerning non-sterile products, there are very few microbiological concerns. Participants said none, but I disagree. For oral dosage forms, transdermals, suppositories, and products that are inherently antimicrobial, they felt that these should be streamlined, with reduced review and scrutiny. And certainly non-aqueous products, such as the metered dose inhalers, nasal sprays, and dry powder inhalers, were offered as examples of additional low risk category drugs.
There were a lot of requests for guidance concerning manufacturing process-associated changes. These requests asked in particular for information concerning the categories of filing changes and more examples and definitions so that people could feel confident that they were doing what the agency wanted and communicating properly.
The other advantage to having these guidances, it was felt, is that the agency is in need of internal help here, and this might be a side benefit, because of many complaints from the industry that recommendations were not consistent, between offices, between centers, and sometimes between the centers and the field.
So, in summary, we have a lot of evaluation to do internally. We need to determine what we can do to best address these concerns, and we do feel that we can accomplish a lot using process based evaluation rather than drug product based.
DR. BYRN: Questions for Dave.
DR. BYRN: Our next speaker is Eric Duffy again with GMP.
DR. DUFFY: Steve wants me to talk fast. Now, I'm not from New York, but I'm from Boston, so I can probably keep up.
I'm presenting this on behalf of Pat Alcock who is out of the office today.
The GMP breakout sessions were really central, I think, to most people's consideration of this whole initiative, where the capability of the manufacturer was a recurring theme all the way through. There was some discussion initially of what the current system is, and I'll kind of breeze by that, simply to say that there was, to me surprisingly, a consensus that the current system really works quite well. The inspectional paradigms that we have in place for ensuring GMP compliance seem to be working quite well.
But for this particular program, there was some discussion about whether or not there should be established what was termed a GMP-plus system, something a little bit further than the current system. A number of different suggestions came up with respect to how one evaluates a firm's capability for adherence to GMPs and for having quality systems in place to ensure consistent quality manufacturing.
What are the measures of these? How would the agency assess the capability of this firm to demonstrate exemplary adherence to GMPs?
Some of these suggestions were recall history, for example; an assessment of the body of PAI inspections that had been conducted; review of 483 comments; the recurrence of particular issues. Basically, what is the regulatory status, the inspectional status, of a firm? So, I think what we need to do is try to develop a paradigm to assess the history, and a means of demonstrating the capability of a particular manufacturer.
There were other measures that could certainly be considered, such as whether a firm had been under any consent decrees. Would some sort of probationary period then need to be established, to provide the firm an opportunity to demonstrate good manufacturing practices and adherence to GMPs? That might need to be defined.
There was another consideration of the implication this might have with respect to the mutual recognition agreements that we're currently engaged in negotiating with the Europeans, and I think in the future with Japan, that this may have some impact on that. And we certainly need to take all that into consideration.
Further concerns were that if we were to create this GMP-plus system, that it might create a different set of GMP standards for the drugs on the list versus those that are not. This approach may have a differential impact upon large firms versus small firms, new firms versus experienced firms. So, a fairness issue essentially was expressed.
How would one handle situations where there are multiple companies involved in a supply chain? What clearly comes to mind is drug substance manufacturing, where one might have three or four firms involved in manufacturing various stages of a synthesis. Manufacturing intermediates, how would we handle that? Certainly an important thing to consider.
Also, how one would handle changes in ownership or management. Would that have an impact upon our consideration of the reliability and capability of the particular manufacturer?
I think I hit two minutes. There we are.
DR. BYRN: Questions for Eric?
DR. LACHMAN: Eric, you're now discussing a lot of GMP and administrative issues that are ongoing right now within the agency's activities on inspections. So, there's nothing really novel here. I still think we're getting away from the inherent characteristics of the drug and dosage form and controls necessary to assure reproducibility.
DR. DUFFY: It's a totality of approach in this case, Leon. We're not really divorcing the attributes of the drug itself from manufacturing capability. It's going to have to be interwoven in some fashion.
DR. LACHMAN: Right now there's an intensive, proactive regulatory environment out there from a compliance point of view, GMPs, and so on. They consider all these elements on inspections and what to do next to the firm and so on. So, that's really nothing new that you addressed. And these additional GMPs that you can apply are being applied if you're out there in the field. So, I think we still have to get back to the basic science of the drug and dosage forms and the reproducibility of the controls for the products and active ingredient.
DR. DUFFY: We don't disagree with that at all.
DR. LACHMAN: I think we're muddying the waters a little bit here with bringing in all these GMP issues because they exist now.
DR. DUFFY: Well, we were simply trying to express what many of the participants at the workshop expressed, and that is that we need to have some way of measuring the capability and qualifications of a particular firm to enter into this program for reduced regulatory scrutiny. If they have a demonstrated history of a capability to adhere to GMPs, to manufacture in a consistent manner, and produce a quality product in a predictable fashion, well, then that's a plus for them for involvement in the program.
DR. LACHMAN: I think the FDA has that now. They have quality profiles of firms based on their inspectional history.
DR. DUFFY: Right, and some firms are turned down for approvals.
DR. LACHMAN: That's right. That's what I'm saying. So, that's nothing I think that we don't have already. That's all I'm saying.
DR. DUFFY: Were there any other questions? Comments? Judy?
DR. BOEHLERT: Just a comment. While I agree with everything that Leon said, I just wanted to add a comment on this concept of up-to-date and meaningful specifications. I don't think industry realizes what kind of task that may be for them, particularly on old products that are compendial. They're following compendial methods. There are no physical tests in the compendia to begin with. So, that's something that needs to be addressed. Those will probably result in submissions to update old methods, old tests, new impurities that they've now found that have always been there but they didn't see them before.
DR. DUFFY: Those concerns were amply expressed at the workshop.
DR. BOEHLERT: Yes, I'm sure. And it's a lot more work than I think industry is realizing. On a new product that has good controls, perhaps not, but on old products.
I don't know how everybody gets up to the same standard in that case because the methods aren't published in USP. The physical tests, the process impurities. They don't list those.
DR. LACHMAN: I think we need to look at some of the history here for existing products that have been on the market a long time and have been safe. They haven't caused any health hazards. As the methodology and analytical techniques become more sophisticated, we're going to find more impurities in products that have been on the market. That's something we have to consider separately, not as part of this mechanism, I don't think, because those issues exist now for existing products.
DR. DUFFY: Those concerns were expressed repeatedly.
DR. BOEHLERT: I think if impurities have always been there, that's a different situation than creating a new impurity because they could, indeed, be qualified for use.
DR. DUFFY: It's just that you now see it.
DR. LACHMAN: That's right.
DR. DUFFY: Shall we move on? Any further questions? Gary, you had something?
DR. HOLLENBECK: Just a similar comment. I think that the essence of this presentation shows that maybe you don't have tier 1, tier 2, and tier 3. These things are so interwoven that they almost have to be considered simultaneously. I'm a strong advocate for rewarding a company that has a history of good GMP compliance, and I think that's a critical part of the whole process.
DR. BYRN: Dr. Chiu? We're going to go to the next steps, and then if people can be looking at these two questions. I think we've discussed many of these issues already.
DR. CHIU: As you can see, we were a little bit disappointed with the outcome of this workshop, because we went in seeking scientific input. What we received were a lot more questions, and the consensus was not of a kind we can readily act on.
However, I do believe -- and I think our working group also believes -- there is a way to establish criteria, attributes to characterize safe, so-called low risk drugs. Actually the terminology was discussed in the workshop. Many people felt it has a bad connotation because if a drug is on the low risk list, they feel other drugs become high risk. They would like us to think about changing the terminology. So, internally we have discussed maybe we could call it predictable drugs, established drugs, robust drugs. Some people suggest low impact drugs. So, if you care to discuss, maybe you can come up with a better term than low risk.
But everybody understands what low risk means: that from the quality point of view, the product is really not prone to defects, and it has the more favorable physicochemical characteristics. Therefore, not much will happen to it regardless of how you handle it.
Based on the discussion you had last time and today, and also the workshop and the internal discussion, we thought we needed to modify our program a little bit. A lot of people told us, internally and externally: when I see a drug, when I work on a drug and review the drug, I know it is low risk. When I see one, I will know it. But if you ask me to define the characteristics in a broad sense, it's very hard.
So, we thought then maybe we should take a parallel approach. In addition to considering stability -- 5 years stable at room temperature, no polymorphism, et cetera -- maybe in the meantime we can also solicit from people which drugs, through their experience, they think are low risk. Then we can evaluate the characteristics of those drugs and come up with objective, scientific criteria. So, if we do those things in parallel, maybe we can get there faster.
So, we're going to form subgroups under our current working group to separately address drug substance, drug product, and microbiology issues.
We also formed a group to address GMP. But as Leon said, GMP is GMP. Everyone has to be in compliance, otherwise you already get in trouble.
So, the other input we had from the workshop is, as I said, the concern about so-called high risk manufacturers -- manufacturers who do not know what they're doing. Therefore, regardless of whether the drug is low risk or high risk, a drug made by such a company would become high risk.
So, therefore, the feeling is it is important to tie in not only the historical GMP status, but also the GMP status of the specific product on the list. If we do that, then a company, to be eligible for this program, must already have experience making that particular drug. If we move in that direction, that means the original ANDA must contain full information, since a company that has not yet made the product would not be eligible for the program. So, if we move in that direction, there will be no TANDA, no truncated ANDA.
Therefore, this comes to the two questions we pose to the committee to discuss. The first question is really whether we should take the parallel approach, we should seek input from people from industry, from our reviewers to find the drugs through their experience that are considered to be low risk. Then we use those drugs and analyze the characteristics and see whether from there we could establish a set of objective attributes and acceptance criteria.
The second question is whether we should tie the GMP status to a specific product. And if the answer is yes, we will not for the moment entertain TANDA, and the program temporarily will exclude truncated ANDA submissions.
DR. BYRN: Let's spend a couple moments on each of these. On the first question, any comments from the committee as it reads here, is the approach of establishing attributes and acceptance criteria for drug substance, drug product and microbiology based on the characteristics of potential candidates of low risk drugs appropriate? Is that approach appropriate? Any comments?
DR. HOLLENBECK: I think the list is inevitable. It is something that's necessary.
But your comments about the process I think are really good. It's like my view of art. I don't know what it is, but I know it when I see it. Here, I think you would be better served to do kind of a retrospective rather than prospective approach. If we sit down and try to identify everything that might be on the list, it's almost impossible to make the list small enough or have any drug ever qualify as being low risk.
However, if you do go through this exercise, I think what you usually find is there's one thing that kicks things off the low risk list, and if you do that for a series of compounds, you'll begin to compile this set of criteria.
DR. DUFFY: We're doing precisely what you're suggesting, Gary. We're delving back and doing a little data mining, as one might call it, to really see: we have a product that appears to be robust and to perform in a consistent fashion -- what is it that makes it do that? We are doing it.
DR. BYRN: I agree. I think you have to do it almost compound by compound early on anyway.
Other comments on number one?
DR. MARVIN MEYER: Is the question whether one should have a subgroup that looks at the chemical and a subgroup that looks at the dosage form, or will they be studied simultaneously by one group? For example, a hydrochlorothiazide immediate release tablet versus some type of controlled release dosage form? If you want to get this thing off dead center, you could take a product that everyone agrees is effective, safe, and stable no matter what the dose is -- that's our poster boy, if you will, for a low risk drug -- then build around that, come up with a list, float the balloon, and see how it flies.
Or is the question saying should the agency even be concerned about reducing the regulatory burden based on these attributes.
DR. CHIU: No. The question is the former, not the latter.
DR. MARVIN MEYER: The approach.
DR. CHIU: Yes, it's the approach because even though we formed subgroups, if we identify lists of already the candidates, we will have the subgroup to go back to our files to look at the characteristics of the drug substance of that product, and the characteristics of that drug product as a drug product subgroup. Then we will talk to each other and then put the things together. So, the reason we want to form separate subgroups is then we can become more focused.
DR. BYRN: Is there general consensus that the response to question number 1 is affirmative, it's a good idea? Okay. I don't think we need a vote on this one.
Question number 2. In effect, this would eliminate the TANDA mechanism right now. Basically what's being said now is that the CGMP status and also its history of that specific product would go into consideration. Are there thoughts on that?
DR. SHARGEL: As a member representing the generic industry, I was, of course, compelled to address this issue. I think the history of GMP certainly is suitable and for new products that generics make or new generic drug products, there are already in place pre-approval inspection and validation batches and other approaches. So, I would like to keep it broader, not specific to a history of GMP.
DR. LACHMAN: I would say that the GMPs apply across the board. They're not geared for any single product. Even on pre-approval inspections, you do a vertical review of the documentation and records to support that product, but you also go broader because your environmental system or your water system doesn't just apply to a product. You got to look at the totality of the GMPs and the training program. So, you can't just isolate GMPs in a vertical manner. It has to be horizontal.
The quality systems are broad. It's not only for one product. If you're making tablets, you've got to have a quality system for tablets. You make injectables, you got quality systems for injectables. They're not exactly the same as tablets. So, you got to look at the system and not an isolated element.
DR. CHIU: So, you do not think the manufacturing history or experience for a specific product is important related to GMP.
DR. LACHMAN: No, because I think it's all broader than just a specific related to a single product.
DR. BYRN: Where I think some of the problem may come in is on the drug substance side. I don't know, but that's where know-how and so on may play a bigger role in many cases. If you started talking about extended release products and so on, it may play a role in drug product too. But certainly, I would like to see that somebody has made some drug substance, and what their record is on making it, beforehand. So, I don't know whether there's a way to do it with drug substance and not with drug product.
DR. CHIU: Well, I think there is. Maybe we could split this question into two: 1a means whether GMP status to a specific drug substance is important; the second one is whether GMP status to a drug product is important. Then if the committee can vote on both subquestions.
DR. BYRN: Well, I'm just saying that the manufacturing history might be more important for a drug substance than a drug product.
DR. CHIU: Yes. I mean manufacturing history for a specific drug substance. That's the GMP part.
DR. BYRN: You know, that wouldn't preclude a generic firm from buying it from a well-known manufacturer. This is more like a new manufacturer.
DR. CHIU: Right, a new supplier.
DR. LACHMAN: The API firm supplying an innovator company or a generic company also undergoes inspection by the FDA, and their process is evaluated with regards to repeatability. In certain cases, both innovator companies and generic companies don't manufacture their own API or they manufacture part of their API and farm out part of it. So, your drug master file becomes an important part in the evaluation of this low risk to high risk. I think that needs to be taken into account, the controls like we have for dosage form. What are the controls for the active pharmaceutical ingredient? I think, Steve, that's an important piece.
DR. CHIU: Can the committee vote on these questions? Because it's important for us to establish the scope of this project.
DR. BYRN: I'm not sure what your question is.
DR. CHIU: The question is whether we should eliminate TANDA, if we could put into two parts the TANDA for drug substance and TANDA for dosage forms.
DR. BYRN: Yes. We need to try to reach a consensus because we are going to have to start at 1:45 again.
DR. LACHMAN: I think there can only be one TANDA. I don't think you can break it --
DR. BYRN: TANDA would just be a drug product. An ANDA would be a drug product. It would be the DMF --
DR. CHIU: I understand. DMF supports the TANDA, so DMF is part of TANDA.
DR. LACHMAN: So, the TANDA would be affected if the DMF wasn't any good. I mean, if the bulk drug supplier wasn't any good, you won't get approval of the application.
DR. CHIU: I understand. Maybe let me explain. The ANDA contains a drug substance part and a drug product part. The drug substance could be supported by a DMF. TANDA means truncated ANDA: we could have an ANDA truncated in both the drug substance information and the drug product information. So, if we say the drug substance part of the information is essential for a TANDA, then the truncated submission would not apply to the drug substance part.
So, therefore, if I can have a reading from the committee whether the drug substance information should be fully submitted in a TANDA. That's the first question.
DR. LACHMAN: I think it's an integral part of the TANDA. You can't get a TANDA without an active ingredient.
DR. CHIU: Sure, but it will be reduced information. It's not eliminated. Under TANDA, there will be reduced information to be submitted for a drug substance and for a drug product.
DR. LACHMAN: All right, so that has to be still determined.
DR. CHIU: To be determined, yes. We will eventually write a guidance, what would be adequate information for an annual report, and then we thought we could start with the summary, CTD summary of the quality section. That type of information, if it's sufficient for an annual report, it will be sufficient for a TANDA.
So, if you tell me the drug substance cannot be truncated, then we will say the annual report will also be required to have the full drug substance information and the TANDA will have full drug substance information, only reduce the information on the drug product part.
DR. BYRN: Is it possible just to make a list of drugs from the safest to the less safe and just draw a line somewhere and say these are so safe that it doesn't make any difference who makes them?
DR. CHIU: That's the objective.
DR. LACHMAN: Well, I'll tell you, I wouldn't go that far, Steve, because I wouldn't want to have metal in the active ingredient --
DR. BYRN: Yes, well, we're assuming that they pass compendial specs.
DR. CHIU: Compendial specs are not adequate for all products.
DR. BYRN: I think we have to stop now. I know we haven't gotten a full conclusion yet, but I think we should stop. I think the agency could come back to the committee with more detailed proposals, but continue along both of these lines, and from what the committee said, not kill TANDA. Do not kill a TANDA, but consider our comments and continue.
DR. CHIU: That's fine. We will come back if we have more specific questions. Thank you.
DR. BYRN: That's what I think we should do.
We're going to meet back here at 1:45.
(Whereupon, at 1:10 p.m., the committee was recessed, to reconvene at 1:45 p.m., this same day.)
DR. BYRN: Welcome to our afternoon session. This is the open public hearing part. We have had no requests from the audience that's attending to make a presentation, but we do have four five-minute presentations from the Inhalation Technology Focus Group.
The first speaker will be David Radspinner, Ph.D., who's going to give us an update on ITFG/IPAC-RS DCU Working Group progress. He'll explain all this.
DR. RADSPINNER: It's only fitting to have more acronyms, isn't it?
DR. BYRN: That's fine. We're used to that.
DR. RADSPINNER: As mentioned, my name is David Radspinner. I'm a member of the IPAC, which stands for the International Pharmaceutical Aerosol Consortium on Regulation and Science. This is an industry association. We formed a collaboration with the Inhalation Technology Focus Group which is a subgroup of the American Association of Pharmaceutical Scientists.
Together what we have done is last year we formed a collaboration to look at CMC issues and also BA/BE issues related to the FDA draft guidance. These technical teams have actually presented some of their concerns at this meeting back in November. What we'd like to do today is give you an update as to some of our activities.
As you see here, we've been working quite diligently on proposals around issues of CMC with relation to the draft guidance. Also, the BA/BE technical team has been looking at dose-response studies.
With regards to CMC, there are four critical issues we look at, that is, dose content uniformity, particle size distribution, tests and methods, and leachables and extractables. What I'd like to do is briefly update you on dose content uniformity, and then I'll hand it over to Dr. Evans.
Back in 2000, we collected and analyzed the dose content uniformity database and submitted the findings to the FDA. This was back in July. The reference is listed here.
In November, there was a meeting and we reported at that meeting that 68 percent of the products that were analyzed did not comply with one aspect of the dose content uniformity criteria within the draft guidance.
We also met with the FDA back in November, and we met once again in May 2001 to discuss the findings and plans for future work.
What we've done is we've kind of moved on from the review of the database itself, and we've worked very hard on developing an improved dose content uniformity test, and that's what I'd like to focus on here.
The foundation of this test is originally based on some ideas coming from Dr. Walter Hauck, which I'm sure most of you know, and it's based on a parametric tolerance interval approach. The test design is also similar to some concepts that were developed and discussed within ICH with regard to content uniformity.
We've looked at the quality standards implied within the guidance, and it's an approach where we've taken the draft guidance and essentially reverse engineered a definition of a quality statement.
We've also looked at the capabilities within the industry of modern inhalation technology and considered it while developing this test.
The parametric tolerance interval approach, when we compared it to the current guidance -- the advantages are increased efficiency in using the sample information. So, we're not really collecting different sample data, but we're using the information much more efficiently we believe.
By doing a parametric tolerance interval test, we're also improving the consumer protection -- this is in a statistical sense -- while at the same time improving producer protection. So, we're trying to avoid those batches that fall in the middle.
What's important is we have an explicit quality definition, which is a proportion of doses within a batch that fall within a given target interval.
The acceptance criteria are based on the sample mean, the standard deviation, and what's called an acceptance value, which combines the two.
It's a consistent quality standard, but we offer a flexible testing schedule to the producer.
There's also a single test for both within-unit and between-unit variability, and this has been achieved through a parametric tolerance interval test. One of the aspects of that has also been an increased average sample size for testing within the industry.
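As a rough illustration of the parametric tolerance interval idea just described, the sketch below computes a USP-style acceptance value from the sample mean and standard deviation of delivered doses. All the constants here (the coverage factor, the indifference zone, and the limit) are hypothetical placeholders for illustration, not the values in the IPAC-RS proposal or the draft guidance.

```python
import statistics

def acceptance_value(doses, k=2.4, m_low=98.5, m_high=101.5):
    """Tolerance-interval-style acceptance value for a sample of
    delivered doses (percent of label claim). The constants are
    illustrative, not actual proposed or compendial values."""
    x_bar = statistics.mean(doses)
    s = statistics.stdev(doses)
    # Reference value M: the sample mean, clamped to an indifference
    # zone around 100 percent of label claim.
    m = min(max(x_bar, m_low), m_high)
    # The acceptance value combines mean shift and spread in one number.
    return abs(m - x_bar) + k * s

def passes_dcu(doses, limit=15.0, k=2.4):
    """A batch sample passes if its acceptance value is within the
    limit (limit value is a placeholder)."""
    return acceptance_value(doses, k=k) <= limit
```

This shows how a single statistic can bound both the mean and the variability of the doses, which is the sense in which one test covers within-unit and between-unit variability.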
Where do we go from here? There's a draft report currently under review within the IPAC-RS consortium. We anticipate submitting this in the fall of this year. We also anticipate having a meeting following that with the FDA to discuss this, and we do recommend that this become part of the draft guidance.
I guess I take questions either now, or if you'd like to move through all four presentations before taking questions.
DR. BYRN: I think we will go through all four and then take questions together.
DR. RADSPINNER: Thank you.
DR. EVANS: Good afternoon. My name is Carole Evans, and I'll be presenting on behalf of two of the teams today, the particle size distribution team and the test and methods team.
The particle size distribution team has addressed two concerns with the draft guidance. The first is the requirement that mass balance within particle size testing be established as a drug product specification. In this case, the mass balance actually attempts to measure emitted dose, which is appropriately controlled by separate specifications and test methods. We agree that this mass balance measurement could be appropriate as part of a system suitability control, but it should not be a product specification. Furthermore, if we are to use mass balance for system suitability, the limits should be determined during validation studies and not set arbitrarily in a guidance.
Additionally, one of the concerns is that the label claim may not necessarily be reflected by the mass of drug collected on all stages and accessories. For example, there are some products for which label claim is defined by the pre-metered dose rather than the emitted dose, and in these cases, there would not be a match there.
Finally, we've also reviewed data that we've collected from a number of products and found that, in general, the majority of products do not meet this requirement. To date we have collected a large database covering 35 products and found that only 11 percent of the products -- that's 4 of them -- would actually meet this criterion. We submitted this initial assessment of the database to the FDA in a paper last August.
As a next step, we'd like to meet with the agency to determine the actual purpose of this requirement, better understand the agency's objective, and work with them toward an alternate method of addressing their concerns. To this end, we've submitted a proposal to PQRI for further discussions on the subject.
The second area the particle size distribution team is working to address is the use of particle size distribution profiles in bioequivalence testing. The draft guidance proposes a chi-square differences approach to comparing the profiles of test and reference products. The concern is that the chi-square method was developed for one particular product and one particular type of equipment, and its applicability to other products and other test methodologies may be limited and hasn't been demonstrated. Furthermore, the equivalence criteria have been set somewhat arbitrarily.
The team is currently investigating alternate approaches, among them approaches based on bootstrapping of data. The objective is to find approaches that may be more discriminatory, would have wider applicability, and would provide a consistent way to compare profiles. They've submitted a proposal to PQRI to pursue work on alternate approaches and on which metrics for comparing profiles may actually have clinical relevance to help evaluate bioequivalence.
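To make the bootstrapping idea concrete, here is a minimal sketch of how a chi-square-type distance between two cascade-impactor stage profiles might be bootstrapped to get an upper confidence bound. This is not the team's proposed method; the distance formula, resampling scheme, and all parameters are assumptions made for illustration.

```python
import random

def chi_sq_distance(test, ref):
    """Chi-square-type difference between two stage deposition
    profiles, each normalized to its total recovered mass."""
    t = [x / sum(test) for x in test]
    r = [x / sum(ref) for x in ref]
    return sum((a - b) ** 2 / (a + b) for a, b in zip(t, r) if a + b > 0)

def bootstrap_upper_bound(test_units, ref_units, n_boot=2000,
                          alpha=0.05, seed=0):
    """Resample units (each a per-stage profile) with replacement and
    return the upper (1 - alpha) percentile of the distance between
    the bootstrapped mean profiles."""
    rng = random.Random(seed)
    dists = []
    for _ in range(n_boot):
        t_sample = [rng.choice(test_units) for _ in test_units]
        r_sample = [rng.choice(ref_units) for _ in ref_units]
        t_mean = [sum(col) / len(col) for col in zip(*t_sample)]
        r_mean = [sum(col) / len(col) for col in zip(*r_sample)]
        dists.append(chi_sq_distance(t_mean, r_mean))
    dists.sort()
    return dists[min(int((1 - alpha) * n_boot), n_boot - 1)]
```

An equivalence decision would then compare this upper bound to a pre-specified limit; choosing that limit so it has clinical relevance is exactly the open question the team has put to PQRI.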
I'll move on to the test and methods team. The test and methods team has been reviewing the test methods proposed in the guidance, and our objective has been to select methodologies that would be based on development data providing meaningful information about product quality. Our concerns are that some of the tests proposed in the guidance offer little added assurance as to product quality and in some cases may be redundant.
We've collected data on a number of the tests and have developed a database consensus and recommendations to the FDA. We submitted a paper to the FDA in May of this year which proposes alternate language for a number of tests for MDIs. Again, our objective here is to maximize the value of the controls and tests and minimize the redundant testing.
I will not read all eight. We submitted comments on the tests listed here. Our paper provides a critical assessment of the value of these tests and the development data that may be used to support new product control. We've concluded that a fixed list of tests may not be appropriate as guidance and that the guidance should stress the importance of defining the tests used for a product during the development process, and that we should eliminate those controls which we feel are redundant. We're at the moment working on developing proposals to put forward to PQRI.
DR. BYRN: The next speaker is James Blanchard who is going to address leachables and extractables.
DR. BLANCHARD: Thank you and good afternoon.
I'd like to update you on the work of leachables/extractables team.
We have reviewed both guidances very carefully, basically trying to look at them from a user's perspective. From an implementation perspective, we feel that we can more effectively implement the guidances if we have some thresholds to work with which we can agree upon and the agency can agree upon as well. So, one of our concerns is proposing or trying to propose adequate, appropriate thresholds for reporting, identifying, and qualifying leachables and extractables.
Also, we have found some terms that are very important but a bit unclear in terms of how to interpret them. So, we are also looking for clarity on concepts such as correlation, particularly how a leachable is correlated with an extractable, because that's actually very important in implementing the guidelines and in further testing.
Also, there is a term called "critical component." What exactly is a critical component? What has to be actually done to test a critical component? So, we'd also like some more clarity on that definition as well.
So, to start this process, we've begun gathering data from industry. One set of data we collected was leachable and extractable data on specific drugs, to see if correlations exist between the leachables and the extractables.
We've also collected other types of data. We've also formed a toxicology working group of expert toxicologists from industry to look at the qualification issues, and together we have put together a report which we've now submitted in March.
So, I'd like to go through some highlights of the areas in each of the guidances where we think clarification, or some help from the agency, would be useful.
First of all is the definition of a critical component. We're proposing that a critical component would be any part of the device that would be in direct contact with either the formulation, the patient's mouth or mucosa. That would be what we would be testing going forward in our characterization of the extractables and the leachables.
Next, getting to the idea of thresholds, we are proposing a reporting threshold of 1 microgram per gram in the controlled extraction studies of the raw materials. At this level, we are thinking that you won't get complete structures, but maybe you can get an idea of at least the class of compound you're dealing with. Then at 100 micrograms per gram, we would set the identification threshold, where we would have confirmed structures.
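For illustration, the two extraction-study thresholds can be encoded as a simple classifier. Only the 1 and 100 microgram-per-gram levels come from the proposal; the function name and the label strings are invented for this sketch.

```python
def classify_extractable_peak(conc_ug_per_g):
    """Classify a controlled-extraction-study peak against the proposed
    reporting (1 ug/g) and identification (100 ug/g) thresholds.
    Labels are this sketch's own, not guidance language."""
    if conc_ug_per_g < 1.0:
        return "below reporting threshold"
    if conc_ug_per_g < 100.0:
        # Above reporting but below identification: compound class only.
        return "report; assign compound class"
    # At or above the identification threshold: confirmed structure.
    return "report; confirm structure"
```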
Now, moving ahead to leachables, basically these are when you're really working with the dosage form and the excipients. The guideline right now calls for doing toxicological qualification on extractables, and we really want to make a strong case to only do the tox evaluation on leachables.
Secondly, getting back to the point of correlation between extractables and leachables, we would like to say a correlation exists between those two when you can qualitatively, either directly or indirectly, relate a leachable to an extractable.
Third, again on the concept of thresholds, we are proposing a reporting threshold of .2 micrograms total daily intake, TDI, and an identification threshold of 2 micrograms TDI for each leachable.
Then lastly, in the routine extraction studies, which we would be doing to maintain or to make sure that we have adequate control over the components coming in, we would like clarity in terms of what is the actual purpose of these studies. We are proposing that these should be used to ensure that the extractable profiles of components used in commercial manufacture remain consistent with profiles and components used in the pivotal development studies, and they are not a substitute for in-process control or supplier qualification.
So, we've put together two flow charts to help capture some of these issues. Also, the second flow chart will give us more detail on the tox qualification, which I haven't got into yet.
But we're starting off. This is taking us down through the routine extraction, controlled extraction studies, and into leachable studies. The first box here is starting off with the critical component, again a component in direct contact with the formulation, the patient's mouth or the nasal mucosa. And we're saying that if that's true, yes, then you do a controlled extraction study where we would do qualitative and quantitative assessment of all peaks greater than 1 to 20 micrograms per gram.
And then going down to the next blocks, we would then go on and do a leachable study on this material, using aged registration batches through end of shelf life to quantify in the drug product the extractables identified above. In this process, we would quantify all peaks greater than .2 micrograms TDI, and we would provide the identity and quantity of all leachables to the toxicologists for assessment, which is the next box.
Just going over, if the critical component did not contact the formulation or patient's mouth or mucosa, then we go over to the no box. Then we can do other testing that would be sufficient such as identity, dimensional properties, and so forth.
Going forward on to our routine extract studies, then we would be doing that and other testing if necessary. So, we have it all boxed up.
Now, going on to the qualification thresholds here, we start by asking whether an individual leachable is above 0.2 micrograms TDI, and if that's true, then we go down a couple of paths. We'll take the easy route first. Greater than 5 micrograms TDI would be our upper threshold. There we would confirm the structure, and then we would basically do a full tox assessment of that compound.
However, if the level is greater than 0.2 micrograms and less than 5 micrograms TDI, we would assess whether there are structure-activity concerns, and then we would do a tox assessment or not, depending upon the outcome of that assessment.
If we go up here to the top, if it's no, if the leachables are less than 0.2 micrograms TDI, then we would do no further evaluation.
So, these are the thresholds we're laying out for the qualification.
I'd also like to make the point we are also opening up a category for special compounds which may have SAR concerns or be nitrosamines or PNAs that are known to be a problem. So, we would treat those on a case-by-case basis. These thresholds may or may not apply to them.
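The qualification thresholds just described amount to a simple decision tree. A minimal sketch in Python follows; the function name, argument names, and return labels are illustrative, not taken from the proposal itself:

```python
def qualify_leachable(tdi_ug_per_day, sar_concern=False, special_compound=False):
    """Sketch of the proposed qualification decision tree for one leachable.

    tdi_ug_per_day: total daily intake of the leachable in micrograms.
    sar_concern: True if structure-activity analysis raises a red flag.
    special_compound: True for known problem classes (e.g., nitrosamines, PNAs).
    """
    if special_compound:
        # Known problem classes are handled case by case; thresholds may not apply.
        return "case-by-case review"
    if tdi_ug_per_day < 0.2:
        return "no further evaluation"
    if tdi_ug_per_day > 5:
        # Above the upper threshold: confirm structure and do a full tox assessment.
        return "full tox assessment"
    # Between 0.2 and 5 micrograms TDI: tox assessment only if SAR raises concern.
    return "tox assessment" if sar_concern else "no tox assessment"
```

A leachable at 1 microgram TDI with no structural alerts would thus need no tox assessment, while the same level with a flagged functionality would.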
So, going forward, we strongly encourage incorporating into the guidances these thresholds for identification, reporting, and qualification, and we are proposing that we have an ongoing discussion through various fora with toxicologists and chemists to work through these thresholds. We also have submitted our proposal to PQRI.
So, just to sum up what we've talked about here, the ITFG/IPAC-RS collaboration plans to bring several proposals to PQRI and continue discussions with the agency regarding the new DCU proposal.
We hope that through the meetings of the OINDP Subcommittee, Advisory Committee on Pharmaceutical Science, PQRI, and other appropriate fora, the work of the ITFG/IPAC-RS collaboration will be carefully considered.
And we believe that FDA and industry will be better able to respond to the needs of patients by expediting the availability of new OINDP products while maintaining appropriate standards for safety, efficacy, and quality.
Thank you for your consideration. I'll turn it over to the BA/BE presentation.
DR. BYRN: We're going to go ahead with Dr. Sequeira, and then we will have any questions or comments from the committee after this presentation.
DR. SEQUEIRA: I'll be brief. I'll try to keep it to under 5 minutes.
I'm a member of the BA/BE team, and this team has been in existence for a year and a half. During that time, we have been very productive and worked constructively on this very difficult issue, and some of our efforts are described on this slide.
We've made four presentations on this topic at meetings like these, and we've also submitted three reports to the FDA on this topic.
We've conducted a review of the current literature on this, a task which has stretched over the year and a half. We do not have substantive new approaches on dose response, but we feel that risk assessment and risk management must be done first to put this whole issue of nasal drugs into proper perspective, as I'll discuss a little bit later.
The in vitro study designs in draft BA/BE guidances are useful for comparability of products, but unproven in value for establishing clinical equivalence and substitutability.
Based on the data presentations made by the FDA at Tuesday's meeting and this morning, we agree with the OINDP Subcommittee recommendation of selecting one dose between the test and reference in the clinical study and the inclusion of a placebo.
We also agree that the traditional treatment study offers the most appropriate study design for assessing nasal drug products intended for local delivery and concur that the typical 2-week duration of this study is appropriate.
However, there is a need for the draft BA/BE guidance to further develop the statistical requirements for this study, even if it is to be used to confirm the comparability and substitutability of reference and test products. As most of you know, the weakness of this design is its dependence on seasons and the measurable placebo effect.
I'd like to present here a case study that is very relevant to this topic. This is work done by Casale, Azzam, Miller, and others and published in 1999 in the Annals of Allergy, Asthma, and Immunology. It deals with the demonstration of therapeutic equivalence of generic and innovator beclomethasone in SAR. I'd like to point out three issues with this paper.
The first, the authors state that the primary objective was to compare the test product, which in this case was a generic, at two doses versus the placebo. And their secondary objective in the paper was to compare the test versus the reference innovator product. We clearly think that a reversed hierarchy is more appropriate here and that the secondary objective should have been the primary objective.
Secondly, the study was designed as a difference study, not really as an equivalence study. The sample size was adequate to distinguish between active and placebo, but inadequate to distinguish between the two types of BDP, had there been a difference. This is a common design error. Failure to differentiate between the two products does not mean that a difference does not exist; a more robust design might well have detected one.
The third issue is the order of administration. The active was followed by a placebo, and the treatments were not randomized. Hence, we have the bias of washoff or washout by the placebo.
We really didn't mean to critique this paper, but only to present it as an example of the need for further work in this area.
Therefore, this leads me to the key steps to confirming the correct study design, which are summarized on this slide. Firstly, the draft guidance must address the issue of substitutability and not confuse this with comparability. Secondly, we need to develop statistical requirements for this study design for comparing the test and reference products. And the team seeks the agency's guidance concerning this issue.
One way to deal with open questions on bioequivalence study design is to use risk management to focus scientific investigation on those critical elements whose uncertainties should be given priority as the development of the guidance progresses.
We've highlighted here three risk areas present with locally acting nasal products in the context of clinical comparability and substitutability. The first is the comparability of the container closure system to assure comparable spray delivery. Here I must add that the FDA has done an excellent job with the guidance they have given for Q1 and Q2, but that takes care of the formulation. What we need is something I'd like to coin "Q3," to give us some measurable parameters on the packaging of this particular product so we can be assured that the spray is comparable between the test and reference products.
The two other issues concern particle size differences between the test and reference product and the implications of these particle size differences for both the onset of action and the systemic exposure of the product.
As Dr. Adams very well knows, people use different micronizers throughout the industry and end up with different particle size distributions for the drug substance. People also know that you can essentially nanosize the drug using microfluidization techniques and achieve a drug product with a very fine particle size. And people also know that you can make a mistake and do a lousy job on micronization. So, particle size ends up being a very critical issue here.
And it cannot be presumed that an in vitro test that correctly correlates with the local actions will also be predictive of the systemic outcome.
My last slide is missing, but I'll read it out to you. The container closure system and particle size are two key risk areas that remain to be addressed regarding clinical comparability and substitutability. We agree with the agency and the OINDP Subcommittee that particle size is important in determining standards for orally inhaled and nasal drug products. We agree that Dr. Adams and the FDA have rightful concerns about drug particle size in the emitted spray as one of the most critical parameters that could affect local efficacy and safety. In fact, their sister division on the pulmonary side considers dose delivery and the particle size distribution of that dose to be very critical elements for these products, even if they are line extensions of new products.
So, after giving you all those thank you's, I would like to now throw out a challenge, and I'd like to recommend that an efficacy study be developed to investigate the onset of action, via either a park study or an EEU study, so that we could at least be assured on substitutability of these products because a very important parameter of these products is onset of action. So, in addition to the traditional treatment study, we'd like to suggest a short-term 1- to 3-day study in the park or in the EEU to get a feel for onset of action.
DR. BYRN: Thank you very much.
Are there questions from the committee for any of these speakers? Judy? For Dr. Sequeira.
DR. BOEHLERT: Yes, for Dr. Sequeira. I have a question regarding particle size. It's very easy to control and measure particle size on the active ingredient. That can be done. The techniques are available and you can show comparability very readily.
In your experience, does that particle size change once it's formulated, and are you going to see a difference from one product to another?
DR. SEQUEIRA: Yes, in fact, Dr. Boehlert, Dr. Poochikin gave us a dissertation on the five or six factors that can change the particle size of the drug in the final formulation. After the drug is compounded by one of many, many techniques, where there can be homogenization or other kinds of techniques, changes can occur during compounding, during filling, and then finally on stability. And he listed a few more factors that I don't have the time to cover.
DR. BOEHLERT: Is that reducing the particle size or increasing the particle size, or both?
DR. SEQUEIRA: Sorry?
DR. BOEHLERT: Does the particle size go down or up or either?
DR. SEQUEIRA: It could go either way, depending on the manufacturing.
DR. BYRN: Dr. Meyer.
DR. MARVIN MEYER: This wasn't your presentation, but I was curious how the threshold limits were established for the extractables and leachables.
DR. BLANCHARD: If you wanted, I could give you slides. We have prepared slides to describe this if you want to go through the process, or I can give you just a high level -- very high level?
High level. Basically we worked from the 5 microgram TDI. We compared that to daily exposures a person would get to ambient air pollution. So, basically we're trying to look at what are people exposed to every day and what do we accept as being safe every day.
So, there's actually a study called the Harvard Six Cities Study that measured air pollution concentrations and related them to mortality and cardiovascular problems. In that study, they found one city that was actually very, very clean. It was Portage, Wisconsin, and it had a concentration of 18 micrograms per cubic meter of these particles. Actually that's very, very clean air compared to all the other cities in the U.S.
We used that as a reference point, realizing we've got an added safety factor just by the fact that it was very clean air. We calculated what people would be exposed to in that city at different ages and also for people with disease, and said basically these are the different ranges they would be exposed to, then looked at the safety factors we're talking about. So, basically the 5 microgram TDI stands up very well when you do that analysis. We're talking about being 3 percent or 9 percent of what you'd be exposed to due to ambient air pollution in those cities.
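The arithmetic behind that comparison is straightforward to reproduce. In the sketch below, the daily inhaled-air volumes are illustrative assumptions of my own, not figures from the study or the presentation:

```python
# Rough reconstruction of the ambient-air comparison.
# The inhaled-air volumes used below are assumptions for illustration only.
PORTAGE_PM_UG_PER_M3 = 18.0   # particle concentration in very clean air
TDI_THRESHOLD_UG = 5.0        # proposed upper qualification threshold

def tdi_as_fraction_of_ambient(daily_air_m3):
    """Fraction the 5-microgram TDI represents of daily ambient particle intake."""
    daily_intake_ug = PORTAGE_PM_UG_PER_M3 * daily_air_m3
    return TDI_THRESHOLD_UG / daily_intake_ug

# For an adult breathing ~20 m3/day, daily particle intake is 360 micrograms,
# so the 5-microgram threshold is under 2 percent of that exposure; smaller
# assumed volumes (children, resting subjects) give the few-percent figures.
for volume in (20.0, 10.0, 3.0):
    print(volume, round(100 * tdi_as_fraction_of_ambient(volume), 1))
```

The point of the sketch is only the structure of the argument: the proposed threshold is a small fraction of the particle burden already accepted in even the cleanest measured city.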
We've also done comparisons with being exposed to different MDIs, high dose MDIs, low dose MDIs, acceptable residues from metered dose inhalers. So, we have a four-pronged rationale based upon that.
So, basically the ambient air pollution one is the top one actually in terms of what's driving that.
Do you want to get into the analytical thresholds at all?
DR. MARVIN MEYER: No. I was just curious. I'm not in a position to debate whether that's good or bad. I just wanted to know how you did it.
DR. DOULL: Steve, let me follow up on that, Dr. Blanchard.
I don't know whether you're aware. Alan Rulis has put together a threshold of regulation for Food and Drug. It has to do with packaging materials. It really is in the food section, but it's a very similar concept.
DR. BLANCHARD: Right.
DR. DOULL: And I was struck by the fact that your TDI is similar really to what Rulis has --
DR. BLANCHARD: Are we talking about the threshold of regulation, which is 0.5 parts per billion?
DR. DOULL: Yes.
DR. BLANCHARD: Right. We're familiar with that. We actually reviewed that, and we were looking to incorporate some of that rationale in our thinking. So, we are aware of that.
DR. DOULL: His argument is that no matter what the agent is, even if you take the carcinogens, whatever list of carcinogens that go with that rationale, that is in fact a threshold of concern that is reasonable.
DR. BLANCHARD: And the rationale there was that even if a compound was not known to be carcinogenic today, if it was later found to be carcinogenic, the exposure would still be so low as to be trivial.
DR. DOULL: I had one other question. You talked in there about using SAR, structure activity.
DR. BLANCHARD: Yes.
DR. DOULL: Are you talking about components of the molecule or are you talking about the molecule itself? You're saying if it's cleared by SAR, then --
DR. BLANCHARD: With SAR, you're looking at components where basically you find a functionality that would be of concern. Then that would raise a red flag for you. We could work this through with the agency, but you could take a conservative approach and say, well, if we know this functionality is problematic, then we would put that compound into a special category and give it further analysis.
DR. DOULL: You mentioned nitrosamines, for example. You could say all those agents that are similar you're going to put them in the same bag and be concerned about them.
DR. BLANCHARD: Right.
DR. DOULL: Or you could be looking for quaternary ammonium or something which should be a part of the thing.
DR. BLANCHARD: So, I'm thinking we're going to look at functional groups, not the whole compound. Both, actually. The nice thing about nitrosamines is that you know going in that these are well characterized compounds. You know you should be looking for them and you are expected to be looking for them.
DR. DOULL: It's kind of a decision tree.
DR. BLANCHARD: Right, and we can handle it on a case-by-case basis.
DR. DOULL: Well, that's interesting.
DR. BYRN: Thank you very much. We'll be sure to provide this information to the people who are writing these guidances, and I'm sure they will take it into consideration.
Now we're going to go on with the next session, and let me introduce Dr. Lachman and Hollenbeck who are our guests for this session also. Both of them have spoken before and are on the left.
First, Dr. Ajaz Hussain is going to give an introduction. Dr. Hussain is acting Deputy Director in OPS.
DR. HUSSAIN: Good afternoon.
The afternoon session is actually taking a look at some future directions. I will not ask you to vote on any of these, but we would like your comments and recommendations on the two topics that we present to you this afternoon, as we start to take a look at some new directions and at bringing new science and technology into manufacturing.
The topic I have chosen is the optimal application of in-line or at-line manufacturing controls in pharmaceutical product development. For the last couple of years -- actually more than a couple of years -- our labs within FDA have been working with some of the new analytical methods which offer, we think, significant opportunity. The publications that I provided to you in your handout material illustrate the types of applications that are feasible. Other chemical industries -- indeed, the food industry -- have adopted some of these and are benefiting from these technologies. Pharmaceuticals have not done so, and I feel there is an opportunity for significant public health and economic benefits if we can have optimal application of modern in-line and at-line process controls and tests in pharmaceutical manufacturing.
One could look at that as a hypothesis, and that's what I'm presenting to you. The goal here is to initiate public discussion on opportunities and challenges associated with regulatory application of what we call process analytical chemistry tools in the pharmaceutical industry.
I have invited Dr. Raju from the MIT Sloan School of Management and the Chemical Engineering Program, which is focusing on pharmaceutical manufacturing, to discuss with you anticipated win-win opportunities. The MIT program, run in conjunction with a number of companies, has looked at modern manufacturing methods. I hope you get not only the time and cost saving type of information from him, but a sense of what engineering applications to pharmaceutical production can do for us.
As an introduction to what I mean by process analytical chemistry, here are the many different technologies that are part of it. The goal is real-time characterization and analysis of samples or material, with decisions made as close to the processing step as feasible. Generally these measurements are accomplished without sampling and are multivariate in nature.
Two very common examples are near infrared and Raman spectroscopy in the transmission mode, as well as the reflectance mode. These essentially can be within the processing unit itself or would be close to the processing unit, so that you don't have to collect a sample, and information about the sample is gathered at the site, and decisions could be made rather quickly, as opposed to the conventional method where you collect the sample, do the analysis, wet chemistry, and so forth. So, you're looking at a difference between wet and dry chemistry here in some ways.
My presentation is focused more on the formulator's perspective -- how we think a formulator would benefit from these technologies. To give you a sense of what these tools are, here is an example of gasoline analysis. On the top, you have four different attributes being tested by different methods: an octane engine test taking 40 minutes, an RVP analyzer, a GC method, and a density meter. All of those attributes can be measured on-line, or quickly, with near infrared from the same spectra. So, one method is able to characterize or gather information about various physical and chemical attributes.
So, in this case, the difference here is you essentially use pattern recognition tools to understand the relationship between the spectral attributes and those physical or chemical attributes of interest. Based on that calibration curve or the statistical model, you have a system that can evaluate a new sample that comes along. So, that's the framework under which many of these process analytical chemistry tools operate.
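That calibrate-then-predict framework can be illustrated in miniature with a one-wavelength Beer's law calibration fit by least squares. The data here are synthetic, and a real process analytical method would use a multivariate chemometric model over the full spectrum rather than this single-variable sketch:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for a univariate calibration curve."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

# Synthetic calibration set: absorbance at one wavelength vs. known concentration.
concentrations = [1.0, 2.0, 3.0, 4.0, 5.0]
absorbances = [0.11, 0.20, 0.31, 0.39, 0.50]   # roughly 0.1 per concentration unit

slope, intercept = fit_line(concentrations, absorbances)

def predict_concentration(absorbance):
    """Evaluate a new sample against the stored calibration model."""
    return (absorbance - intercept) / slope

# A new sample with absorbance 0.25 should read close to 2.5 concentration units.
print(round(predict_concentration(0.25), 2))
```

Once the model is built, each new spectrum is scored against it in real time, which is what lets the decision sit next to the processing step instead of in a wet-chemistry lab.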
I have taken this from the website of a company, whose name I have obviously blocked out, for pharmaceutical applications. The website says, "from incoming raw material inspection to final product release," instruments, software, and all these technologies have been available. You can see the progress that has occurred in this area over the last 10 years.
These obviously are available but are currently being used as alternates. These are not generally regulatory methods; they are alternate methods used in addition to the regulatory testing.
I would like to focus my thoughts on what I perceive the impact on product quality could be from adoption of some of these technologies. In my opinion, the current situation begs us to take a hard look at this now. Combinatorial chemistry and high throughput screening have essentially created a scenario where the number of interesting, promising new chemical entities is humongous. As a result, development, including product formulation development, is becoming rate limiting.
There are two aspects which are challenging. First, formulation development has always been considered a black box because of the inability to reliably predict product performance changes when formulation and process variables are varied. Second, there are the variable physical and functional attributes of raw materials that are known to conform to USP or NF standards. Compendial standards have always focused only on chemistry, not on the physical attributes. So, the functionality of excipients has not been a public standard, and it's not likely to become one because of the complex nature of excipients, as well as their multiple uses. It's a very difficult process to build public standards based on physical attributes.
Process analytical chemistry tools focus on both the physics and the chemistry at the same time, and at the right place. So, here is an opportunity which in pharmacy, at least, we have not, in my opinion, taken full advantage of. The number of publications is humongous. Some of those you have seen in your handouts, and they're very impressive. But in terms of evolution, I see bringing these technologies in as really helping move pharmaceutical manufacturing to the next stage quickly.
From my way of looking at it, the tablets that we make today are the same as we made 100 years ago. In fact, aspirin, the first tablet ever made, is over 100 years old. So, we have been making tablets and capsules essentially the same way, with the same process, for the last 100 years.
But during those 100 years, we have transformed pharmacy from an art to more of a science and engineering based profession. In the last 30 years, you have seen physical chemistry principles and engineering principles coming in, but we're not there yet. We still develop our formulations through a trial and error approach, although it's a guided trial and error approach, where a formulator with vast experience can guide the formulation development program quickly.
But keep in mind, at least from the pharmacy school perspective, pharmaceutics and other disciplines have sort of eroded away, and formulation development is literally not being taught in schools anymore. So, the experience base and the knowledge base are to some degree eroding away. So, the trial and error has to be guided. In the absence of that, it becomes very difficult.
There has been a tendency toward moving to design of experiments, which Professor Banker and others had initiated, but in 1994 Professor Shangraw did a survey of the pharmaceutical industry to see how many companies were utilizing statistically designed experiments for formulation development. That number came to be 5 percent. So, the trend has not moved in that direction. Although we would like to see more designed experiments and hopefully computer-aided design concepts come in, that has not occurred.
Dosage forms have transformed drug delivery systems. The next stage is obviously intelligent drug delivery systems. If we are able to improve the formulation science, then we actually create more opportunity to look at more creative options. Here's an opportunity. Batch processing to continuous and automated processing is obviously a desired next step in this evolutionary process.
However, coming back to the pharmaceutical product development process, here are some of the attributes that we have to address. It is a multi-factorial and complex problem. There is significant reliance on the formulator's personal knowledge. Historical data are likely to have been generated by a guided trial and error approach. And there are many choices for achieving a target specification.
Therefore, I think from an FDA perspective, evaluating some of those changes under SUPAC, for example, becomes a challenge. Without up-to-date information, there's a high potential for misjudgments, reinventing the wheel, and loss of institutional memory. We have seen in many situations that approved products need frequent changes. They're not optimal.
So, if you look at the pyramid of pharmaceutical product development knowledge, I tend to put that knowledge base at low to medium in terms of the level of sophistication and the detail it is able to resolve. The reason is that most of our database is based on historical trial and error. Pattern recognition and generalization of that data is extremely difficult. We have heuristic rules of thumb and very few empirical models for formulation development. With respect to mechanistic modeling and physical rules, we're not there yet.
How are we controlling unit operations now? If I take a simple unit operation, blending, the last two years I have been engrossed in blending problems and the criticisms received from industry of our guidance. Blending is a major thing in my mind right now, and therefore I have asked Dr. Raju to use blending as an example to illustrate some of the issues.
How do we control blending? We define the equipment, type, size, operating speed. We define a process time. Then we check whether the blend is homogeneous or not. So, you blend, put thieves in, collect samples, and check.
Wet granulation. We define equipment, define fluid addition, composition, volume, and process time, and check for moisture content after we dry those granules. These are fine but are limited in scope with respect to performance predictions.
Unit operations are intended to produce in-process materials that possess optimal attributes for subsequent manufacturing steps. We know that.
Do current controls always ensure consistent quality of in-process materials? They can't. One reason is the physical attributes of the pharmaceutical raw materials can be highly variable. We don't have a good handle on that.
A consequence is that processes do need to be adjusted, and if you adjust them beyond certain ranges, you have to seek regulatory approval, or some regulatory evaluation is needed. So, it's an added level of scrutiny. One of the goals of the whole risk-based initiative is to reduce supplements.
So, the current situation, again to summarize: in-process testing is the norm, not in-process control. Take blend uniformity, for example. I'll stop the blender, test, and wait for the answer before going to the next step. That's one way of looking at it. If it were controlled, blending would continue until the blend is homogeneous, and then you'd move on.
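The contrast between test-then-proceed and closed-loop control can be sketched with a toy model in which blend RSD decays smoothly with blender revolutions; the decay constant and acceptance limit below are invented for illustration:

```python
import math

def blend_rsd(revolutions, initial_rsd=25.0, decay=0.05):
    """Toy model: blend RSD (%) decays exponentially with blender revolutions."""
    return initial_rsd * math.exp(-decay * revolutions)

# "Testing is the norm" approach: blend for a fixed time, stop, sample, and check.
fixed_revolutions = 40
passed = blend_rsd(fixed_revolutions) <= 5.0   # illustrative 5% RSD acceptance limit

# "Controlled" approach: monitor in-line and blend exactly until homogeneous.
revolutions = 0
while blend_rsd(revolutions) > 5.0:
    revolutions += 1

print(passed, revolutions)
```

In this toy run the fixed-time batch happens to pass, but only the monitored batch stops at the endpoint itself; with more variable material, the fixed-time batch could just as easily stop short or over-blend.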
Process parameters and specifications are set based on limited data. Raw materials -- we don't know their functionality well. And the combination of all this -- in-process sample collection, testing, verification, and, as a result, a lot of exceptions -- contributes to long production cycle times. It was a bit of a surprise to me that it could take 30 to 60 days to manufacture one batch of tablets.
Process validation. What are the limitations there and how are we doing that? I found this quote by Harwood and Molnar quite interesting. The publication was called Using Design of Experimental Techniques to Avoid Problems, published in Pharmaceutical Development and Technology in 1998. They characterized current practices in validation as a "well-rehearsed demonstration that manufacturing formula can work three successive times." In their experience, "validation exercise precedes a trouble-free time period in the manufacturing area, only to be followed by many hours, possibly days or weeks, of troubleshooting and experimental work after a batch or two of product fails to meet specifications. This becomes a never-ending task."
Clearly, companies would not release batches which fail specifications. That would be the subject of a recall. But here is a situation that at least creates temptation. If your batches are failing, it leads to problems. And some of the court cases I was involved with dealt with these issues.
I hope that is not a general observation. I'm sure it's not a general observation. But the example does illustrate what happens when quality is not built in, and quality cannot be built in till you really understand your processes and so forth.
The types of cycle times that you're looking at, which you will hear about from Dr. Raju in more detail, are as follows. It takes 21 to 90 days to qualify a raw material. It takes about 60 days to manufacture and release a tablet formulation. You'll hear more about this, so I will not dwell on it.
So, what we are talking about right now is the next step in the evolution of process controls. When I started out in pharmacy school and my industrial training, this is how we did it. Reach out, grab some of the granules, squeeze them, see how they break, and then decide whether the granulation endpoint is reached or not. That was years ago. Things are different now, obviously.
But the next step in the evolution is to go more objective: gather physical and chemical information about the granules to ensure that the granulation is optimal, so the next step, tableting, will be as smooth as possible. And that's feasible now.
Modern in-process controls. I'll use near IR as an example because in our labs we have more experience with that right now. It's a noninvasive spectroscopic technique, and you could also use it as an imaging tool -- and I'll show you some examples -- which has been in use for the last 10 years in the food and chemical industries.
It provides real-time control of processes without having to collect samples.
One can potentially process material until optimal attributes are achieved, as opposed to stopping and testing.
And using pattern recognition tools, one can relate near IR spectra to both physical and chemical attributes of materials and hence be in a position to predict product performance and therefore improve product quality.
If I were to apply near IR technology to a tablet formulation -- I chose direct compression as an example -- on the left-hand side, the conventional approach would be: get the raw materials, do the compendial tests to make sure they meet the specifications, blend the product, and test for blend uniformity. Keep in mind that the only component we test for is the drug. One of the culprits that creates problems is magnesium stearate, in very small amounts. We never test for that.
Compaction. We make the tablets. We check for hardness, thickness, weight, friability, and so forth, plus content uniformity and dissolution. All of those could be done literally at-line or on-line with some of these technologies.
I'll give you an example of some of our work. Blend uniformity has been an issue and PQRI has actually developed a proposal on how to address that. The proposal is posted on the PQRI website. But we wanted to look at the near IR imaging technique to see what can be done.
So, we were looking at tablets. These are furosemide tablets that I think were made at the University of Iowa. No -- these are handmade tablets from our labs, a binary mixture of drug and excipient. What you're looking at is a chemical image. The tablets are white, colorless tablets, but in the chemical image, the white areas are the drug and the red areas are the excipient. Looking at each of the pixels in the digital image, which was acquired in less than a minute -- actually in 30 seconds -- you get that picture. You can then develop simple metrics to do the analysis.
Here is our University of Iowa product, where the scale of interest we're looking at is actually a small part of the tablet. With the current technology of blending, we can achieve uniformity far beyond what we had anticipated. So, blending should not be a problem. We are doing it right, but right now we are having trouble proving that we are doing it right.
So, here, for example, if you analyze each pixel, you can see the complete distribution of the drug concentration and how symmetric it is when the blend is uniform. When it's not uniform, you can see how things change. This information can be gathered in minutes, if not seconds.
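A simple uniformity metric of the kind described can be computed directly from per-pixel drug signals. The two 3x3 "images" below are invented for illustration; a real chemical image would have thousands of pixels derived from the spectra:

```python
def image_uniformity(pixels):
    """Mean and relative standard deviation (%) of per-pixel drug signal."""
    values = [v for row in pixels for v in row]
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)
    rsd = 100.0 * variance ** 0.5 / mean
    return mean, rsd

# Synthetic 3x3 "chemical images": per-pixel drug fraction at each location.
uniform_blend = [[0.10, 0.11, 0.10],
                 [0.09, 0.10, 0.11],
                 [0.10, 0.10, 0.09]]
segregated_blend = [[0.30, 0.28, 0.02],
                    [0.25, 0.05, 0.01],
                    [0.02, 0.04, 0.03]]

_, rsd_uniform = image_uniformity(uniform_blend)
_, rsd_segregated = image_uniformity(segregated_blend)
print(rsd_uniform < rsd_segregated)   # the segregated image shows far higher RSD
```

Both images have about the same overall drug content; only the pixel-level spread distinguishes a well-mixed blend from a segregated one, which is exactly what a whole-tablet assay cannot see.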
I'll use another example. Since I mentioned magnesium stearate, here is a slide that Steve Hammond from Pfizer shared with me showing what can be done that we could not do before. Two blends, one with good flow properties, one with bad flow properties. Look at the distribution of magnesium stearate in each. You can easily associate problems with solutions and develop causal links quickly.
Just to go on as an example, near IR is not the only one. Raman. You could have a three-dimensional Raman spectroscopy of a tablet's surface and look at where the aspirin is and where the excipient is, and actually do quantitative analysis at the same time.
Here is a very recent publication from Dr. Lodder's group from Kentucky, published in the Pharm. Sci. Tech. of AAPS. Since it was available on the web, I downloaded this. Here you're looking at the ability to analyze aspirin and salicylic acid after it has been packaged. So, this is through a blister pack. You don't even have to wait. Through a blister pack you could look at aspirin and actually look at the moisture content of the tablet without having to open the blister pack.
So, the technology is maturing, but there are many challenges. One of the challenges I have heard, talking to people from industry and on a recent trip to the U.K. for the New Technology Forum, is the mindset out there that FDA will not accept it. FDA will accept it if there's good science. Period. There's no question about it.
I think that mindset exists within companies as well. Regulatory affairs departments within companies have to be convinced, and others have to be convinced.
There are challenges. Method suitability and validation approaches have to be developed and agreed upon; a consensus has to be built.
Chemometrics is something which traditional analytical chemists are not fully cognizant of and don't have expertise in. So, chemometrics and pattern recognition will have to come in, and we'll have to learn how to deal with that.
Also, mechanisms of regulatory introduction have to be developed so that investment costs and other cost issues can be managed properly.
So, to summarize, potential benefits for process analytical chemistry. I believe that manufacturing and quality control cycle times can be reduced and costs can be reduced. It can improve product quality, provide information during processing for feedback control. Direct sampling problems are eliminated and can facilitate establishment of causal links between product and process variables and product performance.
Improve patient and operator safety. Keep in mind many of the products are very important, and operator safety is a concern.
And I firmly believe there's a win-win opportunity that will require out-of-the-box thinking on both FDA's and industry's side to move forward. I hope you would support my perceptions here, and I would like to hear your thoughts on this.
The second presentation will focus more on the opportunities that exist in reducing cost, time of development, and so forth.
DR. BYRN: Questions for Ajaz? I'm sure we'll have a discussion after the second one, but are there questions for Ajaz right now?
DR. ANDERSON: Did you say that you are using near infrared in your laboratory?
DR. HUSSAIN: Yes.
DR. ANDERSON: Could you just take a couple of minutes and comment on it, on the results that you're getting?
DR. HUSSAIN: Actually I had planned to share with you some recent information. I had Robbe Lyon -- he is here, the division director -- give me a comparison between HPLC and near IR. They are currently doing furosemide content uniformity analysis. The estimated time to do a USP analysis for furosemide tablets is 34 hours using the HPLC technique. It's 3 hours with near IR. The complete analysis takes 3 hours, everything.
Then there are the sample costs for a stability study that we are doing. The cost per sample using near IR, again for the same drug, is about $2.25 compared to $47-something for HPLC. So, that's our experience in our hands.
Instrumentation cost is almost comparable. The instrument that we have is about $75,000 for the near IR, and a high-end HPLC is $40,000 to $50,000.
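A back-of-the-envelope check of these figures (a sketch using only the numbers quoted above, not an official cost model; the HPLC instrument price is taken as the midpoint of the quoted range):

```python
# Per-sample cost of NIR vs. HPLC, and the sample count at which the
# extra instrument cost of NIR pays for itself.

nir_per_sample = 2.25     # dollars, from the transcript
hplc_per_sample = 47.0    # "about $47-something"
nir_instrument = 75_000   # dollars
hplc_instrument = 45_000  # midpoint of the quoted $40,000-$50,000 range

extra_capital = nir_instrument - hplc_instrument      # $30,000 more up front
saving_per_sample = hplc_per_sample - nir_per_sample  # $44.75 saved per sample
break_even = extra_capital / saving_per_sample        # roughly 670 samples
```

On those assumptions the higher instrument cost is recovered after several hundred samples, well within a single stability program.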
DR. HOLLENBECK: Ajaz, in the backgrounder, there was the statement that you made that went like this. The regulatory environment under which the pharmaceutical industry must operate is often suggested by many to be an impediment for introducing these tests. I think you just covered that in your slide by saying that FDA won't accept it, but can you expand on that a little bit more in terms of what impediments exist and what steps can be taken to get rid of them?
DR. HUSSAIN: The challenge here is I think uncertainty. We don't have a guidance out. There are many parts of the agency that have to deal with this from the field to the center. So, that itself is a challenge.
I think the major challenge is validation in terms of how do you validate this. I'll use blend uniformity as an example. Sampling using a thief is a challenge. It creates this problem. But the mind set is to validate near IR, you have to compare it to that method. I think if you're looking at a modern technique, with the potential of becoming the gold standard, you have to compare that to some standard. We had that discussion this morning with clinical. The same issues cross over. So, again, I think we have to think outside the box how you validate some of these tools and bring those in without adding a burden.
What we plan to do is create a subcommittee. There are a number of challenging issues. In my letter to you all, I suggested that we really need a multi-disciplinary team to look at the feasibility and so forth. So, a subcommittee under this committee would be my proposal.
DR. BOEHLERT: May I just make a comment as well? Maybe we need to think even further outside the box when it comes to things like blend uniformity testing, because right now things like the Barr decision are forcing manufacturers to take single samples, one to three times the size of a dosage unit, take them off-line, and test them by a technique, and that creates the problems. So, testing is one aspect, but it's other things that are impacting what we have to do today.
DR. BYRN: Our next speaker is a good friend of mine, G.K. Raju, who is going to give a case study on in-line process controls.
DR. RAJU: I'm not sure if this is a good thing or a bad thing. I haven't been to an advisory committee meeting in my life. I'm not sure that it's a good thing. I'm not a pharmacist. I'm not a doctor, but I want to help make medicine cheaper, better, and faster for patients because I think it's a great thing to do, and I want to do whatever little I can to help do that. I am a chemical engineer, and think of the next few slides as a chemical engineer's view of the pharmaceutical industry.
This is the training I come with that affects how I look at things. That affects what I'm going to say when I look at these things. So, I'm going to summarize an outsider's look at the pharmaceutical industry at multiple levels. Hopefully I have something intelligent to say. I'm not really asking for anything. I'm asking really for you to lend me your eyes and ears and hopefully your mind. And this is a summary of what I think I'm going to say.
Since I'm new to this field and this audience, I'm going to tell you where I come from. I'm then going to take two very quick, very high-level looks at the industry. I'm going to go through a lot of slides, and that's because I want to go through a lot of things quickly. So, don't worry if you don't get the details. You have them in your background slides.
I'm not from New York. I am from Boston, and I'm also from India so I can talk pretty fast.
DR. RAJU: So, this is the introduction to where I come from, sitting in the chemical engineering department and also in the business school at MIT. We then decided to work together in what we began to call the MIT Pharmaceutical Manufacturing Initiative. And our passion was to begin to describe and capture the opportunity to impact this part of this pharmaceutical industry.
What was that part? We had to draw a diagram. That was one of the first things we were taught: let's draw a diagram that represents that little block. That diagram has pieces over time and pieces over space. That's pharmaceutical manufacturing. There's the process development over time, and then there's routine manufacturing. We have the chemistry changing in the active ingredient. Then the dominant physics: what are the components? Then smaller aspects of physics: what form should these components be in, and how do I package around them? No chemistry there. Physics in the middle two, chemistry here, sometimes biology, and some paper most of the time around it. That's what pharmaceutical manufacturing looked like.
So, if I was going to measure and characterize it, I had to measure it in terms of something, and we all know what dollars are. We can debate what quality is, but we have a pretty good understanding of what that is. Time means the same thing to everybody. It's the time on a clock. And safety can mean different things to different people.
For this presentation, I now have a choice which one of these to talk about. It seemed like the most neutral and seemingly communicative thing to do was to talk about time because all of us know what that is. It's pretty neutral. It's important. It's the same thing for everybody. So, for the rest of the presentation I'm going to talk about time, looking at it from two points of view.
Routine manufacturing. When we first looked at pharmaceutical manufacturing, it seemed like the word only meant routine manufacturing, which was this, and process development somehow was disconnected from it. So, routine manufacturing. The first question was, what is routine manufacturing and where is the time spent?
So, we said let's look at some blocks of routine manufacturing. We got together a consortium of a lot of companies. Over the years I've worked with about 25 companies representing 80 or 90 percent or more of the pharmaceutical business. Formulation was one of the focus areas of a particular consortium, and we said, let's start looking together at your plants from an outsider's point of view and measure where the time is spent.
Once we decided to do that, the question then became which products do I look at. Everybody makes different kinds of products. So, we said we can do high volume products. Those are the billion dollar products. We can do the complex ones, and we had some discussions about complexity, and then there were liquid lines which have totally different manufacturing and testing priorities. Which one do we choose?
Since we had no basis to choose, well, yes, about 80 percent of the products are solid, so we could look at the first category, but liquids were distinct. So, we wanted to know what they were about as well. So, we said we don't really have a basis to choose between, so let's do a little bit of all of them. Let's look at the high volume products, for example.
The first step I was taught was to draw a process flow diagram. From a chemical engineering view, we said let's draw the so-called unit operations, what is happening in each step, and we chose the color blue. This is the active ingredient that we don't study, and I showed you that block on the previous slide.
The first thing that came to my mind is why are these tests at the two ends of it. I began to understand that, of course. But why is it that we don't measure anything in between? We had two dominant places where we did testing: at the end, at the beginning. We had very minimal in-process testing in my opinion. I was surprised at the very little testing that happened along the way. It was something I wasn't used to, and I kept asking why.
I said, yes, we make a product that goes into somebody's body. That's important. We have to make sure it's safe. We have to worry about its efficacy. I don't know if it's 210 or 211 of your CFR documentation, but these are the definitions about purity. I read them and I said, okay, this makes sense, that you have to do these tests because they mean something in the body. But why are we doing it at the end? Yes, that's the last place we can do it. We can be pretty sure that when it comes out, it's done.
But what are the consequences of only doing it at the end? Maybe we should think about that as well. It's not just a zero sum game here. There are some consequences, possibly, about measuring things here when the causes of that variability may be very early on.
Second, raw material testing. I was surprised at how little implications of the physics of the process were captured in that test. If formulation is all about the physics of the process, the main test was really a chemical test. And I wondered why. Again, as you wonder, you start saying, let me look at a few more cases. Maybe this is just one example.
So, I used the same colors now, and I simply said instead of drawing a process flow diagram in space, let's draw it in time. So, it's the same colors now. All I did was say let's draw them in time and look at it from a company's point of view. What came out instantly was an observation that the red testing took significantly more time than the making itself. Were we pharmaceutical manufacturers or were we pharmaceutical testers? It's just a general open question to ask. So, testing dominates what we do. Clearly there are important reasons.
Is this just process A? Maybe if you look at a few more, we'll see if there's some pattern here. Another big high-volume product. Usually now we're talking about close to a billion dollars or more, so significant. I'm not doing products that are not important. It looked like a simpler process, the tests very much defined by the body now. The tests are very much defined by what a tablet should do. And the testing is in the same places again, very little in the middle. The consequences in time look very similar. Again, about 20 days from the beginning to the end, less time in the actual making of the tablets. Then there's the API, which I don't even count, and this inventory afterwards that I don't even count.
Let's look at another one. Is there a pattern here? Yes. The tests look very similar, almost expected now. The times keep coming almost similar. So, it's not the company. It's not the location. It's not the product. Maybe it's just the high volume products that look like that because that's what I've seen so far.
Here's another high volume product that looks very similar.
Just to be sure, let's look at a fourth one, and it looks very similar again. We take a couple of months to go through the system, half or more than half of the time testing it in some way. Does that testing take that long? What drives the time of those tests?
But before I go into that question, let's make sure that we've seen a representative -- if you would go back to the active ingredient manufacturing, you would see a much longer time. And if you look at this time and you add it up from the beginning to the end, you ask yourself is this what we want to do in pharmaceutical manufacturing. What are the consequences of allowing us to do it? That is, if there's some variability here and because of our testing and the way we define it, we see it 100 days later, how are we going to relate the cause and the effect, and what happens to our problem solving of asking why we see something? Does time affect that kind of a thought process?
We finished high volume products. Maybe it was just those billion dollar products that look like that. Let's look at a complex process, complexity measured in many ways. One measure would be the number of steps, which in the previous presentation you said wasn't important. In this case it clearly was a complex process. I try to make sure they always fit on one slide, so I don't take too many slides to explain it.
But again, you have a process that does a number of things again and again. The way we measure how well we do it is testing at multiple places. If you look at that process in time, this is what it looks like. Again, the testing dominates the time very much.
Let's take a liquid line, and liquids are different in the sense that uniformity is a little easier to establish. Micro-testing is a little bit distinct in terms of testing priorities. So, let's look at a liquid line, although those are not the dominant dosage forms.
Yes, the basic tests around it look very similar. The sterility test clearly is going to show up on the next slide. If we now say let's put the process and draw the time around it, you really start wondering why this ratio of the testing to process is so different. If you then say let me try to summarize and see if I can get something important around it, you ask where is the leverage.
The first is to make sure you put all those products on one slide and ask do I see a pattern, and we do see a pattern and the pattern being that almost always the testing seems to take at least as much time as the making itself.
What shall we do about that? First, we probably have to understand the testing itself. So, if that is at least the single biggest thing we should look at, maybe we should look at it in a little bit more detail.
So, the big picture. Let's go to the next level of the picture for each of these red bars. So, we said let's look at those tests. What really are those tests and where is the time there? Let's look at any of those tests, at the beginning, the middle, or the end of a process. It always starts with a unit operation that ends. It stopped. You take a sample from the process. You hold the sample in the plant. You then document your sampling. You transfer it to the lab. You then batch it in the lab. You then actually do your test right here, then collect data. You document. You transfer for review, and then you make a decision about what?
If you looked at your test itself, it's this tiny little thing here. And Ajaz says he was comparing HPLC with NIR. What kind of a difference does it make? But Ajaz also said at-line and in-line, and it's in those aspects that the opportunity lies. It can be Raman. It can be laser-induced fluorescence. It can be NIR. But it's the at-line and on-line part that takes care of these red bars. That's where the variability comes in in many cases, because we as human beings don't like to do the same thing again and again for too long. Sometimes that shows up in many places. But yes, we can do something about the testing, and yes, this is where the pieces are.
So, if you look at the technology opportunities around it, the only way to attack this place completely is the word on-line. Along the way we go from off-line to at-line, in-line, and on-line. You can see the transition, and I think there's an opportunity for the whole industry to make that transition test by test, product by product, and I think that's a lot of time that we can do something about.
So, to repeat, it's not the test itself. It's the before and the after of the test, which is 98 percent of time opportunity.
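The "98 percent" point can be illustrated with hypothetical step times. The step names follow the walk-through above; the minute values are invented for illustration, not measured data from the consortium.

```python
# Minutes spent in each step around one off-line test. Only one of
# these steps is the assay itself; everything else is before and after.

steps = {
    "stop unit operation and sample": 30,
    "hold sample in plant":           240,
    "document sampling":              20,
    "transfer to lab":                60,
    "wait in lab batching queue":     720,
    "run the test itself":            30,
    "collect and document data":      60,
    "review and decide":              300,
}
total = sum(steps.values())
assay_fraction = steps["run the test itself"] / total  # a few percent
```

With numbers anywhere in this neighborhood, the assay is a small slice of the cycle, which is why moving it at-line or on-line attacks the whole red bar rather than just the chemistry.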
So, what did we say? We said if we were all about making quality, we measure it very infrequently. Why do we measure it so infrequently? Because it's a lot of work. It takes a long time. The scale of the test is based on the scale of the human being. The manual nature of the off-line test defines the cost-benefit tradeoff of doing that test. Hence, we do it at the end because we have to do it at least at the end we think.
But once we make it on-line, the tradeoff of number of tests to the cost of the tests has now changed fundamentally. So, one test and two tests are not necessarily once and twice more expensive in terms of the organization's time, cost, and possibly even quality. We want to make it more continuous. The FDA would be very happy. So would we because we would actually have differences in our times, we would have differences in our processes, and we would attack the off-line test once and for all.
So, that's the first message of a chemical engineer looking for a little bit of time at routine manufacturing over space. We covered different products. We thought we had some conclusions. But clearly I had to look at it over time, and there were so many things I could look at. From a chemical engineering perspective, I would love to look at the active. There's something chemical going on.
But the consortium, when we sat together and we said we can do all of this, we can study all of this, they said look at blend uniformity. Why would we want to do that? You blend for five minutes and all you want to do is figure out whether you're done? That's really boring. No. This is what we want you to do.
DR. RAJU: Okay, I'll do it.
We did a lot of other things, but when Ajaz invited me, I said I'm going to talk about all these things. He said blend uniformity.
DR. RAJU: So, I said I'm going to have to do it here too. So, that's the next set of slides that I have. It's blending.
Let's define what blending is. What am I going to try to find out? I've looked at space. Let's look at time now just to be creative. I want to look at process development and the measurement of quality, particularly blend uniformity along the way.
Here is my on-line sensor; the benefits are a little less for at-line and in-line, but all compare favorably against off-line. This sensor has many possibilities, and near infrared is one. A number of companies have worked on it. We've patented a technology called laser-induced fluorescence within this consortium of companies. There are different aspects and different ways of measuring uniformity. But the conventional way, we're all the same, and we all do thieving because that's how we started off doing it a long time ago.
But let's understand what blending is. Before we figure out what on-line can do, we've got to figure out what blending is first. So, blending is actually not just the mixing; it's actually a whole bunch of operations before and after it. You clean a blender. You load the active and excipients. You then finally mix. Then you sample. You transport to a lab. You analyze, and then you have results about uniformity. You have different kinds of results. You can be undermixed, and so you mix longer. You could get it right, and there's a minimum specification. I think it's RSD 6 percent, and you usually get 3 or 4 percent. I was happy to see that.
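That acceptance check could be sketched as follows. The ten thief-sample potency results (percent of label claim) are hypothetical; the 6 percent RSD figure is the specification mentioned above.

```python
# Relative standard deviation (RSD) of thief samples, compared
# against a 6 percent blend-uniformity specification.

def rsd_percent(values):
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return 100.0 * sd / mean

samples = [98.5, 101.2, 99.8, 100.4, 97.9, 102.1, 100.0, 99.1, 100.7, 98.9]
blend_rsd = rsd_percent(samples)
passes = blend_rsd <= 6.0  # well under spec, as the 3-4 percent remark suggests
```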
But sometimes you have this thing called overblending that I never learned about in chemical engineering. They call it segregation. They said sometimes it's demixing. But something happens, so it really is not a good idea to go beyond that time either. Do we understand it? No. Well, let's not get into that right now.
But let's look at the material and information flows. The material flows through as you go forward. The information all comes from the lab far away, many, many hours later. You then make a decision about the material based on another organization's schedule, because of the batching on the HPLCs: they have only so many, and they want to make the best use of their samples. So, what are the consequences? So, that's blending.
If we agree that that's blending, let's see if we can do blending on-line. Here's an example of a collaboration between MIT and Purdue, two universities actually collaborating. We don't have a pharmacy school; we have a chemical engineering school and a business program. Here is a bin blender at Purdue University in their pilot facility. We do the lab-scale trials in our laboratories at MIT, and then we scaled up in collaboration, with near infrared and LIF together. And this one is basically light-induced fluorescence; there's no laser. It looks at uniformity in three different locations.
The question is a very simple one, which is when are you done? There is no deeper question about what are those patterns, what do they mean. When are you done? It's very clear that we could do it very easily and very robustly.
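One simple way to answer "when are you done?" from an in-line signal is a moving criterion: declare the endpoint once the measured RSD stays below a threshold for several consecutive readings. This is an illustrative assumption, not the patented LIF algorithm, and the trace is invented.

```python
# Watch a trace of blend RSD readings and declare the endpoint once
# the RSD has been at or below the threshold for `hold` readings in a row.

def blend_endpoint(rsd_trace, threshold=6.0, hold=3):
    """Index at which the RSD has met the threshold for `hold`
    consecutive readings, or None if it never does."""
    run = 0
    for i, rsd in enumerate(rsd_trace):
        run = run + 1 if rsd <= threshold else 0
        if run == hold:
            return i
    return None

# Hypothetical trace: RSD falls as blending proceeds, then levels off.
trace = [25.0, 18.0, 12.0, 8.5, 5.9, 4.8, 4.1, 3.9, 4.0, 4.2]
endpoint = blend_endpoint(trace)
```

Requiring several consecutive in-spec readings is one way to make the endpoint call robust to a single noisy reading.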
We were pretty happy about when we were done, and we said we're very excited. How do we know whether we got it right? You're going to know if you got it right when you compare it against thieving.
Okay, I know I'm uniform. I have to compare against thieving. You told me thieving was a problem with the sampling and the manual operation. Now, is it going to be difficult for me to compare a much superior test with an inferior test and that would be my benchmark? Can we look deeper about content uniformity? I can do a lot more tests. I can look at different places. I don't think that's going to work. You have to measure it against thieving.
So, we did and we were very lucky that that works well. This is the laser-induced fluorescence, and it's very similar for the near infrared. We can certainly talk about that as well. On average for different active concentrations, and we were able to go very low. For important products, I think we have a great answer. The endpoint was very consistent and less variable. Not necessarily a tradeoff between the FDA and the industry, between quality and cost, but we got them all less variable. What does that mean in terms of time and cost? Well, I told you I won't talk about cost, but I will try to talk about time.
So, if Ajaz represented some part of the FDA and he was looking for just this variation, hey, we're not doing too badly. But if we represented the companies, how would this help us? Why would we have to go through this pain of showing equivalence? Hopefully we'll get something out of it. Maybe it's cost. At least it has to be time. So, the answer is so what. We've got to get something out of it. It seemed like we had some variability reduction.
The "so what" comes down to let's compare -- and I took one of those case studies now, one of these processes where three different excipients were added, one, two, three. Here is the conventional off-line test, and I have the on-line test. And I have the maker of this product, and I said what are your blend process development times.
But I said let me not stop there. We have a consortium of seven companies. Let's capture all of those times so that I don't have to then succumb to the argument that says it's just that company that doesn't blend very well or do the process development.
So, we collected blend process development time from all the seven companies, and everybody was different. So, we said let's capture all their data, but let's start asking questions around the whole blending operation. Let's define the blending operation. The off-line one has a number of components, brown representing the material flow, as I said before, and blue representing the information flow. Information flow and material flow are two different tasks.
When material is separate from information, what is the space in between called? It's called inventory. When you can combine material and information together, that's when you can deal with the fundamental drivers of inventory. You have to wait to get the information. You wait with the material. And that's called inventory. So, we wanted to get these two things together.
And then uniformity is done differently in manufacturing and is done differently in process development. Again, it's done differently if you're a generic versus a brand name. But in many cases, depending on the country, you don't necessarily have to do the content uniformity test at the end of the blend while you're manufacturing. You often do it during validation, often during process development. Some of the generics do it around the manufacturing as well. Some countries would do it in the manufacturing as well.
But let's look at process development now because that's what we're going to look at and figure out what is the material/information flow going to be for the on-line technology. Where's the brown? Where's the blue? They are in the same place, and this is the decision. Here is the material/information flow, so complicated. Here is the simple flow. We measured it where the cause of the variability is, and we can do something about it.
So, let's collect data from all these companies, and we have the seven companies. How long do you take to clean? How long do you take to load? How long do you take to discharge, sample, transport, test, hold? And we had all the seven data entered in, and we said now let's simulate each of these case studies.
So, we said let's take each of these companies and do blend process development the way they did it. We said here are all these tests. We're going to represent all these tests based on the time they took. Here is a representation, a model, of each of those steps. Modeling is really not so commonly used in this industry either. But let's look at each of these steps.
For example, this is the QC lab. You transport to the QC. I told you about all the components. You hold. You retrieve the samples. You prepare. You test. You analyze. That's inside the lab. Here's the actual blending. Here's the actual charging of the active ingredient and you can say you usually have to clean and then you have to load the active. And you represent all those steps.
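A toy version of that kind of simulation can show the shape of the result. The step names follow the transcript; the hour values are invented for illustration, not the consortium's data.

```python
# Hours per step for one development batch, off-line vs. on-line.
# With on-line monitoring, sampling, transport, lab queueing, and
# testing collapse into the blend step itself.

offline_steps = {
    "clean": 4, "load active and excipients": 2, "blend": 0.5,
    "sample": 1, "transport to QC lab": 2, "hold in lab queue": 24,
    "prepare and test (HPLC)": 8, "analyze and review": 8,
}
online_steps = {
    "clean": 4, "load active and excipients": 2,
    "blend with on-line monitoring": 0.5, "review result": 1,
}
offline_hours = sum(offline_steps.values())
online_hours = sum(online_steps.values())
```

Even with made-up numbers, the ratio lands in the same territory the speaker describes: most of the off-line batch time is waiting on the information, not making the blend.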
This is now two years old, and when we were presenting at the consortium of the pharmaceutical companies, we said it's a few more months before it's the start of the millennium. And I said let's start the millennium -- this is way back from our time now -- the old way. Let's do blend process development the way we did it for now I don't know how many years. If aspirin was made this way, then that's a lot of years. So, let's do it that way.
So, we're going to start using the actual data from each of these companies. Let's start and do blend process development. And here's the actual time that it takes, and you can see the 1st of January is now the 3rd of January and we're waiting for our first batch to come out. It's now the 4th of January. This is actual time based on the data that we collected. Still waiting. This arrow indicates that we got our first batch with an acceptable RSD. Now, we got one. We are really happy now.
We look at our plant and we see a whole bunch of samples waiting to be analyzed, so-called blend samples. We don't know whether this is right. We don't know how many we have to do. We make a lot and we're waiting for the analysis. It's a whole other organization somewhere. This is inventory space, information and material flow being disconnected.
Let's go inside our lab and see what they're doing. We go inside our lab and you can see they have a whole bunch of samples to deal with. They're working unbelievably hard, and you can see that it's at different places. Some are being held. Some are actually being tested. Then you can see some are being analyzed.
You can now look at the QC people, and there are QC/QA people in that organization, red indicating that they're busy, and you can see they're very, very, very busy in the lab. They're both very busy. We got our first blend.
You can now look at all your HPLC equipment, and if it's red, they're busy too. So, if HPLC is busy, if people are busy, there's inventory in your plant, you got one correctly.
Now, you have this interpretation of validation, if you remember Ajaz saying in his presentation. This is a lot of work. If I could just get three right. So, you say I've done one. It's now the 4th of January. Let's try to get a couple more. Oh, everybody is working so hard. Everybody is so busy. What should the right head count be? How many HPLCs should I have? Terrible questions asked around a terrible technology. The wrong questions.
But you finished one. You got two. It's the 5th of January. You took five days. You got it out. Now, there's some people in the organization, so-called processing people, who say you know what? We got it right. We got three done. You know, maybe we should do a few more so that we just understand the area around it.
But then you have your marketing people. You have your business people who look at your plant. Everybody is so busy. You have the inventory. And they say it's all about time to market.
So, what are we going to do? Okay, everybody is busy. This is three runs in a row. This is content uniformity. This is blending.
Let's go to the next step. We have an envelope around which we've collected data. We have data. Now we're ready to go to the market, and it's now the 5th of January.
As another alternative, I also challenged the companies and the consortium to say let's go back in time and start that same millennium, January 1st 12:00 midnight, run everything the same. That is, you clean the same way, you load the same way. The only thing you do differently is the monitoring of content uniformity. So, you start the same time too.
Now you figure out what you want to do about it. So, you run your batches. You watch the clock and you do everything else the same. It's 10 o'clock on the 1st of January. I finished one. Let me just take a look at my lab and see what they're doing. Red means they're really busy. Wow. Now, is the question now should you not have those QC people? No. You want your QC people to do thinking jobs instead of doing jobs. This is an opportunity for them to be auditors and trainers and QA people. I think they're going to enjoy themselves more if they don't have to move batch samples around.
Let's just take a look at our HPLC equipment that Ajaz had I think underestimated at $45,000. You just freed that up too, but you did put a lot of investment around your on-line sensor. But guess what? We're very happy. We've only got one right. It was pretty fast. Let's see if we can get a few more. We got two. It's the first day. In about 24 hours, we just finished three and now we're asked the question: you finished three; one is random, two is minimally a pattern, three is a law in some disciplines. Is this a law? Do we know our blending? Do we know the uniformity of our blending?
Shall we do a few more? Yes. QC people are there to analyze the data to figure out what your next run should be. You don't have things sitting around. The cost of deciding to do a few more is small. You know you're going to succeed. You can do some runs around it. And maybe you can go back to the real deeper spirit of CGMP. That's four. That's five. How many do you want to do? Six. Okay, two days. We did seven runs. We did more than twice as many in less than half as much time. This is what technology can do for us.
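The time arithmetic in this comparison can be checked directly. The sketch below uses only the figures quoted in the narrative: three runs in five days off-line versus seven runs in two days with on-line monitoring.

```python
# Off-line QC, as described above: 3 validation runs completed in 5 days.
# On-line monitoring: 7 runs completed in 2 days.
offline_runs, offline_days = 3, 5
online_runs, online_days = 7, 2

offline_rate = offline_runs / offline_days   # runs per day
online_rate = online_runs / online_days

print(f"off-line: {offline_rate:.2f} runs/day")
print(f"on-line:  {online_rate:.2f} runs/day")
print(f"throughput improvement: {online_rate / offline_rate:.1f}x")
# More than twice as many runs in less than half the time,
# i.e. close to a sixfold gain in runs per day.
```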
Now I've asked the companies -- this is obvious. The technology is in place now. This is your data. I presented it to you. Why isn't it done? It's been around for a long time. The first response is I've done so much of this NIR stuff. I have so much data. But the FDA just won't accept it.
I actually first met Ajaz at the PhRMA meeting, and he presented right after me, which is when the idea for this came up. I ran after him and I said, Ajaz, why haven't you guys accepted it, and he just said I have not seen one application with near infrared submitted to the FDA yet.
Are they wrong? No. They're both right. It's a perception. Number one. Second, it's a limitation of saying you want to do a test-to-test comparison.
Together, I challenge this advisory committee to break out of the box to see if we can break through that barrier. I can see the logic for that test-to-test comparison. I can do the same thing too. But let's look back to why we had that test. What does it mean for all of us? A lot, just for that one step. I took the simplest possible step, and it gets better every time. Blending. On-line blending process development versus off-line, whether you have one, two or three blends. Not 10 percent: a factor of 10 to a factor of 15 improvement in that process development time just for blending.
But even better. There is a predictability of that time, which means you know when to start your blend process development, you know when to build your plant, you know how big to build your plant. That is about variability of the organization. It depends less on the organization now. This is the opportunity.
I listened to the presentations and everybody seemed to believe uniformity is an important issue. But I challenge that on that important issue, to make an important leap in working together to be able to capture some of these benefits together. I don't even talk about the quality variability issues because I said I will talk only about time today.
So, we looked at the top level routine manufacturing, and we quickly got some pictures that told us something and we said where do we look now. We then took the simplest possible operation and we said let's take the simplest technology -- and there are three or four of them -- and look at the opportunity that we have ahead of us.
As I come to the end of my presentation, I'm going to take off on a couple of things that I said before. We want to monitor quality continuously. Because of the cost of doing it today, we do it at the end. The consequences are large and we all deal with it together as companies and regulators and society. So, on-line technology, at-line technology allows us to break that tradeoff and measure continuously where we can all win together.
We have extended this work beyond blending. In fact, I would have rather talked about all of those. And we've looked at different parts of the process. Being a chemical engineer, I like the first part. But we looked at a lot of these, including some microbial tests, flow, tableting, transport. We looked at high volume products. Here is an example of some of the data that I deliberately don't show you the axes on, but here is where you can monitor the active ingredient. Here's the blend monitoring data. Here's the flow data, and you can measure uniformity during flow and you can measure tablet uniformity.
The challenge now is to ask yourself what is content uniformity as the whole process. How do I show, when I bring in revolutionary technology, that I'm actually more uniform over the whole process? How do I get myself out of the way of saying it should be a test-to-test comparison when the case for the test and the manual aspect of a test is the technology problem? With all of these together, I showed you the opportunity for improvement here. I showed you the opportunity for improvement over just blending.
If you look at a three blending case -- I wanted to go back to that -- you can see as your off-line and on-line get to see more and more steps, the difference between on-line versus off-line gets bigger because the cause and effect gets separated. So, there's a cumulative benefit as you add on more of these things together.
With that challenge, I will end my presentation saying that I took one aspect of manufacturing performance and summarized many years of work around saying we can do something about it. I deliberately don't talk about those aspects, but obviously they're significant and you can imagine that time translates to money and quality.
I gratefully acknowledge my colleague, Professor Charles Cooney from MIT, who would have loved to be here but is in the mountains of Peru and couldn't come. For the last five years I've worked very closely and very excitedly with Professor Steve Byrn at Purdue. This is my first introduction to a pharmacy school, and it's been great fun.
And CAMP is the Consortium for the Advancement of Manufacturing of Pharmaceuticals that has more than half the pharmaceutical industry associated with it.
And in addition, I've also worked with the MIT program on the pharmaceutical industry. We worked with basically almost every one of these pharmaceutical companies in different ways.
Last, because I think I'm beginning to say something real about real processes, I feel bad to put this up, but I felt I needed to. Nobody is liable for anything I say except me. Some of the data -- I deliberately take out the y axis when it's not relevant.
But I think the basic message has to be very clear. I know the way to deal with that message. It's not obvious and not trivial, but that's what we're here for.
With that, I'm going to actually see if maybe Steve can have a few thoughts on this because we actually have gone well beyond some of this. Maybe he can decide whether he wants to talk about it or not.
DR. BYRN: Thanks, G.K.
One thing I should say, before we start and we talk about this, is Purdue is heavily involved in research and developing intellectual property in this area. So, you should know that when I talk about my comments.
But G.K. touched on these areas because with one of his slides especially -- and this is probably the only comment I'll make -- we think there's tremendous potential for these technologies, on-line/at-line technologies, to reduce time to market of drugs. That could be achieved by starting using these technologies in development and then moving them through scale-up because you can get instant feedback when something is going wrong, and by using multiple sensors, multiple at-line/in-line techniques. So, there is a huge potential public health benefit because if we can reduce time to market and, like G.K. showed, ensure quality at the same time, then that's a very exciting game.
I think that's probably all I need to say.
I think we need to have a discussion now. Ajaz' proposal was to, I think, establish a subcommittee of this group to look at these technologies in more detail and report back. But let's have a discussion and see if there are questions for G.K. and go from there. Yes, Vince.
DR. LEE: I think this is very intriguing. Is there any other industry using these technologies?
DR. BYRN: Yes. I think G.K. can answer that one.
DR. RAJU: This is probably one of those really extreme industries where testing takes a lot longer than processing. In other industries it usually takes a much smaller fraction of the time. There are many good reasons for it. It's the legal nature of the test, the fact that we're making medicine.
But actually I think if we do it right, by moving it up, we can actually capture all of those. We can actually make -- I hate to say the word "better," but we can make equivalent, in a real way equivalent product I think. And we can all be a lot happier and have more fun doing manufacturing. I'm not sure I want to be manufacturing if all I do is doing. I want to do some thinking, and that's part of improving the process along the way within the constraints of the CGMP, of course.
DR. BYRN: Just to give one example, Vince, as far as we know, Lay's Potato Chips uses near IR to monitor the water content in a potato chip. They use many more units than we do.
DR. LEE: Let me ask one more question. Can you build into dissolution as part of the --
DR. BYRN: We do need to be fair. There are a few tests that are more difficult to put at-line or on-line.
DR. HUSSAIN: Steve, let me answer that. Vince, I think in the handout there's an article on predicting dissolution rate of carbamazepine. We in a sense can essentially predict or control every parameter or variable that affects dissolution. So, dissolution can essentially come at-line in terms of the predictive mode. You're not actually doing the dissolution, but you're essentially ensuring that dissolution would be acceptable. So, we'll have to think out of the box how to address that.
DR. BYRN: Yes. To put the actual test on-line would be difficult, obviously, because you've got a time to dissolve.
DR. LEE: You still need personal intervention. Right?
DR. BYRN: There are automated units where you can kick a tablet out. You can run a dissolution test automated.
DR. HUSSAIN: In our labs actually in St. Louis, we have actually predicted dissolution, just near IR when you know what the dissolution is. Tennessee has been doing some of that right now. So, predicting dissolution from spectra, information gathered from tablet surface. That's a very important point for us. There's potential for misuse of the technology too because now I can predict the dissolution of a tablet without doing the dissolution. Then therefore it raises the question of selectivity in terms of what gets reported to FDA. That's a concern that we have to worry about.
DR. LACHMAN: Has anyone considered the validation implications of this activity?
DR. HUSSAIN: That is a major issue I think we'll have to deal with, and part of the reason for requesting a subcommittee is to discuss those aspects, how one should go about doing this.
DR. LACHMAN: That's going to be something that's going to be very important to address.
DR. BYRN: Yes, and G.K. was touching on that. One of the problems in this blending area is how do you validate what we think is a more precise method, which is at-line monitoring, with a less precise method, thieving and off-line analysis. We need to talk to statisticians about how to do that.
DR. LACHMAN: I think you have to have the various computer assisted activities and electronic documentation and records that you're developing. So, it gets quite complicated for the validation activity.
DR. HUSSAIN: I think the pattern recognition and the statistical validation would be a challenge.
DR. LACHMAN: Right.
DR. BOEHLERT: I was just going to mention that I'm aware of at least one company in this country that makes vitamin blends that has been using near IR since the mid-1980's to test and release product and quite successfully. I don't know if they'd be willing to share that with the group, definitely --
DR. HUSSAIN: I'm aware of the OTC and other --
DR. BOEHLERT: And that's analogous to a pharmaceutical blend.
DR. HUSSAIN: I understand, yes.
DR. RODRIGUEZ-HORNEDO: Two points. The first one is I cannot find it now, but in the reading materials you sent us, there is something in the European Pharmacopeia regarding the use of NIR. So, what do we know about Europe using these techniques?
DR. HUSSAIN: The European Pharmacopeia introduced the chapter on near IR in 1997. We are working with USP to try to get a chapter in USP.
EMEA, our counterpart, has a draft position paper, and that position paper is in your packet also. In their position paper, they have outlined some of the regulatory challenges that they feel would need to be addressed before it comes in. I'm aware of one company which has essentially adopted a lot of this in a new plant in Germany. So, probably Europe is ahead of us in this regard.
DR. RODRIGUEZ-HORNEDO: I think it's a great opportunity to have control of the processes by monitoring in-line.
Regarding dissolution and the example of carbamazepine you gave us, I'm not sure if the sensitivity to the dissolution is due to the solid state transformation. Are you able to also capture differences in effective surface areas that may affect dissolution?
DR. HUSSAIN: Predicting dissolution is sort of a black box. I don't have a mechanistic understanding of that, but based on what I have seen so far, porosity -- you can actually predict hardness of that. All those things are being captured.
So, the mechanism by which we are predicting dissolution I'm not sure I understand that, but that's the focus of our lab right now. We asked the labs to focus on how are we predicting dissolution, what attributes that we are getting from the tablet surface are related to that. So, I think as we understand that, more confidence would be developed in this area.
DR. RAJU: There's also a more recent public news that the Australian regulatory agency approved NIR for release just a few weeks ago.
DR. BLOOM: The other aspect of these techniques is that you can use them off-line also for troubleshooting. In some cases there have been publications on using Raman and near IR for troubleshooting.
DR. HUSSAIN: One such example I presented from Pfizer, Steve Hammond, on the bad flow was the troubleshooting.
DR. LEE: This is not a quality control question, but how much retooling has to be done to implement this?
DR. HUSSAIN: I don't have a good answer for that. That's one of the reasons I thought we will need to gather more information on that. We have done it crudely in our labs. We are doing it off-line, using the same instruments. So, it's a matter of buying an HPLC or buying this; the cost is not the issue. But in terms of putting it on-line, I think G.K. probably will have more information on that.
DR. RAJU: I think that people have been doing it in stages, and different companies have made significant progress, more than one step at a time. The interface with the regulatory agency, because of perceptions, has been kind of delayed. But the phasing has been to first do it at-line and in-line before on-line, because you get half the benefit or a little bit more before that. When you go close to the process, the operators start asking questions about the data. Why is it that we call it uniformity? They start looking at patterns, for example, that say, oh, this is probably because we top-loaded the excipient versus bottom-loaded. As soon as they can remember the data and ask why around it, cause and effect get analyzed in the same human being and the process gets improved. So, it's coming in phases, and on-line has been kind of the last step, and not everybody has done it yet.
DR. BYRN: Other comments from the committee? Is there general consensus that a subcommittee should be formed to pursue these concepts and work with the agency and so on?
DR. HOLLENBECK: Ajaz, could you comment a little bit more on the direction you'd expect the subcommittee to take?
DR. HUSSAIN: There were three stages in my mind in terms of how this could unfold. One is simply an understanding of the current state of technology. Vince asked about what it takes to do this. Because if that is too high a cost, obviously, it's going to be a slow process and so forth. An understanding of the feasibility.
Second would be I think probably understanding of validation procedures. Without that, I think it will be difficult.
Thirdly, I think some mechanistic understanding, because I think we probably should gather information on how much this is generalizable so that we build confidence in what we are looking at, because pattern recognition, use of chemometrics and so forth is a different way of looking at chemistry than we have done before. So, we really need to build confidence and understand the mechanistic basis, especially, say for example, about dissolution. If I'm able to predict dissolution, how am I doing this? If we are replacing one black box with another, we need to be careful.
DR. BYRN: Any other questions?
DR. BYRN: Let's take a break till 4:00. We're not very far behind. I think we're in pretty good shape. So, let's take a break till 4:00.
DR. BYRN: I think we can begin.
I'll introduce the speakers as we go along today, and we should just continue till the end. I know we're running behind, but we're okay I think because we were supposed to finish at 4:45. So, we'll just finish around 5:00.
This session is on microbiology. The first speaker is Dr. David Hussong.
DR. HUSSONG: Good afternoon. The last time I was up here, we were nearly an hour behind. Now we're only 15 minutes behind, so I'd like to congratulate the panel for shortening the cycle times and getting things rolling.
DR. HUSSONG: I'm here to initiate a discussion of applying new technologies to microbiological testing in the pharmaceutical industry. Now, many of these technologies have been around for quite a while. Some have come from a clinical arena and some from academia. But I wanted to give a real quick history. This is microbiology history 101. So, if you'll bear with me for a minute.
Historically, to measure growth of microorganisms, you use a medium. To detect them, you use a medium. Everything is growth-based, and it depends on the medium. So, if you don't have the right nutrients, you don't detect them, and you can't count them.
There are other methods and they will often, when used, show different populations. Now, the USP methods, the compendial methods, for microbiology are very much the simplest and people can do them in most any laboratory. Because they are simple, anybody will do them. They can be standardized, but I don't think that they're necessarily the best.
Now, we've been looking at bacteria for over 300 years, and in the last 100 years, we have played with a lot of different methodologies. Certainly there has been some pressure driving us to get into the use of them. Towards that end, the Parenteral Drug Association was able to put forth Technical Report 33, a multiyear effort. It came out in May 2000 telling the pharmaceutical industry how to bring these methods on-line.
So, today's speakers I'd like to introduce. We have Dr. Bryan Riley, an FDA review scientist, who will give us an introduction to the alternate technologies used in microbiology.
We have Dr. Ken Muhvich, who is a consultant to the pharmaceutical industry, and he has a lot of experience with the validation of methods, both the standard methods and the new methods.
Dr. Jeanne Moldenhauer is with us who is also a consultant, and she has a tremendous scope of industry experience, and she will discuss her experiences as a user of some of these technologies.
We're hoping Roger Dabbah will be able to join us. He seems to be a little late. But he's from the USP and he can provide us some comparative information relative to the compendial methods.
So, with that, I'd like to introduce questions that we'll have at the end. What I'd like to have the committee do is keep these questions handy.
Question 1. You can see I have a little bit of bias in these methodologies. Considering the advantages demonstrated by some of the new microbiological testing technologies, should FDA take steps to facilitate the pharmaceutical industry's use of these technologies?
Then question 2. Since various guidances and compendia offer test acceptance criteria in terms of colony-forming units, is it appropriate to permit changes to the numerical limits to reflect the sensitivity of tests that measure microorganisms using other properties?
So, with that, I would like to have Dr. Riley take over.
DR. RILEY: Good afternoon. I'd like to spend about the next 10 minutes or so taking a brief look at the methods used for microbial limit testing. What we'll do is look at both the current methods that are now in use, as well as a couple of the new technologies.
First I'd like to look at the compendial methods, which in this case means USP. There are essentially two types of compendial methods used for microbial limit testing.
The first are called plate counts, which give us colony-forming units, also known as CFUs. This is probably the most common method used for microbial limit testing and is probably the most accurate of the ones used so far. In this case, the samples are applied to a solid medium. The medium is incubated. The microorganisms that are capable of growing on this medium will grow and form colonies. These colonies can be counted, and then the results are expressed as either CFUs per ml or per gram of the sample.
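The conversion from a colony count back to the original sample concentration is simple arithmetic. A minimal sketch (the colony count, plated volume, and dilution below are hypothetical, chosen only for illustration):

```python
def cfu_per_ml(colonies: int, volume_plated_ml: float, dilution: float) -> float:
    """Convert the colony count on one plate to CFU/ml of the original
    sample.  `dilution` is the fraction of original sample present in
    the plated suspension (e.g. 0.001 for a 1:1000 dilution)."""
    return colonies / (volume_plated_ml * dilution)

# Hypothetical plate: 42 colonies from 0.1 ml of a 1:1000 dilution,
# i.e. roughly 4.2e5 CFU/ml in the original sample.
print(cfu_per_ml(42, volume_plated_ml=0.1, dilution=0.001))
```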
The other method is called the most probable number method, or MPN. It's based on the statistical distributions of organisms in a sample. It is considered less accurate than the plate count, but it is used sometimes when plate counts can't be used.
What you do is you take a parallel series of serial dilutions of a sample in liquid medium. You do these at least in triplicate. So, what you might have, for example, are three tubes of a 1 to 10 dilution, three tubes of 1 to 100, and three tubes of 1 to 1,000, and so on. You incubate these tubes, and then you look for evidence of growth. You take note of how many tubes at each dilution have growth. Then you refer to an MPN table which will give you the most probable number of organisms in that original sample.
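The MPN tables referred to above are themselves derived from a maximum-likelihood calculation, which can be sketched directly. This is an illustration of the underlying statistics, not the compendial procedure itself; the tube volumes and counts in the example follow the three-dilution, triplicate scheme just described.

```python
import math

def mpn_per_ml(volumes_ml, tubes, positives, hi=1e6):
    """Maximum-likelihood MPN: find the concentration c (organisms/ml)
    that makes the observed pattern of positive tubes most likely.
    A tube receiving volume v is positive with probability 1 - exp(-c*v)."""
    if sum(positives) == 0:
        return 0.0                       # no growth anywhere
    if all(p == n for n, p in zip(tubes, positives)):
        return math.inf                  # all tubes positive: no finite estimate
    def score(c):
        # derivative of the log-likelihood; decreases monotonically in c
        return sum(p * v * math.exp(-c * v) / (1 - math.exp(-c * v))
                   - (n - p) * v
                   for v, n, p in zip(volumes_ml, tubes, positives))
    lo = 1e-12
    for _ in range(100):                 # bisection on the monotone score
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if score(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# Three tubes each receiving 10, 1 and 0.1 ml of sample; 3, 1 and 0
# positives.  Standard MPN tables list about 43 per 100 ml for this
# 3-1-0 pattern, and the ML estimate lands close to that value.
print(round(mpn_per_ml([10, 1, 0.1], [3, 3, 3], [3, 1, 0]) * 100, 1))
```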
The advantages of the compendial methods, as Dr. Hussong mentioned a minute ago, are that they're very simple. They don't require fancy equipment. Any microbiology lab should be able to perform them. They're sort of tried and true.
Also an advantage is it only counts viable or living organisms, which is important because that's really all we're worried about in this case. Are these organisms alive or not, can they multiply?
The disadvantages are the incubation time. Despite the fact this says 48 to 72 hours on this slide, it actually can be longer. It can be up to about 7 days or so depending on the organism you're looking for.
The other disadvantage is not all organisms will grow on a single medium. So, you're really just getting a subset of the possible viable organisms in a sample.
Again, we're only interested in the viable or live organisms. Therefore, the new method must be able to count or differentiate between live and dead, and also must not count microorganism-shaped particles or anything like that. You only want viable bacteria or fungi. Therefore, you need some sort of viability indicator, and I'm going to talk about two different indicators that are used in these two new methods.
The first method is called esterase detection. The example I'm going to give is a test called ChemScan from a company called Chemunex. Esterase is an enzyme that's ubiquitous in microorganisms. It's present in all of them. The reagent that is used is called Chem-Chrome, which is a nonfluorescent compound which can be passively taken up by microorganisms. Esterases in these organisms will then cleave that substrate, which will give you a fluorescent compound. The viability is demonstrated by the presence of the esterases in the microorganisms, as well as the intact cell membrane that is necessary to help contain the fluorescein after the Chem-Chrome reagent has been cleaved.
To perform the procedure, you filter the sample through a membrane. You expose the membrane to the reagent. You then analyze the membrane by laser scanning, looking for the fluorescence. You will count particles that fluoresce at the appropriate wavelength and are the appropriate size for the microorganisms that you're looking for.
The time for this test is an hour or two from start to finish.
The next method I'm talking about is ATP bioluminescence. The examples are the MicroStar and the MicroCount tests by Millipore. This test looks for ATP, which is the primary energy source for all organisms. The reagent used is a combination of luciferin, which is a substrate, and luciferase, which is an enzyme, which will react with the ATP that you're assaying, as well as oxygen to produce light. And you can measure the light.
To do the MicroStar procedure, it's similar to the ChemScan procedure. You filter the sample. In this case, you then replace that membrane onto a solid medium for a brief incubation. This incubation could be 6 to 12 hours. It's not as long as if you're looking for total growth. The reason for the incubation is it amplifies the signal by increasing the amount of ATP that's present.
You then disrupt the cells to release the ATP. You add the bioluminescence reagent to the membrane. You can then detect the spots of light using a charge-coupled device camera and computer analysis, and then you can analyze the number of light spots you get and count your organisms.
The time, again 6 to 12 hours or so for the incubation part, and an hour or so for the analysis.
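Both readouts described above, laser scanning for fluorescent particles and CCD imaging of light spots, ultimately reduce to counting discrete bright spots in a thresholded image. A minimal sketch of that counting step (the 0/1 grid stands in for a thresholded detector image; real instruments also discriminate on wavelength and more refined size criteria):

```python
def count_spots(image, min_size=1):
    """Count connected groups of lit pixels (4-connectivity) in a
    thresholded 0/1 image, keeping only groups of at least `min_size`
    pixels -- a crude stand-in for size discrimination."""
    rows, cols = len(image), len(image[0])
    seen = set()
    spots = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and (r, c) not in seen:
                # flood-fill this spot and measure its size
                stack, size = [(r, c)], 0
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and image[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                if size >= min_size:
                    spots += 1
    return spots

# Hypothetical thresholded scan: two multi-pixel spots plus one
# single-pixel speck that the size filter rejects.
scan = [[1, 1, 0, 0, 0],
        [0, 0, 0, 1, 1],
        [0, 1, 0, 1, 0],
        [0, 0, 0, 0, 0]]
print(count_spots(scan, min_size=2))  # 2
```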
That's all I wanted to say this afternoon, and we'll go to our next speaker.
DR. BYRN: Are there any questions?
DR. MARVIN MEYER: Steve, the handout listed some advantages and disadvantages to the standard methods. Do you have similar statements for the proposed two new methods?
DR. RILEY: I think time is an obvious advantage. As I sort of mentioned, we're looking at probably a larger subset of the viable organisms that are present because you're not looking just at growth on a single medium.
DR. MARVIN MEYER: No disadvantages?
DR. RILEY: There are probably some disadvantages, but I'm not going to get into a lot of the detail at this point.
DR. BARR: Is it likely that this could replace the traditional method?
DR. RILEY: It could potentially replace the traditional method, yes.
DR. BYRN: Our next speaker is Dr. Kenneth Muhvich, who's going to talk about validation issues.
DR. MUHVICH: Being a former FDAer it's a pleasure for me to be here today to talk to you about my views. Since I left the agency, I've worked almost four years in the pharmaceutical industry, and a large part of what I do is audit sterile manufacturers, and I'm always in a micro lab somewhere. So, that's given me a perspective that I want to share with you all. I'm not going to take too much time. I'll really try to give you take-home points on where I think these technologies can be used and their efficacy.
I've heard it twice today -- and I use it and a lot of FDA investigators use it -- the common saying that you can't test quality into product, especially for sterile products. That typically refers to a final drug in its final container. Instead, one must use validated sterilization processes and use a proper aseptic technique.
That being said, I think that there are a lot of instances and/or points in a manufacturing process where appropriate microbial testing will provide invaluable information and provide a greater sense of control over the manufacturing process. It's not waiting to the end to find out what the quality of your sterile product is like.
The bullets on this slide show areas that I think are really ripe, if you will, for use of the new technologies which are really old to me. I used a lot of them as much as 25 years ago. They just haven't been used in this industry and the time is now.
Water for formulation; water used for processing, cooling water in autoclaves and washing of stoppers and so forth; raw materials; in-process bulk solution or intermediates. A lot of folks that are making biologics have intermediates sitting on the shelf for months, and they might not be of the same microbiological quality as when they were put up. Microbial limits testing, which Bryan already talked about for a couple minutes. A lot of people use that as an in-process test.
I put the final product release testing at the end for a reason. Jeanne Moldenhauer and I had a talk the other day, and I'm going to quote her. I'm not going to take the line for myself. We both think that use of these tests needs to be in some in-process testing areas where we can do some comparison testing and get a real feel for the efficacy of these tests with pharmaceuticals. So, we need to walk a little bit before we're going to run with what everybody really wants them to be used for, which is product release testing.
I'll go with a simple definition of validation. It's a process or a test that will, with a high degree of assurance, consistently give the intended results.
Now, in the case of one of these type of tests, the validation of a rapid method is going to demonstrate that small numbers of microorganisms -- and I should have put viable there because we can't underscore that enough. These are viable organisms that can grow -- can be detected in the presence of their intended solution. What I mean by that is in the vehicle that they're going to be administered to the patient in, whether that be an in-process solution or the final product solution in the container.
Leon Lachman beat me to this one. The key issue in my little talk here is about validation, but the key issue for these is that they need to be validated. Trust me, this is a lot easier than computer validation. It's just work that needs to be done. They need to be validated and used, in my mind, for in-process testing to gain some experience with the testing. We need to know what circumstances are likely to yield a false positive result and that these will be readily recognized. They should only be used for product release when a high level of confidence has been gained with these methods.
I want to talk about a couple of case studies. These are real and these are instances that I plucked from my experience both when I was here at the FDA and since that I think are real instances where these types of methods could have been utilized to prevent problems. I'm not doing a Hillary. I'm not saying could have, would have, should have. I'm just pointing out that these are detrimental events that happened that, if technologies like these are explored aggressively, are not likely to be repeated.
The first case is a sample from a bulk solution. This is a very high count: 10 to the 5th CFUs of Ralstonia pickettii per ml of product. It is well recognized that this organism will go through a sterilizing filter. A lot of people have switched to 0.1 micron filters when they recognize that this organism is in their manufacturing environment.
Several hundred thousand units of this sterile product were manufactured before they recognized that this organism had been in their bulk solution. All of this product, which represented a product that was needed on the market and had a value to the manufacturer of the product, was rejected. Then they also had to do quite a cleanup in the facility before they could do any more manufacturing.
The second case probably needs no introduction to any long-term FDAer. This is the Copley case, the contamination of the albuterol sulfate solution. The reason the contamination went undetected is that the microbial limits testing, as performed for this product as a release test, has a dilution in it. The product had a very low level of contamination which escaped detection during routine release testing. And deaths and serious illnesses occurred in the patients. I feel strongly that if a validated rapid method were available for low level detection, this type of thing would never happen again.
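The sensitivity problem described above can be made concrete with a simple Poisson sampling model. This is a hedged sketch: the concentrations, dilution factor, and plated volume below are illustrative assumptions, not figures from the Copley case. Diluting a sample before plating lowers the expected number of organisms in the plated aliquot, and with it the probability that the test detects anything at all.

```python
import math

def detection_probability(conc_cfu_per_ml, plated_volume_ml):
    """Poisson model: probability that at least one viable organism
    lands in the plated aliquot, i.e. that the test detects anything."""
    expected_cfu = conc_cfu_per_ml * plated_volume_ml
    return 1.0 - math.exp(-expected_cfu)

# Illustrative, assumed numbers: product contaminated at 0.5 CFU/ml,
# tested neat versus after a 1:10 dilution, with 1 ml plated each time.
neat = detection_probability(0.5, 1.0)      # roughly a 39% chance of detection
diluted = detection_probability(0.05, 1.0)  # roughly a 5% chance of detection
print(f"neat: {neat:.2f}, after 1:10 dilution: {diluted:.2f}")
```

Under these assumptions, the compendial dilution step alone drops the odds of seeing the contaminant from about two in five to about one in twenty per test, which is how a low-level contamination can repeatedly pass routine release testing.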
It's well known. People in the FDA have published that they think it's high time that we move on with some of this technology. I would encourage the committee to at least support having a day or so to really take a hard look at what the FDA can do to help the industry in terms of moving this type of testing into the real world of product in-process testing and release.
Thank you so much for your time.
DR. BYRN: Our next speaker, while we're getting ready, is Dr. Jeanne Moldenhauer, who's going to give an industrial perspective.
DR. MOLDENHAUER: I'm probably a little different from most of the folks that work with rapid methods in micro in that I've worked both on the regulatory side and the scientist side. So, I have some different concerns in some cases than what some of the others may have.
From an industry perspective, business objectives are really what drive us. Laboratory compliance to FDA requirements is a major concern because our products don't get approved without them. One of the big concerns we have is the ability to understand in advance how investigators are going to look at rapid methods, particularly when there's no guidance from the reviewing division that supports us. When we get to the case studies, I'll tell you why that became of interest.
In fact, it was such a big interest to me that, in one of the companies I worked at, we brought the FDA in for their drug school to go through some of the rapid methods that were available. There's a fear because they're not familiar with the methods.
We have a business objective to be a low cost provider of high quality products. Low cost providers have to look at the cost in the total process. Microbiological testing causes significant delays in the release of product. That becomes an issue if you look back at when parametric release was approved for the first time by Baxter: they eliminated a 7-day sterility test and had millions of dollars of annualized savings. Well, that does reflect back into the product cost.
Sterile products all require some sort of sterility test. And there's a major reluctance on the part of FDA to encourage people to go to other forms of parametric release, and they've documented that in many cases. We're looking for other ways to accomplish the sterility testing and still achieve some of the benefits of reduced inventory hold time. It becomes particularly important in the case of aseptically filled products, where you're talking about a 14-day sterility test and there isn't any option for parametric release.
Reduced inventory hold time contributes significantly to the total cost of the product, including costs in how much warehousing and storage space we need. In the case of parametric release, when they reduced from a 7-day hold time down to less than a day, they were able to do just-in-time production with 6 hours from filling to release of the product. So, from a business objective point of view, that's a big issue to pharmaceutical manufacturers.
We're also looking for expedited product approvals. Here's where the kick comes in looking at rapid methods. On one hand, people want to submit rapid methods and get them approved, but the great fear is that it's going to be the only thing holding up their product approval. So, there's a balance between wanting to use state-of-the-art technology and condemning your product that's in for approval.
There are other concerns over rapid methods. One of the biggest ones is that the regulatory expectations are not clear. The reason PDA had the major task force is that everybody wants their new product approved from a vendor point of view. Pharmaceutical manufacturers have a big business objective to want to use those technologies, and no one really knows who is going to approve or not approve them.
The cost of the equipment for doing these tests is significantly high. I'm most familiar with the ChemScan technology. That averages somewhere in the vicinity of $300,000 just to buy the piece of equipment. Then by the time you get the accessories and everything else you need, that's about another $100,000, and somewhere in the vicinity of twice that cost to validate it. So, when I go in and try to get that approved through my management, they're looking for returns on investment. The return on investment comes from reduced inventory hold times, but there's a perceived high regulatory risk because there's very little guidance on what it will take to get those methods approved.
There are compliance issues versus submission issues. If you choose the route of picking a less critical test, if you will, than the final product release test, because you want to ease people into the technology, then you have the issue of convincing compliance to deal with them. I'm going to talk about that exact thing in one of the case studies that we talk about.
The other thing is that in terms of regulatory guidance, the thing we always hear is that you can do two methods that are equivalent. Most of these new technologies aren't equivalent because they have superior technology. So, when you go and try to explain that you want to do something that won't be equivalent but that you'd still like to get approved, there are some concerns about that.
There are also scientific issues with these methods, on top of all the regulatory issues, on which it would be useful to obtain some guidance.
The first one I want to talk about -- and these are two real life case studies. Fortunately, I got to participate in both.
As a result of the PDA Committee, everyone pretty much agreed that water testing -- and we had several FDA, USP kind of folks on this committee -- was probably not a product release test, and you could probably do this and get it approved as a compliance issue.
I'm a daring kind of person, so we went ahead and tried that. We met with the local district, told them we bought this equipment. We wanted to talk about it. We specifically wanted to address in advance the issues of it not being equivalent, as well as how many tests they would buy into or what strategy they would look at for testing.
Their first reaction in the first meeting was no way would we even consider it. But we got past that because I went in and explained, did you ever hear of this organism Campylobacter? You won't ever detect it in any of your tests, and by the way, it kills people. Now are you interested in a new technology?
They were willing to do that, and they agreed that it would probably raise the bar. Unfortunately, they also told me compliance is not likely to make any quick decision on this and, in fact, they'd get back to me.
Well, return on investments, business objectives. I've got to justify why I have a $500,000 piece of equipment that's validated that I want to use for a method, and I was starting up a new plant at the time. So, the benefit to me was to be doing all my water testing during the validation when you had thousands of tests to do.
Well, six and a half months later, I still hadn't even gotten a follow-up phone call from the meeting, so I went back and talked with them some more. The bottom line is no one wanted to make a decision, and we ended up not using the technology for that test method because they couldn't even agree on what it would take to convince them that the technology might be okay to use. And by the way, even if you did use it, don't ever use it as water for a raw material for your product, because that wouldn't be okay. And we were talking about making sterile water for injection which, by the way, is grandfathered. So, that was water testing.
The next thing we looked at is, okay, we'll go a different route. The folks in Washington have seen new technologies. Maybe they'd be more agreeable. So, we looked at developing a test where we could get it approved through Washington: validate it, submit it with a drug. And you know how with some drugs you always know there's going to be a deficiency anyway? Well, we picked one of those to submit it with because we didn't want it to be the only thing holding up the submission. And we were also going to do parallel testing so that if it died, you could just take the new technology out.
We had looked at a USP stimuli for revision that talked about one of the new technologies, and it said that the method was suitable for bacteria, fungi, and spores. So, we thought, hey, BIs. That's a really good thing. If we wait 7 to 14 days to qualify the sterilizer, that's still a big inventory hold time. We started to develop the method.
We had problems on the very first one with the counts being erratic, had to go back to the vendor, modified the tests multiple times because we were finding counts that were lower than you would expect. Don't forget, I read all these things that it worked great for spores. Well, not really injured spores.
So, we eventually were able to modify it, got it to work, we thought. And my counts were 4 logs higher. Well, if you're talking about a sterilization cycle, that becomes a big issue. Does this indict all the sterilization cycles you've been running and is your product really not sterile? Next new problem. Not good. We weren't really sure how we were going to handle that and what to do with the sterilization model.
Intuitively I never believed the results. So, we did some follow-up studies, and we looked at whether, with controlled kill times, we were seeing the kind of logarithmic reduction that you would expect to see with the heat. And we did. It approximated the D-value to within a hundredth of a count. So, that made me still believe that the counts weren't true.
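The check described here -- exposing spores for controlled kill times and confirming the expected log-linear reduction -- can be sketched with the standard thermal kill model. The starting population and D-value below are assumed for illustration, not figures from the talk.

```python
import math

def surviving_count(n0, exposure_minutes, d_value_minutes):
    """Log-linear thermal kill model: each D-value of exposure time
    reduces the viable population by one log (a factor of 10)."""
    return n0 * 10 ** (-exposure_minutes / d_value_minutes)

# Assumed, illustrative numbers: a 10^6 CFU spore population with a
# D-value of 1.5 minutes at the exposure temperature.
n0, d = 1.0e6, 1.5
for t in (0.0, 1.5, 3.0, 6.0):
    print(f"{t:4.1f} min -> {surviving_count(n0, t, d):12.1f} CFU")

# Three minutes is two D-values, so two logs of kill remain testable:
assert math.isclose(surviving_count(n0, 3.0, d), 1.0e4)
```

If measured counts track this curve across several kill times, the method's counts are consistent with the known kill kinetics; a constant multi-log excess at every time point would instead suggest a counting artifact such as clumping, which is what was eventually found here.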
We were eventually able to find out that there was a scientific issue that had to do with clumping, and we were able eventually to get it down to be about a half log difference in counts. But from an industry point of view, there's no guidance that tells me when do I stop the test. What if I had stopped it at the point where it was 4 logs higher? I very easily could have done that because I had data that printed out and routinely told me it was 4 logs higher.
So, there are scientific issues that are also needing to be addressed along with the regulatory issues, and the perception out there is I just can't do it. I get routine calls, because I presented a paper on this, that you really would think that FDA might maybe think about considering to approve this. People are frightened to death to do this, and we're being bombarded because these technologies are used in all kinds of other industries. So, the higher management in your company knows that there are technologies out there to resolve our problems, and everybody is scared to death that FDA will not make a decision or will not approve them.
DR. BYRN: Thank you very much.
DR. DOULL: In your presentation and in the previous one, you talked a great deal about validation, and you may recall in Dr. Holt's presentation this morning he talked about ICCVAM, which is a multi-agency organization that has undertaken this task of validation. They're concerned primarily with validation of biomarkers, but they have a group within that that's looking at microbiological methods, and I know the food people here at Food and Drug are involved, with Listeria and all the organisms they're looking at. Food and Drug is one of the members of ICCVAM, of course, and they're a player and, therefore, are somewhat involved in and obligated by where they go and what they decide.
So, it seems to me that it's crucial that we have the ability to, in fact, validate these procedures and to get some kind of acceptance of that process of validation in order that we can all move ahead in an efficient manner. ICCVAM wouldn't buy into this definition of validation here because ICCVAM is more pointed towards the argument that validation involves getting the right answer from the test. If you don't have that built in in some way, you're not really validating the procedure.
But it would seem to me that because that's an area of concern that's pretty widespread, it would be something that we would all benefit from if we could have some utilization of validation procedures and some agreement as to our ability to accept those once they have been shown to give us the right answer.
DR. BYRN: Any other questions or comments?
DR. BYRN: Should we address the questions that were raised? The first question is not on our sheet. The second question is kind of on our agenda. The first question is, considering the advantages demonstrated by some of the new microbiological testing technologies, should FDA take steps to facilitate the pharmaceutical industry's use of these technologies? I guess translated: help develop validation or be involved in validation or work with people that are doing validation.
Does anybody disagree with that?
DR. MARVIN MEYER: I don't disagree with it.
I'm ignorant of the process. When some new technology becomes available that looks reasonable and people are interested in it, when we say let's get the FDA to buy into it, who are we really talking about at FDA? Does this vary or is there a group that gives final blessing, or how does that work?
DR. HUSSONG: One of the problems is FDA is a multi-part organization. So, when you're trying to get FDA to buy into something, it depends on who regulates what. Sometimes that becomes a turf battle.
In the example that Dr. Moldenhauer gave us, if a procedure was included in a new drug application -- whether as part of the validation of another process or as a procedure in the application that provided for a finished drug product test -- then that would be controlled by the center. If, however, it's just limited to process testing in the line -- the example would be Jeanne's water testing -- that would be done by ORA and the field people. So, when we try to get buy-in, we need buy-in from everyone who would be involved in that method. This is something of a dilemma for us because, obviously, no single buy-in is going to work. It has to be across the board.
DR. MARVIN MEYER: I raised the question because that was a recurring theme with both the infrared, as well as this. Maybe it's a matter of some structuring or some group assigned responsibility for final blessing, rather than kind of helter-skelter, depending on who gets to look at it first.
DR. SHARGEL: I have sort of a comment about the pharmaceutical industry, and it particularly deals with the compliance side. When one manufacturer adds a test or changes a test, then at times the field inspector feels perhaps everybody should do it and raises that bar and buys into it. There is probably in industry a worry if one company starts doing this. Does that mean that everybody should be doing it, or would they be held responsible for not doing it? You can word it better, if you understand what I'm getting at.
DR. HUSSONG: I understand. It's a philosophical question. Really it boils down to what's the difference between good manufacturing process and best available technology. Certainly in the technologies we're addressing, you can use the most advanced technology, but if you don't apply it to the right circumstances, it's not what you should be doing.
Good manufacturing practices are conceptually to me a long way off from using the most cutting edge or best available technology. There is a difference. The situation you're describing has been a serious problem with the perception of regulators. It goes beyond the U.S. regulatory agencies as well.
DR. MUHVICH: I'll give you an example. It's not quite technology, but it's something that somebody did that was new. There are only two companies in this whole country that use parametric release for release of pharmaceutical drug products. Other people are able to do this, but they don't put in the effort and get the data that shows they can do it. Those two companies have a huge number of microbiologists, and they took the time and effort to submit the data that would allow the FDA review microbiologists to approve that. But all the other people kind of whine about it and everything, but they need to do the same thing. It's just a matter of effort. It's not a matter of black box technology or anything. It's just that they need to do it. If they want to do it, they should do it. They just need to make a corporate decision as to what they're going to do basically.
DR. BYRN: Back on the original question, it seems like there's consensus that we should do this or we should encourage FDA to do it. We just don't know how it can be done. Is that what we're saying?
DR. HUSSONG: I'd sure like to know how to do it.
DR. BYRN: Yes. Maybe we can just go on record as encouraging FDA. I'm not sure we can tell FDA how to do it. Right?
DR. HUSSONG: Well, if you could tell me, please do.
DR. BYRN: I'm pretty sure we can't.
DR. BARR: Maybe as a follow-up to Marv's inquiry, to make sure that all the decision making groups are together, to encourage a formation of a committee that would have those people who would ultimately be involved in making the decision.
DR. BYRN: Ajaz.
DR. HUSSAIN: I had proposed a subcommittee sort of a thing. Maybe this would also be amenable to that, a subcommittee model for this issue also. I was actually tempted to have one larger subcommittee dealing with technology issues altogether. There are enough common things there. A separate committee might be a better approach for that.
DR. BYRN: So, what Ajaz is saying is maybe this committee that we already said we would form, we'd just expand the duties of that committee to deal with all new technology and how to validate it. Okay, that sounds great.
Any other comments on that question?
DR. BYRN: The second question is on our agenda. I think I'll just read it. Well, I'll paraphrase it. Most of the guidances and compendia use CFU, use colony counts. Is it appropriate to permit changes to establish acceptance limits that use new technologies rather than colony counts? Can we replace colony counts with new technologies?
Maybe this is something else we send to this committee because it's interrelated, but let's see if there's discussion of the committee.
DR. SHARGEL: That would strike me almost like finding new impurities at times on an old product. I'm thinking now on an old product that has been out for many years and everybody is happy with it and it has not shown a problem. But using a new technology, you notice new counts. Should the manufacturer, if it's a small product, have to come up to that new bar?
DR. MARVIN MEYER: Then kind of following up on a previous comment, if not everyone adopts the new technology, will you then have different limits at different companies?
DR. BYRN: I don't know, but consider that the USP has parallel tests in certain areas. We're not the USP obviously. I don't know whether the agency has a mechanism to do that or not. I assume it could be done in the USP.
DR. BOEHLERT: It certainly allows the use of alternative technology that's equivalent to or better. Under that umbrella, certainly it could be used. But I would agree with Leon, that on old products, if you suddenly start applying a new standard, you don't want to go putting them off the market if they've been acceptable for many years. And that applies to a lot of changes in technology and limits.
DR. BYRN: In the USP, couldn't you have an entry that would have this test or that test?
DR. BOEHLERT: Its limits for that test. But the old test with its limits would still be acceptable.
DR. BYRN: That's one way to deal with it.
DR. BOEHLERT: But right now USP, I don't think, very often has alternative tests to measure the same parameters. They have alternative tests where the endpoint is different.
DR. BYRN: Well, they have different dissolution media. They have a couple of these famous ones.
DR. BOEHLERT: It's too bad Roger is not here.
DR. BYRN: Jeanne has been wanting to say something.
DR. MOLDENHAUER: I had two things.
One was, first off, in the case of microbiology, these new technologies are no different than doing an endotoxin test versus pyrogen test where you had different limits. So, that existed already.
In addition, in the case of microbiology, many of our tests are not product release tests, but they have limits, and those limits are different from company to company anyway in the case of things like environmental monitoring. So, I think you're adding in commentary that really is not as relevant in the case of microbiology.
DR. MUHVICH: I'll make a comment about that. In microbiology, with the regulatory authorities that exist today, right now you're not rejecting batches on in-process bioburden limits. However, your sister agency, CBER, is coming to that, and they're coming to it fast. They want reject limits for product in process, bulk. So, I don't know where that's going to leave us all, but I just wanted to let you know that.
DR. BARR: I think this is a very important area and I think it's something that requires very careful study. I certainly don't feel qualified to make a judgment if I had to make a vote on this, but I would hope that we would move this to a committee that would be more qualified and would have the time to consider it to make a wise decision on it.
DR. BYRN: It seems to me that this committee could handle these issues and maybe get some consultants that could deal with some of these nuances and handle the new technology in a general way.
DR. HUSSAIN: Steve, there are many common elements I think. The committee I had in mind probably would cover the common elements of validation, who does what. But there are technical issues which are very specific issues to microbiology. So, you probably would need a separate group for that.
DR. BYRN: I'm sorry, Ajaz. Are you thinking now about a separate group or a subcommittee of the subcommittee?
DR. HUSSAIN: No, a separate group might be a better approach.
DR. BYRN: A separate committee. So, we'd have two committees.
DR. HUSSAIN: Just for microbiology, right.
DR. BYRN: One would be microbiology, but they would have sort of a similar general charge. I think however the agency would like to structure it -- well, let's see what other people think is fine with us. Is there any comment on that? I don't think it makes a difference whether it's two separate committees or one committee. That's up to you I think. We're just saying we like the idea of having committees that study these areas.
DR. DOULL: But I don't think it should be limited to microbiology because the issue is once you validate a procedure and show that it's more predictive than what we were using before, then that technique or procedure needs to have some ability to be incorporated into the regulatory process. And that's not just for micro; it's for a whole bunch of areas. It's a very important issue. Whether that's a working group or a subcommittee or a committee or whatever, it clearly is, as you said, Bill, an area that needs to be addressed.
DR. BYRN: Vince?
DR. LEE: Yes, I think I might be repeating what John said: it looks like we have a number of new technologies on the horizon, and it seems to me that somewhere, sometime soon we need to come to grips with what to do with them. In addition to that, we have two specific technologies on the plate. So, it seems to me it is very important for us to take a look at how to deal with new technologies.
DR. BARR: I don't know how the structure of this works, but if there are places for outside experts or consultants on these committees, it probably would be worthwhile to have one or two members of this committee -- at least somebody sitting in on that who could come back and give us some of the details of the interactions.
MS. WINKLE: You're right. Actually every subcommittee has to have two members of the advisory committee as members of the subcommittee. So, you guessed it right. So, that's what we'll plan on doing. Whether we have two different subcommittees or one subcommittee that's going to handle both of these issues, we will actually ask members of this committee to be on that.
DR. BYRN: I think this committee could perform a tremendous service if we were involved in dealing with new technologies and how regulatory changes could accommodate those technologies. Maybe we'd have presentations like we've had today and then decisions would be made, it goes to this existing committee or another new committee is set up. Since it's hard to predict new technologies, it may be better just to let everything come to this committee and then a decision be made whether it goes to one of the existing committees or another new committee is formed. But anything like this I think will be tremendous for the industry and the agency.
Any other comments?
DR. BYRN: I think we turned over the issue of the different counts to this committee indirectly. We had some input on that, but I think we deferred that issue, unless somebody else wants to comment. We deferred the issue of the differences in CFU and the other data that are given to this new committee. Is that what everybody understands?
Any other questions or comments? Yes, Gloria.
DR. ANDERSON: Mr. Chair, it seems to me like there's a fundamental issue here that maybe the committee might want to think about making a recommendation related to, and that is whether or not in fact the FDA, as a matter of policy -- and I don't know enough about FDA to know where this goes. But from what I've heard this afternoon, it seems to me like there's apparently some resistance, for whatever reason, to move into the 21st century with the new technology.
I would just like to see us explore the possibility, if it's within whatever it is this committee has to do, to go on record as supporting any explorations of new technology that would improve the regulatory process, to the extent that this committee is empowered, so that we don't limit it to NIR or one particular thing. That would form the basis for any future applications.
DR. BYRN: Gloria, I'm just informed that the best mechanism would be to use subcommittees. I don't know whether we need a motion or we can just take this as part of our charge, but I think what Gloria is saying and what everybody is saying is this committee will become involved in new technology development.
So, do we think we need a motion or can we just take it as our charge, Helen, just directly?
MS. WINKLE: I think you can take it as your charge directly.
DR. BYRN: Okay.
Any other comments or questions?
DR. BYRN: Then we'll adjourn until 8:30 tomorrow in this room.
(Whereupon, at 5:02 p.m., the committee was recessed, to reconvene at 8:30 a.m., Friday, July 20, 2001.)