FOOD AND DRUG ADMINISTRATION
CENTER FOR DRUG EVALUATION AND RESEARCH
ADVISORY COMMITTEE FOR PHARMACEUTICAL SCIENCE
Thursday, May 22, 2003
Ballroom Salons A-D
Gaithersburg Marriott - Washingtonian Center
9751 Washingtonian Boulevard
Gaithersburg, Maryland 20878
PATRICK P. DeLUCA, PH.D.
Professor, Faculty of Pharmaceutical Science
401 College of Pharmacy
University of Kentucky
907 Rose Street
Lexington, Kentucky 40536-0082
ROBERT GARY HOLLENBECK, PH.D.
Professor of Pharmaceutical Science
University of Maryland School of Pharmacy
20 North Pine Street
Baltimore, Maryland 21201
KAREN M. TEMPLETON-SOMERS, Acting Executive Secretary
Advisors and Consultants Staff (HFD-21)
Center for Drug Evaluation and Research
Food and Drug Administration
5600 Fishers Lane
Rockville, Maryland 20857
AD HOC MEMBERS (Special Government Employee Consultants):
JUDY P. BOEHLERT, PH.D.
President, Boehlert Associates, Inc.
102 Oak Avenue
Park Ridge, New Jersey 07656-1325
DANIEL H. GOLD, PH.D.
12 Route 17 North, Suite 308
Paramus, New Jersey 07652
THOMAS P. LAYLOFF, JR., PH.D.
Principal Program Associate
Center for Pharmaceutical Management
Management Sciences for Health
5 Thomas Court
Granite City, Illinois 62040-5273
GARNET PECK, PH.D.
Industrial and Physical Pharmacy
Purdue University, 575 Stadium G-22C
West Lafayette, Indiana 47907
G.K. RAJU, PH.D.
Executive Director, MIT/PHARMI
MIT Program on the Pharmaceutical Industry
Massachusetts Institute of Technology
77 Massachusetts Avenue
GUESTS AND GUEST SPEAKERS:
EFRAIM SHEK, PH.D., Acting Industry Representative
Divisional Vice President
Pharmaceutical and Analytical Research and Development
Department 04R-1, Building NCA4-4
Abbott Laboratories
1401 Sheridan Road
North Chicago, Illinois 60064-1803
GLENN WRIGHT
Director, Global Regulatory Affairs
Eli Lilly & Co.
Lilly Corporate Center
Indianapolis, Indiana 46285
FOOD AND DRUG ADMINISTRATION STAFF:
DENNIS BENSLEY, JR., PH.D.
YUAN-YUAN CHIU, PH.D.
H. GREGG CLAYCAMP, PH.D.
AJAZ HUSSAIN, PH.D.
C O N T E N T S
AGENDA ITEM
CONFLICT OF INTEREST STATEMENT
by Dr. Karen Templeton-Somers
TRANSITION FROM PROCESS ANALYTICAL TECHNOLOGIES (PAT) SUBCOMMITTEE TO MANUFACTURING SUBCOMMITTEE
ROLE OF PAT IN THE GMP INITIATIVE
by Dr. Ajaz Hussain
CHANGES WITHOUT PRIOR APPROVAL - FDA PERSPECTIVE
by Dr. Dennis Bensley
PERSPECTIVE ON RISK ANALYSIS FOR THE GMP INITIATIVE
by Dr. H. Gregg Claycamp
OPEN PUBLIC HEARING PRESENTATION
by Mr. Frederick Razzaghi
UPDATE - REGULATORY APPROACHES REGARDING ASEPTIC MANUFACTURING
SUBCOMMITTEE NEXT STEPS
by Dr. Ajaz Hussain
ISSUES AND FUTURE PLANS
by Mr. Joseph Famulare
by Mr. Richard Friedman
by Mr. Glenn Wright
CONCLUSIONS AND SUMMARY REMARKS
by Dr. Ajaz Hussain
P R O C E E D I N G S
DR. BOEHLERT: Good morning, everybody. I'd like to welcome you all to the second day of our subcommittee meeting. We had some very good discussions yesterday. Today we're going to change focus a little and it's more informational. We'll be hearing about two important issues: PAT and aseptic processing.
The first thing I'd like to do this morning is for us to introduce ourselves. First of all, I'll start. My name is Judy Boehlert. I'm a consultant to the pharmaceutical industry in areas of quality, regulatory affairs, and product development.
DR. SHEK: Efraim Shek from Abbott Laboratories.
DR. LAYLOFF: Tom Layloff, Management Sciences for Health, a not-for-profit building health systems in developing countries.
DR. RAJU: G.K. Raju, MIT Pharmaceutical Manufacturing Initiative.
DR. PECK: Garnet Peck, Professor of Industrial Pharmacy, Purdue University.
DR. HOLLENBECK: I am Gary Hollenbeck, Professor of Pharmaceutical Sciences at the University of Maryland.
DR. DeLUCA: Pat DeLuca at the University of Kentucky faculty of pharmaceutical sciences.
DR. TEMPLETON-SOMERS: Karen Templeton-Somers, acting Executive Secretary to the committee.
MR. PHILLIPS: Joe Phillips, regulatory affairs advisor to the International Society of Pharmaceutical Engineering.
MR. SERAFIN: Dick Serafin, consultant in manufacturing.
DR. GOLD: I'm Dan Gold, a consultant to the pharmaceutical industry in the area of compliance, regulatory affairs, and manufacturing.
DR. HUSSAIN: Ajaz Hussain, Office of Pharmaceutical Science, FDA.
DR. D'SA: Abi D'Sa. I'm representing Joe Famulare for the morning session.
DR. BOEHLERT: Thank you. The first order of business today is for Karen to read the conflict of interest statement.
DR. TEMPLETON-SOMERS: The following announcement addresses the issue of conflict of interest with respect to this meeting and is made a part of the record to preclude even the appearance of such at the meeting.
The topics of this meeting are issues of broad applicability. Unlike issues before a committee in which a particular product is discussed, issues of broader applicability involve many industrial sponsors and academic institutions.
All special government employees have been screened for their financial interests as they may apply to the general topics at hand. Because they have reported interests in pharmaceutical companies, the Food and Drug Administration has granted general matters waivers to the following SGEs which permits them to participate in these discussions: Dr. Judy Boehlert, Dr. Patrick DeLuca, Dr. Daniel H. Gold, Dr. R. Gary Hollenbeck, Dr. Thomas Layloff, Dr. Garnet Peck, Dr. G.K. Raju, and Mr. Richard Serafin.
A copy of the waiver statements may be obtained by submitting a written request to the agency's Freedom of Information Office, room 12A-30 of the Parklawn Building.
In addition, Mr. Joseph Phillips and Dr. Nozer Singpurwalla do not require general matters waivers because they do not have any personal or imputed financial interests in any pharmaceutical firms.
Because general topics impact so many institutions, it is not prudent to recite all potential conflicts of interest as they apply to each member and consultant.
FDA acknowledges that there may be potential conflicts of interest, but because of the general nature of the discussion before the committee, these potential conflicts are mitigated.
With respect to FDA's invited guests, Glenn Wright reports he is employed full-time by Eli Lilly & Company.
We would also like to disclose that Dr. Efraim Shek is participating in this meeting as an acting industry representative, acting on behalf of regulated industry. Dr. Shek reports that he is employed full-time as Divisional Vice President for Abbott Labs.
In the event that the discussions involve any other products or firms not already on the agenda for which FDA participants have a financial interest, the participants' involvement and their exclusion will be noted for the record.
With respect to all other participants, we ask in the interest of fairness that they address any current or previous financial involvement with any firm whose product they may wish to comment upon.
DR. BOEHLERT: Thank you, Karen.
First on the agenda today is Dr. Ajaz Hussain.
DR. HUSSAIN: Good morning. Before I start, as I mentioned yesterday: for the morning session we have three presentations, one on how PAT becomes part of the drug quality system for the 21st century initiative, then a presentation from one of the working groups on the comparability protocol by Dennis Bensley, and then a presentation on risk management. After those, what I would like to do is start connecting all these things together and start defining the topics for the next subcommittee meeting.
I would like to change the agenda, as I mentioned yesterday, with the permission of Madam Chairperson: to wrap up this discussion and define the subcommittee's next steps first, and then have the update on aseptic manufacturing. The aseptic manufacturing session was designed as an update for you; you have not been part of that discussion at the previous advisory committee, so it's simply an FYI. So if you agree with that, Madam Chairperson, we'll try to do that. Judy?
DR. BOEHLERT: Yes.
DR. HUSSAIN: Yesterday I think we had very valuable discussions. One of the challenges I see is, as we proceed further, we have to start becoming more specific in terms of discussion topics and so forth. I think you will see that happen starting this morning.
Let me start with the PAT initiative. When we started the PAT initiative, this was a topic that we selected based on many different factors. PAT addressed review issues. It addressed inspection issues. It addressed computer validation issues. Therefore, it became a wedge to open the broader discussion that we have on the drug quality system for the 21st century. So it actually was an example that became the topic of discussion of the entire initiative now.
Some of you are already aware of the evolution of this, but for those who are new to this committee, I would like to trace back some history.
The PAT concept actually got started in '93 with an AOAC workshop in St. Louis that Tom Layloff initiated. At that time I think the consensus was not there, and it really did not progress well.
Tom and I spent several hours discussing these concepts. I brought the industrial pharmacy/chemical engineering perspective and he brought the analytical perspective, and what evolved from those discussions was a presentation in the year 2000 to the FIP Millennium Congress on modern in-process controls. The transition from the AOAC discussion in '93 to what PAT is now is that we moved the concept to an on-line, in-process focus rather than end-product testing. If you keep the focus on physical methods for end-product testing, the concept really does not fit well. So the quality-by-design concept, building quality in, the basic tenets of the GMPs, fit very well there.
Keeping that in mind, we took this discussion as an emerging science issue in pharmaceutical manufacturing to the FDA Science Board, and that was necessary because we realized that we are actually changing the paradigm with this concept and you needed the highest levels at FDA buying into this and providing support for this. So the FDA Science Board essentially is an advisory committee of the Office of the Commissioner.
At the first meeting, which occurred in November, we invited several individuals to share their perspectives. We had G.K. Raju and Doug Dean, who identified for us the wonderful opportunities that exist for improving manufacturing efficiencies and, by doing so, improving not only the science of manufacturing but quality as well.
Norm Winskill and Steve Hammond presented their views from an industry perspective and outlined some of the challenges for us. The two phrases that I have used often are "don't use" and "don't tell." In a sense, the current system has created a scenario, through perceptions and rumors and whatnot, in which industry has adopted either a "don't use" posture toward new technologies and process improvement in general, or a "don't tell" posture: they would use a technology but not share that with FDA for fear of regulatory uncertainty and of the type of questions that might be asked, a "why open Pandora's box" mentality. We felt that was unacceptable from a public health objective, and we wanted to start changing that. That's how the FDA Science Board discussion started, and we got an endorsement from the Science Board on two critical issues.
The first question we posed to the Science Board was that this is an emerging science issue and the new technologies we're talking about should not become a requirement. They need to be adopted, or adapted, by companies where it makes sense from a capability perspective, a business perspective, a product perspective, and so forth. So this could not become a requirement; it has to be voluntary. That, I think, addressed some of the "c"-in-cGMP issues.
The second question we posed to the FDA Science Board was the issue of a safe harbor, or more accurately what we call a research exemption. There is a significant fear of investigating improvements, because you may find something, or trends, suggesting that something is not appropriate. But under that model there will never be continuous improvement. That goes to the discussion yesterday as well: that you have to start tightening your specifications as you improve your process.
The problem with that concept is: why would a company do that if there is no safety and efficacy justification? Simply ratcheting up requirements from a standards perspective is not a solution. Therefore, for a continuous improvement model, you have to bring broader perspectives into consideration and make more rational decisions. We have approved the product as safe and effective; it has been on the market as safe and effective. Therefore, continuous improvement in reducing variability and understanding it better should not be deterred, and our focus should remain on safety and efficacy.
Ray Scherzer was part of our second Science Board discussion, and essentially he again highlighted the importance of manufacturing, how manufacturing essentially is a stepchild in this industry, and the technology does exist, but I think if we are willing to move in this direction, the opportunities are humongous. Essentially that was a challenge to the PhRMA industry itself that we should be moving to quality by design. He spoke on behalf of the Consortium for Advancement of Pharmaceutical Manufacturing.
These two Science Board meetings, I think, essentially crystallized our thought process and essentially defined a path forward. This support from the FDA Science Board was essential.
From those early beginnings, we set up a PAT Subcommittee under the Advisory Committee for Pharmaceutical Science. This committee met on three occasions, and it worked very efficiently to define several things for us: definitions of what PAT is, its benefits, and its scope. It identified perceived and real regulatory hurdles, but also significant internal, that is, within-company, hurdles that need to be overcome; the need for cross-discipline communication among pharmacy, chemistry, and engineering, since this is essentially an engineering concept; and approaches for removing these hurdles. We also had companies come forward with wonderful case studies, and a general approach to validation. Most importantly, I think, we developed a PAT training curriculum for FDA staff.
We are in the process of training individuals on PAT, and the training is being conducted by three schools. We focused on three National Science Foundation centers: the School of Pharmacy at Purdue University, home of the Center for Pharmaceutical Processing Research; the University of Washington, Seattle, home of the Center for Process Analytical Chemistry; and the University of Tennessee School of Engineering, home of the Measurement and Control Engineering Center. So we brought in a chemical engineering focus, a pharmacy focus, and a chemistry focus to do this training.
Now, the approach for the PAT initiative was to have a core set of individuals who are trained and certified. We have a PAT Steering Committee within FDA. This initiative is a collaboration between the Office of Regulatory Affairs, the Center for Drugs, and the Center for Veterinary Medicine. So you have a PAT Steering Committee that reflects these three organizations.
We have a PAT Policy Development Team that includes new recruits and available FDA experience, such as Raj Uppoor, an industrial pharmacist with extensive review experience; Chris Watts, a biomedical engineer with a pharmaceutics Ph.D.; Huiquan Wu, a chemical engineer coming from the semiconductor industry with extensive mathematical skills, especially in chemometrics; and more recently Ali Afnan, a person who has actually done all of this for AstraZeneca at the Plankstadt facility. We hired him and stole him away from AstraZeneca.
We have a PAT Training and Coordinating Team. John Simmons and Karen Bernard chair that.
But more importantly, the heart of this program is the PAT Review and Inspection Team. We have investigators identified from key districts, and here are the names of those. We have compliance officers and we have reviewers. Now, this team is undergoing training. We hope to finish the training by the end of this year. The next session for training is at the University of Tennessee where they'll focus on process controls. They'll come back to Rockville for a second didactic session, followed by a certification program. So all applications that are considered to be PAT applications will only be handled by these folks who are trained and certified. As this program grows, then we start expanding the training and getting everybody on board.
So why PAT? Why process analytical technologies? We felt there was a gap in the type of measurements we do. We have focused for the last 30 years on chemistry, mainly wet chemistry. Physics was missing. So when you bring physics and chemistry together, actually you have more meaningful measurements that relate to product performance. You actually can predict performance attributes such as dissolution from nondestructive measurements.
Essentially from that basis, we felt that PAT provides an opportunity to move from the current testing-to-document-quality paradigm to a continuous quality assurance paradigm that can improve our ability to ensure quality was built in, or was by design. This is actually the ultimate realization of the true spirit of cGMPs. In fact, in every guidance we have on cGMP, we state that quality cannot be tested in. But a critical look at the current system would say otherwise: we actually test mainly to document quality today.
PAT provides an opportunity for greater insight into and understanding of processes, and this is the heart of the PAT initiative. I'd like to emphasize that without process understanding, simply adding new measurements is not a solution. In the words of Ray Scherzer, if you don't understand your process and put on an on-line sensor, it's like putting an earring on a pig.
Also, the right measurements at the right time, moving the measurements to the process, and measurements that are predictive of performance are the key here. So you have greater insight into and understanding of processes; at-, on-, or in-line measurement of performance attributes; real-time or rapid feedback controls, that is, a focus on prevention, which is a missing element especially in product manufacturing, not so much in drug substance; potential for significant reduction in production and development cycle time; minimized risk of poor process quality; and reduced regulatory concerns.
So from the three meetings of the PAT Subcommittee, we created a conceptual framework for PAT guidance development. We actually held the guidance back for some time for two reasons. One, the Part 11 issues had to be clarified to some degree, and that has occurred. Second, since PAT is becoming part of the drug quality system for the 21st century, we wanted to see how best to position this guidance. So there were two reasons for holding the guidance back, but the guidance is on track and will come out hopefully later this summer.
The conceptual framework for PAT policy development will include these elements. If you look on your left-hand side, it starts with incoming raw materials. Traditionally we have laboratory tests for identity, purity, potency, and so forth. Those are still there, but we would like to see companies bringing in more modern methods that provide information not only about the chemistry but also about the physics that relates to the processability of that material. Today we have incoming materials that are variable in their physical attributes, but our processes are fixed. That creates a situation where you have more reasons for deviations and so forth. You really have to move toward a process that can manage the variability of incoming raw materials. We would like to keep the requirement to a minimal specification for processability, but let companies manage that variability in a more intelligent way.
I'll just give you an example of near infrared. If you bring in near infrared for incoming materials, there can be certain advantages. You can do the identity of the material. You can do moisture content. You'll get a sense of the particle size differences from lot to lot. You may not get an absolute value, but in many cases an absolute value is not necessary if you know the variability exists and you learn to manage it.
So with incoming raw material attributes, you bring physics and chemistry together and then use that information to predict or adjust optimal processing parameters. You move away from time-based processing, the blend-for-10-minutes concept, toward endpoints which are predictive of the next step: blend until it's homogeneous.
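As an illustration of that lot-to-lot variability point, here is a minimal sketch on entirely synthetic data: a principal component analysis of simulated near-infrared scans in which a moisture band and a particle-size-driven baseline shift are the dominant sources of variation. The wavelength grid, band position, and scatter model are invented for illustration, not real NIR calibration data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical NIR spectra for incoming lots: 20 lots x 50 wavelengths.
# Two simulated sources of variability: a chemistry signal (water band)
# and a broad baseline shift that tracks particle size (scatter effect).
wavelengths = np.linspace(1100, 2500, 50)
moisture = rng.normal(3.0, 0.2, size=20)          # % water, varies by lot
particle_size = rng.normal(150, 25, size=20)      # microns, varies by lot

peak = np.exp(-((wavelengths - 1940) / 40) ** 2)  # water absorption band
spectra = (moisture[:, None] * peak[None, :]
           + 0.002 * particle_size[:, None]        # scatter baseline offset
           + rng.normal(0, 0.01, size=(20, 50)))   # instrument noise

# PCA via SVD on the mean-centered spectra: the leading components
# capture the dominant lot-to-lot variability without needing an
# absolute particle-size value, which is the point made above.
centered = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U * s                                     # lot scores per component
explained = s**2 / (s**2).sum()

print(f"variance captured by first two components: {explained[:2].sum():.1%}")
```

The component scores, not any absolute value, are what a company would trend from lot to lot to manage incoming-material variability.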
So if you look at this processor, you have incoming raw materials that differ in shape and size, and you have end product coming out. So you have incoming materials. You're gaining more information about that through very nondestructive, very efficient methods in a different sense now, and using that information not only to be proactive, a forward control sort of concept, but also then you're processing to an endpoint. The endpoint would be determined based on the performance. You will blend until it's homogeneous. You'll granulate to get the right moisture content, the right particle size, the right flow, and so forth. There are wonderful case studies on this from, say, GlaxoSmithKline on our web site through the subcommittee.
So the concept also comes in that you have measurements on-line or at-line that are now focused on performance attributes. The in-process controls are now performance-based, not just time-based.
To do this, you have to identify what are the critical process control points, monitor those, and go to an endpoint, but also you have to bring in the control mentality of chemometrics and information technology for real-time controls and decisions.
You also have an approach for direct or inferential assessment of quality and performance that could be at- or on-line. This becomes nondestructive. Say you are predicting dissolution: instead of running the actual dissolution test, you can relate all the critical variables that affect dissolution, monitor and control those, and start predicting dissolution. We have ourselves done many of these experiments, and we have also done experiments to link it directly to bioavailability instead of going through an intermediate dissolution test.
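A minimal sketch of such an inferential assessment, with invented numbers: an ordinary least-squares surrogate is fitted to hypothetical in-process measurements (granule moisture, particle size, compression force) and then used to predict dissolution for a new batch. In practice this would be a validated chemometric model such as PLS; every variable and coefficient here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical in-process measurements for 30 historical batches:
# granule moisture (%), median particle size (um), compression force (kN).
X = np.column_stack([
    rng.normal(2.5, 0.3, 30),
    rng.normal(120, 15, 30),
    rng.normal(12, 1.0, 30),
])

# Simulated "true" relationship: dissolution at 30 min (% released)
# driven by the same critical variables, plus measurement noise.
true_coef = np.array([-4.0, -0.15, -1.2])
y = 120 + X @ true_coef + rng.normal(0, 1.0, 30)

# Fit a linear surrogate model (ordinary least squares with intercept).
A = np.column_stack([np.ones(30), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict dissolution for a new batch from its in-process measurements
# instead of running the destructive dissolution test. The prediction is
# only trusted inside the design space spanned by the historical batches.
new_batch = np.array([1.0, 2.4, 115, 11.5])  # leading 1.0 = intercept term
predicted = new_batch @ coef
print(f"predicted dissolution at 30 min: {predicted:.1f}%")
```

The same structure extends to linking the in-process variables directly to bioavailability, as described above, by swapping the response variable.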
So that's the elements of the PAT, but to make this happen, you really have to think about development optimization and a continuous improvement framework. You have to think about design of experiments. There are many advantages of doing well-designed experiments. So you start predicting at least within the design space.
The concept of evolutionary optimization comes in. Today it is not an approach that works in the pharmaceutical sector. It works in the chemical sector, but through this process, you actually open the door for that discussion.
Clearly, improved efficiency is also a driver here, but to make this happen, you really have to think not from a univariate perspective but from a multivariate perspective. Now you're not only focused on the drug substance in your tablet; you're focused on the homogeneity of all your raw materials and how that relates to performance. So you really have to move from univariate thinking to multivariate systems thinking to make this happen.
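One concrete way to see the difference between univariate and multivariate thinking is a Hotelling T-squared check, sketched below with synthetic numbers: a batch can sit inside each variable's individual 3-sigma limits yet break the correlation structure that the in-control batches share. The variable names, means, and covariance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two correlated in-process variables (say, blend moisture and granule
# density) for 200 in-control batches -- purely illustrative numbers.
mean = np.array([3.0, 0.55])
cov = np.array([[0.04, 0.017],
                [0.017, 0.01]])   # strongly correlated when in control
history = rng.multivariate_normal(mean, cov, size=200)

mu = history.mean(axis=0)
S_inv = np.linalg.inv(np.cov(history, rowvar=False))

def hotelling_t2(x):
    """Squared Mahalanobis distance of a batch from the in-control mean."""
    d = x - mu
    return d @ S_inv @ d

# A batch that is inside the univariate 3-sigma limits for BOTH variables
# but breaks their correlation: high moisture paired with low density.
odd_batch = np.array([3.4, 0.45])
within_univariate = np.all(np.abs(odd_batch - mu) < 3 * history.std(axis=0))

print(f"passes univariate limits: {within_univariate}")
print(f"Hotelling T^2: {hotelling_t2(odd_batch):.1f}")
```

A univariate chart would pass this batch on both variables; the multivariate statistic flags it, which is the systems-thinking point being made.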
Then comes risk classification and mitigation strategies. Essentially this is the framework that the guidance is going to reflect. So it will be an approach that takes us in that direction.
Now, here is a pyramid. G.K. and I share pyramids. If you notice in my first slide, I have added B.Pharm. What has happened is a lot of people, when I make this presentation, think I'm a chemical engineer and they come up to me and say, you know, these pharmacy types don't know what they're doing. So I have to say I'm a pharmacist. But G.K. and I have quite a bit in common in that.
And here is a pyramid that somehow evolved in such a way that I thought he took mine and he thought I took his, but I think we just came up to the same thing.
Now, if we really look at it, to do product and process quality right, it has to be based on knowledge. When I started using this pyramid, I borrowed it from the information technology folks where they said data, information, knowledge, wisdom, as you go up in that pyramid knowledge structure.
So the question for FDA to assess was quality by design, and we apply our GMP and CMC review to assess that. From our perspective, based on what we see in a submission and what is available to us, the impression we get is that the data are derived from trial-and-error experiments. There's not much information. That's the reason, I think, the chemistry perspective is "I know it when I see it." When there's a change, how do I know the bioavailability did not change, or the shelf life did not change? The only way to make a decision today is to say: do three batches, or do a biostudy, and if it is okay, then the change can be made.
So we are in the bottom of this pyramid today where we have to scrutinize every step, and it's difficult for us to assess whether quality was by design and so forth. So change management is difficult.
Also keep in mind the base of this pyramid reflects the volume of documentation needed. As you go up, the volume of documentation needed to do this decreases also.
Now, what PAT does is bring the focus onto critical process control points. It also brings in an ability to generalize, though generalization would be limited to the design space you have studied. But that is moving up in this knowledge pyramid, and as you move toward mechanistic understanding and first principles, the process design and design qualification probably become sufficient.
So that's how we see science- and risk-based GMPs would be based on knowledge. As companies go up in this knowledge pyramid, they need to get a reward for that, and for companies who do not, we have the current system.
So, in the regulatory framework for PAT, the modern PAT tools that we're talking about are not a requirement. We'll have a research exemption so that you have continuous improvement without the fear of being considered noncompliant. There are two simple ways of looking at the research exemption.
One is when you start applying PAT-based systems or any new systems to an existing product line, until that complete system is validated, all regulatory decisions are only based on the current approved validated methods. So that should allow companies to actually gather more information with new technologies without the fear of being considered noncompliant. So the regulatory decisions, as PAT is being applied, on an existing line will only be based on FDA-approved, validated methods. Every other method would be a research method from that perspective.
The second way to look at that is if a company starts from the right thought process, in terms of PAT, if PAT is process understanding, you have to start from the very beginning, start understanding the raw material and so forth, and move towards your end product. So that way, even if you see deviations and so forth, that can essentially be adjusted and corrected, and you really shouldn't have a problem.
The other aspect is that when you have a new method, the acceptance criteria should be different. Testing 10 tablets to make a decision and testing 10,000 tablets to make a decision call for different acceptance criteria. Essentially, we look forward to receiving sound, scientifically and statistically based approaches from companies to do that.
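To make the 10-versus-10,000 point concrete, here is a minimal sketch of one statistically based criterion: an approximate one-sided upper confidence bound on the nonconforming fraction, using the Wilson score interval. The sample sizes, failure counts, and choice of interval are illustrative assumptions, not anything the agency has specified.

```python
import math

def upper_bound_nonconforming(failures, n, z=2.33):
    """One-sided ~99% upper confidence bound on the nonconforming
    fraction, via the Wilson score interval (normal approximation)."""
    p_hat = failures / n
    denom = 1 + z**2 / n
    center = p_hat + z**2 / (2 * n)
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (center + half) / denom

# Zero failures in 10 tablets vs. 5 failures in 10,000 tablets: the
# large sample supports a far stronger quality statement even though
# it observed some failures and the small sample observed none.
small = upper_bound_nonconforming(0, 10)
large = upper_bound_nonconforming(5, 10_000)
print(f"n=10,     0 fails: nonconforming fraction could be up to {small:.1%}")
print(f"n=10,000, 5 fails: nonconforming fraction could be up to {large:.2%}")
```

With 10 tablets and no failures, the data cannot rule out a nonconforming fraction of tens of percent; with 10,000 measurements the bound drops to a fraction of a percent, which is why the acceptance criteria must differ.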
So that was actually the first or the second question we posed to the Science Board. Unless we are ready for science-based decisions, this won't happen. So we're ready for science-based decisions.
We're also providing regulatory support and flexibility during development and implementation. We're meeting with companies who are ready with proposals; we've already met with several. In fact, the challenge for us now is to accelerate the process, because we didn't anticipate things coming in so quickly, and they have started coming in. That's a good thing, but we have to ramp up our process.
The reason for this is to eliminate the fear of delayed approval, but also, rather than relying on dispute resolution, you want to avoid the disputes in the first place. So these meetings are focused on science first, and then we define a regulatory strategy, not the other way around, where the regulatory strategy comes first and then the science. The discussions at these meetings are first about the science, understanding what is being done and what the issues are, and then constructing the regulatory path for that.
The last bullet here is a science- and risk-based regulatory approach. So what is the incentive for companies to do this? Beyond the fact that it makes sense from every other perspective, from a regulatory perspective we are moving toward a low-risk categorization for companies that achieve a higher level of process understanding.
So, the strategy for moving forward right now. We have conducted several workshops, some of which we have co-sponsored, in both the U.S. and Europe. These workshops have been very valuable, especially in terms of the scientific discussion and debate. Some of these have been emotional; the Arden House Conference in particular was quite an emotional workshop. There was a cross-discipline, pharmacy-versus-chemical-engineering type of debate, and an R&D-versus-manufacturing type of debate, but we had to get past that and move to the shared vision.
The general guidance on PAT is to be released later this summer. We'll have a training workshop on that guidance, and that will bring together different associations.
FDA cannot do this alone. All we can do is create champions, and that's what our focus has been: champions to drive this initiative toward the shared vision, or desired state, that we discussed yesterday.
Champions that have already been there. We simply supported them. Pfizer, GSK, Bristol-Myers, Aventis, and others.
Academia. MIT and Purdue were the champions that were already on board, but I think now we can see the list of universities growing tremendously in the U.S. and in Europe. But also I think this summer we have discussions to get universities in Japan on board here. PAT has been introduced in pharmaceutical engineering programs at Purdue, Michigan, and Rutgers.
We are moving towards a system where we would like to see all the instrument vendors come together as an association. The reason for this is we are getting so many requests for meetings to say here is our technology, here are the issues, and so forth. We cannot afford to meet with them on a regular basis. So we will issue a Federal Register notice to bring all these vendors together and encourage them to move towards an association so that we can address common issues.
Here I think the framework would be -- we have been in discussion with the National Center for Manufacturing Sciences in Michigan. That center was mandated by Congress for the automobile industry. That will probably be a framework for bringing them together.
Strategy for moving forward, continued. Improving FDA knowledge base for technical policy development. We have recruited several experts and I'm getting so many CVs from people who want to come to work for FDA. It's amazing. Many from Pharmacia. No.
DR. HUSSAIN: Intramural research refocused to address technical needs and for in-house training. Our research program is moving forward to support that.
We would like to learn from other industries. We are in discussion with ASTM, for example. ASTM has several wonderful guidelines for on-line process analyzers and so forth for the petrochemical industry. I think instead of reinventing the wheel, we would like to put together a working group of industry, academia, and FDA folks together to adapt or adopt some of these guidelines so that we don't reinvent the wheel.
We have a collaborative research and development agreement with Pfizer. I think it's almost signed off right now. This will focus on on-line methods, especially focused on chemical imaging.
We have finished the paperwork now, so this is now almost official. We will be part of the NSF Center for Pharmaceutical Processing Research. NSF invited us to be part of this, to champion this, and this is not the only one. We are in discussions with the bigger center, the National Center for Pharmaceutical Engineering and Research, with NSF. So NSF is very supportive in helping us move in this direction.
But finally, I think the strategy moving forward is to make the PAT initiative part of the cGMP initiative for the 21st century. It becomes an example of every element you see in the cGMP initiative. So it's an example of a science- and risk-based systems approach to product quality regulation.
Now, within the framework of the cGMP initiative, which we now call a drug quality system for the 21st century initiative, what we have done is post-approval implementation of PAT. The draft guidance that we issued on comparability protocols -- and Dennis will talk to you about that soon -- is the PAT-comparability protocol concept. Now, several companies have already proposed this, and in fact that has become a framework for discussion. I think the main emphasis there is systems thinking, process understanding, risk mitigation strategies focused on manufacturing science.
The PAT Review and Inspection Team is also an example of training and certification, science- and risk-based review and inspection.
Clearly the product specialist on inspection concept is built into the PAT. We have experts who have done this, have hands-on experience in industry. So we have the right expertise. I think I won't be exaggerating if I say we probably are at the 90th percentile in terms of know-how on PAT. I think we do have the right expertise and we're getting more of that right expertise.
I want to emphasize what I mean by moving from "testing to document quality" to "quality by design." I think this is a fundamental paradigm shift. What does this mean?
For example, if I take particle size as an attribute, the goal is effective methods for managing and controlling particle size variability to provide consistent performance. That's the thought process. For the last 20 years, we have struggled, especially when it comes to physical attributes, to define public standards. It's difficult. Instead of saying this method, that method, that comparison, we'd like to focus on test methods for understanding variability and managing variability. That's a fundamentally different approach, and I hope you can see that.
Establishing causal links between material attribute variability and performance. So you're connecting your test measurements to release something which is meaningful.
Reduce reliance on lab-based test methods. That's what we mean when we say move from "testing to document quality" to "quality by design."
It improves focus on process understanding as compared to test-to-test comparisons, and with particle size, I don't think we have a clear solution in mind if we keep the focus on test-to-test comparisons, as we have been doing.
Let me change gears now and start setting up for the next two speakers, and start setting up the whole concept for the next meeting. Risk, and how PAT and process understanding can help us move in that direction, is my focus now.
Now, change is risk. That has been the focus of the SUPAC debate because change is considered risky because if you have a black box, if you change something in the black box, then how do you know what the impact is unless you do all those tests to find out. That has been the framework under which we have operated, but with a high level of process understanding, change may not be bad. Change is innovation. Change is improvement also. So you really have a means for distinguishing good from bad.
So if you look at section 116 of the Modernization Act, a change can have a potential to have an adverse effect on identity, strength, quality, purity, or potency of a product as they may relate to the safety or effectiveness of the product. That's what the risk is. And the risk categorization that we have today, if there is a substantial potential, we require a prior approval supplement. If we have a moderate potential, we now require a changes being effected-30 days or changes being effected supplement. If you have minimal potential, it's an annual report. The regulatory scrutiny is different. The test required to justify is different.
But through the quality by design concept and process understanding, actually what might be substantial potential now can become minimal potential through process understanding. That's the theme that we would like to think about.
On the review side, I think we are moving towards a quality system for review and creating a risk-based approach to the review process itself. Now, you have to consider this. What is the objective of the review process? Review is to minimize intolerable risk to patient safety. That's essentially what the end goal of that is. So in the review process, what we have to start thinking about is identify risk scenarios, assess likelihood of fault condition, assess severity of impact, assign risk grade, assess probability of detecting fault condition, and determine the mitigation strategy, if it's right or not.
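The review steps just listed mirror a classic failure mode and effects analysis (FMEA). As a rough sketch only (the 1-5 scales, the multiplicative grade, and the tolerance threshold below are hypothetical illustrations, not FDA scoring rules), the logic might look like:

```python
from dataclasses import dataclass

@dataclass
class RiskScenario:
    """One identified risk scenario, scored on hypothetical 1-5 scales."""
    name: str
    likelihood: int     # likelihood of the fault condition occurring
    severity: int       # severity of impact on patient safety
    detectability: int  # 5 = fault very hard to detect, 1 = almost always caught

def risk_grade(s: RiskScenario) -> int:
    # FMEA-style risk priority number: higher means more scrutiny in review
    return s.likelihood * s.severity * s.detectability

def mitigation_adequate(s: RiskScenario, tolerance: int = 27) -> bool:
    # The mitigation strategy is judged adequate when the graded risk
    # falls at or below an (illustrative) tolerance threshold.
    return risk_grade(s) <= tolerance

scenario = RiskScenario("new impurity after site change",
                        likelihood=2, severity=4, detectability=3)
print(risk_grade(scenario))           # 24
print(mitigation_adequate(scenario))  # True
```

The point of the sketch is only the shape of the reasoning: likelihood, severity, and detectability are assessed separately and combined into a grade that drives the mitigation decision.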
That's what the review process in an ideal way should be in my opinion. But today it's not. It's more on test and these are your batches and so forth. So how do we transition from today to something like this if this is what is desirable and what is necessary?
From a risk scenario perspective, what is the risk of unacceptable quality? Again, building on the SUPAC example, releasing a product of unacceptable quality is a risk. This could happen because of inadequate controls or specifications, where a new impurity might come in or you might end up in a bioinequivalent situation, or because of inadequate process validation. Sampling that is not representative is one example of that risk scenario. You have stability failures. You have bioinequivalence, and essentially poor process quality leads to some of this. So these are the typical risk scenarios that SUPAC and the other things we have done have tried to address.
But I think SUPAC is just one example. The biopharmaceutics classification system was another example of the risk management that we developed before. And here the biopharmaceutics classification system went to the heart of what the rate-limiting step in the absorption process is and how product and drug attributes relate to that step.
So when we were developing this guidance, I was fortunate enough to have the lead on it. I spent a couple of years just on this guidance itself. So how did we approach this? We started by looking at what the risk factors are. Manufacturing changes, pre- or post-approval, we had already defined as minor, moderate, and major changes based on SUPAC.
There's also the issue of poor process capability. This was important in our discussion because most of the decisions we make are based on 6-12 tablets for analysis. How representative is that and how do you really rely on that decision? Plus, you have a test which could be variable itself.
So the real question came back to can we rely on in vitro dissolution tests. Especially when you have a single point specification with the sampling issues, we don't know whether that correlates in vivo or not.
So that was the heart of the BCS classification discussion that we had. And there were other factors that can lead to problems. So when we developed the BCS classification and allowed dissolution to be used only in the case of highly soluble, highly permeable, rapidly dissolving products, we were not comfortable saying you can rely on dissolution if you do not have a rapidly dissolving tablet, because clearly there are certain elements of the test method itself which are challenging, as well as unpredictability of what it means in vivo.
So the assessment of risk was what is the risk of bioinequivalence between two pharmaceutically equivalent products when in vitro dissolution test comparisons are used for regulatory decisions? That was the heart of the question with the BCS guidance that we developed. So we wanted to look at the likelihood of occurrence and severity of the consequences. So narrow therapeutic index came into that perspective and likelihood of occurrence was an evaluation of the entire database that we had and saying that when the dissolution is not rapid, we were not comfortable with making that decision.
So the regulatory decision came back to whether or not the risks are such that the project can be pursued with or without additional arrangements to mitigate that risk. And all the other requirements that you see in a biowaiver request were designed to minimize this risk.
The most valuable experience that I had with this guidance was to ask the question, is this decision acceptable to society? It took significant effort to make sure that it was.
Now, as you move towards Dennis' presentation and the SUPAC-comparability protocol, I would like you to think about PAT and quality by design and how they will evolve SUPAC, the change management system that we have. If you look at the SUPAC guidances today, we have three categories of changes, and high, medium, and low are the risk levels. To a large degree, the risk levels were determined on the basis of AAPS workshop consensus risk factors. We did extensive research at the University of Maryland that confirmed that they're fine, but that the SUPAC guidance is overly conservative. If you look at the University of Maryland data, we could have allowed many more changes, and I think it would have been fine. We did not go there because of the issue of generalization. Can we generalize the University of Maryland data, based on six model compounds, to the rest of the population out there? That was the reluctance. That was what held us back.
Now, in a "make your own SUPAC" concept, when you have a high level of process understanding, we can take the SUPAC to the next level. What I have done here is I have combined the SUPAC, high, medium, low, with GAMP-4, which is an ISPE document which has a risk assessment. Essentially it's based on failure mode/effect analysis.
The next two levels of improvement that we can bring in SUPAC is this. Today we do not talk about risk likelihood. Everything is risk. So we do not have a sophisticated way of saying what is the risk likelihood. When you bring development information and knowledge and quality by design concept into systems thinking, we can actually start talking about risk likelihood. And if the risk likelihood is low, what is high risk today in SUPAC could become low risk based on that.
Example. A manufacturing site change -- Colin mentioned this to you earlier -- a change in ZIP code is a major change if it's a controlled-release product. We will require three batches of stability data, and a biostudy if you don't have an in vitro-in vivo correlation. So just changing the ZIP code, with no other change, carries that requirement.
Now, what is the risk likelihood? Because we are treating that as a black box. We don't know what will happen. So if you have process understanding, you know what the critical variables are, you know what the risk likelihood will be in a more sophisticated way. So you start reducing the risk likelihood. If the risk likelihood is low, then what is high risk today in SUPAC could become a low risk.
But that's not enough. We can go one step further. In the previous slide, you have essentially decreased the risk classification in SUPAC. The risk classification has gone down. But now suppose you have a process understanding as well as on-line controls and so forth. Even if there is a fault condition, then you improve the probability of detecting that fault condition. So how do controls allow you to mitigate risk factors? That was the question Gerry had raised to you. So with quality by design systems thinking, right measurements, right time, even if there is a risk factor, if you increase the probability of detecting the risk factor and sort of managing that, then from a regulatory perspective, the risk goes down.
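A minimal sketch of the reclassification idea described above, assuming invented cutoffs (the 0.9 detection threshold and the one-step category drop are illustrative, not taken from any SUPAC guidance):

```python
def reporting_category(likelihood: str, detection_probability: float) -> str:
    """Illustrative only: map a change's assessed risk likelihood, together
    with the probability that on-line controls catch a fault condition,
    to a SUPAC-style reporting category. Cutoffs are hypothetical."""
    order = ["low", "medium", "high"]
    level = order.index(likelihood)
    # With high probability of detecting a fault condition, the residual
    # regulatory risk drops one category (the idea, not a real rule).
    if detection_probability >= 0.9 and level > 0:
        level -= 1
    return {"high": "prior approval supplement",
            "medium": "changes being effected (CBE-30)",
            "low": "annual report"}[order[level]]

# A change that is "high" risk as a black box can drop a category
# once fault conditions are reliably detected on-line.
print(reporting_category("high", 0.95))  # changes being effected (CBE-30)
print(reporting_category("high", 0.50))  # prior approval supplement
```

The design point is that two separate levers appear: process understanding lowers the assessed likelihood, and on-line controls raise the probability of detection; either can move a change to a less burdensome category.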
So I will wrap up here. A perspective: PAT is just one piece of the puzzle. It was a wedge to start this process. It becomes an example, but I think the entire system is this. Today I'd like to use this book by John Guaspari, a modern fable about quality: "I know it when I see it." In a black box situation, our chemists have to see the stability, have to see the bio to make a decision. So the current situation is "I know it when I see it."
Vision 2020: "I can see clearly now" essentially is the direction we want to go. Here quality and performance by design, continuous real-time monitoring, specifications based on mechanistic understanding of how formulation and process factors impact product performance, high efficiency and capacity utilization, science-based regulatory decisions focused on product and process quality. That's the shared vision that we discussed with you yesterday.
I will wrap up with this. We are planning an Arden House 2004 conference. Now, PAT essentially is a tool for process understanding. And this committee I think will really help us bring this together. How does process understanding link to risk-based regulatory assessment? But then I think process understanding is a function of design, predictability, and capability, where design is based on the intended use of that product. Predictability is based on first-principles modeling and so forth that you're bringing in. Capability is optimization and continuous improvement, including corrective action/preventive action. I think we are trying to create this equation; this is the desired state for the future.
And the triple integral is because it has to be across disciplines, clinical, chemistry, biopharm, and so forth. It has to be across time. And as G.K. says, it has to be across space.
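Put symbolically (this notation is my own; the speaker describes the relationship only in words), the idea might be written as:

```latex
% Process understanding (PU) as a function of design (D),
% predictability (P), and capability (C):
%   PU = f(D, P, C)
% The "triple integral" expresses that the desired state accumulates
% this understanding across disciplines, time, and space:
\mathrm{Desired\ state} \;=\; \iiint f(D, P, C)\,
  d(\mathrm{discipline})\, d(\mathrm{time})\, d(\mathrm{space})
```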
DR. BOEHLERT: Thank you, Ajaz.
Questions from the committee or comments? You did such an excellent job that they're speechless.
DR. GOLD: Judy, I have a comment. I wanted to thank Ajaz for an excellent presentation. It was very, very well organized and very well presented.
I do have a question that perhaps you can answer. This is an excellent vision for the future. Where are we right now in terms of what is happening? You mentioned that there are several initiatives underway with some of the major PhRMA companies. Are you free to discuss what those initiatives are in general terms?
DR. HUSSAIN: No. These are submissions. We cannot talk about that. But we have proposals being submitted for discussion and we have started moving on that already.
DR. GOLD: Are you free to indicate the type of technology that is contemplated at this point?
DR. HUSSAIN: Not really, no.
DR. GOLD: Not really, okay.
DR. BOEHLERT: Thank you.
Second on the agenda this morning is Dennis Bensley who is going to talk to us from CVM.
DR. BENSLEY: Good morning. My name is Dennis Bensley. I'm from the Center for Veterinary Medicine within the Food and Drug Administration. Yes, the FDA does regulate animal drugs and it's very similar to human drugs. So quality issues for animal drugs are just as important as they are for human drugs.
Before I begin, as you can see my title is "Changes Without Prior Approval: An FDA Perspective." And this is pretty much the same presentation I gave at the PQRI late last month. And some of you were there and have seen this talk already. You're excused, but then again, if I excuse you that will be one-third of the audience gone, so you need to stay.
"Changes without prior approval" is just another term for supplemental applications. A little bit of background before I continue.
When we get an original application for approval, one of its components is the chemistry and manufacturing controls part. The chemist or microbiologist -- the CMC reviewer -- will look at that information, review it, and find it to be acceptable, and then eventually, when the product is approved, that's what's legally binding for the sponsor: its approved processes, its approved specifications.
What's in that package can include various things: raw material controls, the formulation, manufacturing process, descriptions for both the drug product and drug substance, analytical controls, validation information on analytical controls, stability information. And once we approve this application, the sponsor is legally bound to follow those items in that application or any commitments they made in that application.
Now, supplemental applications happen after the original approval of the drug product, and manufacturing change is a constant. Chemistry and manufacturing control reviewers within FDA see manufacturing changes for the lifetime of a product, which makes this kind of unique in the pre-market arena because we see supplemental changes on a continuous basis. Our focus here is primarily on those types of supplements that require prior approval from us, because those are more burdensome, from a regulatory perspective, for the industry and also somewhat burdensome for us.
So I'll continue with my talk. A little bit of the outline of my discussion will be just a quick introduction, background which is more of the legal aspect associated with supplemental applications. Our current FDA assessment on the supplemental changes process. Current risk analysis, and Ajaz did touch on that a bit. Somewhat on the comparability protocol, which we're very excited about. Strategic goals that we intend to do for the future regarding this area, and the conclusion.
Now, the Changes Without Prior Review Working Group was established by FDA's Drug GMP Steering Committee, which was headed by Dr. Woodcock, who's the center director for CDER. The working group members, as you can see here, are a pretty big cross-representation from the three centers and various offices, and the group is co-chaired by Drs. Hussain and Sager.
What is the charge of the working group? It's to examine the current state of the supplemental change approval process, specifically those manufacturing changes requiring prior FDA approval. And it's to identify and recommend implementation of other means to reduce reporting requirements. For example, the use of risk management tools, comparability protocols, product development information, and PAT, which Dr. Hussain just talked about.
The purpose of the workshop, when we presented it, was to present a summary of FDA's current thinking and activities regarding the supplemental change approval process and to stimulate discussion and constructive feedback from the stakeholders.
Background. What are the legal requirements regarding supplemental applications? What I'm going to talk about basically started from FDAMA, the Food and Drug Administration Modernization Act of 1997. The legal requirements are that the applicant must notify FDA of each manufacturing change in accordance with section 506A of the Federal Food, Drug, and Cosmetic Act and, once finalized, with our regulations for both CDER and CVM. CBER has very similar language.
So pretty much, the applicant must report any manufacturing change that was approved in the file. If they make any changes, they must report it to us. But there are different mechanisms of reporting, and as I stated earlier the prior approval supplements are the most burdensome.
As part of the reporting of these changes, the applicant must also assess the effects of any change on the identity, strength, quality, purity, and potency of the drug as they may relate to the safety and effectiveness of the drug before distributing the product made with the change. In layman's terms, that means they can't market the product until they get approval from us. That's for prior approval supplements. And as part of this application, they must provide information to us, data, anything that convinces us that they've done enough studies on this change, that the impact of this change will not have a significant impact on the quality of the drug product and will not impact the safety and effectiveness of the drug product.
There are four legal reporting categories under FDAMA and these include: prior approval, immediate CBEs, CBE-30, and annual reports.
Prior approvals are for major changes, and major changes are those types of changes that have a substantial potential to adversely affect the identity, strength, quality, purity, or potency of a product. Products made with a major change may not be distributed until approval. We have identified a lot of these major changes through guidances. Some are identified in our proposed regulations.
The next category is considered moderate changes, and there are two types of moderate change reporting categories: immediate CBEs and CBE-30s. These obviously have a moderate potential to adversely affect the drug product. Now, for immediate CBE-type changes, the product may be distributed at the time the change is reported to the FDA.
The one that's actually more popular, at least for CVM -- what we see more often -- is the 30-day CBE. That allows the agency 30 days to determine whether the particular change being reported is a moderate or minor change or a major change. If we feel it's a major change, we notify the sponsor, we review it as a prior approval supplement, and they may not implement the change. However, if we agree that it is a moderate change, they may implement the change after 30 days.
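A toy sketch of that 30-day window (the dates and the boolean flag are illustrative; the actual determination is a review decision, not a computation):

```python
from datetime import date, timedelta

def may_implement(reported_on: date, today: date, fda_deems_major: bool) -> bool:
    """Hypothetical sketch of the CBE-30 rule: the sponsor may implement the
    change 30 days after reporting it, unless FDA has determined in the
    meantime that it is actually a major change (prior approval required)."""
    if fda_deems_major:
        return False  # reviewed as a prior approval supplement instead
    return today >= reported_on + timedelta(days=30)

print(may_implement(date(2003, 5, 1), date(2003, 5, 20), False))  # False
print(may_implement(date(2003, 5, 1), date(2003, 6, 1), False))   # True
```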
Then we have annual reports. This is where all the minor changes are being reported. These obviously have minimal potential to adversely affect the drug product. Obviously, they may be immediately implemented.
Now, did section 116 of FDAMA, which is now section 506A of the Act, meet the expectation of providing regulatory relief by lessening the reporting requirements for manufacturing changes without compromising the drug's quality, safety, or effectiveness? I believe the answer is yes, with a caveat. A little bit of background here.
Many of the types of manufacturing changes that you are going to report to the agency are identified through regulation and guidance. Section 506A of the Act and our regulations, at least the proposed regulations, identify major, moderate, and minor changes.
We have what I call changes guidances that are currently published. These are changes to approved NDAs or ANDAs. You see there's one for CBER and also one for CVM. These are fairly harmonized documents between all three of the centers. They do identify in more detail the different types of changes and different categories.
Then we have various PAC and SUPAC guidances that also identify even more types of changes under the different categories, but in addition, they also describe the type of documentation to file in support of that change to the agency.
What was the impact of FDAMA on filing? I have it for all three centers, and since I'm from CVM I have CVM first. As you can see from pre-FDAMA times -- that's up to 1997 -- about 95 percent of our manufacturing changes were reported as prior approval supplements. Post-FDAMA, 1999 to present, you can see that it dropped down to about 20 percent for prior approval with a significant increase in CBEs and annual reports.
CDER, for this three-year period from 1999 to 2001, sees the same trend for both pioneer and generic drug applications. As you can see, it's dropped fairly significantly for prior approval supplements, and there's obviously a concurrent increase in the CBEs.
CBER sees the same trend over a six-year period, going from 100 percent for PDUFA products down to, it looks like, about 25 percent for prior approval supplements.
So yes, these are significant increases and decreases in the numbers of submissions we're seeing. FDAMA has significantly reduced the reporting requirements. However, we recognize there could be additional improvement in the change reporting process.
What are our current concerns regarding the supplemental change process? Though, as I showed you earlier, the relative percentage of prior approval supplements as compared to the other reporting categories has significantly decreased, the absolute number of prior approval supplements is starting to increase, because we're talking about relative numbers. We're getting a lot more supplements based on a lot more original approvals. So we're still seeing a high number of prior approval supplements even though the relative number has decreased.
Though significantly reduced overall from pre-FDAMA times, the number of reported prior approval changes remains high for certain product types and processes. For example, sterile products, specifically those made by aseptic processing. That will be a very difficult issue to tackle because, with all the models we've used so far or are contemplating, these are considered high-risk products and will likely still remain in the prior approval category. But I think we still need to work in that area and try to reduce that burden somewhat.
We recognize that any prior approval change could affect business planning and possibly impede innovation. You have to remember, they require prior approval from us before actually implementing the change, and legally they have up to 180 days, which is six months, to make that change. Obviously, there are some variations because of PDUFA, but on the record legally, it's 180 days and sometimes it takes longer to get the approvals out. So six months is a long time to do business planning to make a change.
There's no guarantee that prior approval supplements will be approved during the first round. It's our experience -- and I assume it's very similar to the other centers -- that 40 percent of the first round prior approval supplements are found to be incomplete. The data was not sufficient. The GMPs were not adequate. There could be all kinds of reasons.
We also have a compliance dilemma if we find that a changes-being-effected or an annual report reports a change that either should be in a higher category or whose data assessing the effects of the change is inadequate. What do we do if the change has already been implemented? Obviously, the act does allow us to halt distribution of a product, but it takes a lot of resources to do that. A lot of times, we like to work with the company to get this resolved, but it is a dilemma, and the companies do realize that this is a dilemma they face and need to address when they make these changes. Some companies are actually very reluctant to make CBE changes for this reason.
What are potential solutions?
Use of comparability protocols. And I'll discuss that a little bit more later, and I think that could address many of the issues I just finished talking about.
Drafting and publishing more PAC, SUPAC guidances.
Identifying potential risk management tools.
And encouraging the use of product development information and process control improvements, for example PAT. For product development information, we'd like to see the development report, because basically the companies know how their product works and what doesn't work. A lot of times we don't see that development work; it's not really a requirement to submit it to us as an agency. But if we see that information and they can convince us that this product is rugged and that this type of change doesn't affect it, then for future changes they can propose alternatives to a prior approval supplement or a CBE supplement.
Current risk analysis. Ajaz covered this a bit, and it's a very simple model for supplemental changes. We have three potentials for adversely affecting a drug: a substantial, a moderate, and a minimal potential. The corresponding levels of risk are high, moderate, and low. If it's a high risk, yes, you need a prior approval supplement. If it's a moderate risk, you need a CBE supplement. And if it's a minimal risk, it's submitted in the annual report.
Now, how do we determine whether a change is major or requires prior approval? When I originally wrote this up I was thinking -- because I am a team leader in CVM and I deal with these issues on a daily basis. Companies call me up, I get 30-day CBEs and I have to make that determination whether it's major or minor. These are the types of questions I would go through, and it's pretty much I think what the agency does go through, too.
The first question that would come up, what is the likely impact of the change on the identity, strength, quality, purity, and/or potency of the drug product? And obviously, we have some changes that are actually identified in the act that they must submit as major changes, but if we believe that it has a potential adverse effect then it's likely a major change. So it's important again, in the original application, to build up that knowledge base so that we know that this is not going to have an effect.
Will additional clinical or non-CMC like tox studies be required? If yes, then it's likely a major change.
Is the reported change either not well described, too complex, or is the potential impact on the drug's safety or effectiveness not certain? If yes, then it's likely a major change. And I see this a lot. A lot of companies say, okay, we want to make this change, but there's no justification, no rationale. It's not described very clearly. For example, with a 30-day CBE, I only have 30 days to make the assessment, and I have many other applications to go through. I don't have time to actually do the review and determine whether it's going to be a moderate or a major change, so I'll be very conservative and make that a major change.
If applicable, what is the current GMP status? If unacceptable, then it's likely a major change.
So what's the basic question that we use when we address a risk assessment, when a risk assessment is performed regarding a CMC change? Basically it comes right out of the act. It is, what is the potential -- or in other words, what is the risk -- for the change to adversely affect the drug product? The potential risk for a CMC change increases when the knowledge regarding the potential impact of the change decreases.
What is the purpose of a prior approval supplement for specific changes? Well, these are changes that we identified, those having a substantial potential to adversely affect the drug. This is just based on our history and our experiences in reviewing these drug applications. We have these listed in the regulations. We have these listed in the guidance documents. And it allows the FDA time to review and concur or not concur with the proposed major change and its assessment prior to product distribution.
FDA tends to be conservative in regard to accepting levels of risk. If we are not certain about the potential risks, then a higher filing category will likely be required. That goes, again, back to building up that knowledge base for original approvals. PAT will nicely address that also.
FDA employees use risk analysis daily. I think everyone here uses risk analysis daily. For example, deciding whether a change is major or moderate, that's a thirty-day CBE assessment. In CVM, that's a team leader's job. That's what I do. Deciding whether the assessment of the change is satisfactory or not is part of the review process. Deciding whether a GMP inspection is required or not. And you can see CBER has an SOP regarding that.
However, risk assessments for CMC changes are neither formalized nor uniformly structured throughout FDA. It can either be very subjective individually as, for example, myself as team leader, I make a decision. It may not necessarily be what the other team leaders agree to, or as a group. Maybe CDER makes a decision that may not necessarily be what the other centers agree to.
Possible ways to reduce the risk potential include the use of comparability protocols. The premise is that acceptance of a proposed assessment of an anticipated change will likely lessen the risk of implementing the change, which will lead to less burdensome reporting categories.
An applicant may establish their own filing criteria based on developmental information in original or supplemental applications. The premise is that an increase in scientific understanding or knowledge of a change's impact may lessen the risk of implementing the change and could lead to a less burdensome reporting category.
Incorporating significant process control improvements. For example, PAT. Improvement in process controls may lessen risk for producing poor products and could lead to less burdensome reporting categories.
Can other risk analysis models be used to identify the level of risk for implementing CMC changes? For example, can we identify through risk assessment low-risk drugs, dosage forms, processes, et cetera, and significantly reduce the number of changes requiring prior approval before implementation?
Now, on to comparability protocols. What is a comparability protocol? A comparability protocol is a well-defined, detailed, written plan that prospectively specifies the test and studies that will be performed, analytical procedures that will be used, and acceptance criteria that will be achieved to assess the effects of specific changes for specific products.
A draft guidance for CPs has been published recently, for what I call the small molecules, and the public comment ends by the end of next month. A CP is described in the proposed regulations, and actually in the current regulations too, and FDA believes that additional prior approval changes can be reported in CBEs or annual reports through the use of a comparability protocol.
What are the uses and benefits of a comparability protocol? If you recall, a comparability protocol is actually submitted to us as either a supplemental application -- so it is a prior approval, so we do have a prospective analysis of that -- or it can also be submitted as part of an original application.
What are the uses and benefits? It can allow for a reduced reporting category of CMC changes covered by the approved CP. The CP can describe single or multiple related CMC changes, including those that may occur sequentially over a period of time.
Earlier implementation of manufacturing changes. Likely reduction in incomplete deficiency letters issued by FDA, more first-round approvals, because the means of assessing the change has been approved in the CP. This gets back to my earlier slide when I said 40 percent of the prior approval supplements are found to be deficient. If we had a prospective analysis of those types of changes, and we agreed to the type of testing they will do, then likely that would be reduced significantly and we could get more approvals out.
They allow a sponsor to design their own change filing and documentation criteria based on experience with the drug product or similar drug products, for example, developmental studies. Ajaz coined the term for this, the "make your own SUPAC" concept.
It allows sponsors to continually improve manufacturing processes without necessarily requiring prior FDA approval, potential for PAT implementation. I can see PAT being introduced as part of a comparability protocol.
Reduces the potential risk for the change to adversely affect the drug.
And it's a potential win-win situation for the public, industry, and FDA. You get timely products. The quality in many cases actually improves if you use PAT, and it actually reduces some of the burden of reviewing from our end.
Unfortunately, for CPs, there's limited CDER experience, and absolutely no experience for CVM, so I'm the perfect person to talk about this subject. CBER has most of the experience because I believe the comparability protocol is a concept that was devised by them. Currently they have more than 100 comparability protocols that have been successfully used for CMC changes across all product classes since 1997, and submission of developmental information in CPs has convinced CBER to accept reduced reporting categories for some CMC changes.
This is very good news, I thought, because CBER tends to have more of the complex products, the biologics and so forth, as compared to CDER and CVM. So if they're able to do this then I'm certain that CDER and CVM can just as easily do it.
What are our goals? We're going to publish another draft comparability protocol guidance for large molecules, primarily the protein molecules. Finalize both comparability protocol guidances, continue to amend or introduce new PAC/SUPAC guidances, and hopefully publish the final regulations for all three centers. Conduct studies. This is part of our working groups' jobs. Conduct studies evaluating existing data on prior approval changes and identify opportunities for further reduction of reporting categories. That includes determining the number and types of prior approval supplements submitted to each center over a designated time period. To a small degree CVM has already done some of these studies, and we shared that with CDER and CBER. Identify other potential risk models or other means for reducing reporting categories, and consider additional ideas resulting from discussion and feedback received during workshops.
And these were the following discussion points that we had during the workshop. Scientific risk-based approaches for identifying low-risk manufacturing changes, the comparability protocols, and effective use of developmental data and other information to justify less burdensome filing requirements.
And that's it. Thank you.
DR. BOEHLERT: Thank you. Questions, comments? Tom?
DR. LAYLOFF: Yes. I had one question on it. This is a harmonization activity on CP, and is CBER involved in harmonization also?
DR. BENSLEY: Yes.
DR. LAYLOFF: So you're going to have a single regulation for CVM, CDER and CBER as to how --
DR. BENSLEY: We're going to have the same guidance, yes. All three centers are on the same guidance, yes.
DR. LAYLOFF: How many different guidances are there in this harmonization process?
DR. BENSLEY: In the comparability protocol? In the other ones? Well, we have what I call the changes guidances. CDER has their own. We have our own because our products are a little bit different from theirs, so we sort of have to adjust it differently, but the language is very similar. CBER has their own. SUPAC/PAC documents. I don't believe CBER has any of those, but CVM is harmonizing with CDER on a number of those. It's mostly CDER's.
DR. LAYLOFF: So the agency is moving to harmonize.
DR. BENSLEY: Yes.
DR. BOEHLERT: Any other questions or comments from the committee members? Efraim?
DR. SHEK: I have a question with regard to the statistics you have shown and the change, I believe, moving from preapproval supplements to CBEs. And I believe those changes are for the better to improve the product or the process. I wonder whether the total request for changes has increased as well because what you have shown is the relative. Are more companies submitting more requests for changes than they used to do before?
DR. BENSLEY: Yes, it's a little more difficult to define because we base it on applications. Our metrics are based on the applications. There could be multiple changes within an application, or annual reports could have dozens and dozens of changes reported in them. So it's kind of difficult to make an assessment. But from personal experience, I think there are more changes being reported in CBEs and definitely a lot more reported in annual reports. So we're seeing fewer and fewer prior approval supplements, in general.
DR. HUSSAIN: We looked at some of the statistics on the CDER side in terms of the number of supplements coming in. Since the number of applications being approved is increasing, I think the number of supplements is on the increase also. At the last count, when we did that for the Science Board, I think we were over 4,000 supplements a year.
DR. GOLD: Dennis, a question. On the length of time that it takes on average to approve a prior approval supplement, has there been any change in that time period during these numbers of years?
DR. BENSLEY: I think with CDER they can respond to that from, I guess, the PDUFA funding. They have a 120-day cycle for prior approvals? I don't know.
DR. HUSSAIN: 180.
DR. BENSLEY: It's 180 days? Okay.
DR. GOLD: That's the allowed time. What I'm asking for is, do you have any statistics on the actual time for approvals?
DR. BENSLEY: I can only speak for CVM, and we're seeing a reduced time in reporting now.
DR. GOLD: Let me just say, that would be a very interesting number for perhaps this committee. Certainly it would be a very interesting number for us to look at, I think.
DR. BOEHLERT: Pat?
DR. DeLUCA: Yes, Pat DeLuca. Your slide 52 mentioned there were 100 comparability protocols that CBER had successfully processed.
DR. BENSLEY: Yes.
DR. DeLUCA: What was the number that was submitted? Do you have an idea?
DR. BENSLEY: No, they didn't share that with me, so I don't know. I would assume it would be over 100.
DR. GOLD: I have another comment. I've heard from various practitioners in the drug product area that the preparation and submission of comparability protocols is not a very attractive opportunity because they're really not able to predict well ahead of time the type of change they may want to make. And that may largely be the reason why you have reported no comparability protocols in the CDER area.
DR. BENSLEY: I think it's a misunderstanding too, from industry. With our industry, they just didn't read it closely enough. They just thought it was another protocol that had to be submitted as a prior approval supplement. They didn't understand what they could do with that protocol. So basically, if they have a planned change in the future and they know about it, or they have changes that are constant, maybe across product lines, that they know are going to happen, then those are ideal cases to submit as a comparability protocol.
A lot of the companies, after the PQRI, especially for our stakeholders, they'll say, we're going to be submitting something to you now, now that we understand it. It's just a matter of getting the word out there and having them understand it.
DR. GOLD: Dennis, I hope you're correct.
DR. BOEHLERT: Any other comments or questions? G.K.?
DR. RAJU: Dennis, to what extent do the phase IV data from the world out there help you decide your risk as you go forward deciding when something should be prior approval? It seems like that's real data around safety and efficacy. Does that come into your database somewhere?
DR. BENSLEY: Yes. I mean, I only can speak for CVM. We don't have phase IV. We have clinical studies and it's based on the marketed drug product. So we don't have the same phases as CDER has. But yes, we consider the safety and effectiveness, and we do consult the appropriate people within our center.
DR. HUSSAIN: Well, I think in terms of phase IV commitment, these are predominantly clinical studies, extra studies, different populations and so forth. First of all, I don't think we have truly gone out to say what value that does add. We haven't done that analysis. And so the answer is probably not much. The clinical studies keep coming in, and I had an opportunity just recently to go through one application, all the phase IV commitment. I did not see any connection on that particular application back to the CMC process. My guess is, not much.
DR. BENSLEY: And it's even less for us.
DR. RAJU: Do you have the recalls and FIR kind of data?
DR. HUSSAIN: That's not phase IV commitment.
Let me share with you. I think this is an important point, and as part of the systems thinking, at some point we want to bring in the CAPA concept, corrective action/preventive action. What is happening today is that these reports come in to different parts of the agency and so forth. So, first of all, we don't have those connected well enough.
The second is that some of the categories in which we collect this information are not truly ideal. So David Horowitz and the Office of Compliance actually are moving toward a better way of managing that. I think that would really help.
I have sort of been struggling with this because I chair a committee called the Therapeutic Inequivalence Action Coordinating Committee, assessing all the reports that come in on therapeutic inequivalence of generic drugs, and trying to close the loop on that as part of systems thinking. We struggle a lot because the quality of information available in some of these reports does not really allow us to get to the root cause and so forth. So there is an element of improvement needed there, and what the Office of Compliance is doing with their surveillance and their databases I think will be a step in the right direction.
At some point I think we really need to go back and look at how are we capturing this, what are the categories, and so forth. I think we'll have to improve that process also.
DR. BOEHLERT: Any other questions or comments? If not, thanks, Dennis. We're now scheduled for a break and we'll reconvene promptly at 10:15.
DR. BOEHLERT: We'll get started. Our next speaker is Gregg Claycamp, who's going to talk on risk analysis.
DR. CLAYCAMP: Thank you, and good morning.
I came to the FDA only two years ago from academia, and so I am offering the GMP initiative a more generic and theoretical approach to how risk analysis is done in a variety of fields and how it might be brought to bear on this problem.
I'm also at CVM, and one of the opportunities at CVM is that we have an animal drug side, but we also track human health risk through the fact that we eat food animals, so we're looking at a broad range of risk-based issues.
This talk will start with some premises and questions. We'll spend a little time on basic risk analysis, and that is a very broad overview. It's not going to be a probability calculus exercise or anything like that. At the same time, I hope I don't talk down to anyone in trying to capture a wide range of backgrounds here.
The talk will then go on to some possible ways of bringing risk assessment into this initiative, and risk management. Risk ranking is a possible way of doing that, and we'll talk a little bit about that, then conclude with some other ideas on pilot scales. And, of course, these ideas are only discussion at this point. There isn't a guidance that I'm either presenting or promoting at this point.
The way that I've looked at this problem, and heard it from a variety of work groups that I've had an opportunity to visit and work with, is that in the GMP process, from an inspectional point of view, there's a variety of risks. Those might be linked to actual items in the GMPs or not, but they're kind of all over the map in terms of the actual risk to public health. And on the other side there is the risk to the patient and, more generally speaking, the risk in public health terms. These two factors are really out of alignment in the current conception of this issue.
What we would like to do is to line up the actual inspection part of GMP and the concepts in GMP risk assessment with the actual patient risk and/or public health risk in a broader sense. That's certainly not an easy task to do. Like many have said, it's a process of getting together and deciding who's going to make first steps at this very difficult and tricky area to work in.
Somewhere back in history we can assume that each one of the GMPs had a risk basis for it in the first place, but things change over time and we need to think about how to reassess those risks and realign the GMP risk with the actual public health risk.
So the question, as I see it, is, can risk management theory, tools, or practice be employed in this process? And secondly, there's a broader need: how can we share a common language about risk, risk management, and ultimately science-based decision making, so that we can develop a high quality risk management model in this area?
What theories and tools and lessons have been learned in risk analysis that can help address these questions? Well, there are off-the-shelf models and tools that might be used, for example, and there are other questions that we might ask about which risk management processes can foster the changes needed in both the regulatory and industrial arenas.
Well, starting with some basics, as I taught for quite a while in academia, the first question I brought to a risk course on the first day of every semester was, how many of you out there do risk assessment? It's surprising that even in a graduate school of public health you don't get very many hands going up. In fact, risk as a concept is extremely broadly based, and it's something that everybody does all of the time. So in that sense it can be something that's extremely intuitive. That is to say, you do it without any conscious forethought. And at the same time, most of us can think of a risk analysis in the government or in industry that is extremely complex and sophisticated and has many experts brought in to work on the problem.
Risk is defined in many different, yet similar, ways as you go from field to field. It's almost a hobby of mine to look at the many different ways that risk is defined and try to tease out of domain-specific definitions the constant features of risk. And I think for this exercise we can take a very fundamental approach and say that risk is an exposure to a chance of loss, and moreover that's losing something we value. So it doesn't mean that there's necessarily a loss of money or health or life, but it could be even something that's more aesthetically defined.
When we get closer to the formalism of risk, which I will not go into really any formalism today, risk is defined as some combination of hazard and exposure. In other words, you can't really get risk from a given hazard unless you're exposed to it. There's no way the hazards of vehicles, when you're thinking about crossing the street, give you risk until you step into the street. Then you're exposed to it and you have a significant risk of an adverse effect.
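The street-crossing point, that a hazard yields risk only when exposure occurs, can be sketched numerically. This is not from the talk; the probabilities below are invented purely for illustration:

```python
# Illustrative sketch: risk as a combination of hazard and exposure.
# All probability values here are hypothetical.

def risk(p_exposure: float, p_harm_given_exposure: float) -> float:
    """Probability of an adverse outcome = P(exposure) * P(harm | exposure)."""
    return p_exposure * p_harm_given_exposure

# Standing on the curb: the vehicles are hazardous, but exposure is zero.
print(risk(0.0, 0.3))   # 0.0 -- no exposure, no risk
# Stepping into the street: exposure occurs, so the hazard yields risk.
print(risk(1.0, 0.3))   # 0.3
```

However the harm probability is estimated, the structure makes the speaker's point: multiplying by zero exposure gives zero risk regardless of how severe the hazard is.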
This simple definition assumes we're looking at a single consequence or a class of consequences. One of the things in this area is that we'll see there's a wide range of consequences, all the way from a possible death as an adverse event to an effect on quality, which is more difficult to measure by itself. In other words, one of my colleagues on a committee said, well, what if there's a gel capsule and it has spots on it that have no effect on safety or efficacy; what kind of risk is that? So there is a huge range, and we must assume each time we approach a more specific risk analysis that we're considering a given consequence. We'll come back to that later.
Contemporary risk analysis has models in just about every field, of any science-based endeavors for sure, and most other business fields. I like to think of it as including four major activities.
Hazard identification, which is also called problem identification by some fields. It's actually looking at what could be a problem out there and just asking that simple question.
Risk assessment is the more formalized process of assessing the risk, given exposure to that hazard.
And risk management is the process when you start to take that information you gained from the risk assessment and use it to support decisions you have to make as a manager, as a risk manager.
There's also a fourth activity that's very important, especially in regulatory risk assessment, and that is risk communication. That's the process of sharing information among all of these phases of risk analysis and engaging stakeholder communities in the discussion and trying to put the sometimes sophisticated risk analyses into everyday terms.
Risk assessment usually precedes risk management. Risk assessment, as I'm using it, is not a single process, but as a National Academy of Sciences committee said in 1994, it's a systematic approach to organizing and analyzing scientific knowledge and information. That's a fairly robust definition. If we spent a lot of time asking whose risk model is the exact fit for this exercise, we could spend hours and hours looking at the literature and the various paradigms for this process. But if we bear in mind that it's the process of organizing scientific information, it becomes a more tractable task.
So these paradigms that are there for risk analysis in various fields are really geared for the execution of the risk assessment, but there are fundamental principles shared in the process of risk assessment in a more broad basis. For example, risk assessment generally asks, what can go wrong? What's the likelihood it would go wrong? And there we get likelihood. We're getting closer to the probability concepts, the chances. And what are the consequences should that go wrong?
On the other hand, you know you've entered the realm of risk management when you start to ask, well, what can I do and what can be done with this problem? What are the options available, given that there are many different ways to address a particular problem? And what are the risk tradeoffs in terms of risks, benefits, and costs? So the managers are stuck with the task of figuring out, well, if I go fix risk A, what does that mean for risk B? It's certainly a big job on its own.
What are the impacts of current risk management decisions on future options? So the risk manager also has to be looking forward to the effects of their decisions on the risks and on generating new risks.
Well, as presently practiced, risk analysis gets even further complicated, and that's that we have a democratic society for how we deal with our public health regulations and risks. We might think of this risk analysis in a democracy as risk assessment, as providing the facts. It's often thought of as the "ivory tower" part of the risk analysis group, that risk assessment is the objective place. Well, we could argue at length how objective science is in general, but take it, for simplifying argument at the present, that those are the facts. And risk assessment then idealistically would line up the facts from worst to best in terms of the risk.
Well, risk management decisions are managing risks, and those decisions are value-laden decisions. There are all sorts of parties to a risk management decision, from the public to the agency and to industry, et cetera. So we bring values into the picture and we bring costs and all those other factors that may not deal directly with the actual estimate of health risk, and we end up realigning, re-prioritizing. This is in a global sense, as agencies look at their risks and try to manage them.
The questions I asked before, what can go wrong and what are the consequences, fall within the risk paradigm here, which in some of the health risk assessment literature would be broken into release assessment, exposure assessment, et cetera. In the GMP problem, the starting place might be to just say "a GMP failure" as a more broadly based term that would fit this particular problem.
For the possible stages of risk assessment for this initiative, hazard identification is going on all the time in the review process and the inspection process, and I'm sure in planning. What can go wrong? What are the events that can bring potential risk to the public and to patients? This is identifying also the hazardous agents and those in more traditional health risk assessment are thought of as the chemical, biological, or physical agents themselves, but in our terminology here it may be more useful to think of an event itself.
Given that the event occurs, is the consequence catastrophic, is it mildly annoying? In trying to identify the problems out there these are the types of questions that you would ask. How likely are the events to occur? For example, what essentially happens in practice is that risk managers are looking at potential hazards to send to the risk assessment team. You need to have some rough idea, generally from experts who are familiar with the area, who would say, this is really a big event, the big problem, or it's a small one, and they can get a crude estimate of risk for prioritization purposes.
Exposure assessment in the risk assessment process is conveniently broken into a couple of compartments, and not all people in risk analysis do that, but conceptually there are at least two processes going on. One is there's a release. You can think of that as the source term, is that hazards and hazardous agents are being released, but again recall that risk only happens when you have hazard and exposure.
So we might think of breaking apart the process and saying, well, how much is being released out here, and then a separate question is getting to the consumer end, how much are they exposed to, how much actually makes it out of the drug manufacturing facility, through the distributor to the retail counter, et cetera, or to the pharmacy. It is very helpful to think about exposure assessment in pieces simply because it's a huge undertaking to go from something that may happen on a process line all the way to what's in my medicine chest at home. There's a whole lot of events and physics and human factors and so forth to try to tally in between.
So, for example, the release question could be: does a non-sterile event, whatever that may be, involve one vial or 10,000 vials? That's a release question. How many of those happen? If the hazardous event occurs, exposure assessment asks, what are the pathways that expose humans to the hazard? That is a huge undertaking just to consider the ways that people can be exposed. Then the extent of exposure gets at, given the event, how many people are potentially in harm's way.
So in the context of GMP assessments, how frequent are the identified GMP events, and what is the boundary of release? Do we call it at the process line, the plant, the warehouse, the distributor? And release rates or fault rates could be obtained in a variety of ways in order to do this release assessment, including fault trees and empirically based assessments. You can have historical data and expert analyses. For example, failure modes and effects analysis, as this is often written up in manufacturing areas, is one way to get at those data for release and exposure.
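The failure modes and effects analysis mentioned here is commonly summarized with a risk priority number, RPN = severity x occurrence x detection, each rated on a 1-10 scale. A minimal sketch, where the failure modes and all ratings are hypothetical examples, not data from the talk:

```python
# Minimal FMEA sketch: rank hypothetical failure modes by risk priority
# number (RPN = severity * occurrence * detection), each factor rated 1-10.
failure_modes = [
    # (description,            severity, occurrence, detection)
    ("non-sterile fill event",        9,          3,         4),
    ("blend non-uniformity",          6,          4,         3),
    ("carton mislabeling",            3,          3,         2),
]

ranked = sorted(
    ((desc, s * o * d) for desc, s, o, d in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)
for desc, rpn in ranked:
    print(f"{desc}: RPN = {rpn}")
```

The ranking, not the absolute numbers, is what feeds the release/exposure assessment: it tells the risk manager which hypothetical failure mode deserves attention first.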
Consequence assessment. Given an exposure to a hazardous event or agent, what's the likelihood of harm under a predefined endpoint? And this is really a process in consequence assessment that is done in drug approvals all the time and drug research, and that's that you ask, what is the effect level given a dose. You can take it as that isolated of a question. So endpoint examples could run from death all the way to inspection-based criteria. It doesn't have to be a human endpoint. We could ask, if we have so many events, what's the likelihood it will generate an administrative action by the agency? That's a real practical point for modeling in terms of business needs.
So classically speaking, consequence assessment in the health arena looks like a dose response curve and just as, again, an example off the top of my head was to take a quantity of contamination, say non-sterility, but it could be metered in terms of bacteria counts per vial, and what's the proportion of exposed persons who would become ill. That is classical dose-response. It may have quantitative measures such as the dose that causes the effect in 50 percent of the population.
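The classical dose-response relationship described here can be sketched with a logistic curve. The model and its parameters (an ID50 of 100 organisms per vial, a slope of 1.5) are invented for illustration only:

```python
# Sketch of a logistic dose-response curve. id50 is the dose (e.g.,
# bacteria per vial) causing illness in 50% of the exposed population.
# All parameter values are hypothetical.

def p_illness(dose: float, id50: float = 100.0, slope: float = 1.5) -> float:
    """Proportion of exposed persons who become ill at a given dose."""
    if dose <= 0:
        return 0.0
    return 1.0 / (1.0 + (id50 / dose) ** slope)

print(p_illness(100.0))   # 0.5 at the ID50, by construction
print(p_illness(10.0))    # low dose: small proportion affected
print(p_illness(1000.0))  # high dose: most of the exposed affected
```

The quantitative measure the speaker mentions, the dose causing the effect in 50 percent of the population, is exactly the id50 parameter here.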
What we'll see in this area is that most of the hazards identified in a GMP framework are going to defy quantitative dose-response analyses for the risk analyst, and you'll see more of a low, medium, high type of qualitative/quantitative assessment as we've seen in a couple of presentations. This is saying that in our minds there's some kind of relationship going on that if you have greater increasing units of whatever dose metric it is, you would expect greater effect. But we'll probably seldom see a quantitative relationship.
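The low/medium/high qualitative assessment described here is often organized as a risk matrix: likelihood and severity are each rated on an ordinal scale and the combination maps to an overall category. A sketch, with an invented scoring scheme and invented category boundaries:

```python
# Sketch of a qualitative risk matrix. The numeric scores and the
# category cutoffs are arbitrary illustration, not an FDA scheme.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_category(likelihood: str, severity: str) -> str:
    score = LEVELS[likelihood] * LEVELS[severity]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(risk_category("high", "high"))   # high
print(risk_category("low", "high"))    # medium
print(risk_category("low", "medium"))  # low
```

This captures the speaker's point: the ordering (more likelihood or more severity means more risk) is defensible even when no quantitative dose-response relationship is available.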
Finally, the last step of the risk assessment portion is to bring together the hazard, the extent of the exposures, the consequences, and estimate the risk. As the contemporary practice of risk analysis has evolved, it has focused more and more on the importance of thoroughly describing the limitations in the risk assessment and thoroughly describing the uncertainties in the estimate of risk.
As one colleague in the risk analysis field says when health risk assessors argue about, say, the exact cancer risk from an environmental release, he always characterizes it as, why should we worry about where that point is when the uncertainty is like this? If you don't know what your uncertainty is, you really don't know much about the risk estimate.
In risk analysis, the field prefers to think in terms of uncertainty, which is a well-formalized mathematical and statistical concept, but we like to add another dimension to it, and that's to break uncertainty into pieces: the part of uncertainty that's created by a lack of knowledge and the part that's just regular variability.
So, for example, we have a normal variability among a group of individuals when you try to characterize, say, heights and weights in a room. They vary, and you can't get rid of that variation by learning more about everybody in this room. There would be that variation.
However, if I were using this room as a sample of height and weight in the United States, I would have quite a bit of uncertainty about that variability. Is this measure of variability adequate to describe the population of the U.S.? So there, that part of the uncertainty is due to my lack of knowledge about the variability in the height and weight in that case.
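The room example can be sketched in code: variability is the spread among individuals and does not shrink with more data, while the uncertainty about a population mean estimated from a small sample does shrink as the sample grows. The population parameters below are invented:

```python
import random
import statistics

random.seed(0)

# Hypothetical population of adult heights (cm); parameters are invented.
population = [random.gauss(170, 10) for _ in range(100_000)]

# Variability: the spread among individuals. It is a property of the
# population itself and cannot be reduced by learning more about it.
variability = statistics.stdev(population)

# Uncertainty: how well a sample of size n (the "room") pins down the
# population mean. This is the standard error, and it shrinks with n.
def mean_uncertainty(n: int) -> float:
    return variability / n ** 0.5

print(round(variability, 1))            # spread among individuals, ~10
print(round(mean_uncertainty(30), 2))   # a room of 30 people
print(round(mean_uncertainty(3000), 2)) # a much larger sample
```

The last two lines make the speaker's distinction concrete: more knowledge (a bigger sample) reduces the uncertainty term, but the individual-to-individual variability stays put.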
So in risk assessments we'll spend a good deal of time sorting that out and talking about what could be reduced. Dennis said that the potential risk increases as the knowledge decreases, and that's another way of saying that we like to think that as knowledge increases, uncertainty decreases.
So that's some very quick concepts about risk assessment. We're about halfway through a semester course in brief form.
DR. CLAYCAMP: Now on to trying to put a more domain-specific spin on these concepts.
First of all, regarding the GMP risk management problem, as I've been referring to it, there's a diverse collection of hazards that have been identified. I know there's guidance from Canada listing the types of GMP processes, whether they be high risk, medium risk, low risk, and the same types of activities are going on in the GMP initiative here.
I know I've gathered from a few lists ideas such as a risk factor being lyophilization, a risk factor being dry mixing or blending, one called cartoning and packaging, and so forth. Well, the first reaction a risk analyst has in seeing such lists is that the endpoints are all over the map. You could envision, for each given risk factor, that maybe there's a risk of lethality, or maybe no risk of lethality is imaginable if a piece of the carton is wrong or something that affects quality. So the question that comes to mind is, how do you sort those out and try to put them all on the same page in terms of the actual human health risk, or actually quality risks?
So it's a wide-ranging risk that comes out of this, and there are wide-ranging consequences, all the way from death to just worry about the product, which could have an impact on compliance if someone is just worried about the quality of the product.
The quantitative risk analysis on a hazard-by-hazard basis in my view is too vast an undertaking. Not that I wouldn't like to see full employment for risk analysts for the next 50 years, but it's extremely vast, and I'll try to give you some feeling for that problem.
Ranking risks, or re-linking the worst GMP risks with the health risks, might be a more tractable approach. And ultimately, in this list of factors in GMP areas, we're trying to objectively rank apples and oranges among potatoes and beans. So it goes beyond the usual mixed problem of the apples and oranges.
And also there are the questions we constantly consider, whether you're in the private sector or in the government, and that's: how do you balance the cost of a high quality analysis with the need for reducing uncertainty? So there's a trade-off that goes on all the time. At the qualitative end of the scale, you might have an expert grab the back of an envelope, make a couple of quick calculations, and give you a risk estimate. Well, is that good enough? That comes with a very high degree of uncertainty, and you end up facing these kinds of questions. As I've just mentioned, I think it's too vast an undertaking, fault by fault, to go through this.
So let's just strengthen that idea a little bit and think about something simple in our everyday life, and in the urge of a risk analyst to take something apart into its smallest pieces, what does that look like? Well, this took me a couple of minutes to put together, and it's only a beginning, really. If your light bulb doesn't light on your desk at home, how come? You can go backwards and say, well, there was no electricity, or the glass is broken on the bulb, or the filament is broken, or there's a vacuum leak. You can go backward from no electricity and say, well, it could have happened because the power plant failed, or the power line failed, which goes backwards to, well, maybe a tree fell on it, et cetera. This is a small piece of one event in the mind of a risk analyst.
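The light-bulb fault tree can be sketched as a few lines of code. The event names and the OR-only gate structure here are illustrative assumptions, not the speaker's actual model; the point is only how a top-level failure decomposes into contributing events:

```python
# A minimal fault-tree sketch for "desk lamp does not light".
# All event names and the OR-only structure are illustrative assumptions.

def or_gate(*causes):
    # An intermediate or top event occurs if ANY contributing cause occurred.
    return any(causes)

def lamp_fails(power_plant_failed, power_line_failed, tree_fell_on_line,
               glass_broken, filament_broken, vacuum_leak):
    # "No electricity" branch: the tree-on-the-line event is folded in here.
    no_electricity = or_gate(power_plant_failed,
                             power_line_failed,
                             tree_fell_on_line)
    # "Bulb failure" branch.
    bulb_broken = or_gate(glass_broken, filament_broken, vacuum_leak)
    return or_gate(no_electricity, bulb_broken)

# One scenario: only the filament is broken, so the lamp still fails.
print(lamp_fails(False, False, False, False, True, False))  # True
```

Even this toy tree shows why the approach explodes for an industrial process: every leaf event can itself be decomposed further.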
The fact is, if you get to an industrial process, it just magnifies over and over. When I first came to the GMP work groups, the vision, being a recently recovering academic, was, wow, we could take one risk factor per Ph.D. student and they could break into this for the next 100 Ph.D. students.
So how do we get this problem that is so potentially large and get it into the scope of a manageable exercise, manageable in terms of producing the desired effect as well?
Well, decision models -- I just stuck one in -- are also as complex as the fault approach that I showed. The potential solution is that there are simpler, multi-factor approaches to risk assessment and management that already exist. They have been in practice for literally decades, and there are even some software tools that help you do this. The overarching point here, from the risk analysis side, is that we need to look at the wide range of methods and appropriately scale the approach to the question, to the quality of the data, to the nature of the decision we need to make, and to our understanding of the whole process.
So as a starting point, it's helpful to state the assumption, and that's that, historically, we think that if you increase compliance, the overall health risk goes down. We also think increased compliance with GMP leads to an increase in quality. Otherwise, why would we have the process in the first place?
Given the assumption, can we model compliance risk as a surrogate of health risk? That is a pretty broad starting assumption, but nevertheless, for this purpose we can move on into a little more detail with it.
In GMP failures, considering those to be the hazards, what can go wrong? You could organize this into a top level to get a multi-factor risk ranking. You could organize it in terms of health, compliance, resources, sociopolitical, and there should be an ellipsis there because it could go on to other factors. In that brief list, out of a long list of risk factors, the mixed ones, sterility and cartoning and packaging and so forth, we would take them one at a time and say, what does this mean in terms of health, and try to rank up a list of risk factors.
What does it mean in terms of my compliance risk? What are the odds that having a fault in cartoning will lead to an OAI or VAI on the next inspection? There could be resources needs, et cetera.
Then there's a second level of organization that includes looking at what exactly is the detail in the hazard, or the GMP failure, in terms of is it a sterility problem, dose, toxicity, et cetera. And there can even be finer details that we need not go into any further at this point.
So we'd start with the assumption, state the questions to be answered, sort under those questions, re-sort, et cetera. What this might look like in a multi-factor approach is basically just lining these up. In risk analysis, sometimes I feel like we're explaining common sense, and when I get that skepticism of, well, it looks kind of fancy, just think of trying to decide, if you have restaurants A, B, C, and D, which one to go to. If you're just going by yourself you might say, well, gee, A has the highest price and I don't want to spend the money, so maybe I'll go to D. But B has the best food, and so forth. Or at C you have to wear a coat and tie and I don't want to do that. But you're taking them one factor at a time in your mind. Then you have some model for combining those decision variables into your overall decision.
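The restaurant example amounts to a weighted multi-factor score. A back-of-the-envelope sketch, in which every score and weight is invented purely to illustrate the combining model:

```python
# Multi-factor decision sketch: score each option on a few factors and
# combine with weights. All numbers are made-up illustrations.

restaurants = {
    # factor scores on a 1 (bad) .. 5 (good) scale
    "A": {"price": 1, "food": 4, "dress_code": 4},   # expensive
    "B": {"price": 3, "food": 5, "dress_code": 3},   # best food
    "C": {"price": 4, "food": 3, "dress_code": 1},   # coat and tie required
    "D": {"price": 5, "food": 2, "dress_code": 5},   # cheapest
}

# How much each factor matters to this particular decision maker.
weights = {"price": 0.5, "food": 0.3, "dress_code": 0.2}

def overall(scores):
    # Weighted sum: one simple model for combining decision variables.
    return sum(weights[factor] * s for factor, s in scores.items())

ranking = sorted(restaurants, key=lambda r: overall(restaurants[r]), reverse=True)
print(ranking)  # ['D', 'B', 'C', 'A']
```

A group decision is the same computation repeated per person, each with different weights, plus some rule for reconciling the individual rankings.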
Well, that's fine and simple and I hope nobody goes through a quantitative exercise to do that, but in fact, when you get into a group, now you've got a group of decision makers and they each are working one of those models and we all know how hard it is in an advisory group or study section or something like that to decide where to go to dinner as a group.
That's essentially the process that's going on here, as we look at each factor one at a time, under these categories. So there would be health risk endpoints to rank risk factors identified as either GMP items or new GMP items that could be organized under health, compliance, et cetera.
This just breaks it further, that if the endpoint were death, is sterility the problem, linking it to death? Was it a lyophilization step? Final sterility? Where are the things that lead to that particular one, and each one would have its own characteristics.
A second step after that organization is we need some kind of prevalence estimate to get the initial estimates of the risk. This would borrow from data that are taken as in-plant failure analysis, failure in compliance inspections, failure rates, and human adverse events. Just a quick look, there are all sorts of databases that have been taken for other purposes and compliance and so forth that might be mined for some information to start the process.
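A prevalence estimate mined from such databases might start as nothing more than a proportion with a rough interval. The counts below are invented; the method shown is a plain proportion with a normal-approximation 95% confidence interval:

```python
import math

# Hypothetical counts mined from a historical compliance database.
failures, inspections = 12, 400

p = failures / inspections                               # point estimate
half_width = 1.96 * math.sqrt(p * (1 - p) / inspections) # ~95% CI half-width

print(f"estimated prevalence: {p:.3f} +/- {half_width:.3f}")
```

The interval makes the residual uncertainty explicit, which matters when these starting estimates feed the qualitative scales discussed next.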
For each hazard, once you get those data, it becomes the exercise we've seen in a couple of previous presentations: we're really working in a lot of qualitative regions, not very quantitative ones, as the process begins. So one way to do that is to set up these scales -- the probability of occurrence, for example, might range from very low to very high, and the endpoint could run down from death to worry. There would be a system of ranking that hazard based on this.
Of course, the modeler sees a bunch of numbers, so this can fit into the aggregate quantitative model, although we may not know much about the individual qualitative model. It's not that big of a problem to try to put it on quantitative scales when you're looking at the aggregate. So compliance could have endpoints such as OAI, VAI or others. These were just literally off the top of my head. Prior history of actions might convey the level of chance that it occurs, whether it was never violated or had few violations or all the way to many.
Once you've done this under each of the categories that might be suggested, each one of these produces a scoring and a ranking in their own right, and then they can be compiled into something that re-sorts the list, the type of GMP problem under the categories that were considered to be important by the risk managers.
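In miniature, the scale-and-rank step might look like the sketch below. The hazard names, scale labels, numeric mappings, and the multiplicative scoring rule are all assumptions for illustration, not an endorsed scheme:

```python
# Map qualitative scales onto numbers so mixed hazards can be ranked
# in aggregate. Everything here is an illustrative assumption.

LIKELIHOOD = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}
SEVERITY = {"worry": 1, "quality defect": 2, "injury": 4, "death": 5}

hazards = {
    # hazard name: (qualitative likelihood, worst plausible endpoint)
    "sterility failure": ("low", "death"),
    "dry blending fault": ("medium", "quality defect"),
    "cartoning error": ("high", "worry"),
    "lyophilization fault": ("low", "injury"),
}

def score(likelihood, severity):
    # One common convention: risk index = likelihood x severity.
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

ranked = sorted(hazards.items(), key=lambda kv: score(*kv[1]), reverse=True)
for name, (lik, sev) in ranked:
    print(f"{score(lik, sev):2d}  {name}  ({lik} / {sev})")
```

Repeating this under each category (health, compliance, resources, and so on) yields per-category rankings that can then be compiled and re-sorted, as described above.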
Then fitting that into the bigger picture of what do you do with that kind of information, this is what risk analysts see as the really important part of the global process, and that's that it's a cycle. You start a process and you end up doing your assessments, making your model. You might use that for work planning for other processes. Here it's shown as work planning and going to inspections. But you always want to take the data and go back. Recall that I said risk managers are charged with seeing what were the effects of their policies and decisions on the future options and risks, and that is the risk analytic cycle.
Breaking it down, the risk assessors play in this area most of the time, and the risk managers over here. That doesn't mean that they're two different people. Sometimes in a small center such as CVM, one day I go to work and I wear the risk assessment hat. The other day I go with the risk manager hat. It's important to keep the concepts straight, the questions you're asking under each area. Keeping those straight helps keep the process rolling forward. It doesn't mean you have to actually have a second person in the process.
Is this subject to a pilot scale or something that people can look at and decide whether that is a valuable way to go or not? A number of us met and believe that it could be scaled through a variety of processes, including asking individual risk managers, experts, and senior managers in industry to actually score in a user-friendly interface, collect the scores in a database, analyze them, and come up with a ranking table. This is actually something that can be done from a very small scale of experts to a very large scale because it amounts to being a survey-type process. Certainly the fields of expert elicitation and focus group-type technologies are well known to everybody, I think, and those techniques could be brought into this process to generate lists the first time around, when there's really not much else to go on other than a lot of opinions out there.
Well, the opinions are linked to the experts, so hopefully there's a good correlation between expert knowledge and what makes sense in this type of modeling in the end. That's what you try to tease out in the pilot study. Ultimately that risk-ranking table could lead to risk management decisions.
To conclude this very quick overview: risk assessment provides a process for organizing information in support of decision making, and this has been put throughout a lot of the strategic initiatives, et cetera, as science-based decision making. There's really not a lot of difference between what risk assessment does for risk management decision making and what we call science-based decision making. They are pretty much synonymous in my view.
Risk assessment is one of the tools available for risk management, and risk management is the activity in which the options for controlling risk are examined in light of the cost, benefit, and risk tradeoffs, et cetera.
Multi-factor risk ranking and filtering might be a robust process to start such a very broadly based and complicated initiative.
Thank you very much for your time.
DR. BOEHLERT: Thank you.
Are there questions from committee members? Comments? Okay, thank you very much. Wait a minute.
DR. RAJU: In your definition of risk assessment, that was pretty much about problem solving in the very early slide. If you look at your National Research Council definition, that doesn't necessarily have the context in which it's being applied because that set of words is the same as the definition for science. You said that, and they're synonymous. But there's a reason why it's not called science, if they are synonymous. So is there another piece of that in terms of the context for applying science that makes it want to be called risk assessment?
DR. CLAYCAMP: How would I answer that?
DR. RAJU: It's good that they're synonymous, but there's a context to why it's called risk assessment rather than science.
DR. CLAYCAMP: Well, the context is this. Going back to estimating the chances of losing something we value, and then from there it gets --
DR. RAJU: So that wasn't the definition --
DR. CLAYCAMP: Yes.
DR. RAJU: I think it's a very exciting thing that they're so synergistic and so synonymous. It comes out so clearly. While the FDA might talk about a risk-based approach, and an academic might talk about a science-based approach, and an investigator in industry might talk about a modern quality system approach, in the end, the win-win is to get them all together, which is another point I think in the making.
DR. BOEHLERT: Any other questions or comments? Ajaz.
DR. HUSSAIN: I think this is a wonderful framework for risk discussion, and that's the reason I wanted Gregg to come and share this with you. As you start thinking, we have done this. As Gregg mentioned, we will do it on a daily basis. But I think having a formal framework really would help us sort of come on the same page and define things very carefully and clearly. I think communication is one part of that.
But at the same time, I think what is also important here is, and the message that I wanted to come out from his presentation was, you cannot think in a univariate way. That was the point I was making in my presentation. Today we are in a univariate way, in every sense of the discussion. We have to think in a systematic way and a multifactorial way, and we have to know what connects to what and so forth and make the right decisions.
That's where, I think, knowledge-based decisions are better than simply data-driven decisions. So the conceptual framework of the systems thinking, risk, science, PAT, everything sort of gets connected.
DR. BOEHLERT: Tom?
DR. LAYLOFF: I think also it's real and perceived risks, because society may have perceptions of risk which are different from the real risks, and allocate resources against perceived risks.
DR. BOEHLERT: Yes.
DR. CHIU: I think this is very exciting, not only to the GMP. This concept, this model can also be applied to the CMC reviews. When we do a review, we always look at, is this important, should we get more data. With the model, I think it gives us a systematic way to approach that.
DR. CLAYCAMP: Exactly. It fits with that as well. There's explorations on the pre-market side as well as post-market.
DR. TEMPLETON-SOMERS: Excuse me. Can you please identify yourself and your affiliation for the record.
DR. CHIU: Yuan-Yuan Chiu, OPS.
DR. BOEHLERT: Any other questions or comments?
I was listening this morning, wishing I knew some of these techniques. In my former career as a quality control director, I got involved in a lot of risk assessments in deciding whether to release product to the field. It's a very good beginning and I think it's going to change vocabulary on the part of lots of folks.
DR. PECK: In hazard identification, is this where we start to set possible limits to what we're going to look at through the identification step?
DR. CLAYCAMP: If I understand correctly where you're going with that, practically speaking that's what happens. To speak in more general terms about a senior leadership team in an organization, they get a lot of hazards brought to their attention, and right away they need to make some call. You can't order a large risk assessment team for each of the hazards on the table, so how do you prioritize them kind of off the cuff? That in essence is actually giving you a mini-risk assessment. It may be in the mind of the expert at the table at that time, but essentially there is a ranking on what could go wrong without any real knowledge of specifically what the risk is that it will go wrong. It's sophisticated guesswork in a sense, but it's the reality of not having infinite resources to deal with every hazard that comes before us. In the ideal world you would get the same level of information for each hazard before you ranked them.
DR. PECK: Thank you.
DR. LAYLOFF: Yes, I was going to say it's limited resources and limited quality of the database. The formalism I think is useful to help guide your decision, but moving it to absolute terms is going to be impossible because of the quality of the data, the resources required, and the timeliness of making a decision. But it's a very good formalism, I think, to help bring it together so you can make a more rational decision.
DR. GOLD: Tom, let me add to that: that's where the professional expertise comes in. We cannot quantify these issues. That's why the quality of the background of the individuals and the amount of experience all come to bear in making these decisions.
DR. LAYLOFF: And that's the risk of making the right decision or the wrong decision.
DR. GOLD: Correct.
DR. CLAYCAMP: Could I add to that last comment? That's really my view of the risk analyst or risk assessor in this, more in a facilitative and guidance role in that idea. You cannot do the risk assessment without the domain expertise. All of the right questions have to be brought out of those experts.
DR. BOEHLERT: Ajaz?
DR. HUSSAIN: I think there are two thoughts in here and I want to build up on Yuan-Yuan and what I presented this morning. I think the important point here is linking risk to a safety and efficacy domain is the only way to move forward here, and that cannot happen if it does not happen starting with the review process. That's where it has to happen first because clearly, I think, as the review process evolves from an IND stage and so forth, leading into the clinical trials, that's where the database essentially becomes the link between safety, efficacy, and quality. So I think Yuan-Yuan's point is well taken, but I think it has to happen at that point because if it does not, we'll never really get the link between safety and efficacy and quality parameters, to the degree we could, to the level we could from that starting point.
We do that today. It's not that we're not doing that today, but I think we'll have to think about it from a multifactorial way and a systems thinking, rather than point by point because our specifications are a means for reducing hazard. I think that's how it starts.
DR. BOEHLERT: Any other comments?
DR. D'SA: I have a question about experts, your expert systems. Experts can be wrong. So how can risk assessment change or risk tolerance change as a result of having bad information to begin with?
DR. CLAYCAMP: Yes, it can change from group to group of experts. I'll be a little bit speculative because I'm getting out of my field and into the social constructionism areas. There's a risk that the closer the experts are together, meeting in the same committee and so forth, the more they start to come up with the same answer. That's what goes on within the halls of annual meetings in science all the time, and in study sections and so forth.
So there has to be a lot of care in how to elicit the knowledge from the experts. There is a whole field unto itself that is based on that. I'm surely no expert in that. I've participated in a couple of studies and in one in which we could only identify five experts nationwide. How quantitative a sample is that? And these guys all knew each other.
It's full of those potential pitfalls, but there are methods for teasing out the uncertainty in an expert's opinion and for combining in a meta-type analysis the expert opinions.
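One of the simplest combining methods is a linear opinion pool: average the experts' estimates and keep their spread as a measure of residual disagreement. The five estimates below are invented for illustration; real elicitation methods also weight experts and correct for the shared-background effect just described:

```python
import statistics

# Hypothetical probability-of-failure estimates from five experts.
expert_estimates = [0.02, 0.05, 0.01, 0.10, 0.04]

pooled = statistics.mean(expert_estimates)         # combined estimate
disagreement = statistics.stdev(expert_estimates)  # spread = residual uncertainty

print(f"pooled P(failure) ~ {pooled:.3f}, expert spread ~ {disagreement:.3f}")
```

A small spread from experts who all know each other may understate the true uncertainty, which is exactly the pitfall noted above.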
DR. D'SA: I have a second question. This is about your detection ability for hazards. This is something that is connected to PAT, and I think that one of the reasons why aseptic processing is under such tight control is because of poor detection ability of a hazard.
Then the next aspect is: if you can detect something but cannot control it, does that hazard decrease? I think that we have to have some mechanism of addressing that. You may be able to see everything, but if it doesn't improve your state of control -- I think that knowledge has to reach the point where you arrive at a state of control as a result of that knowledge.
DR. HUSSAIN: Well, I think detection simply provides the information to make a decision. Abi is right in the sense that even if you're not controlling it, at least you have the ability to make a decision.
DR. CLAYCAMP: That's correct. The improvement there is a reduction of uncertainty by the additional knowledge, and your decision may be that I don't have enough information and control.
DR. DeLUCA: I'd like to follow up on what Dr. Raju said here with regard to that definition, science and risk. When you made that point, I said, gee whiz, now maybe I understand it more, because I never thought about risk. I thought about science.
And actually in the slide that preceded the one you refer to, you have contemporary risk analysis, and that includes four major activities. One was hazard identification, and I wrote down problem identification, thinking this is a process I'd use in talking about dissertation research or Ph.D. research: you'd start with problem identification. I thought it would be good to bring this kind of thinking into the development of that.
But as I thought about it more, I'd just ask you a question. Are you satisfied with that definition, that it is science? Because to me as I think about it, unless the risk is the problem then this definition won't hold because science to me has got to involve also correcting. If you're identifying a problem, correcting the problem. I don't see that in this correction of the hazard.
DR. HUSSAIN: If I could jump in there. I think the distinction I have in mind is scientific pursuit of knowledge and problem-solving essentially comes to a test of hypothesis and a conclusion related to that hypothesis. In risk management, I think, the way I distinguish it from that is, even in the absence of certain knowledge, we have to make decisions and you make decisions on a daily basis. So making decisions in absence of knowledge is sort of a way of distinguishing between the two. You have to make decisions. Let me put it that way.
DR. RAJU: Let me see if I can add to that. I think the definition is fine if the rest of it is put along with it. I think in that definition what Gregg did was probably cut and paste a portion of a bigger definition. It makes the point because he made the point about the hazard on the previous slide. So I'm fine with the definition. I'm actually ecstatic about the definition because you have to be careful about talking about risk as just a hazard because there are many levels of risk, and only a few levels of risk are appropriate for this context on the cGMP initiative and the FDA.
The risk of not fully understanding is the greater risk around that pyramid, and there are business risks -- business risks of having lower yields than you could. There's the risk of not having enough resources where you could have put them somewhere else. If you climb that whole pyramid and expand risk to the internal customer -- not the FDA, but each person who wants to do the right thing inside your process -- it all comes together to the same thing as science: the holistic version of risk is the holistic part of science, which is when the definitions merge, which is very similar to what Ajaz said.
It may not seem relevant in this context, in the cGMP committee, but if you think about it, it may be extremely relevant in this context to define why we're doing all this. So I think the fact that it connects with science, and the fact that he's connected those two slides with the other parts of the definition, could be exactly what you've been thinking about in terms of problem/opportunity: understand the causes. We could do fault trees all by themselves for every investigation, for every deviation, independent of the connection to the second level of the pyramid, just for the sake of doing it, because we want to understand.
DR. DeLUCA: I guess I was trying to bring in what was missing here. What I would also include in the science is the application of that for a purpose.
DR. RAJU: Sure. For the business purpose, as well as academic purpose.
DR. BOEHLERT: Any other questions or comments? If not, thanks, Gregg, for an excellent presentation.
We're now at the open hearing part of this morning's program and we have one speaker who's asked to be heard, and that's Frederick Razzaghi, from CHPA.
MR. RAZZAGHI: Good morning. My comments are not meant to be educational. These are prepared remarks on behalf of CHPA, which is the Consumer Healthcare Products Association, and this is our entry into this current discussion. I'm just going to read you my remarks and then close with a few comments.
It is widely recognized that the pharmaceutical industry serves as a benchmark for innovation and delivery of quality health care products for consumers and patients. CHPA is proud to represent this industry by working to provide consumers with convenient access to safe and effective nonprescription medicines and other self-care products. CHPA acknowledges that PAT is a proven and efficient tool which may be utilized for continuous improvement and continuous quality verification.
CHPA supports the FDA position that utilization and implementation of process analytical technology can be and should be applied in drug development and manufacturing on a voluntary basis.
CHPA recognizes the potential for utilization of PAT in various applications including improvements in drug development, process control, process knowledge, occupational safety and other issues. PAT has been proven to be especially useful in high volume, dedicated manufacturing or continuous processing operations where on-line monitoring and automated adjustments can be made during manufacturing or filling operations.
PAT, however, is not a cure-all for all manufacturing issues. It is not the correct tool for all processes and does not lend itself to implementation across the board in all manufacturing or packaging related applications. As such, the implementation of PAT should remain as a voluntary option and left up to the individual company to determine the benefits it can derive from its utilization.
Successful implementation of PAT will strongly depend on the integration of pharmaceutical manufacturing practices and guidance documents or regulations. It is anticipated that modifications to applicable regulations can be accomplished through review of the cGMP for the 21st century as part of the risk-based approach. As a regulated industry, we encourage FDA to continue to work with us in order to identify and qualify various levels of risk and define a robust process that can eliminate uncertainty in implementation of various changes. CHPA views the current climate as an opportunity to improve not only processes internal to both FDA and industry but also to devise new ways to clear the cumulative effects of rules currently impeding operations on the industry side and the FDA side.
As an initial step, CHPA looks forward to assisting FDA in developing good science-based guidance documents, within the established regulatory framework, in order to clearly define expectations for utilization and implementation of PAT. As a longer-term objective, CHPA is eager to work with FDA on the establishment of new or revised regulations as may be useful or required.
I would just now conclude with three brief comments. We heard yesterday G.K. talk about his issues, and off-line we talked about developing a business case, and Ajaz this morning talked about when he first started with the PAT approach, he thought that it was useful to go and get upper management or executive management buy-in.
We recommend that, from G.K.'s point of view, the business case be made because manufacturing is seen as a critical part of the company's operation, and the business case has to be made to executives so there's buy-in at that level.
I also refer to G.K.'s comments yesterday regarding the dynamics inside the manufacturing operation of a company. You have a director or vice president who's running the operation. Within that operation there are dynamics in place that include both manual and automated operations, and there are complexities there that have to be explored and identified.
That concludes my comments.
DR. BOEHLERT: Any questions from committee members?
DR. BOEHLERT: Thank you.
We are running well ahead of schedule. Is there anybody else in the audience that wishes to be heard? We can give you a couple of moments if you have some burning issue to present.
DR. BOEHLERT: If not, we will break for lunch. We will reconvene at 12:30, so we'll see you then.
(Whereupon, at 11:21 a.m., the subcommittee was recessed, to reconvene at 12:30 p.m., this same day.)
DR. BOEHLERT: Well, I think we can get started because Tom Layloff is here, so it must be the right time.
As Ajaz noted this morning, we're going to change the order of this afternoon's session and Ajaz will be going first to talk about our future.
DR. HUSSAIN: What I would like to do now is to engage the committee in helping us develop the agenda and the format and the background information packet for the next subcommittee meeting. We have a tentative date for that. I think we'll confirm that through e-mail to all of you as soon as possible. But to make that as efficient and effective as possible, I think what we tried to do today was share with you different perspectives, especially introduce the risk management, to help define our next meeting agenda.
What I'm proposing is -- and this is a proposal to you and we'll modify this based on the discussion of the subcommittee -- meeting number two is to move towards more effective and efficient approaches for maintaining product quality and encouraging continuous improvement in manufacturing and quality assurance. That would be sort of a broad, general theme of continuous improvement, change, and so forth.
The reason I wanted to use this as a backdrop is I think we will have to start focusing our discussion to more specific topics and issues as we move on. I think this was a broad, general discussion to make sure we are all on the same page, at least start speaking the same language, but now I think we need to start drilling down to more specific issues.
I think we need to have a common understanding on quality, risk to quality, continuous improvement, and how formulation and process understanding can make change control more efficient. So that becomes a framework connecting risk, quality, continuous improvement, change, and science all together.
So the proposal is to build the second meeting on past experience. That's the reason we presented the change model: to build on the SUPAC experience, where the issues we grappled with were maintaining quality while allowing changes, continuous improvement, and risk to quality. All aspects were part of that discussion.
The draft comparability protocol was presented to you today, and I think you will have a chance to look at this document more carefully within the context of the broader discussion, so as to help us fine tune it as we finalize it. The comment period is ending soon and we will have received comments from industry, so the second meeting becomes a basis for discussing those comments, fine tuning this draft guidance, and so forth.
Dennis provided some avenues of the "make your own SUPAC" concept that can be part of this comparability protocol. One way of looking at the comparability protocol is a mechanism to make your own SUPAC possible.
But I think the challenge will be -- and this is where I think significant discussion needs to occur -- to alleviate certain concerns and fears industry has expressed with respect to sharing information. But unless they share information, how can we improve our efficiency and be more science-based? I think that's the dilemma we'll have to grapple with and come to some understanding on.
Development knowledge. I purposely chose development knowledge and reports from that perspective because now in a post-approval scenario, the fear or the perception that industry has that this may delay an approval is not there. So you actually can start thinking more rationally in terms of what information can be brought to bear on managing changes without having to have a prior approval supplement and the traditional way of doing it. So how development knowledge can or should be used to optimize regulatory scrutiny, starting with the post-approval change scenario.
Type, format, and evaluation of development knowledge. How do we ensure a win-win? How do you evaluate development knowledge in terms of a particular change? That is specific enough that we can address it in a more focused way. Development knowledge in an NDA, I think, is much broader and much more complex. So I think this allows us to start the dialogue without the fear of all the concerns that have been expressed.
Current and future technology transfer. I purposely chose the word "technology transfer" because that is a well-established terminology. I put that in quotations. The reason for that is I think if you really look at the review process, the inspection process on the FDA side, and the development and manufacturing process on the industry side, there's a technology transfer model there. You're translating the science know-how to the other side to make sure the work is done on a routine basis. So technology transfer is a term that really fits well.
Technology transfer is also a term in wider use -- there's also a document floating out there from ISPE on technology transfer. So there is a possibility of connecting all those things together.
Also, in the change scenario, risk-based approaches, failure mode/effect analysis, HACCP, other models. What can we learn and adopt for pharmaceuticals? Again, starting in the change scenario, how can we do this?
Role of interim specifications. There is a current definition of interim specification in the ICH. That has a certain meaning, but within the context of this discussion, I think we will have to go back and evaluate what that term really should mean or would mean in the change scenario. For example, I think in a PAT perspective, you may start out with traditional controls, traditional testing as a means for controlling your processes. One could consider that as interim. As you get more process knowledge, more understanding and you go on-line, essentially you're replacing that. So one could think about that as a continuum from one type of controls to a different type of controls.
One interpretation of the ICH definition of interim specification may be too narrow, saying that controls are not part of it. So there is some discussion and debate there. What is the current definition, and is there a need for a broader definition of what we are talking about? That would be a topic.
Process understanding as a basis for optimal specifications, including in-process controls. Again, when you read FDA documents, especially the drug product guidance document that we have released in draft form, it is based on ICH Q6A. Multiple interpretations are possible. What is a control? What is a specification? The lines between the two are not clear, and I think there is a gray area and we need some clarity in terms of what we are talking about there.
One important aspect is connecting annual product reviews and annual reports. I think this is a missing element right now and sort of reflects the divide between review-inspection and development R&D. Annual product reviews are held at the company and they deal with failures, complaints, and so forth. So that's one part of the information about how well the process is doing. Annual product reports are submitted to the agency. They contain a lot of the clinical information and this and that. There is a disconnect, but there is possibly an opportunity here to make the connection from a systems thinking perspective.
So I just want to repeat a couple of my slides. I think the advantage of building on past experience is helpful because I think our discussion would then be focused and would have the proper context, and that's important because I think we are talking about risk management, quality system, process understanding and so forth. Clearly we have been doing all that, and keeping that context within the post-approval change scenario will help us I think. And that's my proposal, to keep the focus on our discussion and also to make progress more effectively.
So this is the example I showed you. FDAMA, the Food, Drug and Cosmetic Act, actually includes a definition of risk as a potential to have an adverse effect on identity, strength, quality, purity, or potency of a product as they may relate to safety and efficacy. We already have a qualitative model of risk categorization. Clearly I think there's a desire to move to a more sophisticated model for risk categorization, and how will we do that.
One of the proposals that I presented -- and Gregg Claycamp in his presentation elaborated further on that -- was to take the SUPAC as an example where we only look at high, medium, low or minor, moderate, and major changes in terms of that, but then think about how development reports, knowledge, information can be brought to say what is the risk likelihood. I think it brings the second component of risk which we have not utilized in a formal way within the SUPAC structure. It is there. It's embedded, but it's not sort of a formal recognition of that likelihood of an event.
How will quality by design, a systems approach, and process understanding bring us that? If we're able to do that, then consider a site change, a ZIP code change for a modified-release product, which is a high-risk level 3 change now. If you understand the process and the risk likelihood is minimal in our assessment, then what is now a level 3 change, a prior approval supplement, could perhaps be justified as an annual report.
Similarly, I think the previous slide showed a way to reduce the risk classification. Now, with risk mitigation strategies, which are your controls, your process controls, your process understanding, and your quality system in general, if there is a likelihood of a fault, if we increase the probability of detecting that fault, that should have a bearing on reducing risk because now you have information to say yes, there is a fault, but we can detect it better, and once we detect it, there is a decision to be made. So how do we use the process knowledge, development reports, or the entire systems thinking to not only reduce the risk classification, but also recognize that increasing the probability of detection as a way for further reducing the risk and what again might be a high risk could be classified a low risk when you take that into consideration.
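The detection idea described here is the third factor in the failure mode and effects analysis (FMEA) risk priority number mentioned earlier in the discussion. A minimal sketch, with entirely hypothetical scores for an illustrative site change:

```python
# FMEA risk priority number: RPN = severity x occurrence x detection.
# Each factor is conventionally scored 1-10; for detection, 10 means the
# fault is almost impossible to detect, 1 means it is almost certain to
# be caught before the product reaches the patient.
def rpn(severity: int, occurrence: int, detection: int) -> int:
    return severity * occurrence * detection

# Hypothetical scores for a manufacturing site change:
before = rpn(severity=7, occurrence=4, detection=8)  # end-product testing only
after = rpn(severity=7, occurrence=4, detection=2)   # on-line process monitoring
print(before, after)  # 224 56
```

With severity and occurrence unchanged, improving detectability alone cuts the hypothetical score fourfold, which is the point being made: better detection is itself a risk-mitigation strategy.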
So clearly I think this is again a summary. I'd like to sort of engage the committee in saying as we move up, let's see how we can recognize a company, where they are in this knowledge pyramid. What do we need to do to distinguish companies which are on the high end of this knowledge pyramid versus at the lower end of this knowledge pyramid and reward companies that have improved understanding?
The challenge will be -- and this is a statement that Gerry Migliaccio made yesterday, in particular with reference to G.K.'s slide. Where are they in the five steps that he showed, level 1, 2, 3, 4, 5? Depending on the product, Gerry said, they are on every part of the curve. That poses a challenge: in systems thinking, how do we recognize that some products are better understood than others? What does that mean from a quality systems perspective? I think that needs some clarification and some discussion.
So we have to start looking at connecting the dots here: development-manufacturing and review-inspection. This is a slide that I've used for many years now. If you look at the topmost bar, discovery, development, review, and marketing, the majority of regulatory discussion focuses on that. More recently, say, the last 5 to 10 years, we have started focusing on the second bar, which actually supports the first; without the second bar, the first bar will not happen.
So preclinical development, clinical phase I, II, III studies, submission of the NDA, review and assessment and approval, phase IV commitments, adverse event reports. Clearly that's the sequence of events that occurs, but to make that possible, you really have to develop a product. You have pre-formulation; that's part of the development. You have formulation development for clinical testing and its optimization. So that's again part of the development report. But more and more, because of the development crunch, we see that optimization is not feasible in the time available during development. So optimization either happens post-approval or may not happen at all.
And then you have scale-up for market, manufacturing changes. And manufacturing changes are driven by many different reasons. One, to have process improvements, to avoid deviations, to avoid out-of-specification results and so forth. That's one category. Others are market-driven. Others are technology-driven, and there are many, many reasons for that, consolidation, and so forth. So why manufacturing changes occur, how they occur I think has some bearing.
But the system works because I think the FDA review process is supposed to be asking the question, was quality built in in an IND and an NDA? This is not only with respect to product quality, but in terms of the quality of the protocols used in clinical testing. So the concepts of quality applied are universal. So was the protocol designed right? Was the clinical study done right? So all are quality issues. So we are supposed to be asking the question, was quality built in, and we do.
In the chemistry world, without development reports, how are we supposed to ask that question effectively? And that has been a challenge. Many of the problems we see today I think are based on the lack of that knowledge.
So in the future I think the question the FDA assessment process in an IND or ANDA or an NDA really has to ask is, was there quality by design, and you saw some of the challenges we face there. That question is important to address because if you have quality problems with your clinical material, then you're essentially confounding a very expensive database, safety and efficacy, with quality problems. There are a few cases where that has occurred, at least to my knowledge, where the entire safety and efficacy database was in question because of some quality concerns. But more often it does not occur because quality is built in. So how a company designs the clinical trial material for the clinical testing that becomes the basis for approval I think needs to be examined as we set specifications.
But once we have this information database, on the clinical side the question that we often ask is what is appropriate labeling and then what risk management strategies are needed depending on the risk-benefit ratio that is assessed based on the information submitted.
On the quality side, the question really becomes after an NDA comes in, are the controls appropriate for the manufacturing process, especially when you have scale-up, what the specifications are.
I would like to point out: is that the right time to ask that question? All those questions have already been addressed when we dealt with the clinical trial material formulation, process, and specifications.
The reason I'm asking that question for you to ponder on is what Pat DeLuca's discussion was yesterday, which is as the process capability improves, we keep tightening the specification. That's the current thought process.
But what is the basis for that? If we continue to do that, the companies may stay with the current specification and the product remains on the market. That's perfectly fine. But now, if a company wants to improve the manufacturing process, the fear is FDA will start tightening the specification or acceptance criteria. So why would you do that? Does it serve public health in any way for doing that? I think that's the question because the basis of decisions has to be scientific data and information and that data and information has already been collected. What clinical studies would be needed to answer that question? We don't have that.
So actually how we set specifications is a key factor of this discussion. We won't discuss that at the next subcommittee, but I think subsequently we'll have to address that. Whether at this committee or the main advisory committee, we'll have to make that decision.
But clearly, I think at the time of approval, the manufacturing knowledge for a particular product can be limited, and we go through a process validation, and then we often see problems with respect to the ability to manufacture that product. And you have post-approval changes occurring and so forth.
So one of the thought processes about an interim specification is to recognize that, yes, these specifications are based on a limited number of clinical lots, possibly validation lots, and there is some question as to whether the development report was really useful in that decision making or not. Then, at the end of, say, a year of manufacturing, or after manufacturing several hundred lots or whatever that might be, can't we go back and say: all right, this is the manufacturing history, these are the specifications, this is the link to safety and efficacy. What should the final specifications be? That is one way of looking at interim specifications, but it is slightly different from what is expressed in ICH Q6A with respect to what interim specifications truly are.
Therefore, I think final specifications are the link between the annual product review and the annual report, where you actually have several lots or hundreds of lots of manufacturing experience, and then you can base your final specifications on that. That could be a technology transfer model from review to inspection on the FDA side so that subsequently, when you bring in the right development report together with the "make your own SUPAC" concept, we actually eliminate most post-approval supplements. So we transfer the know-how from review to inspection and everything else is managed on the inspection side after that. The product specialist can help translate that information and so forth. So that's the model that we have to think about.
I'll stop here with that as a backdrop. PAT essentially is process understanding. That is the term we are using, but how do we integrate and get to this is the key issue here.
So what I would like to propose -- you don't have this as a handout. I just made it up this morning. I'll be here flipping through the slides if you want to see. What should we focus on for the second meeting? The proposal is let's build on this concept so we can structure the discussion. We already have a draft guidance that we can get your input and so forth.
So I'll stop here. I would appreciate discussion, feedback on what we should do to make this meeting a most successful meeting in terms of what information we should bring, some consideration of who we should invite to speak from industry and so forth. It would really help.
DR. BOEHLERT: I'll solicit comments from members of the committee. Has Ajaz presented us with sufficient information here for us to answer his question? This is a fairly high level discussion. Do we know what it is he's looking for, and if you do, I'd appreciate your comments. Tom.
DR. LAYLOFF: I have a question, and that is, are in-process changes based on annual reports generally covered in the development knowledge? In other words, does the development knowledge cover the domain of all process changes? Is it that robustness level?
DR. HUSSAIN: Well, I don't have a clear answer. My hope is that it is. The reason I hope so is that that's part of the validation. That's what should have been done. The second aspect is that it is considered a low or a minor change to start with, and our change guidance has defined it as a minor change based on past experience, based on consensus at meetings such as these. So the knowledge base that classified it as minor is the basis of that.
DR. LAYLOFF: My concern is that maybe some of the hesitance about development knowledge is that it's not as complete as one might expect.
DR. HUSSAIN: That has been expressed at the PQRI meeting. The fear was FDA might see that quality may not always be what it could be.
DR. GOLD: Ajaz, don't we get back to some of the issues that Pat, Gary, Garnet were asking yesterday about what is the motivation, the driving force to tighten specifications in many instances? Don't we need to roll that into our discussion as well?
DR. HUSSAIN: I think we will need to discuss that, but I'm not sure we want to devote the next meeting to it. That's the reason I selected the post-approval world, with the specifications already set, as a way to work our way in, because that is a very complex issue, and I think we really need a broader audience to discuss it.
The motivation essentially is I think that is the current paradigm. That's the current mind set, saying that if you can make it with that specification, make it. That's one way of thinking about it.
The other way of thinking about it is something Colin Gardner reminded me of yesterday, and I think Toby Maza in his presentation mentioned again and again, that industry today actually designs specifications so that they can fail 5-10 percent of the lots. That is by design. If they do not do that, people will come back and say you have too loose specifications. That's what I have heard and that's what people have said. So if identifying failures is the test that our controls are working, if that's the mind set, then if you reduce variability, you will not see any failures, and therefore we tighten the specification. That probably is a paradigm out there, at least in some people's minds, but I don't think that is the correct way of thinking about it.
DR. GOLD: If Colin Gardner said that, I certainly did not hear it. My experience may not be as broad as Colin's but we have other people here in the business. I don't know of companies deliberately trying to fail 5 percent of the batches during the development phase in order to get broad specs.
DR. HUSSAIN: No, no, not in the development phase at all. I still remember that discussion. The simple matter is this: when we set a dissolution specification, you have several lots, all the lots tested in the clinic are acceptable, and you have a range of dissolution profiles for them. If you choose a dissolution profile which fails certain lots as the means for establishing the specification, that's the general trend. We set specifications based on the capability of the process at that time, and that may not be the right way of setting specifications is what I'm saying.
DR. GOLD: Well, you do set specs based on the capability of the process, but I don't know of deliberately setting specs or looking at the 5 percent --
DR. HUSSAIN: I would like to challenge that paradigm. I think you should design a process of the right capability to meet your design specifications, not the other way around, because specifications are to be linked to safety and efficacy and are part of the design aspect. So you think through what the design is and then choose a process that is capable of delivering that specification.
DR. GOLD: I would certainly agree with that, but I thought that's what we have been doing all along. Any of the other members have any comments on this point?
DR. BOEHLERT: I think Efraim was ahead of you G.K.
DR. SHEK: I don't know about any systematic approach where -- I would assume we in industry, at least from my personal experience -- we go and design a 5 percent failure or whatever it is. I think at the end once specs have been agreed upon mutually between the regulatory agencies, whether here or in Europe, and the industry, you might end up there because you present, I would assume, your experience and data, and that's where the negotiation is going. So I don't believe it's purely by the sponsor designing, but the end result might be, Ajaz, what you are talking about.
DR. GOLD: Perhaps the end result is because you always negotiate some room based on the fact that you have limited experience to that point and you're going to expect some variation as you scale up and you move ahead. But I'm not aware of a deliberate failure. Okay, perhaps the same result occurs, but maybe we're using different terminology.
DR. HUSSAIN: Could be.
DR. BOEHLERT: G.K.
DR. RAJU: Let's go back to yesterday's discussion and try to connect it to today's. I personally believe that as far as possible -- I know it's difficult in this industry, but it's difficult in many other industries -- that specifications should only be about the voice of the customer. That is it. Process capability is your capability of your process to meet the voice of your customer.
So if you look at the definition of process capability, on the top it will be upper specification limit, minus lower specification limit, divided by 6 sigma. The top specification should come from the customer ideally, and really those are the only specifications that make sense. And the bottom is the sigma that comes from your process.
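The definition Dr. Raju states can be written out numerically; a minimal sketch in Python, with hypothetical assay data, where the limits come from the customer and sigma from the process:

```python
import statistics

def process_capability(samples, lsl, usl):
    """Cp = (USL - LSL) / (6 * sigma): the specification limits come
    from the customer; sigma is estimated from the process itself."""
    sigma = statistics.stdev(samples)  # sample standard deviation
    return (usl - lsl) / (6 * sigma)

# Hypothetical assay results (% of label claim) against specs of 95-105.
lots = [99.8, 100.2, 99.5, 100.6, 100.1, 99.9, 100.3, 99.7]
cp = process_capability(lots, lsl=95.0, usl=105.0)
print(round(cp, 2))  # 4.68
```

A Cp well above 1 means the process variation sits comfortably inside the customer's limits; this is precisely why the index is meaningless if the limits are themselves derived from the process variation.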
You should never ever set your specifications based on your process capability because your process capability was supposed to measure whether your process capability meets specifications. So you should also start with the voice of the customer.
But there are different voices and there are different customers. The customer for safety and efficacy has a very broad voice, which is at the bottom level of the pyramid. The customer for process understanding and cGMPs has a tougher voice. I want to look at your capability to meet specifications rather than did you meet specifications and is it safe and efficacious. That is now better connected to the upper control limit and the lower control limit. That is not a specification. That's a control limit, and the control limit always comes from the process. It has nothing to do with the specification.
So when I look at it, I want us to consider really separating out interim specifications and specifications from control limits, and to bring in the vocabulary of control limits, which are about process capability, and do our investigations around those. Because we have combined the two, we get this dysfunctional system. In my experience, almost every single company that I've worked with complains of this situation: when we don't clearly translate safety and efficacy into something that's connected to our process -- which is the mechanistic understanding that's missing -- then what do we do? We take all of our data and set specifications at plus or minus 3 sigma rather than 6 sigma, for example, and when you do, you will get a percent or so of failures. And you use that percent or so of failures to prove to the investigator and to yourself that you can investigate when you're outside your upper and lower control limits.
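As a check on the sigma arithmetic in this exchange: for a process whose results are normally distributed (an assumption, not something the transcript asserts about any real product), specifications placed at the mean plus or minus k standard deviations imply a predictable out-of-specification rate:

```python
from statistics import NormalDist

def oos_rate(k: float) -> float:
    """Two-tailed probability that a normally distributed result falls
    outside specifications set at the process mean +/- k sigma."""
    return 2 * (1 - NormalDist().cdf(k))

print(f"{oos_rate(3):.2%}")  # 0.27% outside +/- 3 sigma
print(f"{oos_rate(2):.2%}")  # 4.55% outside +/- 2 sigma
```

Under this normal-model assumption, limits at exactly 3 sigma give roughly a quarter of a percent of failing lots; "a percent or so" of failures corresponds to limits nearer 2.5 sigma.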
But fundamentally I actually would question the whole idea of interim specification in an ideal sense. It is really about the specifications should only be changed when you combine it with some market data from phase IV, knowing more about your customer or your recalls, which are voices of your customer. Specifications from a legal and safety point of view should not be driven by the process in the ideal state.
The reality of it, similar to what Judy said, in the case of impurities, for example, when you may have done your clinical trials at 5 percent, but you could have done it at 1 percent. You want to keep it minimum. Now, you don't have a database to set up a specification. You've had even lower impurities, but when you go down into manufacturing, you'll have a much broader variation. So you have a somewhat difficult situation to deal with. So I can understand dealing with that difficult situation.
But I want to start with the ideal state of saying it's about specifications that are only about the customer. Let's define the customer and let's bring on a vocabulary rather about specifications instead about control limits.
I've heard a lot of companies complain that they get into this catch 22 because that's the situation. I think when people investigate, they want to know if you're investigating you're out of control, in many ways out of trend, and that's the mechanism where this 3 sigma situation actually makes sure that the system does work. But it's not about safety and efficacy. It's about do you have a mechanism in place to try to understand what you don't understand, which is about the control limits.
So there are two different things, and I think there's the ideal state and then there's the practical state. I want to move the discussion to try the ideal state a little longer before we jump to the practical state.
DR. BOEHLERT: Tom.
DR. LAYLOFF: I'd like to support that discussion. The specifications for the product should be what is pragmatically required to meet safety and efficacy. The control limits are set by what is technologically feasible for your process, but I have seen many times where reviewers and FDAers try to move the specifications to the control limits, to what is technologically feasible, rather than what is pragmatically necessary to achieve the objective of getting a safe and effective product out there. There is that tendency to go to what is technologically feasible rather than what is pragmatically necessary. I think if we could address that, it would help out a lot. And separating out control limits and specifications is a very good approach.
DR. BOEHLERT: Gary.
DR. HOLLENBECK: Judy, I'd like to go back to the 10,000-foot level for just a minute. I thought yesterday perhaps the biggest risk was I was never going to understand risk assessment.
DR. HOLLENBECK: And then Dr. Claycamp came in and gave what I thought was a beautiful presentation today. It at least gave me the impression that there is an approach that we can take. I got to his slide on pilot scale which I think is another term for demonstration project, or at least, that's what I hoped it was. Is that what you think, Ajaz?
DR. HUSSAIN: Yes.
DR. HOLLENBECK: And I would strongly encourage that we try to start at that level with that system and do a demonstration project. Perhaps then the choice is what is our assumption around which we should do this demonstration project. And it could very well be chemistry manufacturing controls instead of compliance, as was the example. Perhaps that's what you're saying, Ajaz, is that we back up to the 10,000-foot level instead of talking about the end product, specifications, and we consult with folks who can help us establish the x and y axis in that model and that we proceed in that direction.
DR. DeLUCA: Yes, I think I like your slide there. It says a future topic is to look at encouraging continuous improvement of the manufacturing process. I'm just wondering if there are any examples of this where we can invite people from the industry who are embracing this concept and are actually working in that direction. If we could get some examples of that.
I liked the idea when you were talking about specs. You mentioned interim specs and what would be the value of having interim specs because sometimes we're not ready maybe to propose specs for the finished product at this stage.
DR. HUSSAIN: Right. Pat, I think that's an important point and that's the reason I brought the discussion up for that purpose. I think when you approve a product for safety and efficacy that is safe and effective, you already have established the product specification that is linked to safety and efficacy. So personally I don't see those as part of the interim spec. That has been established because there's no other mechanism to establish that unless you have other clinical data and so forth.
So the interim spec in my mind controls more of your other aspects that need to be refined as you go through scale-up and so forth. But the language that is in the ICH Q6A and so forth I think blurs that thing up, and I think we need to clarify that language.
DR. DeLUCA: That's what I was talking about was improving the process, not with regard to the safety or efficacy. We've established that.
DR. BOEHLERT: Tom?
DR. LAYLOFF: I think we're hitting on it, that over the course of experience, the control limits will improve but the specifications should not. A concept of interim specification belongs in the same box with interim safety and efficacy. So if you demonstrate safety and efficacy, you've demonstrated specifications. If you have an interim specification, then you haven't demonstrated safety and efficacy.
DR. HUSSAIN: So calling that a specification may not truly -- and we will clarify that through our discussions so that we say this is what ICH said, this is what this is, and so forth.
DR. RAJU: And there are different kinds of specifications. There are specifications for safety and efficacy. But as you go up, you might find that -- and this is a point that Gary had mentioned -- instead of 95 to 105, if you have a narrow therapeutic range for this drug and you can get it down to 99 to 101, you can make your patients ecstatic. So you set your own business specs to make them ecstatic. It makes it very difficult for competitors to compete with you, but that's not a safety and efficacy spec. That's a safety and delight spec now.
DR. RAJU: But that's a spec. That's a whole different dimension to it.
DR. BOEHLERT: That may be a concept that's hard to sell. It needs to be meaningful to the patient, that 99 to 101.
MR. FAMULARE: So in addition to the strength, you would put how close you are to it on the label?
DR. RAJU: That's your own business proposition. When you say specifications, if we are talking about the legal specifications, then I want the legal specifications as far as possible to be about the customer, and the investigations and the burden of do you investigate what you don't understand to be about the control limits. And then because it's very difficult to separate them out, we've combined them, but at least expanding the vocabulary might give us another chance of separating them out. We might have to combine them in some cases.
MR. FAMULARE: I think a lot of this, as the discussion has gone on, is in the terminology. The specifications, if we leave them with safety and efficacy, as Tom said, just stay there and we probably shouldn't call them interim. The next thing is to establish what are the optimal control limits that you can put towards this process to not only meet but exceed that specification, and then in terms of the regulatory paradigm, how do we approach that.
DR. SHEK: I would like to maybe follow up the discussion we had and looking at what's up there and again looking at the maybe second part of the major bullet there, which is talking about encouraging continuous improvement in manufacturing and quality assurance. I think we were talking here yesterday and today about maybe a change of paradigm shift where we really build a situation to encourage improvement. We can start, and I think that's right to start with the safety and efficacy, which is number one, which allows you to put a product on the market which is safe and efficacious. So it has a purpose for it.
The other part should go maybe with what this country was built on. You start now building out and trying to improve your product on the market, and you let the business world, to some extent, make those decisions. But you build the system where you encourage the industry to do that. At least today some of us are complaining that we don't have the incentive there. It's very complicated and very complex to try to bring improvement. So let's build a system where companies will be encouraged to do it, so you have the basics.
What we have to resolve in practical terms is how you translate the safety and efficacy specs to a manufacturing environment where you have assurance that each unit, let's say, that you manufacture is meeting those requirements. But I will advocate that we really work as a committee and advise the agency on how we can build this environment where companies will go ahead and improve their products.
DR. BOEHLERT: I think somebody mentioned that it would be helpful to have some concrete examples, some presentations from folks that are really involved in doing some of this. I think it would help the committee to sort out the issues -- to see where they think this applies, separate process controls from final specifications, and understand what all of that means -- because right now we're struggling with some of the minutiae, if you will: specs or interim specs or in-process control specs and what they might mean. If you could get some industry people that have actually done some of this to talk about the role of development information in filings and how the agency might use that, we could begin to understand what this all means. I don't think anybody disagrees with the slide that's up there. It's all in the details.
DR. HUSSAIN: Right. I agree. Starting to build on the detail, I think it would be nice to capture the SUPAC experience in a summary of what the concerns were. Why doesn't the agency allow continuous improvement? The major concern is that you would drift away from the safety and efficacy database. So a few years after approval, the product out there and the product approved get disconnected from the safety and efficacy data. That's the major fear, and then that impacts the generic program and so forth. So that's the other part of it. How we manage that process is the key issue here.
So what I took from this discussion is I think what we will do is capture in a brief summary the SUPAC experience and then actually bring the ICH Q6A, clarify the terminology with respect to control specifications and so forth.
What I would like to do is actually maybe bring somebody from the bio side because they have a number of examples on comparability protocols from a company perspective how they have used development data and so forth. So maybe construct a comparability protocol concept of what can be accomplished from that perspective and actually have maybe some case studies from that and maybe a case study from companies which have managed continuous improvement, maybe in the "don't tell" scenario but they have done it. Pfizer was one example. I think we will request Pfizer to come back.
DR. BOEHLERT: I think that was very helpful on the PAT initiative to have those case studies. It was something that we could see and react to, and it's more difficult when you're talking about concepts.
DR. HUSSAIN: One of the major themes of that will be questions that we will pose to you. We'll sort of deal with the comparability protocol because that's a very concrete term. It's already a draft guidance and so forth. So we will definitely keep this as a major theme for discussion and seeking advice from this subcommittee, but then we'll build up case studies and so forth around this.
DR. BOEHLERT: Also, I think some presentations on the kinds of comments that are received on that comparability protocol because you'll have those by the end of June. Right?
DR. HUSSAIN: Right.
Now, I think the risk aspect is also important and I think we do want to sort of start thinking in a more sophisticated way about risk models. I think Gregg did a wonderful job of explaining that.
But I think now we need some pharmaceutical examples. I have seen some actually good publications. Rick actually sent me some recently. So there are examples of, say, failure mode/effect analysis, say, from aseptic manufacturing and some of the examples out there. So there are some case studies. We'll see whether we can actually find a speaker to talk about taking the existing models, say, HACCP or failure mode/effect analysis, and see how we can marry that with the rest of the discussion.
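Failure mode/effect analysis, one of the risk models mentioned above, scores each failure mode on severity, occurrence, and detectability, then ranks the modes by the product of the three scores (the risk priority number, RPN). A minimal sketch of the mechanics, with invented aseptic-manufacturing entries and invented scores, purely for illustration:

```python
# Each failure mode is scored 1-10 on severity (S), occurrence (O), and
# detectability (D, where 10 = hardest to detect). RPN = S * O * D.
# The entries and scores below are hypothetical.
failure_modes = [
    ("operator intervention near open vials", 9, 4, 6),
    ("HEPA filter leak",                      8, 2, 3),
    ("stopper bowl contamination",            7, 3, 5),
]

# Rank by RPN, highest first: that is where control effort goes first.
ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, s, o, d in ranked:
    print(f"RPN {s * o * d:4d}  {name}")
```

With these scores, personnel intervention tops the ranking (RPN 216), which matches the transcript's later theme that personnel are the most critical control point in aseptic processing.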
DR. BOEHLERT: Gary?
DR. HOLLENBECK: Ajaz, it does seem to me that there are two separate playgrounds here.
DR. HUSSAIN: There are.
DR. HOLLENBECK: There's the post-approval change where there is some sort of metric, you know, an approved product. I know in the SUPAC era, we were more comfortable considering risk because at least we had that buttressed by a product that had gone through the approval process.
It seems to me that as you're receiving submissions based on PAT and things, that's a different arena, and I'm hoping that some of the case studies or some of the examples that you'll bring back to us focus on approvals, as well as just post-approval changes.
DR. HUSSAIN: All the submissions we're getting are in the post-approval arena also. People are more comfortable in the post-approval world to do this. I think it will take some time before we'll see an NDA based on PAT. A long time. At least that's what Tom says.
Gary, I think that's an important point. I think you have the comfort zone of the safety and efficacy evaluation to work there. In the absence of that, I think you always have those challenges. So that was my reason for proposing that we start our discussion also on topics of development reports and so forth in the post-approval world because I think that's manageable and I think we can make progress there.
DR. GOLD: Ajaz, I'd like to make another request. If we bring someone in from one of the companies that's involved in continuous improvement, I'd like to hear something about the economic drivers that they see in following this route of continuous improvement.
DR. HUSSAIN: G.K. can give you that. No, I'm just kidding. No, I understand.
DR. RAJU: I just wanted to comment on the last three slides. The first one was kind of high level. This one, and the one before it, I think are very powerful. I would strongly support almost all the conclusions that you came up with.
First, try to bring somebody from the biotech side, because they've done this comparability work; they've had to, because of the complexity.
And second, connect with a couple of people. I think you have a large number of people who presented and were part of your PAT presentations. Now that that's a case study, it can be a beautiful case study to bring them back here. Judy and I were there before, but it wasn't necessarily shared with everybody.
The process understanding bit can nicely connect with Dave Rudd's presentation on the next slide.
One more point to bring up is at the PQRI meeting there was a lot of discussion on the prior approval and connecting CMC and review and the no prior approval, and there's a whole bunch of information that's been put together. I wonder whether they might --
DR. HUSSAIN: The prior approval inspection, the PAI.
DR. RAJU: Yes. I wonder whether some of the summaries of those meetings might come up here in some way.
DR. HUSSAIN: We actually distributed the summary, so you have a hard copy of the summary slides. We are still waiting for the summary report to come. At some point I think we will pick that up. I'm not sure at the next meeting we're ready for that.
DR. RAJU: But the "make your own SUPAC" has got so much support.
DR. SHEK: Ajaz, just a comment, something for consideration. When you have the picture of connecting the dots, there is a big chunk there that you are talking about, pre-formulation. One part to consider -- I know it's a high level -- is whether to add the API, the drug substance, where you have two processes going on. You have the development of the synthesis and the characterization of the API; at the same time, you have the development of the formulation. It's not that you do this one first and then that one; I don't think they should be that distinct.
But there is a strong influence of what's happening on the API, and as we look at the system we're trying to improve, we shouldn't forget this part. Now it's lumped together and there is something happening before, you know, pre-formulation and so on, that will affect the quality of the product at the end. As we go through the process, I believe we should keep it in mind there that that's going on.
DR. HUSSAIN: Maybe the slide reflects my pharmacy background.
Yes, I totally agree with you. The Bristol-Myers example actually will be a very good example, the case study they presented to the PAT Subcommittee, connecting the API to the drug product manufacturing. Thank you for bringing that up, because I think the challenge for BACPAC II has been that the crystallization process, the last final steps where physics starts coming in, is a great challenge. One of the professors I heard give an excellent talk on this was Allan Myerson from Illinois Institute of Technology. Maybe we can bring him in, sort of connecting that API to the drug product, because the challenge of BACPAC II is that when there are changes, particle size and so forth, we have to think about doing a biostudy. Can you manufacture that drug product and have it bioavailable? So that's a very complex scenario, and I think that fits in quite well with the comparability protocol and change scenario too.
DR. BOEHLERT: Garnet?
DR. PECK: Ajaz, you mentioned under the role of interim specifications the concept of connecting annual product reviews and reports. I had a strange feeling here. There's something within the Office of Compliance that has been very important, especially for field investigations, and it's a thing called complaint files. These are very interesting files of information. They frequently are related to a product and the resulting product and how it's been performing. It serves a number of different audiences. I'm wondering if out of that, maybe at some later date, this kind of information could be used to help the element of risk and whether we could glean something from this kind of information to aid us in this risk assessment.
DR. HUSSAIN: I'd like Joe to jump in and I'll have some thoughts too.
MR. FAMULARE: I think the reason that Ajaz focused on the annual product review is because that's a compilation of that data, complaint data, recalls, field alert reports that go into the review divisions through the district offices -- and they actually go back there through the district offices -- and drug quality reporting system type issues. So that is true. And it's an issue that I saw was brought up by Dr. Claycamp on one of his slides, how that information post-approval feeds back into the risk determination, the cycle approach. So that is a good point but I'd say annual product review is sort of a catch for much of that data, even beyond the complaints.
DR. HUSSAIN: Judy, if I may just add to that.
I think this is an important element. Any quality system needs feedback loops and connecting that loop. This is part of that. At least from my perspective, because of my TIACC, Therapeutic Inequivalence Action Coordinating Committee, I have my eyes out on it. This is a major issue because we get complaints. The program right now is focused on the generic program, but I think you want to extend that to include all products. That's our look at it. The Office of Compliance looks at the broader aspect on everything, but we look at bioinequivalence or therapeutic inequivalence issues that are reported.
But the point I would like to make is I think we need to improve the data capture methodologies and make them more useful. The data sources or the information sources that we have, and the type of data we capture, can I think be improved to make this more effective. Right now it's a very difficult task to go back and see whether we can really get to a root cause or not. It's very difficult to do that. But at some point I think we want to improve that process, and Compliance is actually doing that as a separate division right now.
DR. BOEHLERT: Do you think you've gotten enough information?
DR. HUSSAIN: I think so. What we will do is structure the next meeting focusing on some quite focused questions, and those questions will be directed toward comparability protocol, "make your own SUPAC" to get that guidance finalized. But then we'll structure the discussion with examples, case studies from companies and so forth, but also start addressing the quality, risk, and so forth within that context because I think that's a good starting point. That's where we have done some work, and that will lead to a broader discussion on risk in a broader sense at some other meeting.
DR. BOEHLERT: Thank you, Ajaz.
Now we're going to go back to the top of the agenda for this afternoon, and that's an update on the aseptic manufacturing. Joe, I think you're first.
MR. FAMULARE: Last October, the Pharmaceutical Science Advisory Committee held a meeting to discuss the concept paper, which it had issued beforehand on aseptic processing, called "Sterile Drug Products Produced by Aseptic Processing." And this was to update the 1987 aseptic processing guidance. We received a lot of useful input through the committee, as well as in subsequent interactions with PQRI's Aseptic Processing Work Group, which was formed subsequent to the advisory committee meeting. So today Glenn Wright, who chaired that subcommittee, Rick Friedman, who was one of the FDA members, and I will recount the history and objectives of the revision, with emphasis on the key role that PQRI played.
In looking at updating the original Aseptic Processing Guide: in 1978, of course, the GMP regulations, substantially as we know them today, were published, accompanied by a preamble that talked about addressing the finished dosage forms of many drugs, with many unique and critical variables associated with them, particularly those for sterile drug manufacturing.
It actually said in that preamble that we were going to do additional regulations for SVPs and LVPs, but over the passage of time, you have probably come to realize that FDA only proposed regulations in the LVP area, which were not finalized, and in lieu of those regulations on both the SVPs and the LVPs, FDA drafted the Aseptic Processing Guidance, which went out in 1987 in its final form.
The original draft of that 1987 guidance actually started around 1980 in the Division of Manufacturing and Product Quality, and most of the work of that finalized 1987 guidance reflects that time period in terms of technology, etc. But, at least in terms of the guidance route, it was put there in a sense that provided latitude. Now that a significant amount of time has passed, we've seen the need to update that GMP guidance.
In terms of the purpose for updating the guidance, we wanted to make sure we reflected the knowledge the industry and FDA had, which had evolved with respect to aseptic processing, and at least it's intended, in terms of this new guidance, to communicate FDA's latest thinking to incorporate the latest well-supported scientific principles.
Some of the information, as it exists now in the original guidance, is obsolete. Prominent new manufacturing technologies have emerged, and analytical technologies such as sterility testing equipment have seen changes. While the original guidance reflected aseptic processing policy of the early to mid-80s, there were some meaningful gaps in that guidance. By providing written guidance on certain manufacturing matters, we hope to improve our communication of that current thinking.
There was also a need to update our minimum expectations in terms of facilitating industry compliance with the GMPs, so that both industry and FDA could be on the same plane. Many industry organizations, PhRMA and PDA as examples, and other industry representatives had requested issuance of updated guidance on an expedited basis to address areas where there was significant confusion as to what the minimal GMP standards are.
We have also heard from industry that proactive communication of expectations for firms building or modifying facilities saves money over time, and there are certainly a number of GMP questions that come up that need clarification. A lot of that we heard in a general way over this past day, in terms of the 483 and other venues where that's communicated.
Many of the recurring and significant manufacturing problems we've seen hopefully can be resolved or averted through this guidance. Through improved clarity in the guidance, we would hope to reduce the incidence of time-consuming regulatory problems and their impact on both FDA's and the industry's resources. So we hope that the updated guidance will enhance our ability to meet public health goals and will make the daily interactions much better, particularly in terms of a theme we've heard pretty loudly from industry: predictability and consistency.
In the case of sterile drug products, failure to adhere to cGMPs can impact safety and efficacy, and we've recognized the high risk nature of sterile drugs. As was explained in terms of our overall risk management approach by David yesterday, one of the initial things we've done in our work planning for GMP inspections was to put sterile drug process inspections at the top of our public health risk assessment in terms of giving priority to those inspections. So they're the top priority of our inspection program right now. This guidance, we hope, helps emphasize risk-based GMP approaches in terms of actually performing aseptic processing operations. One example where we've tried to apply those risk-based approaches in the guidance is environmental monitoring.
Updating the aseptic processing guidance. In the concept paper we've acknowledged improvements that exist through more modern facility and equipment designs, automated processes, and well-conceived layouts, air locks, ergonomics, et cetera that were not conceived when the 1987 guidance was written. These new technologies, in a sense, reduce direct personnel involvement in aseptic operations and, through technologies such as the barrier/isolators that have come in today, have really reduced personnel contact with the product, which is a major source of contamination.
We are liberalizing some of the old standards where we know more about them, such as velocities and microbial air quality, as stressed there. As one specific example, as it relates to blow fill seal operations, we have a specific section which explicitly acknowledges that the class 100 particulate standards may not be able to be met in certain instances, but that microbial standards, of course, should be met. We are focusing on the effect on the product and would, of course, have to assure that the design keeps particulates away from the product, even though in this type of blow fill seal operation there may be digressions from that class 100 type of environment.
In terms of updating the aseptic processing guidance, we see advantages here that will probably be most beneficial to those firms that include increased automation and enhanced product protection in their design concepts and those that follow sound GMP operating procedures and define good metrics.
And that's kind of a theme that we've been talking about here in terms of our overall approach on GMPs. Enhancing product protection and safety through the use of automation and barrier/isolator concepts is, of course, the primary example of this. We hope that there'll be quality and business synergies here, another thing that was brought up -- you know, what's the business impact of that -- that will come together and make this a win-win for both FDA and the industry.
MR. FRIEDMAN: I'm going to talk briefly about some of the details of the revision, in terms of the mind set from a risk-based point of view, as well as a review of the contents and the format of the guidance.
Our revision of the aseptic processing document began by asking this basic GMP question: What are the potential sources of contamination in an aseptic process? First bullet: causes of contamination. In an effort to answer this question, the concept paper focuses on selected aspects of the aseptic process and facility that, if not maintained in a good state of control, can lead to the contamination of finished units of a parenteral drug.
We also asked the question, what measurements are most valuable in indicating sterility assurance? While cognizant that some factors in the manufacture of a drug are more influential than others, we acknowledge what so many before us have acknowledged: that if an aseptic processing operation does not remain in control throughout processing, contamination may occur that is unlikely to be detected by the end-product sterility test of a very small number of units. Consequently, there are a number of personnel, environmental, and mechanical variables that must be considered in order to make a reliable assessment of whether the aseptic processing operation is under control.
We also concluded that aseptic processes should be measured using scientifically sound and sufficiently representative sampling plans so that meaningful data can be used to evaluate whether a batch was produced under adequate conditions. And we felt that we should focus on monitoring those variables that can be a signal of an emerging or existing route of sterile drug contamination. In short, our concept paper addresses areas of good manufacturing practice that, if not controlled, can impact on drug safety and efficacy.
I believe many of you have read the concept paper, so I'll just use this slide to provide a brief overview of its content. We've mentioned in previous forums that when the original committee started its work, Jimmy Carter was the President of the U.S. and the original draft guideline was typed on a typewriter by Chuck Edwards, a national expert who still works with the FDA. It was eventually put into ASCII format on a computer; it had no table of contents and the headings were rather spare.
So our first task was to improve the format of the '87 guidance. The first thing we did was add a table of contents; it's hyperlinked, actually, from the contents to whatever section you want to go to electronically. More headings and subheadings, so now it is much easier to read and follow. New definitions have also been added. Among the new definitions in the current revision are air lock, colony forming unit, dynamic, endotoxin, gowning qualification, barrier, and isolator.
It is interesting to note the way the industry has changed in 15 or 20 years. There was no mention of either barrier or isolator; those words didn't appear at all in the original aseptic guidance.
We've also now included the metric system for ease of use alongside the English system numbers. Before it was in cubic feet and stuff like that. Most science, as you all know, is in the metric system these days. So we made the conversion in the aseptic guidance to metric numbers.
The old sections have been updated. For example, we are updating the sterilization section, which consists of the filtration efficacy and equipment sterilization subsections.
We also added new sections, including one addressing the role of personnel. One of the biggest criticisms of the original guidance, from many organizations, industry professionals, and PDA, was that there was inadequate guidance on personnel. Is there a more critical control point in aseptic processing than personnel?
In addition, the guidance addresses isolator technology and early processing. The latter, early processing section, addresses the upstream steps about which the biologic industry often has had questions and the Center for Biologics drafted a new annex to the guidance to address those frequent questions.
As Joe mentioned earlier, on October 22, 2002, we presented the concept paper to the advisory committee and we received a lot of helpful feedback from the advisors and expert panelists. Here are some of the major issues that we distilled from the transcript.
There was broad consensus from the industry organizations, companies, and task forces that appear before the committee that there is a pressing need for the draft guidance to be published.
The use of latitude phrases in the guidance was discussed. Are you allowing too much room for interpretation sometimes? I know that Dr. Boehlert brought up that question at the committee. The dilemma discussed at length was that the guidance could use more detail in certain places. There was a general feeling that in some cases too much latitude can mean too little guidance. While we agree that more detail is needed in some instances, there was also acknowledgement from the group that too much detail is not desirable either. We don't want this guidance to be constraining. So we are trying to strike the proper balance.
Regarding the media fills, there is consensus that enhanced guidance was needed in certain parts of that section, especially acceptance criteria, number of units to run, et cetera.
A comment also repeated at the October 22 advisory meeting a number of times was that the positive language in the guidance regarding isolators is appreciated by the industry.
The panelists also recommended that we include acknowledgement of the use of appropriate rapid test methods as alternatives to traditional culture methods, a lot of them developed in the 1880s. More sensitive, accurate, reliable methods are out there, and there was a sentiment that we should reflect FDA's open-mindedness to these new methods.
There was consensus that the term "action limits" may connote a specification in the environmental monitoring contexts being discussed. This is not the intent of environmental monitoring programs, and there was general agreement that the word "levels" should be substituted for "limits."
Finally, PQRI was recommended as the venue for more in-depth discussion of certain issues of concern. It was a five-hour advisory committee meeting, but a number of issues were identified for much more in-depth and exhaustive discussion through PQRI.
And that is where I turn it over to Glenn Wright, the chair of PQRI Aseptic Processing Working Group.
MR. WRIGHT: Good afternoon. I am Glenn Wright, Director of Global Regulatory Affairs for Eli Lilly & Company. Today I am not representing Eli Lilly. I am representing PQRI as the chairman for the Aseptic Processing Working Group.
The Aseptic Processing Working Group was approved in concept in November of 2002, and I can't be grateful enough and really commend the FDA for bringing the aseptic processing concept into PQRI.
The PQRI Aseptic Processing Working Group was really formed to provide a scientific basis for input into the FDA's concept paper on aseptic processing. The working group's activities targeted specific aseptic processing topics, so the group did not try to handle the entire concept paper. It was very selective about what it was targeting. It was comprised of members from FDA, industry, and academia.
I really have to thank the entire working group of experts. It was a very large working group, and the group was really dedicated. We were meeting every week for some very long teleconferences and flying into Washington for some meetings. As we all know, I love to travel to Washington in the winter because of its very mild climate. Well, this was the winter of exception. So the trips were interesting, and maybe we should have had them in Indianapolis; it was a much more enjoyable climate this winter.
So I really am appreciative of all the task force members. I would like to point out a few very key members. Rick Friedman, of course, was a very key member early on, helping us as we were thinking about this whole concept. Brenda Uratani was extremely helpful. From an industry standpoint, Russ Madsen, really from a PDA standpoint, was very helpful as we started to think about what something might look like for this. The last one I would like to mention is Richard Johnson, who, again, was essential as we started putting some concepts together about how a working group might address this, especially a working group of this size.
We came up with some very clear and specific goals for the working group, which I think led to its quick execution and what I will call success. Really, the key goals were to develop, execute, and compile an industry survey to poll current industry practices on aseptic processing. This had to be done extremely rapidly, and we did achieve that. As a byproduct of the working group, we're hoping to publish the findings of that survey, hopefully in the August time frame. We've got to clean it up a little bit in regards to format, make it submittable to a journal; also, we had some late surveys that came in, and we want to go ahead and incorporate that data after the cutoff date. So that was the first one.
We were also charged with developing redline clarifications for eight text areas within the concept paper. These were areas where we really felt that there was probably a baseline agreement -- that really we were talking about changes in language, some subtle changes which would make the guidance more clear, so there could be fewer issues in regards to interpretation. So there were eight areas targeted for that.
Then really the meat of the working group was to come up with information for development of recommendations on 10 specific topics, and these were much larger topics which would require much more discussion.
The challenge for all of this was we really needed to complete our activities by February 28, 2003.
So the basis for the recommendations that we made to FDA was the collective expertise of the working group. I think if you look at the 41 members, you'll find some of the best experts that we have from FDA, from industry, and from academia. Data from the survey was also used, along with scientific publications, journal articles, and other references such as the PIC and other regulatory documents.
From an actual process time line, or from an administrative time line, I think we did a very good job in completing our task. The concept was approved, again, on November 20. Just 110 days later, we issued our final report to FDA -- actually 76 days from our first meeting to our last meeting. We concluded our last meeting on March 6 and then just finalized the report. So we were one week beyond our planned completion date, and that really was due to weather. If it had not been for the snowstorms, we would have actually completed on time and on target.
Now we're going to break this up a little bit. We are going to go through three examples of the clarifications; we're not going to go through all eight. We're going to try to spend most of our time on the recommendations. Rick is going to go through the three clarifications that we're going to look at. Then I'm going to take the first five recommendations, followed by Rick taking the following five. So Rick, I will turn it over to you.
MR. FRIEDMAN: Thanks, Glenn. The first clarification regarded media fills, and the reason for the clarification was to acknowledge flexibility in study design for media fills. For example, a firm might propose to incorporate a three-shift aseptic operation into two media fills every six months by an appropriate overlapping approach or other suitable study design. Shift changes and other time-related events would be among the important factors in any such study design, and it does put more stress on the study design to make sure that those are incorporated in fewer media fills than conventionally done. But such alternate approaches are possible, and this recommendation was meant to make that clear in the guidance. And that was the slide.
The group recommended revising the document to be less specific with respect to how the suitability of an active air monitoring device is gauged. It's not like chemistry. There is some imprecision in microbiology, just as in bioassays and other allied methodologies, that is not there with chemistry. You get something like .1 percent or .5 percent precision with HPLC, and you're not even near that with microbiology methods. So the means of validating and comparing two different methodologies or devices are not going to be the same in chemistry and microbiology, and we wanted to acknowledge that approach via this recommendation. And when I say we, I'm speaking as a member of the work group.
There was concern regarding the imprecision of the term "atypical microorganism." So the work group approved the language here to reflect that the environmental monitoring program should be attentive to significant changes in microflora.
And that is basically it for the clarifications. You'll find that the recommendations were quite layered. There were a number of points that came out of each recommendation, but you could go through these fairly briskly. Recommendations will take a few more minutes.
MR. WRIGHT: Okay, for the recommendations, these slides are going to be very busy. We thought it was very important that, as we talk about the recommendations, we provide the exact language so that there can be no confusion.
The recommendations are formed around a question that the working group was asked. So for each recommendation there is a question at the very top.
The first question is, what is the appropriate number of units to be filled during a process simulation or media fill? When you boil this down to the real scientific question, it's really not that difficult to understand what we're really after.
The number of units to be filled should be sufficient to accurately simulate activities that are representative of the manufacturing process. Such activities include, but are not limited to, aseptic manipulations during setup and during production, interventions, type and appropriate number, the typical and routine interventions, as well as the atypical and the non-routine, staffing levels, staffing changes, gowning changes, multiple day fills, and this is not a complete list. A generally acceptable starting point is between 5,000 and 10,000. For batches under 5,000, the number of media fill units should equal the batch size.
Where the technology is such that the possibility of contamination is higher -- and this would be an example of manually intensive filling lines -- a larger number of units generally at or approaching the fill batch size should be considered.
So in this recommendation we're saying that the number of units, when we get to the 40,000 and 50,000 that we see in the industry today, is not the important piece. The important piece is whether you have actually designed your media fills to incorporate a number that allows you to do those interventions and all those activities you are trying to represent in that media fill. Really, a number to start with is somewhere between 5,000 and 10,000 units. You may be able to complete all of your activities within that number, or you may need to add to it. But the really important factor is the actual design of the media fill.
Recommendation 2. What is an acceptable temperature range for the incubation of media fill units using TSB and FTM? If alternative practices are used, what type of justification is required?
Again, when you get down to the principle of this question, these media are extremely well understood. We know that they are broad-spectrum media. They are great for mesophilic bacteria, the largest grouping of bacteria, as long as they are incubated within that temperature range. So incubation temperatures should be suitable for the recovery of the bioburden and environmental isolates. The incubation period should not be less than 14 days, with either one temperature or two temperatures between 25 and 35 degrees C. If two temperatures are used for incubation of the media fill units, they should be incubated for at least 7 days at each temperature.
Again, both of these are very well-known media. The incubation temperatures are well known for the types of bacteria we are going to be seeing, as well as for the fungi we are going to be seeing. So the suggested incubation range meets that requirement.
The incubation temperature should be maintained within plus or minus 2.5 degrees C of the target temperature, and at no time be below 20 degrees C or above 35 degrees C. So again, we really looked at the basic science of this and the question we were trying to answer, and we came up with the recommendation based on good science.
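As a hypothetical illustration (not part of the PQRI report, and the function name is my own), the stated bounds reduce to a simple two-part check: a reading must be within 2.5 degrees C of the target, and must never stray outside the 20 to 35 degrees C hard limits.

```python
def incubation_temp_ok(reading_c: float, target_c: float) -> bool:
    """Check one temperature reading against the recommended bounds:
    within +/- 2.5 degrees C of target, and never below 20 C or above 35 C."""
    within_tolerance = abs(reading_c - target_c) <= 2.5
    within_hard_limits = 20.0 <= reading_c <= 35.0
    return within_tolerance and within_hard_limits

# A 22.5 C reading against a 25 C target passes; a 19.0 C reading fails
# the hard lower limit even when it is within tolerance of a 21 C target.
```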
Recommendation number 3. What is an appropriate limit for the contamination rate in a process simulation media fill? What is an appropriate target for contaminated units in a process simulation media fill?
I think everybody in industry will agree that the target is zero. That is really what we are targeting when we do a media fill. Any contaminated unit indicates a potential sterility assurance problem. All contaminated units should result in a thorough, documented investigation.
Now, as we went through the discussions with the group on this, it was amazing how quickly the group realized that statistics were very difficult to apply. I think the best example is this: if we were to use a .02 percent contamination rate and you have a media fill of 40,000, does that mean you are guaranteed the right to have 8 positives in your media fill? As you apply that statistic to large media fills, it really becomes a question of whether you have met what you are trying to meet.
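The arithmetic behind that concern is easy to reproduce. A fixed percentage limit scales with batch size, so the number of positives it implicitly tolerates grows with the fill (the function name here is illustrative, not from the report):

```python
def implied_allowed_positives(fill_size: int, rate_percent: float) -> int:
    """Number of contaminated units a percentage-based limit would tolerate."""
    return int(fill_size * rate_percent / 100)

# The example raised in the discussion: a .02 percent rate applied to a
# 40,000-unit media fill tolerates 8 positives.
print(implied_allowed_positives(40_000, 0.02))  # 8
# The historical .1 percent rate at a 3,000-unit fill tolerates 3.
print(implied_allowed_positives(3_000, 0.1))    # 3
```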
So as the group worked through this, it became clear that we really needed to look at the target being zero. But in aseptic processing, that is the target. It's not the achievable number in all cases.
So we recommended that acceptance criteria should be established for media fills. When filling less than 5,000 units, no contaminated units should be detected. When filling from 5,000 to 10,000 units, 1 contaminated unit requires an investigation and a determination of whether any further action is needed, such as a repeat of the media fill, and 2 contaminated units are considered cause for revalidation following investigation. When filling more than 10,000 units, 1 contaminated unit requires an investigation, and 2 contaminated units are considered cause for revalidation following investigation. The concept behind the two tiers is that as you fill more units, you do have a greater chance of picking up that one stray positive.
Then recurring incidents of contaminated units in media fills for an individual line, regardless of the set acceptance criteria, should be a signal that action should be taken. So it would not be acceptable to have a repeat of 1, 1, 1, 1, 1, 1 in your media fills. Really you should see that only sporadically in your media fill processes.
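A rough sketch of the tiered criteria as stated above (the function name and return labels are my own; the report gives only the thresholds, and for batches under 5,000 it says simply that no contaminated units should be detected, which I treat here as cause for revalidation):

```python
def media_fill_action(units_filled: int, contaminated: int) -> str:
    """Map a media fill result to the action implied by the tiered criteria."""
    if contaminated == 0:
        return "pass"
    if units_filled < 5000:
        # No contaminated units should be detected at this scale.
        return "revalidate"
    # Both tiers at 5,000 units and above share the same numeric triggers:
    if contaminated == 1:
        return "investigate"  # plus a determination of any further action
    return "revalidate"       # 2 or more contaminated units, after investigation
```

Note that, consistent with the working group's reasoning, the triggers do not scale up with batch size: a 40,000-unit fill gets no more latitude than a 12,000-unit fill.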
Recommendation 4. When should critical surfaces be monitored, and what are appropriate expectations with regard to results obtained?
From a scientific standpoint, I think we would all agree that monitoring of critical surfaces can be scientifically valuable. It provides a good stream of data to look at. The challenge we have is the processes we use to actually obtain those samples. So it is well understood that the sampling and incubation methods used in surface monitoring are manual operations that, due to personnel involvement, result in a low rate of false positives. And for this reason the detection of microorganisms on a critical site should not necessarily result in batch rejection, but should be investigated.
The other EM data and procedures that support the operation should be reviewed to determine if the positive result is supported. If the review does not support the positive result and there is no negative trend for the critical surface site, there is a strong case for not rejecting the lot due to a positive result.
And this is extremely important. Unlike sterility testing, which has built-in controls and is done in a very controlled environment, surface monitoring has operators taking RODAC plates, for the non-microbiologists in the group, working their way around to stick the plate on a surface to take the sample, then taking it back to the lab and putting it into an incubator. There is a chance for a low rate of false positives. So while the data gleaned from the exercise can be very valuable, you do have to weigh the false positive rate, or we end up with a de facto sterility test, which is not valuable to the industry or to the regulators.
The second part of the recommendation is that the selection of sample sites should be strategic in an environmental monitoring program. This should include consideration as to when or if a critical site should be monitored.
What we're saying there is that you really need to think before you set up your program, what you're trying to get out of that program, what sites you should be monitoring, what the risk is to that before you actually go in and do those, so you know what kind of data you will have, and how you're going to apply it.
The next part of the recommendation is each manufacturer should review each type of process and the points of risk for product contamination. Consideration should be given to the level of contamination risk based on factors such as difficulty of set up, length of processing time, and impact of interventions. Again, you really need to think about how you are going to select the sites.
PQRI strongly supported the concept discussed on line 993 of the concept paper that, when performed, critical surface sampling should be performed at the conclusion of the aseptic processing operation to avoid direct contact with sterile surfaces during processing. There seemed to be some miscommunication with regard to how folks go about doing this type of monitoring. It does have a negative impact on your line, and so it should always be done only at the conclusion of the aseptic processing operation.
Recommendation number 5. What data should be considered when initially establishing monitoring limits? What is an appropriate frequency for re-evaluating monitoring limits?
Initially published data and/or historical data from similar operations should be used to set action and alert levels. Historical data may be derived from areas of similar aseptic operations or represent a homogenization of company monitoring levels by room class, across lines and facilities.
For aseptic areas where the allowable levels are less than 1 cfu, consideration should be given to the use of count incidence rates as an indicator of an unfavorable trend.
And alert and action levels are generally re-evaluated and reset, if deemed necessary, on an annual basis using primarily the previous year's data for setting monitoring levels for an upcoming year. Published data should be considered when re-evaluating the action level.
So those are the first five recommendations and a little bit of insight into how the group achieved those. And I'll turn it over to Rick for the last five.
MR. FRIEDMAN: Recommendation number 6 addresses Table 1 of the concept paper and that table summarizes clean room air classifications. The working group agreed that ISO designations should be incorporated into the document and that all expressions of microbes per unit of air volume should use metric units, the way the EU does.
Also recommended was replacing the word "limits" with "levels," which echoes what we heard at last October's advisory committee meeting. So there was a consensus between the advisory committee and the PQRI work group that "levels" was a better term than "limits."
Settle plates were added to Table 1. They were not previously discussed in terms of numerical expectations in the 1997 aseptic guidance. But the settle plates were added to Table 1 in order to align the table with that found in EU Sterile Annex 1. And the significance of environmental monitoring trends, the last bullet, was stressed over that of individual data point excursions.
Here is the chart. If it looks strangely familiar, there is a very good reason for that. As I have indicated, the work group achieved consensus on a table that harmonizes the microbial expectations with the EU and incorporates the ISO particulate air cleanliness classifications. That's quite an accomplishment.
Recommendation number 7 addresses the issue of what type of air flow is acceptable in a closed isolator. The working group concluded that while unidirectional flow can often be appropriate for open isolator designs, closed isolators can normally be operated reliably under turbulent air conditions. Also, further explanation of the distinctions between an open and a closed isolator was recommended, perhaps by including definitions in the aseptic guidance, in the glossary.
Recommendation number 8. What's the appropriate recommendation for air handling systems in isolators?
The group felt that there was not a need to specify type or configuration of filters used in isolator air handling systems. Filters are already discussed earlier in the concept paper and the consensus was that the air handling system needs to be appropriately designed to maintain required environmental conditions in the isolator interior, so there is not a need to specify HEPA, ULPA, a membrane filter, or whatever.
Recommendation number 9 covers a number of isolator decontamination issues. Firstly, the group notes that isolators should be decontaminated using a sporicidal agent, and this process should be qualified.
The group also recommends that a 4- to 6-log reduction of a suitable BI, biological indicator, can normally be justified depending on the application, and product contact surfaces should be rendered sterile. A 6-log reduction was specified for those surfaces.
The group also concluded that while chemical indicators and fraction negative studies can be used to help develop a decontamination cycle, demonstration of suitable kill of BIs is the ultimate standard.
There is agreement that uniform distribution of the decontaminating agent should be optimized and addressed as part of cycle development work, very much in line with what we've heard about leveraging your understanding of processes as much as possible at the development stage.
The group endorsed the language found in the concept paper with respect to the degree of relevance of fraction negative approaches for decontamination methods. Essentially the concept paper states that fraction negative type approaches are useful in cycle development, in estimating what the cycle parameters might be. But the ultimate test is more in the total kill analysis type of approach.
The group endorsed the language found in the concept paper with respect to material effect, except that it wanted more stress to be put on texture and porosity rather than composition. There have been a couple of papers in the PDA Journal on this topic. One came out right toward the end of our proceedings on recommendation number 9, the latest one by Sigwarth and Stark, I think, and it indicates that there is a material effect. Yet it also replicates past experiences, I think by Dr. Akers, where porosity, texture, or organic or inorganic material effects on D-values also confound the issue sometimes. That means that you have to prepare your BIs right, firstly, and secondly it means looking at the materials for material effect, hopefully looking at that comprehensively during development and then lessening the validation burden.
Recommendation number 10 is our last. The group's final recommendation regards the fundamental sterile drug process development choice of terminally sterilizing a drug in its final container or aseptically manufacturing the drug.
The working group concluded that a clarification on adjunct processing should be made in the aseptic guidance and that no further detail was needed on process development choices in this guidance.
Instead, the group strongly felt that the question posed here, what's the most science-based and risk-based flow chart for process development of a sterilization process, should be explored and addressed via formation of a new work group within PQRI or another organization.
The PQRI final report states that "since terminal sterilization is far better understood, a firm should not default automatically to aseptic processing, but should explore terminal sterilization during product development." That was also concurred with by 86 percent of the respondents to our poll that we sent out to the industry. 86 percent of respondents agreed that a firm should not automatically default to aseptic processing, but do some sort of flow chart that explores terminal and/or adjunct processes before going to an aseptic process, or choosing an aseptic process.
And it's back to Glenn for the summary of the PQRI effort.
MR. WRIGHT: Let me add a little bit more onto recommendation number 10 because it is easy to get confused by the term "adjunct processing." Adjunct processing really looks at the ways that you might treat an aseptically filled product, after it has been aseptically filled, to increase its sterility assurance level. And the PQRI group really found this to be an interesting concept: what kinds of things could you do post-aseptic filling to increase your sterility assurance? As we got into this conversation, a lot of ideas came up, such as pulsed light, heating, partial irradiation, lots of really distant ideas.
I think what the PQRI group stated was that we thought it was interesting. We're not at a point in time where we really feel we can give much guidance on that, because there needs to be a lot of development work completed. So we would recommend the formation of a group to look at what that might look like. Some of the challenges are things such as what type of indicators you would use for a sub-sterilization adjunct processing step. You certainly cannot use the normal Bacillus type of organism you would use for a sterilization. So when you think about all of the things that would need to come into play, and what the regulatory expectations would be, it really spurred a great amount of excitement within the group as far as really reaching into their science minds and saying what's possible.
So it is an area where I think we would recommend further work to be done, and at some later point it might be something you, or the FDA, would want to include in guidance. But today it's just not at a point where that would be appropriate.
So I'm going to summarize the PQRI working group quickly. The Aseptic Processing Working Group has completed the activities as specified in the work plan. The PQRI process entails an approved work plan; we've now completed that activity, so the group's work is complete. The final report is available on the PQRI web site at www.pqri.org. You can also find a copy of the work plan and the final reports together on that web site.
The principal reason for the success of the working group was the expertise of its members and the strong work plan. I can't emphasize enough the expertise of the members. When you look at the member list, it's readily apparent that there was no come-up-to-speed learning curve with these individuals. They are well established in the industry, many in the academic world and FDA, with an understanding of aseptic processing. It really led to some very good, very interesting, and very thought-provoking discussions.
The PQRI process clearly demonstrates that when we bring together true experts and base our decisions in science, we can work together to develop guidance that is good for the regulators and the industry and the consumers.
The one final thought or comment I have is that I want to make sure industry understands its responsibility. In this process I really do feel we were lucky to have the concept paper come out, to have an initial reaction to some of the FDA's concepts around aseptic processing. As the FDA moves into the draft guidance, industry has yet another chance to comment through the actual docket. And it's really up to industry to make sure that if they have issues with the guidance, they comment on it. The way to develop good guidance is through good communication. So I would highly recommend that industry comment on the draft guidance once it's issued.
And with that I will close my presentation and turn it over to Joe for the final slide.
MR. FAMULARE: There's one last slide, if you'll put it up there in terms of the status of the guidance revision.
Before I get into that slide, I just wanted to add that one of the main successes of the group was the chair, who really kept the group very much on task and focused, and you can imagine, with a group that size, the amount and divergence of opinion. But Glenn went through that seamlessly, and now his office probably bears the post office motto: neither rain nor sleet has prevented him from his appointed task. I can't remember the middle part.
MR. FAMULARE: In terms of the concept paper, that still remains up on our web site. We've actually been through the first three steps here. We had the advisory committee meeting, which gave us very valuable input, and now the PQRI group's efforts have been described in detail. The data that was brought into this has really helped us in terms of being able to take that data back and formulate what, as Glenn said, will be the draft guidance.
We're now at the step of taking our reaction to that concept paper and putting it through the regulatory and legal review process we need to go through to get a draft guidance published. Then we will publish the draft guidance for public comment, as Glenn says. We certainly have a tough pace in keeping up with the aggressive time frame that PQRI came through on, and we're going to try to hold up our end and get that out as soon as we can. Anytime I give a date like that, I always have to retract it, but we hope, indeed, to get that out this summer. So that gives me a three-month leeway there. Hopefully, I don't have to call October a summer month or something.
MR. FAMULARE: We are definitely pointed towards getting this out as quickly as possible in the spirit that PQRI did a job in a very intensive, quick turnaround.
DR. BOEHLERT: Thank you, gentlemen.
Are there any questions or comments from members of the committee?
DR. GOLD: I have a few comments and questions. I too add my kudos to the committee and to Glenn. What Joe did not say was that not only was the committee composed of 40 individuals, but many of those 40 are very strong in their positions, and bringing peace to this diverse group obviously required a very strong and firm hand. So I do congratulate you.
MR. WRIGHT: Thanks.
DR. GOLD: But Glenn and the others here, there are one or two points that I would like to clear up. Your recommendations went a long way to clear up many of the really troublesome issues, but there's one that was not covered in the recommendations, or at least I didn't see it covered. There's been a question raised about whether an isolator needs to be placed in a controlled environment. Did your committee discuss that, and if so, what was the conclusion?
MR. WRIGHT: We didn't discuss that. It wasn't part of the formalized work plan, and the challenge really was to stay as close to the approved work plan as we could. I think it's a very good question, but unfortunately there was just not enough time to add any topics and it was not in the approved work plan, so we did not get into that topic.
DR. GOLD: So we may see the statement that was in the original document on that matter.
MR. FAMULARE: Well, you'll have to --
DR. GOLD: I'll have to wait.
MR. FAMULARE: -- realize also that this is a summary of PQRI primarily, and Rick did have one slide about advisory committee comments as well, which is also a summary. But there was comment at the advisory committee about that, and so that comment has been taken in. We just don't have the results of all that published for you yet.
DR. GOLD: All right. Did you want to add something, Rick?
MR. FRIEDMAN: No. I think Joe just basically said what I was going to say.
DR. GOLD: I have another question. Recommendation number 3 on the slides talks about when filling from 5,000 to 10,000 units, 2 contaminated units are considered cause for revalidation. And then when you go beyond 10,000 -- and you were talking about doing as many as 40,000 and there are firms that are doing a great many units I know -- it says the same thing. Two contaminated units are considered cause for revalidation following investigation. So the recommendation of the committee is that once you get up to a number above 5,000, 2 is the failure rate that requires investigation?
MR. WRIGHT: That's what the group concluded. That's correct.
DR. GOLD: What was the rationale for that? If you fill 10,000, that's quite different than if you fill 30,000 or 40,000. What would be the rationale for that?
MR. WRIGHT: Good question. There were really a couple of things we looked at. First, we went back historically and really asked the question, how did we ever get to .1 percent? Where did that number come from? How did it evolve? What we've seen is that the initial setting of that number came out of the fact that firms were filling about 3,000 units, and when the WHO came out with their recommendation, it was not more than .3 percent. Companies were filling a small number of media fill units, and they were looking at not more than 1. That's really what they were looking at. They didn't want to see more than 1 out of those small fills.
As we went through time, we started using this percentage and we got to .1 percent. Firms were filling about 3,000 units, and we were talking about a 95 percent confidence level which really puts you in that, again, 1 category.
As time has evolved, that number was extrapolated. I can't imagine that the idea was ever that you would be allowed a large number of failing units based purely on statistics. So that's one of the rationales as we looked at this.
The other one really is that limits based on statistical calculations are flawed when we look at aseptic processing. In part this is because it is not appropriate to apply the statistics of large-scale populations to smaller ones, and a statistical approach makes the faulty assumption that the distribution and frequency of potentially contaminated units are the same in these populations. Statistically derived contamination rates are, therefore, not appropriate for setting acceptance criteria for the process simulation.
So, again, with the target being 0, which is where we really want it targeted, the idea that as you fill more units, you should be allowed more positives starts to fall apart. You really are trying to target 0. The idea that you're going to have an occasional 1 positive because this is aseptic processing is understood, but when you get above that, there's certainly concern with regard to the processes being performed.
MR. FRIEDMAN: I could also add to that. One of the reasons why PQRI ended up being such an ideal venue, I think I could say, for addressing these very intricate, technical issues was because PQRI is data driven. The first stop that PQRI made was at the data, and then researching the journals and using the collective experience, which was tremendous, of the 41 working group members. And starting at the data, we found that there were 606 media fills that we got back from industry, 606 run in the last year, and 54 of the 606 runs had contamination, meaning that 552, to be exact, had no contamination. So 91 percent of the media fills in the last year were not contaminated. 66 percent of the 54 that were contaminated had one contaminant, so two-thirds had only a single positive. 6 percent had two contaminants, three contaminants were found in 7 percent, and so on.
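The percentages quoted can be checked directly from the counts given (this is just my arithmetic on the figures stated, not data from the report beyond what is quoted):

```python
total_runs = 606
contaminated_runs = 54

clean_runs = total_runs - contaminated_runs          # 552 runs with no contamination
clean_pct = round(100 * clean_runs / total_runs)     # rounds to 91 percent

# Two-thirds of the contaminated runs had a single positive:
single_positive_runs = round(0.66 * contaminated_runs)  # roughly 36 of the 54

print(clean_runs, clean_pct)  # 552 91
```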
So the data was a very important cog in this process because we were struggling with this issue. I've mentioned a number of times to people -- and I think it really is a tribute to Glenn that this was such a success from managing the process, as well as from bringing everybody's technical opinions in. But in my eight years at CDER -- I was in the field previously -- I would go to conferences each year and we would hear the same questions over and over, and we'd leave the conferences with a lot of food for thought but without ever reaching resolution on these pressing major technical issues. What was done here was we used the data, the journal papers, and the collective experience of the foremost experts in the industry from I believe 10 organizations, including USP also, to come to a consensus on this issue.
DR. GOLD: Rick, on those media fill runs that you quoted, what was the size of the runs? How many were over 10,000?
MR. FRIEDMAN: We actually did a lot of data crunching. Glenn did a lot of Excel work.
MR. WRIGHT: But I actually do not know that offhand.
MR. FRIEDMAN: We could share it with you but it's in a big database.
DR. GOLD: Can you share it with the committee or you're not ready to do that?
MR. WRIGHT: I did not bring all of our data reports, so today I'm not able to share it because I don't have it. But we certainly can get that.
DR. GOLD: Two other comments. One comment is that I'm glad you finally resolved the issue of fallout plates with the EU. This has been a contentious issue for a long time, so I finally will not hear the arguments about that going back and forth in the future. That's good to know.
But the last point I would like to make is that when I read the concept paper, I noticed that there were many areas where specific numbers, not mere suggestions but almost indications of what should be done (for example, flow rates of air to achieve unidirectional air flow), were taken out. Now, for first world countries, that's fine, but this guidance is going to be used worldwide. And I wonder how we can deal with this and make those numbers known to areas of the world where they don't have that type of expertise. There is always a tradeoff between getting too specific and not being sufficiently specific.
MR. FRIEDMAN: Effecting a balance between specificity and general principles to allow latitude is one of the most difficult things I think in CMC guidance, in GMP guidance, in anything that we write at FDA and in the technical literature that's written by the organizations, though they have more of a chance to be specific than we do. So it has been a struggle at times to try to figure it out because you could find people at absolute extremes of this debate, and it is a timeless debate. That debate will never go away.
So, what we've tried to do -- I'll just mention one more thing and then I'll let my boss address this in a more lasting way because mine is just a technical opinion on this.
What we're doing is trying to find a way to include numbers like that, but not in a stifling way: mentioning a figure such as 90 FPM, perhaps as a footnote or something like that. Those are the types of things we're considering.
DR. GOLD: You could mention it in a footnote or you could mention it even in the text and just indicate as one possible way of achieving unidirectional air flow is to use a number such as. There are a lot of ways of doing this. Yes.
MR. FRIEDMAN: And mom and pop shop, small drug companies too.
DR. GOLD: Well, it's mostly for third world I think that we --
MR. FAMULARE: Dan, that's what I wanted to address here with you. Really in the context of Q7, I think you had that very much in mind in terms of the work group there, but in terms of the aseptic guide, it's generally directed towards U.S. companies and those that ship to the U.S. So that audience isn't in mind, and that doesn't mean that there shouldn't be a venue to try and address those issues for those countries that may not be as knowledgeable in that, and there are additional ways and venues to do that.
DR. GOLD: Joe, I can assure you that your guidances are used worldwide, regardless of whether firms ship to the U.S.
MR. FAMULARE: That will be beneficial, but putting those types of limits in for U.S. firms and firms that ship to the U.S. would probably run counter to the overall purpose, and we need to seek other venues to get that guidance out. Just getting this out will probably set a lot of paradigms that aren't available right now.
MR. WRIGHT: Let me add one more comment to the question on the limit for contaminated vials. I think one thing that's important to remember about the survey is the survey was a voluntary response survey. So while we think we've got a pretty good diversity of responses, we can never be absolutely assured.
The other thing I really enjoyed about this PQRI process and the concept paper is that there is, again, one more round that this guidance will go through. So as firms begin to look at that acceptance criteria and struggle with themselves whether that is acceptable or not, they again will be able to come back and comment to the docket. So we probably will go through this discussion point again as questions start coming in, or the FDA will go through this again as comments start coming in to the docket on the draft guidance. So we need to keep in mind that there will be one more round for the industry to comment, and I really am hoping that the industry will comment on the parts of the guidance that they are having challenges with so that in the end the guidance will be as strong as possible.
DR. BOEHLERT: G.K., did you have a comment?
DR. RAJU: Sure. Two classes of comments. One is on the guideline itself. Just like Dan, sitting here on the committee, I think we have to say well done and congratulations, because many of the contributors are volunteers who got together across organizations to do it. Clearly you did it quite quickly, you got a lot of people together, and you did it quite well, based on the answers you were giving us. So that's a thought on the guidance itself.
But if you're now going to try to connect it to the broader cGMPs for the 21st century, I guess I have a set of comments first for Joe and then for Ajaz.
If you look at the cGMPs for the 21st century and ask people in the industry who are part of that initiative, given that you aren't touching 210 and 211 in the C.F.R. for now and the regulatory process is, to some extent, about the guidances, they'll say that their two least favorites are the old versions of C.F.R. Part 11 and the aseptic processing guidelines, and their favorite is the SUPAC guidelines, which they want more of. So those are the two ends of the spectrum that you hear.
What the FDA has been able to do is really look at C.F.R. Part 11 and make some major clarifications. Maybe you took us from the 19th or 20th century to the beginning of the 21st century, and that was inspiring.
In terms of the SUPAC, you laid the foundation to make "change is good" rather than "change is bad" and take us to the 21st century.
But if you look at the aseptic processing guideline, you made a big start forward. In many ways, if you look at the basic sterility testing, it's from the 19th century. In many ways if you go back to the fortunate and maybe unfortunate time when Fleming had a cold and sneezed into a petri dish, the good news is that we got penicillin as a result, but the bad news is that most of us have been testing with our senses being pretty much the eye and pretty much being about whether a cell can grow based on what Fleming did many years ago in this petri dish.
You've clarified the guidelines and brought them forward from the 19th century to the 20th century. Now let's go back to the questions about mechanistic understanding. I like the fact that you brought in the isolator piece, and that you went from the typewritten version with no table of contents to a table of contents and a structure clearly in the 20th century. You've laid out the isolator, which is a technology for building sterility in. I understand that. And you put in a note saying you were going to encourage new technologies to measure sterility. So that's where we're trying to overcome Fleming here.
There are a number of technologies which we believe can give a mechanistic view, just as we described our desired state of sterility. That is, not necessarily waiting 14 days for something to grow, but being able to measure it immediately, because some things are general.
As you take this guideline forward, because biology is more unpredictable than chemistry and physics sometimes, should this guideline wait? And maybe as I'm asking the question, what is the next step with aseptic processing, you may have a huge step forward. How does it get integrated into the 21st century? Does it wait until we finish the physics and chemistry and then the biology comes later?
And then the question for Ajaz is, do you see this stopping here as kind of aseptic or do you see a connectivity back with all the things that we were talking about? Because you said PAT was the benchmark and the example. It seems like this might be another way to bring it in.
MR. FAMULARE: Well, to start off, I'd say we do see that we're just at the beginning stages of getting to the 20th century, and admittedly a lot of what we're putting in here is catch-up to close the gap on things we haven't addressed going back to 1987. As we look ahead, we need to keep the thinking on these ideas current, revisiting them more frequently and with greater intensity as new technology comes in.
So we agree with that concept and we agree that we need to be putting into place those guidances as necessary that address emerging technology or be flexible enough with the guidance -- that's back to the previous question -- that those things will just come along.
I think many of the issues we've been dealing with, in terms of the actual cases that come to the Office of Compliance -- you're asking whether we have the path forward -- involve 20-year-old technologies still being used to make sterile products today. So improving the current state as much as we can is a major leap forward, especially since these include many therapeutically necessary products, and every time we have a compliance issue with a sterile process product, we generally associate it with medical shortages, supply problems, and pathways to make sure that the product is still being manufactured, with additional monitoring to move things forward. So this is a bigger leap than you may think.
But I do agree. We have to keep thinking forward as to the next steps in line with the cGMP for the 21st century, giving rewards where we can where you're bringing the better technology. This is really just the first step. So we have to keep the momentum going now. In fact, PQRI is busily thinking of the next subtopics to take on.
DR. HUSSAIN: G.K., I think that's a very good question. In many ways we are catching up not only in this area, but even I would look at stability testing and we had to really catch up on that. We have a guidance 12 years and running, and it's still in draft form. So I think there are many aspects.
But in the case of microbiology, in terms of the PAT discussion, we devoted a significant portion of our third PAT meeting to rapid microbial methods. I'm happy to share with you that we are moving in submissions in that area. So that has already occurred and is occurring in rapid ways. In fact, we are getting ready to put some training programs in that area, working with Joe and others, to move forward very quickly in that area also.
So the guidances shouldn't be looked upon as waiting for any technology. I think Joe is right. The guidance is flexible enough to make new technology come through without, quote/unquote, perceived or real regulatory hurdles. So that's the process.
DR. RAJU: Similar to what Joe said and Dan said and everybody said yesterday and today, I think this C.F.R. Part 11 case and this aseptic processing case -- really the fact that you made so much progress -- give a lot of credibility to the cGMP initiative. We have no reason to say that; we're not from the FDA. It's really, I think, very impressive.
DR. BOEHLERT: Pat?
DR. DeLUCA: Yes. There's certainly a need for science and research in this area. But I agree too, it's been a long time since 1987. It's hard to believe it's 15 years since we drafted the guidance; actually, it started in 1980. There's a need to bring it in line with what the technology has proved, and what we've learned has improved enough. And they have the data to show what can be met. So I think this is a great step forward.
I'd like to ask a question on recommendation 4. This dealt with the critical surfaces to be monitored. I don't see in here a requirement for a drawing, a layout of the locations where the monitoring would be done.
MR. WRIGHT: Yes. I can comment on that. The recommendations are really meant to be used and incorporated by the FDA as they see fit. There's a realization that there may be more surrounding that. The real question we were working to answer again was what do you do with the data and should you be monitoring these surfaces. I think the realization that you would want to have a map of where those critical processes are -- I think that certainly would be an expectation, but the working group did not get into that detailed portion of it. They really were working on the question of when should we and, again, how should we look at that data. I think it's a very good question.
DR. DeLUCA: I just thought the map would help in constructing a history.
DR. BOEHLERT: Tom, you had a comment?
DR. LAYLOFF: Yes. I was going to say I think it was really an outstanding job of pulling together the industry and the experts to define what is pragmatically reasonable in the current environment. I think that it's important that we keep our eye on that, rather than trying to force the industry to move to what is technically feasible. Certainly the rapid microbial testing, we heard a lot of the advantages and disadvantages of it, but this is the practice of the art, the good practice of the art at this time, and I think it's wonderful it came together that way.
DR. BOEHLERT: Any other comments, questions?
DR. BOEHLERT: Ajaz? I think we're reaching the end of our meeting. Helen was to do a summary and conclusions, but she had another commitment. Yes, Tom will do it since he took her seat.
DR. BOEHLERT: But Ajaz has volunteered to play that role or was volunteered to play that role.
DR. HUSSAIN: I think this has been a very good start to this committee. I think the two days of discussion have not -- although we presented this information before in other places, it really helped me through your discussions to really focus in on a number of issues. I was very pleased to see the level of participation and involvement of the committee members. So I think both Helen and I discussed this and we were quite pleased with the level of participation.
As we move forward, I think the key aspect would be to keep the focus on topics and the scope of the topics in such a way that we can start making progress. I think it will be nice to see if we can repeat the success of the PAT Subcommittee in terms of getting clearly defined goals and objectives and laying the whole program out and coming to consensus and moving forward very quickly. It is important to do that because we have a time line with respect to the drug quality system for the 21st century initiative. We have a two-year time frame and I think we are almost at the midpoint of that. This committee's activities would really need to be at a very high level of efficiency to make sure the input is captured as we finalize our plans and strategic plan for this initiative.
So I really thank all of you, and we will take all your recommendations and plan for the next meeting in a way hopefully you will be excited and we'll get more information out.
Joe, do you want to say something?
MR. FAMULARE: I could just quickly second Ajaz's comments that the group was very interactive and helpful on having us focus our ideas. We are certainly, in a way, pressed for time to make sure we get to the point of what we want to study further in depth. This process has been very helpful to us in trying to narrow that down. Just from this meeting, I can see that the future meetings will be very productive in giving us feedback on how to proceed.
DR. BOEHLERT: I'd just like to add my thanks to all of the speakers who presented the last few days. I think it has helped us as committee members to understand the issues.
I thank my committee members for their input. I look forward to working with you in the future and really appreciate the open and candid discussions we've had. So thank you.
DR. GOLD: Madam Chairman, we have, I believe, a tentative date in September, one day. Is that to be a one-day meeting and is that date firm so I can get it on my calendar?
DR. HUSSAIN: No. We felt, I think, we wanted to grasp exactly how we want to structure the next meeting. I think the tentative date is September 17th, if I'm not mistaken.
DR. GOLD: That is the date.
DR. HUSSAIN: What we will do is soon confirm that on e-mail to you guys, whether it's a one-day or possibly two-day meeting.
DR. GOLD: Will you be able to do that within a few weeks at most?
DR. HUSSAIN: Yes, that's the plan.
DR. BOEHLERT: If there is no further discussion, thank you and have good travel, whatever your final destination may be.
(Whereupon, at 2:47 p.m., the subcommittee was adjourned.)