8:31 a.m.

Wednesday, November 28, 2001

  Conference Room

5630 Fishers Lane

Food and Drug Administration

Rockville, Maryland 20857



VINCENT H.L. LEE, PH.D., Acting Chair

Department of Pharmaceutical Sciences

School of Pharmacy

University of Southern California

1985 Zonal Avenue

Los Angeles, California 90033

NANCY CHAMBERLIN, PHARM.D., Executive Secretary

Advisors and Consultants Staff

Center for Drug Evaluation and Research

Food and Drug Administration (HFD-21)

5600 Fishers Lane

Rockville, Maryland 20857

GLORIA L. ANDERSON, PH.D., Consumer Representative

Fuller E. Callaway Professor of Chemistry

Morris Brown College

643 Martin Luther King Jr. Drive, N.W.

Atlanta, Georgia 30314-4140


University of Puerto Rico

School of Pharmacy

4th Floor, Office 416

P.O. Box 365067

San Juan, Puerto Rico 00935-5067


President, Boehlert Associates, Inc.

102 Oak Avenue

Park Ridge, New Jersey 07656-1325


Professor Emeritus of Pharmacology and

Toxicology and Therapeutics

University of Kansas Medical Center

3901 Rainbow Boulevard

Kansas City, Kansas 66160-7471


Professor of Pharmaceutics

Department of Pharmaceutics

School of Pharmacy

State University of New York at Buffalo

Buffalo, New York 14260

ATTENDEES (Continued)



Chair and Professor

Department of Pharmaceutical Sciences

Nesbitt School of Pharmacy

Wilkes University

176 Franklin Avenue

Wilkes-Barre, Pennsylvania 18766


Professor, Chair and Associate Dean

for Research and Graduate Programs

Department of Pharmaceutical Science

University of Tennessee

847 Union Avenue, Room 5

Memphis, Tennessee 38163

NAIR RODRIGUEZ-HORNEDO, PH.D. (participating by telephone)

Associate Professor of Pharmaceutical Sciences

College of Pharmacy

The University of Michigan

Ann Arbor, Michigan 48109


Department of Pharmaceutics

School of Pharmacy

Medical College of Virginia Campus

Virginia Commonwealth University

Box 980533, MCV Station

Room 450B, R.B. Smith Building

410 North 12th Street

Richmond, Virginia 23298-0533




Charles B. Jordan Professor

Head, Department of Industrial & Physical Pharmacy

Purdue University

1336 Robert E. Heine Pharmacy Building

West Lafayette, Indiana 47907



PATRICK P. DeLUCA, PH.D. (participating by telephone)

Professor, Faculty of Pharmaceutical Science

401 College of Pharmacy

University of Kentucky

907 Rose Street

Lexington, Kentucky 40536-0082


Professor, Department of Neurological Surgery

University of California, San Francisco

350 Parnassus Street, Room 805, Box 0372

San Francisco, California 94143


5 Thomas Court

Granite City, Illinois 62040-5273


Associate Professor of Biometry

University of Texas School of Public Health

University of Texas

1200 Herman Pressler Street

Suite E815

Houston, Texas 77030


Professor of Applied Pharmaceutical Sciences

University of Rhode Island

Kingston, Rhode Island 02881-0809




Purepac Pharmaceutical Company

200 Elmora Avenue

Elizabeth, New Jersey 07207


Pfizer, Inc.

Eastern Point Road


Groton, Connecticut 06340




Vice President, Biopharmaceutics

Eon Labs Manufacturing, Inc.

227-15 North Conduit Avenue

Laurelton, New York 11413


Divisional Vice President

Pharmaceutical and Analytical Research and Development

Abbott Laboratories

Dept. 04R-1-NCA4-4

1401 Sheridan Road

North Chicago, Illinois










Umetrics, Inc.

17 Kiel Avenue

Kinnelon, New Jersey 07405


Generic Pharmaceutical Association

1620 I Street, N.W.

Suite 800

Washington, D.C. 20006




8 Garland Court, Suite 207

Fredericton, NB E3B 6C2






by Dr. Nancy Chamberlin


by Ms. Helen Winkle


Introduction and Overview - Science Board Update

Objectives for Subcommittee

by Dr. Ajaz Hussain

Committee Discussion



by Dr. Ajaz Hussain

by Dr. Christopher Rhodes

by Dr. Chi-wan Chen

Committee Discussion


by Mr. Christopher P. Ambrozic

by Dr. Nancy Mathis

by Mr. Steve Lonesky


Introduction to the Issues

by Dr. Ajaz Hussain

Data Presentation

by Dr. Garth Boehm

by Dr. Thomas Garcia

Committee Discussion



Nonclinical Studies Subcommittee

by Dr. John Doull

Drug Safety and Risk Management Subcommittee

by Dr. Martin Himmel


(8:31 a.m.)

DR. LEE: Good morning. I am Vincent Lee. I'm the acting chair of the Advisory Committee for Pharmaceutical Science, and I'm calling the meeting to order.

I would like to go around the table to have everyone introduce herself or himself, and then I will turn it over to Nancy Chamberlin.

DR. SHARGEL: Good morning. I'm Leon Shargel at Eon Laboratories, representing the generic industry.

DR. SHEK: Efraim Shek from Abbott Labs, representing industry.

DR. HUSSAIN: Good morning. Ajaz Hussain, Office of Pharmaceutical Science, CDER.

MS. WINKLE: Good morning. Helen Winkle, Office of Pharmaceutical Science, CDER.

DR. LAYLOFF: Tom Layloff, SGE with FDA, and with Management Sciences for Health.

DR. MEYER: Marvin Meyer, former faculty member, University of Tennessee, now emeritus professor.

DR. VENITZ: Jurgen Venitz, Virginia Commonwealth University.

DR. CHAMBERLIN: Nancy Chamberlin, Executive Secretary.

DR. LEE: I should identify myself. I do have a real job at the University of Southern California.

DR. KIBBE: Art Kibbe with Wilkes University, School of Pharmacy.

DR. BOEHLERT: Judy Boehlert. I'm a private consultant to the pharmaceutical industry.

DR. ANDERSON: Gloria Anderson, Callaway Professor of Chemistry, Morris Brown College, Atlanta.

DR. BLOOM: Joseph Bloom, University of Puerto Rico.

DR. DOULL: John Doull, KU Med Center.

DR. JUSKO: William Jusko, professor at the University at Buffalo.

DR. LAMBORN: Kathleen Lamborn, University of California, San Francisco.

DR. LEE: Thank you very much.

Nancy, are you ready to read the conflict of interest?

DR. CHAMBERLIN: The following announcement addresses the issue of conflict of interest with respect to this meeting and is made a part of the record to preclude even the appearance of such at this meeting.

Since the issues to be discussed at this meeting will not have a unique impact on any particular product or firm, but rather may have widespread implications with respect to an entire class of products, in accordance with 18 U.S.C., section 208(b)(3), all committee participants with current interests in pharmaceutical firms have been granted a general matters waiver which permits them to participate in today's discussions.

A copy of these waiver statements may be obtained by submitting a written request to the agency's Freedom of Information Office, room 12A-30 of the Parklawn Building.

We would also like to note for the record that Leon Shargel, Ph.D., Eon Labs Manufacturing; Efraim Shek, Ph.D., Abbott Laboratories; Garth Boehm, Ph.D., Purepac Pharmaceutical Company; and Tom Garcia, Ph.D., Pfizer are participating in this meeting as industry representatives acting on behalf of regulated industry. As such, they have not been screened for any conflicts of interest.

In the event that the discussions involve any other products or firms not already on the agenda for which FDA participants have financial interests, the participants are aware of the need to exclude themselves from such involvement and their exclusion will be noted for the record.

With respect to all other participants, we ask in the interest of fairness that they address any current or previous financial involvement with any firm whose product they may wish to comment upon.

DR. LEE: Thank you very much, Nancy.

I would like to point out that a number of committee members are not here, and I don't know whether or not they are listening. How can I tell? Because there's lots of background noise. They're not on yet. And the three members are Mary Berg from Iowa, Nair Rodriguez from the University of Michigan, and Patrick DeLuca from the University of Kentucky. So, they'll be joining us by audio throughout the day, or whenever they are available.

Next, I would like to call Helen Winkle, Acting Director of OPS, to introduce the meeting.

MS. WINKLE: Good morning. Before I start with my introduction, I do have one little presentation I wanted to make: I want to present Kathleen Lamborn with a certificate of appreciation. This is going to be Kathleen's last meeting with us, and I wanted to let her know how much we've appreciated all her input over the last few years.

DR. LAMBORN: Thank you very much.


MS. WINKLE: I also want to welcome Vince as our new chair of the advisory committee. We've already been working some with Vince in the past on various things, and since we've asked him to be chair, he's been extremely full of ideas on how we can work with this committee and help make improvements, and we've just loved every minute of it. So, we know we're going to really enjoy working with Vince and we appreciate him taking on this additional task.

I also want to welcome some of the new members to the committee. First, I want to welcome Art Kibbe. We really appreciate Art participating with the committee. Art and I happened to run into each other last year in Indianapolis and got to talking about the committee, and he showed his interest in being part of it. So, here he is and we're really happy to have him here.

Also, Lem Moye isn't here yet. I don't know if he's stuck in traffic or what, but I also want to welcome him. He's being processed as a member of the committee. And also Pat DeLuca from Kentucky, who is supposed to be on the phone eventually today. He also will be a new member of the committee.

Now that Steve Byrn has arrived, I have another presentation to make. Steve has been the chair of this committee for several years now. We've worked a lot with Steve. We've really enjoyed it. He's contributed a lot to the committee and to the various scientific issues that we've addressed during the years. And I want to present him with a little certificate of appreciation as well.


MS. WINKLE: I've actually put this chart up here for three reasons basically. I wanted to just remind the committee and the new members especially of what the Office of Pharmaceutical Science looks like. Basically it's broken up into four offices: the Office of New Drug Chemistry, the Office of Generic Drugs, the Office of Clinical Pharmacology and Biopharmaceutics, and the Office of Testing and Research. Most of these groups, except for the research obviously, are doing parts of the review of new drugs and, of course, of generic drugs. I think this is really a very important part of what the Office of Pharmaceutical Science does, but I think there's a lot more to the office.

I see the office as really being the underpinning of the science base in CDER, and I think that's important for all of us to remember as we work toward the future on the various scientific issues that we have because I think this is where we want to be able to answer a lot of the questions and also look toward the future to scientific and technical issues we may have and may need to resolve. So, that's just one reason I have it up here.

The second reason is that I wanted to point out Dr. Ajaz Hussain. I think all of you here know Ajaz. He's been working in the Office of Testing and Research for many years now and in other parts of the center, but he recently joined the staff of OPS as the Deputy Director for Science. And in that role, I see Ajaz basically helping to instill science throughout OPS and the rest of the center. I think this is a very, very important role. I'm not saying that science hasn't been in the center. Certainly. Don't take me wrong, but I think that it needs to be better infused into our daily activities, and I think we need to look at how we can best improve and focus on scientific issues. So, Ajaz is here to do that.

As part of that, he is overseeing major scientific issues which are arising in OPS and the center.

He's also coordinating many of the science issues that we have with outside groups. So, he's working with various groups outside, the trade associations, with PQRI, and other such scientific groups that are doing research or doing some type of collaborative work to help in sort of laying the basis for the OPS.

He's also overseeing the activities of this advisory committee. I think he worked with many of you and with the speakers in preparing for today.

And he's also working with the coordinating committee.

So, you'll see a lot of Ajaz. You'll talk a lot to Ajaz on scientific issues. So, I just wanted to sort of bring that up today so you'd have a good idea of what his role is going to be.

Thirdly, I put this up just so you would see who in the organization does what. I think it's important to see who the various people in the offices are and who you will see from time to time as far as various dealings on scientific issues.

Next, I just wanted to put up the organization of this advisory committee. Basically it's just a reminder that this advisory committee continues to grow. We currently have two subcommittees, the Nonclinical Studies Subcommittee and the Orally Inhaled and Nasal Drug Products Subcommittee. But we see several other subcommittees coming on line: the possibility of a Clin/Pharm Subcommittee, the possibility of the Drug Safety and Risk Management Subcommittee, and also we'll talk more today about the Emerging Technologies Subcommittee.

I think, though, the important thing is not to look at the structure, but I could take this chart and superimpose it on the organizational chart of OPS because I think the two groups have worked and will continue to work extremely closely together in basically laying that foundation for good science. And I'm really depending on everyone sitting here at the table, as part of the advisory committee, to help in that endeavor.

As I was putting this together, it sort of reminded me of a story, and I'm not the best storyteller. But it just seemed to fit right in. These two men, Frank and George, were going out hunting for deer, and they got out there and there was this great, big herd of deer out there. Boy, they were really excited. They got their guns up, ready to shoot. All of a sudden George says, I've got good news and bad news. And Frank says, well, what, what? And he says, well, the good news is there's loads of deer out there; the bad news is they're being chased by a grizzly bear.

So, they looked and all of a sudden the grizzly bear was after them, and so they started running. Finally Frank just stopped, pulled his tennis shoes out of his backpack, and put them on. George says, what are you doing? Everybody knows you just can't outrun a grizzly bear. And Frank says, I don't have to outrun the grizzly bear. I only have to outrun you.


MS. WINKLE: I think, though, there really is a purpose behind this story, and that's the fact if any of us take off and don't help the other, we're not really going to have the best foundation for science. And I think this often happens. We're all sort of trying to get ahead of the other, and I think it's really important for us to work together with members of the advisory committee and with others outside of FDA so we can ensure that we are providing the best science for the regulatory aspects of the pharmaceutical industry and for FDA and basically for the public that we can. So, I think that's an important point that I just wanted to make.

Quickly let me go through what we're going to talk about today, and then I'll hand it back to Vince.

The first thing on the agenda is process analytical technology. Basically Ajaz is going to tell you a little bit about the meeting that we had on November 16 with the Science Board. We made a presentation to them. We had several people in to help with the presentation. Dr. Woodcock was the initial speaker at the presentation and talked a little bit about where we are going with process analytical technology. And Ajaz will give you an update on that.

We'll also talk a little bit about the process and forming of the new subcommittee. I think it's important that the advisory committee brainstorm about the objectives of this subcommittee and sort of define what we or the advisory committee expect from that subcommittee. So, that will be the first thing on the agenda today.

Next, we're going to talk about stability testing and shelf-life. The purpose of this particular topic is just basically to make the committee aware of some of the directions we're going, to let them know about the DOD shelf-life program that goes on in FDA, and also to talk a little bit about issues related to physical stability. I think this is important for us to talk about and I think some of the current issues of the day make it even more important that we at least look at this program and have a better idea as the committee may have to deal with future types of issues in this area.

Next on the agenda, I just put up quickly the PQRI organizational chart. I'm sure most of you are familiar with PQRI. I think in the past, even at the advisory committee, we've talked a little bit about PQRI. Basically you can see that the various trade associations and FDA are part of the steering committee of PQRI, and PQRI is set up with technical committees and working groups that are focused on a variety of scientific issues: basically issues to help improve or enhance the guidances FDA already has out there, and to provide information for new guidances that may help us better regulate. The idea, across the board, is to reduce some of the regulatory burden on industry.

The first project we have under PQRI is basically blend uniformity. When we were meeting on PQRI, we decided this was our low-hanging fruit. It was something that we could get a win on easily. It didn't work out that way. It's taken us several years to get where we are today.

But one of the things we wanted to do was to be able to discuss the proposal that PQRI is developing and the emerging recommendations from that proposal. We have two members from the Blend Uniformity Working Group of PQRI. Tom Garcia is going to talk. He's actually the chair of that working group. So, we would really like some input from the committee. When PQRI provides these emerging recommendations to us, when they send these recommendations, we in FDA want to be prepared to act on them. So, I think it's important that we go through what these recommendations are and again, as I said, get your input so that we're prepared when the time comes to receive these recommendations. And basically we'll talk a little bit about what the next steps are.

Nonclinical Studies Subcommittee. I thought Jim MacGregor was going to be here today. He's the one who basically started this subcommittee when he was at CDER. He's now at NCTR. Dr. Doull is going to give us an update on the subcommittee and the next steps.

I think before I had mentioned to the committee that we were looking at possibly transferring this subcommittee into NCTR. We had a lengthy discussion at the last subcommittee meeting, and we're still exploring how we're actually going to handle this subcommittee. So, you will hear more about the future of this subcommittee. It's possible it could stay under this committee. There were so many things brought up, we've backed up and are reevaluating what we want to do.

Next, at the end of the afternoon today, we're going to have a training session, and I just wanted to mention it so that everyone would know what this session was set up for. Basically we're not going to discuss any scientific issues. We wanted to look at ways that the committee could interact in the future. Dr. Lee and myself and Ajaz have had several conversations about this. Dr. Lee has proposed several ways that we could improve on the process, including having principal reviewers, and we want to talk with the committee a little bit on how we would do that, what the expectations of these reviewers would be. So, we will be spending an hour or so later this afternoon in closed session basically training on this.

Tomorrow we only have two topics, but they're both very important topics and areas that we've been working on in CDER for quite a long time. They're issues that we really feel we need to go back and revisit.

The first being dermatopharmacokinetics. Basically we will talk a little bit about the background. As I've said, we've been working on it since the early 1990s. We have a draft guidance that was issued in June of 1998, and we've had several joint meetings with the Derm Committee of the center. I think some of you actually were at the last joint committee. There are still lots of questions about the methodology and how it should be used. We're going to present some study data, and we have three issues for discussion which are on the agenda.

Then I think really what we want from this committee is some advice on where to go with the draft guidance. We have talked about it internally within the organization. We feel like we probably need to withdraw that draft guidance because there are still issues that we need to resolve in this area and possibly even look at other ways of doing methodology for bioequivalence for derm products. But we'll talk more about that tomorrow morning.

Last is individual bioequivalence. I think here this is an issue that we've discussed before this committee a number of times. We've issued a general BA and BE guidance that includes IBE. After a year of having that guidance issued, we'd really like to step back and reevaluate the use of IBE.

We want to talk about replicate design studies. We've found some real advantages to replicate design, and we want to bring that before the committee and talk about those as well.

We are also going to share with you the opinions of the scientific community. We have Les Benet. Actually Les was going to be here, and at the last minute he could not attend, but he will be on the telephone tomorrow during this discussion.

There are four discussion topics here, which are also included in the agenda. And basically what we would like to see from this committee is where do we go from here. I think this is really important. We're at a time where we have to make some decisions.

So, basically that's the agenda for the next two days. It's a pretty full agenda. We've actually debated a lot internally as to how many topics to put on an agenda for two days of discussion. I think all of today's discussions will be fairly easy to come to some conclusions or at least, as I said, one of them is awareness. But I think tomorrow's discussions may even continue some. We had hoped to get some decisions in both areas, but we'll just have to work and see where we get. And that may be something we want to talk about this afternoon when we meet on training, really how much we should bring before this committee, because we really do want your input and we want to be able to get enough information to you that we do get adequate input. So, we can certainly discuss that later.

So, with that, I'll turn it back to Vince. I look forward to a really good two days and to coming to some conclusions. I appreciate it. Thanks.

DR. LEE: Thank you, Helen. I would like to thank Helen and Ajaz for the opportunity to chair this committee, and also I look forward to the opportunity to learn from everyone.

The reason I'm losing my voice is that I stayed up until 1 o'clock this morning watching the Lakers game.


DR. LEE: I don't know why. When I turned it on they were about 10 points behind. So, that's the story.

We have two very exciting days. You know I'm the chair without a tie, and I did come with a tie but it was confiscated by the security.


DR. LEE: No. I'm just making it up.


DR. LEE: So, the next item is on process analytical technology, and Ajaz is going to tell us what he has in mind. Those of you who were here at the last meeting might remember a presentation by the MIT representative, and it was very exciting.

DR. HUSSAIN: Well, good morning. I did send you the slide presentations from the Science Board. The Science Board essentially is analogous to an advisory committee for the Commissioner's office. I have included some of those slides in my presentation, but I'll go through quickly to give you an update of how that presentation went.

The primary objective here for this discussion is essentially to develop the goals and objectives of the subcommittee we're ready to form now, and also to essentially list or enumerate the expectations you have in terms of what the committee should be doing, how it should be reporting back to you, and some sense of the time lines and what time frame would be acceptable for defining that process.

So, the outline I have here is to provide you an overview, some background information. We have some new members on the committee, so I'll briefly discuss our July 19th discussion on this topic, and then share with you the discussion that we had at the FDA Science Board meeting on November 16th and share with you then what we think that process analytical technologies can do in pharmaceutical manufacturing, a vision for the future, and propose or suggest some responsibilities for the subcommittee and time lines, and open that up for your discussion and your input at that time.

When we met on July 19th, many of you were present for that meeting, but some of you were not. That meeting was designed to initiate public discussion on the science of pharmaceutical manufacturing. The focus of the presentation was modern process analytical technologies. My interpretation of the discussion and the feedback I received from you was extremely strong support to move forward with that program and the recommendation to form the Process Analytical Technology Subcommittee.

We also discussed a related topic on rapid microbial testing, and we had discussed forming a separate subcommittee on that. I'm not reporting any progress to you on that topic at this meeting, but we will bring it back to you at the next meeting with some plans for moving the microbiology testing forward also.

The Science Board presentation was an important milestone in this project for two reasons. One, the project that we are about to undertake has the potential to essentially change the whole system of manufacturing and change the whole system of how we regulate. It has that type of potential, and how we manage that is very important. And we have to build consensus within the agency, outside the agency as we move forward here. That was one of the underlying themes of taking this to the FDA Science Board and getting their consensus on moving forward also.

The other aspect was this project is somewhat different. FDA is finding itself in a position that it has to lead the scientific aspects on manufacturing, which is somewhat difficult. I think generally we tend to be in a reactive mode, responding to things being submitted to us. Here we are changing that paradigm and saying we want to move forward in this. So, there are two aspects that led us to take this to the Science Board.

We had invited Doug Dean and Frances Bruttin from PricewaterhouseCoopers to look at the cost issues and the productivity issues in the pharmaceutical sector. Dr. G.K. Raju made the presentation that he gave to you on July 19. In addition, we had invited Norman Winskills, who's the Vice President for Global Manufacturing Services at Pfizer, and Steve Hammond to share their views from an industry perspective. Dr. Woodcock obviously introduced that, and I sort of summarized some of the discussions we had before.

Now, the response was actually very, very strong. In fact, one of the comments was this is a no-brainer. You have to move forward. So, there was strong unanimous endorsement of the proposal, and also the Science Board offered to help support the initiative through their own talks, seminars, and so forth. Also, I think one aspect is they would like to be involved in this process and would like to receive updates and progress reports.

Following this meeting, I had a meeting with the Office of the Commissioner and the message there was we have to move forward quickly on this.

I expect some questions from you on this presentation, and to keep the presentation short and leave you more time for discussion, I'm just going to flip through some of the slides that we used which I felt were key slides. So, I'm not going to spend much time on those slides.

Dr. Woodcock's presentation focused on the efficiency of manufacturing and the efficiency of the associated regulatory processes. We think the quality of products is high, but the way we go about ensuring that quality can be improved, and there is room for tremendous improvement in efficiency.

There are problems in the manufacturing sector. These tend to come in the form of manufacturing-related problems that we have seen over the last several years, and there's an increasing trend. Low manufacturing and QA process efficiency is one of the major driving forces here. Innovation, modernization, and adoption of new technologies appear to be slow, especially in the U.S. sector, not in the European: most of these technologies get applied in Europe, not in the U.S. And there is a high burden on FDA resources also.

Dr. Woodcock essentially shared her view of how we got here. This committee is well aware of pharmaceutical manufacturing: we have moved from an art to a science and continue to move in that direction. But we tend to rely on a lot of empirical approaches in how we define GMP standards and so forth. So, that is one contributing factor.

We have moved towards harmonization of a lot of our guidances and so forth, but these have been consensus-based, and I think the science tends to be secondary in those discussions. The focus tends to be on building consensus across continents and moving forward.

And also, industry is risk averse and does not want to take any risk in bringing new technology in if they feel FDA or other regulatory authorities are not going to be receptive to such technology.

So, the challenges that we face are how to encourage innovation while ensuring high quality, how to successfully shift from empirical to more science-based standards, and how to decrease reliance on pre-approval review and physical evaluation, and how to recruit and train the scientific work force that we'll need for the shift.

The questions she posed to the Science Board are: Are you able to support this? What resources would you suggest that we have to draw upon? And what other aspects of quality should be considered?

Quickly, I think it was very good to see that the presentation by PricewaterhouseCoopers, in terms of the production cycle times and the efficiency numbers that they have, essentially matched what Dr. Raju presented to you. And there are similar trends. There is lots of room for improvement and cost reduction by improving the technology of manufacturing. In fact, some of the numbers seem to support that: in some of their experience, a 10-fold reduction in time and a significant reduction in cost has been achieved in other sectors.

The other aspect which is truly a win-win aspect is you not only improve quality but also you improve the efficiency at the same time. With the world-class standards being at 5 to 6 sigma, I think if we move in that direction, you not only improve compliance, but you improve productivity at the same time.

Quickly going through the presentation of G.K. Raju: he provided his analysis of the CAMP consortium members and the typical cycle times that he has seen in the pharmaceutical industry. For example, manufacturing a batch of tablets, after the API has been screened and validated, can take about 60 days. But in reality, these numbers can be much longer. It can take half a year to get a batch of tablets out.

One of the reasons for the slow process is the off-line nature of our tests. We complete a unit operation, stop, take samples, and test before we go to the next step, and keep this process going. And the time is spent mostly in the paperwork, transferring material, and so forth, not in the process, not in the testing itself. It is the off-line nature that does that. But that's one contributing factor.

The other major contributing factor is out of specifications, when you have an exception. When he presented this -- or he did not present this to you -- he took the y axis off this slide. The reason is it's extremely sensitive. I know the numbers. The top is a year. I mean, you're looking at 300 days to get some of those batches out. That's what's happening.

One exception leads to an investigation, leads to a paperwork trail, and so forth, and that leads to very long cycle times. Average cycle times for two products that he analyzed in detail were about 95 days, and the standard deviation is more than 100 days. So, the productivity in getting products out is compromised and capacity utilization is low. So, really there's a need for fundamental technology and a fundamental shift in the way we think about manufacturing.

This is new. I think the Pfizer presentations really hit the mark in many ways. Pfizer has been using a lot of these technologies for the last 20 years, and they have implemented this in many places but not in the U.S. Less than 15 percent of the applications are in U.S. sites. And they have been applying this to all of the drug product manufacturing, from raw material testing to blending, packaging, and so forth, but not in the U.S.

They summarized their thoughts in two scenarios. There's a "don't use" scenario. They would not use it because of the uncertainty and the associated regulatory risk, which leads to wasted resources and duplication of test methods for different markets. This is unnecessary.

And the other aspect is "don't tell." They'll do it in parallel. In addition to the regulatory test requirement, they'll do it in parallel and rely on their methods and then provide the data to FDA to support the regulatory requirements. So, why would we really need such a duplication?

And they proposed certain aspects of a win-win scenario. This was to start bringing modern process analytical technology in through a process which improves understanding at FDA as well as in companies. They suggested that we sponsor joint forums and discussions, develop an effective process to evaluate new technology, and participate in dummy submissions, in a sense. Because of the risk associated, they don't want to delay the approval of drugs; hence the proposal for dummy submissions. My thought is, why dummy? We could work in real time and make it happen. So, there are other aspects to this.

But let me now sort of shift gears. What we are talking about really is shifting the manufacturing paradigm from process, stop, test; process, stop, test to continuous monitoring of attributes which are related to product quality and performance, on line and in a continuous fashion. I mean, that's the shift in paradigm that we're talking about. So, it really has to bring about a rethinking of our current way of review and inspection.

The issue that I presented was that really we find ourselves in a position where we have to lead or we have to facilitate introduction of new technology. We do that for two reasons. One is, from a public health objective, you want to have the most efficient system from an economic and quality perspective. But the other underlying theme is, five years from now, if we don't do this, who will get blamed? We will get blamed: FDA did not allow this to happen. Obviously, that's not the case, and yet we haven't seen one submission at all. So, we have to break that barrier and move forward.

Some of the challenges are that with new technology, you have new questions. You have old products; if you apply new technology, you have new concerns. You may see some things which have always been there. How do you address those problems? And really the mind set is that FDA will not accept it.

We presented this as a win-win opportunity to improve quality and manufacturing efficiency, reduce the likelihood of scrap and recalls. But in my mind I think it really adds value in terms of bringing more science and engineering into the process, as well as improving the scientific and engineering basis of our debates. We will continue to debate, but I think hopefully we'll do it more on scientific grounds.

So, here was our proposal. What should FDA do to facilitate introduction of PAT? We need to eliminate regulatory uncertainty, and the official position FDA has always had is FDA will accept new technology that is based on good science. The key here is defining and building consensus on what good science would be. So, development of standards for process analytical technology in terms of method suitability and validation.

One key aspect is multivariate statistical and computer pattern recognition. We have traditionally addressed quality issues as univariate statistical criteria, but you're dealing with a multivariate system. You have to think of new tools, and all pattern recognition tools have to be sort of brought in and discussed.
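As an illustration of why univariate limits are not enough, here is a minimal sketch (hypothetical data) of Hotelling's T-squared, one common multivariate criterion: a point can sit inside each attribute's individual limits and still be flagged as jointly unusual because it breaks the correlation structure of the process.

```python
import numpy as np

def hotelling_t2(X, x_new):
    """Hotelling's T^2 distance of a new observation from historical
    batch data X (rows = batches, columns = quality attributes)."""
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    diff = x_new - mean
    return float(diff @ np.linalg.inv(cov) @ diff)

rng = np.random.default_rng(0)
# Two correlated attributes (hypothetical): blend potency and moisture.
X = rng.multivariate_normal([100.0, 3.0], [[4.0, 1.8], [1.8, 1.0]], size=200)

typical = np.array([100.5, 3.2])            # near the historical behavior
in_range_but_odd = np.array([103.0, 1.5])   # each value within univariate limits,
                                            # but the combination breaks the correlation
print(hotelling_t2(X, typical), hotelling_t2(X, in_range_but_odd))
```

A univariate check would pass both points; the multivariate statistic separates them by an order of magnitude.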

We have to redefine and rediscuss critical process control points and how you establish specifications for these.

Changes. How will you manage changes in this, and how will you deal with out-of-specification results? So, we have to reexamine and rethink the whole scenario.

In order to define a clear science-based regulatory process, one premise which we believe in is that the current system is adequate for its intended use. So, I think that becomes the floor that gives you a platform upon which to build. So, introduction of process analytical technology will not be a requirement. It's an option. It is based on the scientific and economic drivers that a company may have. So, we would support that from that perspective.

And we really need to define conditions under which process analytical technology may replace current regulatory release testing because if you keep adding new tests and keep holding on to the old tests, you will not accomplish what you're trying to accomplish.

And we have to develop a process for addressing existing invisible problems in the marketed products, which will become apparent when you bring new technology on.

We need review and inspection practices based on science, and eventually we'll have to deal with international harmonization issues.

We have limited institutional knowledge and experience at FDA in this area, so we have to seek input and collaboration. We did that with you on July 19th, and we are ready to form the subcommittee. We are looking at aspects of collaborating with individual companies if we're ready to bring this on line, and clearly we'll work with academic pharmaceutical engineering programs and process analytical chemistry programs, and PQRI.

That was sort of an update and some background information. I just want to sort of position the rest of the talk to help your discussion in terms of defining the subcommittee's objectives.

A perspective on process analytical technology. In my mind it is one piece of the puzzle. It's not the entire system. So, we're discussing one piece of the puzzle. In my way of looking at it, I think here is an opportunity to go from "I know it when I see it" -- that's the current system; you have to test for blend uniformity before you know it's uniform. Vision 2020 is "I can see clearly now," which entails quality and performance by design, plus continuous real-time monitoring of quality, with specifications based on mechanistic understanding of how formulation and process factors impact product performance, leading to high efficiency and high capacity utilization. But also I think an important aspect that we have to deal with and plan for is the real possibility of real-time review and inspection, done from our offices. I think this technology opens the door for that possibility.

One of the presentations in the open session at the Science Board was from AstraZeneca. Bob Chisholm from AstraZeneca made a presentation on a plant that they have actually put on line in Germany, where real-time inspection is a possibility. That production facility is on line for German products.

So, the key elements that I think we need to consider for this emerging program and our initial thoughts are -- this is a draft. What I feel is we really need to start defining general principles guidance on process analytical technology. We need to articulate an FDA position on process analytical technology. By that I mean the acceptance and definition and terminology. We are introducing a whole host of new terms, and I think we really have to start from scratch and say here is this common language that we'll speak in this program.

Outline a regulatory process for introducing process analytical technology. Here there are two aspects: the pre-approval phase and the post-approval phase. What I hear from companies is that it's unlikely that a company will introduce this in the pre-approval phase, because of the pressures of getting the drug approved and potentially delaying or raising questions with new technology in an NDA or ANDA application. So, many may opt for bringing new technology in during the post-approval phase. Unfortunately, you really have to build the quality in, so data has to be collected throughout. But we'll have to work around those things.

Addressing existing invisible problems I mentioned before, and creating a team approach for review and inspection. In our minds, the process would be a total team approach, where our review chemists from the center will actually visit and be part of the inspection program, to bring the folks together.

From a science perspective, the type of experimental evidence and justification that will be needed, the thoughts are as follows. There are two ways of bringing this technology in: as an alternate or as the primary control or test. I'll explain that in a minute.

The other aspect is you will have, in some cases, direct measurement of attributes of interest and in other cases, you have a correlation-based control of that attribute. Again, I'll explain that in a minute.

We would need to have an appropriate level of redundancy or backup systems to make sure we cover failures, if any.

And we will have to debate on-, in-, and at-line release testing. The concept of parametric release comes in and I'll explain that in a minute too.

Types of tests and controls. First, alternate control and test. A process analytical technology tool may be validated by comparison to a traditional in-process test, using development data and/or data from routine production for a period of time over a number of batches, and the traditional in-process test discontinued after sufficient data has been collected to support the validation. So, that's one approach.

So, an example of that would be on-line blend uniformity using, say, near infrared analysis, validated against data from blend samples obtained using a sampling thief. Once the comparison is acceptable, that may be one way of looking at it. But it's not an ideal situation. When you compare a modern, more efficient, better technology to something which is itself problematic, that's not the solution.

So, we really have to create new gold standards and a new way of looking at it. I think we have many ideas of how it should be done, but I think the subcommittee should start thinking in those terms. The second option is a primary control/test, where a process analytical tool is developed and validated on its own merits, moving to first principles, maybe. Here I think we'll have to reassess how we define that in terms of accuracy, precision, specificity, and all the terms that we use for validation of an analytical test.

Continuing on types of tests and controls, I just want to share some thoughts on correlation-based controls and tests.

Many times I think you have an option of looking at an infrared spectrum or a fingerprint and deriving information about attributes indirectly. So, you may have to build a correlation model for that. Here, use of chemometrics or pattern recognition methods to identify and develop a correlation between measurement and product attribute would come in. An example that I'll share with you is prediction of tablet hardness or dissolution rate from, say, near infrared spectral fingerprints. It's not a direct test of dissolution, but it is a correlation to dissolution.
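A minimal sketch of such a correlation model, on hypothetical data: in practice a chemometric method such as PLS would be used, and plain least squares on a handful of wavelengths stands in here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration set: 30 tablets, absorbance at 5 wavelengths.
A = rng.uniform(0.1, 1.0, size=(30, 5))
true_coefs = np.array([8.0, -3.0, 0.0, 5.0, 1.5])   # made-up "true" relationship
hardness = A @ true_coefs + 4.0 + rng.normal(0, 0.05, size=30)

# Fit intercept + wavelength coefficients by least squares.
X = np.column_stack([np.ones(len(A)), A])
coefs, *_ = np.linalg.lstsq(X, hardness, rcond=None)

# Predict hardness for a new tablet's spectrum -- an indirect,
# correlation-based estimate rather than a direct hardness test.
new_spectrum = rng.uniform(0.1, 1.0, size=5)
predicted_hardness = coefs[0] + new_spectrum @ coefs[1:]
```

The point of the sketch is the structure, not the numbers: the attribute is never measured directly; it is inferred from the spectral fingerprint through the fitted model.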

And from a validation perspective, there are two options. One is validation based on predictive performance only. For example, our in vitro/in vivo correlation guidance that we have for dissolution is a validation based on predictive performance only. That is, you develop three formulations, you establish a correlation, and if the correlation gives you prediction within plus or minus 10 to 15 percent, it's okay. So, that's the way we handle in vitro/in vivo correlation.
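A sketch of that acceptance logic, with made-up numbers; the 15 percent limit and the all-formulations pass/fail rule are illustrative stand-ins, not the guidance text:

```python
def percent_prediction_error(observed, predicted):
    """Percent prediction error: 100 * (observed - predicted) / observed."""
    return 100.0 * (observed - predicted) / observed

def correlation_acceptable(observed, predicted, limit=15.0):
    """Hypothetical acceptance rule: every formulation's absolute
    percent prediction error must fall within the stated limit."""
    return all(abs(percent_prediction_error(o, p)) <= limit
               for o, p in zip(observed, predicted))

# Observed vs. correlation-predicted values for three formulations (made-up):
observed = [100.0, 85.0, 120.0]
predicted = [92.0, 90.0, 112.0]
print(correlation_acceptable(observed, predicted))   # errors of 8.0, -5.9, 6.7 percent
```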

But here I think there's an opportunity to improve upon that. What I'm suggesting is validation based on predictive performance of a correlation, plus mechanistic justification of its causal links. Let me explain that.

The data I have here is percent dissolved versus time for seven experimental formulations. The drug is metoprolol, and you're looking at the USP dissolution test here. The data is from our University of Maryland research project. All of those products, by the way, are bioequivalent.

But now how do we establish a dissolution test? We simply say, if 70 percent dissolves in 30 minutes, everything passes. So, that's how we do it.

With simple experimental procedures -- here is the half-factorial experiment that we did -- we actually know every factor that affects dissolution. In fact, percent dissolved at any given time could be predicted with very high precision, and in this case it was related to magnesium stearate, microcrystalline cellulose, and sodium starch glycolate.

So, what I'm showing here is that at different time points we have the ability to predict dissolution based on formulation and process components. So, you have established and defined the critical variables and developed an empirical but mechanistic causal link saying these are the reasons why dissolution changes this way, and so forth.
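The mechanics of such a half-factorial fit can be sketched as follows. The coded design is standard; the factor names echo the example above, and the response values are made up for illustration only.

```python
import numpy as np

# A 2^(3-1) half-factorial design in coded levels (-1/+1) for three
# factors from the example: magnesium stearate (A), microcrystalline
# cellulose (B), and disintegrant level (C, confounded as C = A*B).
design = np.array([
    [-1, -1, +1],
    [+1, -1, -1],
    [-1, +1, -1],
    [+1, +1, +1],
], dtype=float)

# Made-up responses: percent dissolved at 30 minutes for each run,
# consistent with the lubricant (A) slowing dissolution.
y = np.array([82.0, 65.0, 74.0, 71.0])

# Four runs, four parameters (intercept + three main effects): exact fit.
X = np.column_stack([np.ones(4), design])
intercept, effect_a, effect_b, effect_c = np.linalg.solve(X.T @ X, X.T @ y)

# Predict percent dissolved for any coded factor setting:
def predict(a, b, c):
    return intercept + effect_a * a + effect_b * b + effect_c * c
```

The fitted effects are exactly the "causal link" the talk describes: each coefficient says how much a formulation factor moves dissolution, so the model is empirical but tied to identified critical variables.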

Now, with technology on line with imaging and others, you can actually measure magnesium stearate. You can actually measure microcrystalline cellulose, and all those attributes separately. So, in addition to a correlation, you have the ability to monitor all those excipients and all other attributes that affect dissolution. So, that's what I mean by correlation plus a causal link approach.

Let me share with you some thoughts on parametric release. This term has been hotly debated, and I think in Europe it's widely accepted but not in the U.S. What is parametric release as a release test? Parametric release is used in the U.S. only for parenteral dosage forms that are terminally sterilized.

Let me explain what this is from a USP perspective, and the quote there is from USP. The information there is directly from USP. Let me read that. When data derived from the manufacturing process sterility assurance validation studies and from in-process controls are judged to provide greater assurance that the lot meets the required low probability of containing a contaminated unit, any sterility test procedure adopted may be minimal or dispensed with on a routine basis.

Suppose it is a sterilization process that uses steam, an autoclave. The parameters that you validated would be temperature, pressure, and time. So, if you have confidence in those parameters, then you don't wait for a sterility test to approve the product. The logic is simple in the sense that even if you have 5 percent contamination in the lot, to identify and find out that level of contamination, you actually have to test three lots of the material. So, given the limitations of sampling, the statistical limitations, it makes sense to rely on the parameters rather than the test.
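The sampling limitation can be put in numbers. A minimal sketch, assuming independent random sampling and a 20-unit sample (a typical sterility test sample size; both assumptions are illustrative):

```python
def detection_probability(contamination_rate, n_units):
    """Probability that a sample of n_units contains at least one
    contaminated unit, when a fraction contamination_rate of the
    lot is contaminated (independent random sampling)."""
    return 1 - (1 - contamination_rate) ** n_units

# Even at 5 percent contamination, a 20-unit test misses the
# problem about a third of the time:
print(detection_probability(0.05, 20))   # ~0.64

# At the low contamination rates that actually matter, detection
# by end-product sampling is nearly hopeless:
print(detection_probability(0.001, 20))  # ~0.02
```

This is why confidence in validated process parameters can give greater assurance than the end-product test itself.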

So, that is what parametric release means in a parenteral sense. But I think what we can do with process analytical technology is far superior, far better, and I think we have to redefine parametric release in this context.

The European guidance on parametric release defines that term as follows. It's a system of release that gives assurance that the product is of the intended quality, based on the information collected during the manufacturing process and on compliance with specific GMP requirements related to parametric release. This is sort of a broad, regulatory definition.

But this guidance, which has been effective since September of this year, extended the concept of parametric release to other dosage forms, including tablets and capsules.

So, building on the example I showed you with the dissolution, what would parametric release for dissolution look like? Simply creating a hypothesis, visualizing what this might look like down the road.

One way of defining parametric release -- or whatever we call this term when we define it -- is that the method provides a greater assurance, compared to current dissolution test methods, that lots will meet the established dissolution specification. Or we can think out of the box and forget routine dissolution testing: just link it directly to the bio, if lots can be assured to meet the bioequivalence criteria. That would be one way of saying this is a better test.

What data would be needed for that? I think processes that utilize in-process controls that can measure and control all critical variables that affect dissolution -- we will need to have those test methods. And we would need appropriately designed manufacturing process validation studies, such that validation based on predictive performance, plus mechanistic justification, could be the foundation on which this is based. So, moving towards all the critical variables, moving towards understanding of the mechanisms of dissolution, moving towards a more science-based approach.

Let me share with you an example. How do we do dissolution testing, lot-release testing, now? Under USP conditions, you take 6 tablets out of a lot, do the dissolution test, meet your one point, and you're done. That's 6 tablets. It could be a lot of a million tablets, 2 million, 25 million. That is what it is today.

Here's an example from a major company which sent this to me. It's also linked to blend uniformity. They were having dissolution problems. When they first marketed, there was no problem. There was a sampling issue, because non-homogeneous distribution of magnesium stearate was the culprit here. If you look at dissolution as a function of the production itself -- the box number, tablets being collected as the production is ongoing -- you can see dissolution failures either early or late. Using technologies such as near infrared on line, or other such technologies, laser-induced fluorescence maybe, you can actually do this and assess homogeneity with respect to magnesium stearate, all excipients, and so forth. So, looking at 6 tablets and being happy with that, versus looking at the entire lot -- which would we prefer? So, that's the message.

I'm going to skip this I think and share with you that on the 25th of October, we had a Federal Register notice on Process Analytical Technology Subcommittee. We're requesting names of qualified individuals in the area of process analytical chemistry, pharmaceutics, industrial pharmacy, chemical engineering, pharmaceutical analysis, chemometrics, pattern recognition, expert systems, IT, and statistics. So, we know this is going to be a multi-disciplinary approach. We want to bring all these talents together to help all of us work together. So far we have received 27 applications. I have not fully gone through those applications, but we have 27. The deadline for submitting is the 30th of this month.

The Federal Register notice stated that this subcommittee would report on scientific issues related to application and validation of on-line process technologies such as near infrared. I keep repeating this. Near infrared is just one example. I tend to use it more because I'm more familiar with it. But this is not the only technology. In fact, at the back of your handout, I have a list of all the technologies, and the list is two pages long. So, near infrared is only one example in my mind. I'm just using that for presentation clarity. But the whole host of technologies available is mind-boggling.

We have requested focus on both drug substance and drug product manufacture, and also asked for the feasibility of parametric release concepts and a potential risk and benefit analysis of this. As I said, applications are due at the end of this month.

My proposal to you is, what should the subcommittee report to you on? The proposal is this. If we can have the subcommittee focus on the following: current status and future trends in process analytical technology, especially in pharmaceutical development and manufacturing -- not just manufacturing, but starting from the development aspect itself. Provide information on available technologies, their capabilities, advantages, and limitations. Also application in U.S. versus non-U.S. plants and why the difference. Perceived and/or actual regulatory hurdles.

General principles for regulatory application. Principles of method validation, specifications and out-of-specification, but general principles, not in terms of getting at the nitty-gritty at the first stage, but in the long run, we will have to.

Appropriate use and validation of chemometric tools.

Feasibility of parametric release concepts, also to redefine this in this context. Parametric release is actually less of a standard compared to what we are doing here.

Case study. Should the group use a case study like vibrational spectroscopy, near IR? It's a question mark. I don't know whether we need to have the subcommittee focus on one tool or have a much broader look at the situation.

And also some input on research and training needs within the FDA and in industry.

One of the concerns which I expressed at the Science Board was that the pharmacy schools -- given the erosion of pharmaceutics/industrial pharmacy programs in pharmacy schools -- may not be there to help bring the people we need for this. We have to think of going outside pharmacy schools. The trend has been that the Michigan chemical engineering program now has a pharmaceutical engineering program. Rutgers has one. So, there are a number of pharmaceutical engineering programs coming up, and we should somehow support that through the National Science Foundation. Steve has one program in his. We have to build the pharmacy programs and refocus some of the industrial pharmacy programs to help produce the individuals that we'll need in this area.

With that, I'll stop and give it back to Steve. Hopefully, that presentation was not too long and was helpful to initiate the discussion.

DR. LEE: Thank you very much, Ajaz.

Any questions for Ajaz?

Before I do that, I'd like to welcome a prospective committee member. Dr. Moye, would you please introduce yourself?

DR. MOYE: Of course, good morning. My name is Lem Moye. I am a physician and a biostatistician from the University of Texas School of Public Health. I have served on one advisory committee before this and that was the Cardio-Renal Advisory Committee.

DR. LEE: Thank you very much.

Questions for Ajaz?

DR. MEYER: Ajaz, what's your sense of the best way to move forward? If you try to, let's say, reinvent industrial pharmacy in an academic setting, it will be 10-15 years before you have any progress to show.

It seems to me, as you noted, some firms are already doing this in Europe and would, therefore, easily be able to adapt it in the U.S., if it weren't for the FDA constraints that they perceive. Is that correct?

So, maybe if you had some kind of mechanism like an RFP where companies could respond and say, we're willing to give this a shot, and you're willing to train a select group of FDAers in monitoring their progress, and you work together on a small basis, one product, one firm, two products, two firms, whatever, you might make some real progress that could then rapidly be disseminated rather than trying to solve all the problems all at once and make lists and so on and so forth.

DR. HUSSAIN: No. Marv, that actually was a message I got from the Office of the Commissioner also, a similar message. The folks from Pfizer shared with us their success with getting this introduced in Australia, and the question was raised, why Australia, and why not the U.S.? So, the technology, the SOPs, the regulatory aspect, review aspect, outside the U.S. is already there.

One option -- we have to discuss this internally more, but I think we have initiated the discussion -- is actually to have a parallel process to the subcommittee and invite companies who would like to do this and provide a means or mechanism whereby a review and inspection team could be formed and can make that happen starting today, if need be.

So, we would need some expertise in-house. We have a lot of expertise in-house in terms of analytical work. We have to rethink in terms of on-line approaches to control, and we actually will hire a few people at the OPS level and try to move forward on this parallel track also.

But I don't have the whole program laid out; that's something we're looking at.

DR. LEE: Let me define the boundaries here, if I may. We have about an hour to discuss, and assisting us in discussion is Tom Layloff over there. I gather what you would like us to do, Ajaz, is to define the charge of this subcommittee. Right? So, it seems to me that this is a trend that is somewhat irreversible, and let's hope that we can accelerate the process and make it a reality.

So, Steve, since your name was mentioned, would you like to lead off the discussion?

DR. BYRN: Yes. I should say before we go too much further, though, that Purdue has been doing a lot of work in this area, and there's some intellectual property involved. So, you need to realize that the comments I'm going to make are in that context.

I wanted to comment on what Marvin said first, and also to Ajaz on the breadth of this area. I was just writing down, but I think the area involves at least four major components or educational backgrounds. One would be pharmaceutics -- manufacturing pharmaceutics, that part of pharmaceutics. One would be analytical chemistry, almost straight, which Tom and Judy would represent. One would be informatics, because we're going to have to be able to deal with an awful lot of data. And the last one would be regulatory affairs, validation, all that kind of thing. So, it's really an interdisciplinary program area, and the educational part of it is extremely difficult; I think getting people that can work in this area is going to require a special kind of interdisciplinary program.

So, I don't know whether we want to start on that, Ajaz. But maybe we should brainstorm the educational background that's going to be required to achieve this.

I put engineering in there with pharmaceutics and manufacturing.

DR. HUSSAIN: Steve, I understand the long-term educational needs, definitely. But instead of focusing on that, there's very little at this meeting we can do for that. I think we have to start brainstorming and developing this program.

But what we have at hand right now is the subcommittee is ready to form. In fact, we have set the date for the subcommittee meeting as February 22nd and 23rd. So, before that subcommittee gets started, I think we need to define the charge or the work plan and what you expect from that subcommittee so that we can move forward on that aspect.

DR. LEE: Okay. Since we're talking about technical issues, Art?

DR. KIBBE: Yes. I have just a couple of questions. From your presentation, we clearly have leadership in Europe on this issue, both the companies being willing to go forward with it partially because of the regulatory environment, and second, the regulatory bodies being willing to accept this and seem to be slightly ahead of the curve, if what you say is true.

Wouldn't it be prudent for us to have people from our side of the Atlantic in the regulatory field get educated by the regulators who are willing to accept this kind of technology in Europe, so that they know the pattern of acceptance of that information? And then what we're really talking about, if it is ongoing in Europe ahead of us in terms of developing this process, is transporting that technology here.

So, the first step in my mind is to get a clear understanding of how they go about validating these systems and accepting these systems in the European situation, and then just transposing that methodology here and modifying it so that we make it work easily here. That would, in my mind, move our time table up, rather than starting from scratch and trying to reinvent the entire process. I recognize the four areas you talked about are extremely important in terms of educating all of us on how to go about doing this, but to make the system work faster, I think importing the information is better.

DR. HUSSAIN: The information we have is what we have heard from the companies. We have not directly contacted the regulatory agencies, and I think we will. We'll try to get some information. So, I don't have any more information on the acceptance and then how that happened. I think it's very important for us to understand and capture some of that information. Definitely.

The MCA, our counterpart in the U.K., is very active in this area, and I have spoken to folks there and they have expressed frustration that nothing has happened in the U.K. It has happened in Germany and Australia for some reason. So, we will try to get that information and see how that has happened.

DR. KIBBE: I'd be more than happy to go with you and spend a few weeks in Switzerland to research this.


DR. BYRN: I just want to comment on that too. Of course, having done a lot of work in this area, we've been trying to look for public information. There's very little public, unless Tom knows of something that I don't. But if you search the literature on parametric release or any of these terms, there's virtually nothing published. So, all of this information in Europe is private sector information.

So, one of the things the committee I think ought to do is try to get as much information as they could. I think that's one of the charges probably, to try to figure out how much information there really is in this area.

DR. LEE: And at the same time, to find out what are the hurdles and the resources required, and items such as those. Right?

DR. BYRN: Yes, all of the above. I think Tom is going to say something about that.

DR. LAYLOFF: I was going to make a couple of comments.

First of all, in the pharmaceutical business, the manufacturing operation ends up as a control process, and most of the academic efforts have been in the discovery area. There's been a general trend in academics to move more towards discovery and less in the area of control. So, we see a decline in emphasis on any analytical process at all, anything that's concerned with control, and so the industry has had to bite the bullet and actually train their own personnel. And I'm sure FDA is going to have to do the same sort of thing for process analytical technologies also. I don't see a big bloom coming out of people who are in discovery shifting to control efforts.

As Steve noted, these issues are proprietary. There's significant investment in developing and validating them, and I'm not sure that anybody who is in business is going to be willing to give up their investment to other people. It's a proprietary advantage to have these things in place, and I don't think they're going to be willing to give them away. They're probably going to be more protective of the process technologies than they are of the development technologies.

DR. HUSSAIN: Vince, I misstated the planned dates for the subcommittee. It's February 25th and 26th.

DR. LEE: Are those better days?


DR. LEE: Ajaz, do you have any idea how soon you want to have a report back?

DR. HUSSAIN: I'm just looking at the people who have applied. There's a good mix of people from Europe who are willing to participate on this committee. So, that is a good sign.

At the same time, I just want to share with you the Royal Pharmaceutical Society has a new Technology Forum Section, and I participated with that. It's like their PQRI, but they are linked to MCA. So, they have been very active in this area. And some folks from there also have applied to be on our committee. So, I think there are linkages that are emerging which should be very, very useful not only in terms of learning and getting information, but also simultaneous harmonization efforts.

DR. LEE: I used the word "irreversible" very deliberately, but I have not defined the speed to get there.

I see that Efraim has his hand up over there, and maybe he's ready to speak.

DR. SHEK: Yes. The way I look at it, it's a real revolution if we go in this direction, and it's another impact of information technology and how we utilize it. Since it's such a revolution, one aspect for the subcommittee to look at is the implementation. If we don't look at what the end result will be, it might be a long process. It requires investment and resources, both intellectual as well as equipment. I personally believe that it's the right direction to go, but somehow the subcommittee, as they deliberate, should look at how it's going to be implemented. For example, how do you validate the transition from an old technology to a new one? We have seen in other areas that you cannot show comparability because you are comparing two different things.

So, those details I think will be extremely important to move this process fast. If the subcommittee can deliberate some time and learn from others, or come up with their own ideas on how we can implement it faster, that I think would be very, very important.

DR. LEE: Leon, you're sitting right next to Efraim. Would you like to make a comment?

DR. SHARGEL: Yes. I have sort of a question about something we all brought up in terms of proprietary methods. Many companies may have methods that are proprietary, and we also have to consider public standards, such as our friends up the street at USP, which often sets monographs and public standards.

Now, as this committee begins to approach new technology, it strikes me that the information needs to be disseminated publicly. That's a sensitive area, and how it's done matters, so that it can be publicly debated among industry, academics, and other interested people, to see whether this has applications that would be suitable in their own industry. I'm not sure how that would be done, but I'd like the subcommittee to consider how we can discuss this in an open forum.

DR. LEE: Good. I think what you're suggesting is that we need to identify the major players.

Other questions? Steve?

DR. BYRN: I think that's a really good idea. I think if you look at the proprietary, just in a general way, there's proprietary equipment, information handling software. All these things in this area are potentially proprietary. And if we can get private sector companies -- for example, no company can manufacture their own near IR equipment, validate it, develop the software to do the chemometrics, et cetera. That's not a feasible thing, I don't think, for each company to do.

So, we're going to have to figure out a way in this process to attract instrument manufacturers. That's the way to be involved in this, and they're going to have to be able to -- and this is just one example -- run their business somehow. You talk about high technology. This is really a critical issue, how we involve the highest technology, the best people, and have some system that they can feel like they can run their business with it, and yet we can advance drug product quality and all the things we want to do.

DR. LEE: Yes, Ajaz.

DR. HUSSAIN: Vince, there were several comments that came to my mind. I think Leon mentioned public standards. Let me just share with you my thoughts on that.

I'll again use the example of near infrared. In my mind, that is not a tool which could be used as a reference tool for analysis of the content uniformity of all tablets. It's not designed for that. That's not the intention. That's the reason the focus is on on-line control. So, what that does is it actually creates a dual system. For public standards, you still have to rely on HPLC and other traditional methods. That would be your foundation for public standards, not near infrared as an assay of uniformity.

So, that's the reason the focus is on-line. These will be alternate, additional methods. You still have to meet the requirements of public standards. But since you would possibly be raising the quality so much, the public standard is actually not a concern anymore. So, that's one aspect which I think is important to keep in mind.

Because some of these methods look at both the physical and chemical attributes together, they are very formulation-specific. This would be very specific to a formulation, so everything has to be specific to a given manufacturer. So, private standards are what we are focused on.

The second question I think Steve raised was the proprietary nature of this, and actually the same question was raised at the Science Board to the Pfizer folks. Pfizer's response was none of those instruments and so forth -- they're working with instrument manufacturers for commercial use. So, none of that would be sort of blocking anybody else from using those instruments. So, that's one aspect.

From the AstraZeneca presentation at the Science Board, the entire system that they have is based on commercially available equipment, not something that's proprietary to this.

So, right now, for example, near infrared proprietary issues are not that significant. What is significant is how one applies it to their process and their use. The instrument, the calibration, the software are commercially available. In fact, the software is also 21 CFR Part 11 compliant. So, it complies with the software validation and other aspects too.

But as new technology emerges -- this is just scratching the surface -- you will have so many new technologies that come in that we will have to deal with that issue.

DR. LEE: So, Ajaz, is it your sense that some industry is already moving ahead in that direction?

DR. HUSSAIN: Yes. I think the instrument manufacturers are quite active and many technologies are coming.

What is missing right now is the mechanical engineering aspect, in the sense of where do you put the sensor on the blender, and how many. There's a recent publication by Jim Drennen in last month's Journal of Pharmaceutical Sciences, and he argues that you need six different positions on the blender where you need a near infrared probe. That doesn't make sense.

But again, do you need one port or do you need six of them? When you shine in a laser light and retrieve information, what amount of sample are you reading? That brings up the unit dose sampling question: what is the size of the sample that you're getting? So, all that has already started. I think we'll have to deal with those debates.
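The sample-size question raised here can be made concrete with a rough back-of-the-envelope estimate. The sketch below is illustrative only and is not taken from the discussion; the spot diameter, effective penetration depth, and bulk density are invented numbers, since real values depend on the probe and the powder.

```python
import math

def nir_sample_mass_mg(spot_diameter_mm, penetration_depth_mm, bulk_density_g_per_ml):
    """Estimate the powder mass (mg) interrogated by one NIR probe reading.

    Models the interrogated region as a cylinder: the illuminated spot
    area times the effective penetration depth of the light into the
    powder bed, converted to mass via the bulk density.
    """
    radius_cm = (spot_diameter_mm / 2.0) / 10.0
    depth_cm = penetration_depth_mm / 10.0
    volume_ml = math.pi * radius_cm ** 2 * depth_cm   # cm^3 is the same as mL
    return volume_ml * bulk_density_g_per_ml * 1000.0  # grams -> milligrams

# Illustrative numbers only: a 10 mm spot, ~1 mm effective penetration,
# and a 0.5 g/mL powder bulk density.
mass = nir_sample_mass_mg(10.0, 1.0, 0.5)
print(f"approx. sample per reading: {mass:.0f} mg")
```

With these assumed numbers, one reading interrogates on the order of tens of milligrams, which is why the comparison to a unit dose matters: whether the probe reads more or less powder than one dose determines how the result maps onto dose uniformity.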

DR. LEE: Tom?

DR. LAYLOFF: I would like to agree with Ajaz that the technologies that we're talking about on process control are actually consistency assessments. We're going to need an orthogonal public standard for assessing the quality of the product once it's released into public commerce because the base on which the release is going to be made is going to be proprietary and very closely linked to the configuration of the technologies and the information systems that are tied in there.

Also, it is true that there has been a lot of focus on near infrared and the technology. The companies there have moved in and basically made near infrared a COTS type system, commercial off the shelf. The software is validated and you just pick it up and use it.

However, there are other assessment technologies which are out there, acoustic, photon migration, which will give you more information on the consistency of processes, and those have not yet matured to the same level as near infrared, but they are out there and they are moving up quickly.

DR. LEE: Very well.


DR. BOEHLERT: I would agree with Tom that we need an orthogonal public standard. The company also needs that public standard because, indeed, you may get back samples from the field after it's released and need to test them and verify the quality of that material.

I think organizations such as USP can help in this process not on a monograph-specific basis, but perhaps some general chapters that deal with some of these on-line, in-line, at-line analytical techniques.

The other thing I would point out is that back in the early 1990s I was at a scientific meeting where a number of companies made presentations on how they were going to do all of this good stuff, parametric release for solid oral dosage forms and all of that. As far as I'm aware, except for the European sites, very little has happened in this country, and why didn't it happen? Well, the expense, because very often they had to redesign their manufacturing process, and the regulatory uncertainty: they weren't sure it would be approved. So, they haven't gone forward.

We have to get past that. We need to find some folks that are willing to take the risk and move forward in this regard because Europe is doing it.

DR. LEE: Joe, do you have any comments to make?

DR. BLOOM: Well, I think the idea of using new technology to improve all the manufacturing processes is good. But as Marvin was saying, and taking the arguments of Dr. Kibbe here, we should focus on one or two technologies, basically. These things are new and we're throwing punches in the air in a lot of aspects. What we should do, if we want to go forward, is get some company that would cooperate with the FDA, and the instrument companies will cooperate too, because if an instrument company knows that its new instrument is going to be used in a new technology, they might have a future in their production of the instruments. So, we should take one or two new technologies and try to implement them in a pharmaceutical setting, because this is going to take a long time. It's not going to be an easy process.

Actually, with the NIR that Ajaz was discussing, one of the things is the validation process. There are a lot of people talking about different ways of validating the technique. So, this is going to be a big issue.

The other issue is proprietary information. If this is going to be a setback, we should look into it, because if we're going to establish this subcommittee and the proprietary issue comes about, that might be like a stop sign for the subcommittee to move forward. We should move forward all together: the industry, the FDA, and the instrument companies. We should come together; otherwise we're not going to move forward.

We should pick one of the technologies to move forward with. Just take NIR, which is being used, and try to get a company to establish it and validate it, and then get another new technology and do that.

The other thing is we cannot focus on photoacoustic and NIR and all the other new techniques, taking a whole bunch of new techniques and trying to move forward with them at once. We should focus on one or two of them, so the subcommittee should focus on that aspect.

DR. LEE: Ajaz?

DR. HUSSAIN: I think I'm hearing some concern with the proprietary nature and how that might interfere in the process. Somehow I'm not getting the same concern. At least I don't have the same concern for two reasons. One is most of the things that get submitted to FDA are proprietary technology. So, we handle that. So, it's not an issue from that angle. The proprietary aspect becomes an issue when you have to build guidances and science and so forth.

But in many ways, there are two things here. One is the reason we're focusing on on-line is you still have the floor defined by the current quality standards. So, you have the fall-back situation. So, you have a method that would improve on the existing quality, and that's the basis of justifying that method. All we need to do is understand the basic principles of how will we define that process. Then each company does that under the umbrella of those basic principles for their particular product. So, that becomes proprietary. The general principles should be fine. So, that's the reason I'm not so concerned with the proprietary aspects.

DR. BYRN: Maybe I could comment too on that. I didn't mean to set this off on a big discussion of intellectual property.

I think it's pretty simple. Let's think of a spreadsheet. Now, somebody writes a computer program and they have a spreadsheet and that's proprietary. But yet, that spreadsheet is made available, and they do that so that they can fund the development of it and the improvement. But that spreadsheet program, whether it's Microsoft or whoever, then is made available to everybody else, and they use that program to improve whatever they're doing.

In the same way, a company might have a proprietary technology of some sort that would be developed that they would then sell to everybody else. Each person would use that and operate on their system, but yet there has to be, I think -- I'm not an economist, but I think there has to be a way that we draw instrument manufacturers into this field so that they can justify significant investment in developing some of these technologies. So, we have to allow a system, which I think we've already got set up.

I think I completely agree with Ajaz. It's not any different from using particle size analyzers or anything else that are required by guidances. Those are still proprietary and people sell those. It's just that people need to realize that when we move into a new area like this, there's going to be a lot of involvement of proprietary companies in this. So, I don't think it will hurt us. I think it will actually help us if we just let the current system work.

DR. LEE: Kathleen?

DR. LAMBORN: I can't say I know very much about this area, but just listening to the discussion, it seems to me that we're bouncing back and forth between particular technologies and the concept of what rules should we set for validation. I think this is, Ajaz, what you were trying to say. If we set the rules for what constitutes a sufficient validation for an alternative process, then that would encourage everybody to use different technologies and to develop proprietary components because they would have been told what rules you have to meet.

So, it seems to me the place to start for the subcommittee is to be looking at the general rules that have to be met. This is where the concept of going to the folks in Europe, if they've approved it, and saying, all right, what rules have you had and what was the science behind the concept, and to start there and then go back and use specific examples to make sure that we would agree that the level of assurance that they put in place is one that we'd be comfortable with. But I think if we stay with that, then we would avoid this problem.

DR. LEE: Thank you.


DR. KIBBE: I agree with you. I think that's where I was trying to get to at the very beginning.

DR. LAMBORN: That's what I thought you were doing.

DR. KIBBE: There are going to be lots of issues that we can't even face or can't think of out of whole cloth, but there have been companies who have gone this route with regulators in a highly developed situation. So, if we could identify the companies and the products and the regulators that have already gone this route in Europe, in Germany for example, and the company sees a monetary gain in being able to do the same thing here, we can talk to the regulators about their guidelines, modify them with our own concerns here, put them in place, and partner up on those products. We will be able to glean from that general guidelines for everybody else, regardless of the technology they're using. That's where I would start.

I was half kidding about going to Switzerland with you, but I think that's what you need to do is go and visit with them.


DR. KIBBE: No, I think you need to take a couple of people from the agency over there and find out or bring them here and have us meet with them.

DR. LEE: I would like to give another minute for discussion before summarizing the discussion. Jurgen?

DR. VENITZ: Yes. I'd like to add to the committee charge to deal with what you call the existing but invisible problems. In other words, if the current technology seems to pass a particular product, but then you use a new technology and all of a sudden there are some outliers that would lead one to believe there's a problem that we don't know about using current technology, I think that to me is a big hurdle for companies to even wanting to touch this. So, I think the committee should deal with that. What are you going to do in the circumstance?

DR. BYRN: I'd like to broaden that out, because it's essentially the same problem restated: trying to validate a really good method with a bad method. In other words, if you have a bad method that you're using now and somebody comes up with a good method of blend analysis, how do you validate that new, better method against the bad method? This is a fundamental analytical issue.
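This fundamental analytical issue can be illustrated with a small simulation (a hypothetical sketch; all values and error magnitudes below are invented, not taken from any submission). It shows that when a precise new method is compared against a noisy reference method, the apparent disagreement between the two is dominated by the reference method's own error, which makes the better method look bad.

```python
import random
import statistics

random.seed(0)

# Hypothetical numbers only: simulate the "true" blend content of n samples,
# an old reference method with large random error (sd = 3.0), and a new
# candidate method with small random error (sd = 0.5).
n = 2000
true_vals = [100.0 + random.gauss(0, 1.0) for _ in range(n)]   # real blend variation
old_vals = [t + random.gauss(0, 3.0) for t in true_vals]       # noisy reference method
new_vals = [t + random.gauss(0, 0.5) for t in true_vals]       # better candidate method

# The new-versus-old disagreement is dominated by the OLD method's noise:
diff_sd = statistics.stdev(a - b for a, b in zip(new_vals, old_vals))
new_err_sd = statistics.stdev(a - b for a, b in zip(new_vals, true_vals))

print(f"sd(new - old)  = {diff_sd:.2f}")    # near sqrt(3.0**2 + 0.5**2), about 3.0
print(f"sd(new - true) = {new_err_sd:.2f}")  # near 0.5
```

In other words, a large new-versus-old discrepancy says almost nothing about the new method's accuracy; any validation comparison has to be designed around the reference method's known uncertainty.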

DR. LEE: Thank you.

Gloria, you have a point to make?

DR. ANDERSON: I'd just like to come back to the last slide, I guess it is, in your presentation. If I'm understanding this correctly, you have a question mark at the end of this, and that suggests to me that you're wanting some input on these three areas, whether or not we feel that these are areas the subcommittee should consider and report back on, as well as the subcategories under here.

I think that these probably summarize what such a subcommittee would want to do. It seems to me like we should start with what's available, what's being done, what the regulatory hurdles are, if such information exists, and then try to look at where we want to go from there.

I have a question about the near IR. I want to talk to you about it after this discussion because it may be that we might not want to just look at near IR. We may want to look at something that's complementary to that. I looked at the sheet that you have, and there are a lot of yeses and noes, and it may be a good idea to look at that matrix and see if there's something complementary.

Apparently NIR has been used I guess more frequently than anything else. It has a lot of good characteristics, but there may be something that's complementary so that we don't just look at one thing and in the end find out it doesn't do what we want it to do.

DR. HUSSAIN: Thank you. I think that's exactly what I was trying to do here. This is your subcommittee, and I can propose and hopefully you will accept it, but in a sense this committee will report back to you and the advice then comes to us. So, you really have to define for the subcommittee what the charge would be or the work plan would be.

There were two things which I just wanted to share with you. One is that near infrared is about a 20-year-old technology. It has been in application for 20 years in other sectors, not in pharmaceuticals. The petroleum and other chemical industries have used it, but it doesn't mean that something that's applicable to petroleum would be applicable to pharmaceuticals. We would have to go through that evaluation process.

But the commercial availability and all the other aspects have been worked out, and that's the leading technology in terms of on-line applications. Raman is close behind. So, vibrational spectroscopy is a broad term; I think mid-IR and the others are ready for implementation. Acoustics and other technologies are at the research stage, and I think they'll come about soon.

So, focus on general principles and then removing the uncertainty through general principles would be my way of moving the first step.

I just want to clarify what has happened in Australia and what has happened in Germany with AstraZeneca and Pfizer. These tend to be new plants. I think the AstraZeneca plant is a brand new facility which is on-line throughout. Putting something on-line in an existing facility, I think, presents more challenges. So, I think we'll have to look at that aspect also, in the sense that what might work in a new facility may not work in an older facility. It may not be ready for that.

But just to summarize, I think what I've heard is essentially a lot of issues that I have laid out are also on your radar screen, and I'll wait for Vince to summarize that.

DR. LEE: Any other comments, input before I attempt to summarize what I have heard?

DR. DOULL: Vince, I have just one general comment. It's interesting. This is a new committee that hasn't really even been formed yet, and yet I hear some of the same kinds of problems that the Nonclinical Subcommittee has already encountered: problems about proprietary information and how it's presented, publication of the results, and funding, for example. In the Science Board, you talked about where the resources would come from, and it isn't clear to me exactly what kind of resources.

I guess what I'm saying is that this committee needs to be sure that when we lay down guidelines that those are guidelines which are useful across the board because we have the same kind of issues, same kind of problems with all of our subcommittees. If we add three more new ones, we'll probably have the same kind of things. We need general guidelines is all I'm saying.

DR. HUSSAIN: Vince, this is not a research subcommittee. I think the NCSS was created to do research, and it actually is supposed to be fact finding, but that was a PQRI sort of model. We're not going in that direction with this committee at all. The research and the funding would come through PQRI, our own programs, company collaborations, and so forth. This is not a research subcommittee. So, that's the difference here.

DR. LEE: Bill, do you wish to make any comments?


DR. LEE: Anybody else?

(No response.)

DR. LEE: Let me attempt to summarize what I have heard, and then I would ask the committee to formalize the charge to the subcommittee.

Obviously, this is a trend which is irreversible. We don't know how fast we're going to get there, but hopefully we would be on top of this process.

Steve Byrn mentioned a scientific foundation of this idea.

I also heard about the players in terms of institutions, ethical companies, generics, FDA, and USP, and maybe many others.

I heard about gathering information, learning from others who already have been there.

I also heard about the dissemination of new information; in other words, educating the stakeholders.

What else? And also, I think Gloria mentioned very nicely that perhaps the charge to the subcommittee is already summarized in some of the slides and we should take a look at that.

John mentioned the resources that it would take. For example, if this subcommittee is going to go forth and do some fact finding, would they have access to the facilities?

So, those are the things that I heard. Have I missed something? Yes.

DR. MOYE: This is a unique conversation for me because typically I'm in situations where the pharmaceutical companies are essentially overwhelming the FDA with new technology and its implementation. That doesn't appear to be the case here. I think I've heard two obstacles, and perhaps you were going to get to these.

One I heard was cost because the pharmaceutical company will have a great deal of early investment and they'll want to recoup that, naturally. I don't know if that's an appropriate purview for the committee, but that's going to be a very important issue for the pharmaceutical companies.

The second, of course, is the uncertain regulatory environment. The pharmaceutical companies will need to know, I think, clearly that if they can meet these regulatory stipulations, then they will not have a problem. Pharmaceutical companies oftentimes have their hands full. They have a product coming to market that itself may be controversial. Maybe the disease it's treating isn't well recognized. Perhaps the pivotal studies haven't been as persuasive as they had hoped. They often have an armful of problems coming into the FDA. I don't think they want to add to that the additional problem, which would not have been a problem in the past, but the additional new problem of blend issues, primarily because of a change in the regulations and stipulations. That needs to be lock-solid for them.

So, the degree to which the committee can address those two issues, recouping cost and easing the regulatory concerns, I think would remove the major obstacles from the pharmaceutical companies.

DR. LEE: Right. Thank you. I think this is exactly what Jurgen was hinting at about the problems and possible solutions.

Also, I think what you mentioned triggered thoughts. The pharmaceutical business is a global business, and therefore how the regulatory agencies around the world ought to work together is something we might want to consider.

So, I have a list of items, and now we need to identify somebody who's very good at crystallizing these thoughts. Art, you seem to be very good at that. And I had asked you this morning why is he sitting to my right?

DR. KIBBE: The greatest thing that can happen to you, of course, is great expectations. He says I'm going to crystallize all of this.

The thing that came to my mind while you were speaking is: why would the companies do this? That's the driving question for the regulatory agency anyhow. If the companies find that this is a way to an economic benefit, then they're going to move in this direction. And then the question is, is the agency prepared to accept that change and make it a viable change for the companies so that everybody moves smoothly forward? And that's really what the subcommittee is all about: to get the agency well enough educated about how it can be done and what the pitfalls are in regulating it, so that when the companies are here, prepared to move in that direction, they don't move into a vacuum, because they're certainly not going to move if we're not ready to accept that information. The first step, then, is finding out how it has been done.

The second step -- if we had an economist here -- what is the economic benefit to an individual company and to the overall health care costs of the United States of having an agency lead the industry in this direction rather than responding to the industry? In other words, if we're going to, as an agency, put in regulations that encourage companies to move to this process of validating their products, what are all of these benefits going to end up with? So, I think the subcommittee needs to have a good argument because if we are, as you correctly point out, out in front of the companies on this issue, they're going to need a reason to move with us. I think that could come out of discussions with other regulatory agencies and companies that have gone in that direction. As you pointed out, Pfizer and AstraZeneca have done it with new plants, but are they willing to refit old ones to do that?

Something that hasn't been brought out, which I have kept in the back of my mind, is if we don't move, companies are international. Are they going to take manufacturing and put it someplace else because they can do it better there and not do it here? I don't know whether we need to be sensitive to that, but I think in terms of the United States' leadership in the development of new drugs, we need to be prepared for those kinds of things.

So, I think Vince has correctly listed some of the things. I think your last slide, as Gloria pointed out, really does it. And what my input would be is that we need to help you put priorities on those elements, and from my perspective, the first priority is the regulatory situation where it is working and then the second is to delineate how the benefits will pan out for the companies who are willing to step forward and jump into the water with us.

Does that help you any, Vince?

DR. LEE: Does it help you, Ajaz?


DR. LEE: Go ahead.

DR. HUSSAIN: One aspect is I think I had talked to some of you and some of you had expressed interest in being part of that subcommittee. I think one of the things which would be helpful is if you can identify who from this committee would like to be on that committee, and then we can build the rest of the group around that.

DR. KIBBE: If we're meeting in Switzerland, I want to.


DR. LEE: Are you happy with the names submitted in those 27? Not the names but the expertise.

DR. HUSSAIN: Yesterday, for example, I think we were missing a few areas. I think representation in statistics and chemometrics was low, and I requested some names from the National Institute of Standards and Technology and others, and I think we have received some.

What the list is right now are people who have actually done it.

DR. LEE: I think, Ajaz, the Federal Register is not something that I read every day. You need to get a message out through another group that can help you, I mean, another forum.

DR. HUSSAIN: What we did was we used AAPS and the American Chemical Society and AIChE to send this to key individuals to share. Also, the National Institute of Standards and Technology and other government agencies which have done this in other sectors have personally sent e-mails out to a lot of the folks.

DR. LEE: Let me declare a conflict of interest and then I'll make a statement and request, perhaps that you might want to write an editorial for Pharmaceutical Research. I'm the editor of the journal.


DR. KIBBE: Share that with J.Pharm.Sci. and you could be in both places.

DR. LEE: So, we have 10 minutes left. Let me put forth some charge, and then the committee ought to be comfortable with the charge to the subcommittee.

Number one perhaps is to understand the state of the art, just learning from the people who have been there before.

Maybe before that, we need to define what is to be gained by embracing this new phenomenon. So, that's the first thing. What is the benefit?

Number two is the state of the art.

Number three is what are the problems, the hurdles, and possible solutions.

And perhaps number four is maybe the most important. How should the regulatory agency be prepared for this?

Now, these are very broad, not specific at all. We can fill in the blanks by going to some of the slides in the portfolio.


DR. BYRN: One idea is to see if we have anything to add to these and then fill in the blanks.

DR. LEE: Yes.

DR. BYRN: One thing we might want to add -- and I'm just throwing this out -- is educational issues. I don't know. In other words, if this is implemented, as Ajaz said, how do we educate people in this area since there are no existing programs, I don't think, anywhere that do this. So, how would the education be carried out? Would it be done with AAPS? You know, the whole thing.

DR. LAMBORN: Could I suggest that that could fit in two places under the existing list? One is that education is a problem. Then, when it comes to where we propose people go, any proposals for improving education would fit under that.

DR. LEE: I think education certainly is an important process, who to educate in the short term and the long term.

Other comments?

DR. HUSSAIN: I was told Pat DeLuca is on the phone in case he has a comment.

DR. LEE: Where are you Pat? Pat, are you there?

(No response.)

DR. LEE: I don't think he heard us. I think we need some new technology for this meeting.


DR. LEE: Can anybody read back those four things that I said?

DR. HUSSAIN: Let me try. To start out with, essentially defining the benefits and what we will gain with this. Defining the state of the art. Identifying the problems and hurdles and providing solutions. And then how should we prepare ourselves to move in this direction.

I just wanted to add to that. In terms of training needs, the National Science Foundation has established one center already at the University of Washington. This is the Center for Process Analytical Chemistry at the University of Washington, and I think there are some other centers that NSF is going to form. We are hooking up with them right now.

DR. LEE: Is everybody comfortable with those four points? Should we add more?

(No response.)

DR. LEE: All right. The next thing is two or three other points.

Volunteers from this committee. Do we want to do that now or should we do that behind closed doors?

DR. HUSSAIN: We can do it.

DR. LEE: Where are we going to meet?

DR. KIBBE: I'm ready.

DR. LEE: Art, are you serious?

DR. KIBBE: Listen, if we're going to Switzerland, I'm ready.


DR. LEE: He has a Swiss account.


DR. LEE: How many people do you need?

DR. HUSSAIN: Well, I think traditionally a minimum of two and one consumer rep. That's how we have done it and then supplemented that from the industry and others.

DR. LEE: May I propose that those who might be interested -- well, you can do it two ways. You can either do it by a show of hands now or we can do that during the break. Traditionally, the chair of the subcommittee has to be from this committee. Isn't that right?

DR. HUSSAIN: Not necessarily. Tom is an SGE, and I think he'll be part of that committee and sort of coordinate and manage that part. We're hoping he will accept that role.

DR. LEE: So, do you want some names now?

DR. HUSSAIN: It would be nice, but we can wait.

DR. LEE: So, who would be interested to be considered? Joe, Judy, Art, Steve.

DR. HUSSAIN: Steve, you're not on the committee anymore. I'm just kidding.


DR. BYRN: Yes. I'm not on the committee, so it doesn't really count.

DR. KIBBE: Well, you could serve on the subcommittee.

DR. LEE: Who else?

(No response.)

DR. LEE: Okay, good.

The next thing is the time line. Would two weeks be enough?


DR. HUSSAIN: What I was hoping is we'll prepare them and they should come with all the answers on February 25th.

DR. LEE: And 26th.


DR. LEE: How soon would the Science Board like to hear back from you?

DR. HUSSAIN: The Science Board meets every six months. So, if we can have some information to feed back to the Science Board, that would be a driver in my mind.

DR. LEE: It seems to me that this task, if focused, ought to come to some kind of a conclusion in 6 to 12 months, don't you think? I think as soon as we form the subcommittee, then the chair will recognize the scope of this task. Many issues that we have not talked about might emerge.

DR. MEYER: Vince, you might ask how long it will take to get to Germany and Australia and back with the fact-finding paper.

DR. HUSSAIN: I'm not flying. I have not taken the steps necessary to make the contact, but I will do so immediately and get back to you. I don't have an answer.

DR. MEYER: It seems to me that's critical. The technology apparently is there, although it's in Europe. And the regulatory information is in Europe. What we need to know is how to apply it here, but we don't know what we're trying to apply yet.

DR. BYRN: Marv, the general technology is there, but if you look at the last of Ajaz's slides, where you start looking at these new sensors, that's not there. In fact, this, I think, is the huge excitement of this field: the potential to develop better and better sensors that tell you more and more about what's happening. In a sense, it will be an evolving field. The goal would be to have a sensor, as Tom has said, right on the step that's absolutely critical, so that if anything goes wrong, it senses it immediately and you stop production and fix the problem. That's going to be an evolving goal. So, I think what they have in Europe is the initial airplane, if you will, but they don't have the finished product yet. That's my impression.

DR. MEYER: That may be true, but I don't think this committee is being set up to develop technology or to enhance technology.


DR. MEYER: It's to use current technology.

DR. BYRN: I think all the committee is doing is trying to set up the regulatory environment that would allow this to happen.

DR. LEE: Tom?

DR. LAYLOFF: In defense of the agency, I would like to say that the FDA many years ago approved the use of near infrared as an alternate technology for the release of ampicillin trihydrate for the moisture determination, identification, and assay, many, many years ago.

DR. LEE: Ajaz, this is a feasibility question for you, being chair for the first time. Would it be reasonable to ask the subcommittee to publish the report in journals?

DR. HUSSAIN: I think that's an excellent idea. It definitely is a public document. It will be published through our transcripts and so forth. A version of that written by the chair or the group of the members that would be more in tune with the journal I think would be an excellent idea. So, I would like to see that happen.

DR. LEE: It doesn't have to be Pharmaceutical Research.


DR. LEE: We are at the point of a break. Are there any other questions, comments?

(No response.)

DR. LEE: If not, thank you very much.


DR. LEE: I'm sorry.

DR. HUSSAIN: Just to understand the 6 to 12 months: I think what we're hoping for in that time frame is general principles and so forth. But in my mind, once we have that, more detailed aspects could be addressed. So, the committee might continue in a different direction from that point.

DR. LEE: That's correct. I can only speak for myself: I have no idea what the eventual scope of this project is.

When we come back, we're going to talk about stability testing and shelf-life. Thank you.


DR. LEE: We have Dr. Pat DeLuca on the phone. I understand that he was on the phone but he was not able to speak. So, he heard everything that we talked about.

Pat, are you still there?

DR. DeLUCA: Yes, I'm here.

DR. LEE: Great. The reason you were not able to hear us was because we were too noisy.

Pat, when you have a point to make, will you please identify yourself? So far you are the only one on-line. There may be two others coming on-line. Pat, would you please introduce yourself, who you are and where --

DR. DeLUCA: Yes. I'm Patrick DeLuca. I'm at the University of Kentucky College of Pharmacy.

DR. LEE: Thank you.

We have a new, quote/unquote, member around the table. Dr. Chris Rhodes, would you please introduce yourself?

DR. RHODES: My name is Christopher Rhodes. I'm at the University of Rhode Island.

DR. LEE: Thank you very much, and welcome to this discussion.

The next session is on stability testing and shelf-life. Once again, Ajaz Hussain would like to define the issues for us.

DR. HUSSAIN: Thank you, Vince.

This is somewhat of a different discussion topic. We're not truly posing questions to you; we're presenting this as an awareness topic -- an awareness topic from the perspective of both opportunity and concern, which together create an awareness issue in my mind.

Stability is a contentious topic that we have always debated and continue to debate. I'm not bringing those debates to you for discussion; instead, the topic is physical stability.

The way we are going to present the different perspectives is this: I'll introduce the topic, and I've invited Professor Chris Rhodes to share a scientific perspective on physical stability. Then Dr. Chi-wan Chen will provide an overview of current stability requirements, and hopefully by then you'll have sufficient information for some discussion. I pose a broader question towards the end of my presentation, and I'll come back to it after Dr. Chen makes her presentation.

Just to move on to the awareness topic. I think regulatory stability testing requirements are effective in minimizing stability problems. I think that's the general consensus, and I think the data bear that out. So, why are we discussing this topic today?

There are lingering concerns that certain gaps exist with respect to ensuring physical stability, especially with more complex products such as parenteral controlled-release dosage forms. They are few in number, but their numbers are increasing. And if a change in physical stability leads to a recall, do we take the implants out? We have to struggle with those questions. So, as dosage forms get more complex, physical attributes and their changes come onto our radar screen, and the concern is that the current approach may have to be improved. That's the lingering concern.

At the same time, on the other side of that concern, one could ask whether such concerns contribute to excessive stability testing. We often get criticized for our stability requirements, but what I would like to show is that our stability requirements are actually doing an excellent job, and there are reasons why they are what they are.

But also, is there an opportunity to further improve the regulatory utility of preformulation and product development data to understand mechanisms of physical and chemical changes? So, that's a broad introduction.

Let me focus on concerns from my perspective. Physical stability, I believe -- and I think you'll agree -- is a critical quality and performance attribute. I'll use dissolution changes as an example. Changes in dissolution rate that occur in the absence of detectable chemical changes would be, in my mind, an example of physical changes for tablets and capsules. For other dosage forms such as suspensions, it would be resuspendability and other physical changes that occur. So, there are many different physical attributes that are important.

For the last six years, we have tracked dissolution changes and the recalls that occur because of them. Dissolution-related recalls are the number one or number two quality-related problem that we see. The numbers are not big -- I think this year we had 22 products recalled because of dissolution failures, and many of those were class 3 recalls, not a significant safety and efficacy concern. But there are certain products recalled on a continuing basis.

Carbamazepine is one. Marv did a lot of work on its bioavailability and dissolution failures in the 1980s. Those dissolution failure problems still continue; they have not gone away. It's a lingering problem.

The other concern is that accelerated stability test conditions are more reliable for identifying the potential for chemical changes. The Arrhenius equation and related relationships are essentially derived for chemical changes. If we don't understand the mechanisms of physical changes, how do we know that an Arrhenius-type equation would work for them? Is it even appropriate?

Just to give you an example, cross-linking of gelatin capsules was a significant issue 5 to 10 years ago, and the stability conditions actually induced that change, even though it was not an issue from a bio perspective. So, in some cases the test might be more sensitive to potential problems where there is no safety or efficacy concern. A test that is overly sensitive and gives false positives or false negatives can be a problem.
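[For readers unfamiliar with the extrapolation being questioned here, the following is a minimal sketch of the Arrhenius reasoning behind accelerated testing. All numbers -- the rate constant and the activation energy -- are hypothetical, chosen only to show the form of the calculation; nothing here comes from the meeting itself.]

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_extrapolate(k_ref, temp_ref_c, temp_c, ea_j_per_mol):
    """Extrapolate a first-order degradation rate constant from one
    temperature to another, assuming Arrhenius behavior:
    k(T) = k_ref * exp(-(Ea/R) * (1/T - 1/T_ref))."""
    t = temp_c + 273.15
    t_ref = temp_ref_c + 273.15
    return k_ref * math.exp(-(ea_j_per_mol / R) * (1.0 / t - 1.0 / t_ref))

# Hypothetical example: 2 % potency loss per month observed at the
# accelerated condition (40 C), with an assumed activation energy of
# 80 kJ/mol, extrapolated down to room temperature (25 C).
k_40 = 0.02
k_25 = arrhenius_extrapolate(k_40, 40.0, 25.0, 80e3)
# k_25 comes out several-fold lower than k_40 -- but only if the change is
# a simple chemical reaction. The speakers' caution is that physical
# changes (polymorphic conversion, gelatin cross-linking) need not obey
# this relationship at all.
```

This is exactly the assumption the discussion challenges for physical changes: the extrapolation is only as good as the premise that a single activation-energy-governed reaction dominates.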

Mechanisms governing physical changes are not well understood or characterized. Dissolution rate changes may occur due to a change in the polymorphic form of the drug and/or an excipient -- I think we generally ignore the excipient -- or they could be triggered by a change in processing conditions, packaging, and so forth. It's a complex set of variables that one has to deal with.

One aspect of recalls that bothers me personally is that recall investigations often do not result in identification of a root cause. So, if you see a dissolution problem and the product is recalled, the same product will be recalled again. Without identifying the root cause, the problem keeps coming back again and again, and that cycle perpetuates.

As I said earlier, the increasing number of parenteral controlled-release products is something we need to be prepared for, because more protein and peptide drugs are being developed, and more of them are coming as microspheres, implants, and so forth. We can deal with recalling tablets rather simply, but what about something that's implanted? We have actually dealt with some of those situations in the last couple of years.

At the same time, along with the concern, I think there is also a sense of opportunity. We know there have been significant advances in preformulation, material characterization, and optimization. New tools such as x-ray diffraction are more commonly used, and there are many tools available for characterization and understanding of physical attributes. So, we have an improved ability to identify and eliminate problems, but are those tools being fully utilized? That's a question mark for me.

Can we use this information to reduce the need for stability testing and the prior approval supplement process? That's the program you have heard about from Dr. Yuan-Yuan Chiu -- our risk-based chemistry program. So, there is already a thought process ongoing, and you will have some presentations to that effect, so I'll not get into that right now.

I was planning to bring a couple of case studies -- carbamazepine as a case study, and so forth -- but it's difficult to do that because of the proprietary nature of some of the data, which might involve two products. So, I shied away from creating those case studies.

But I want to share with you some data from a program we run in collaboration with the Department of Defense: the Shelf-Life Extension Program. We keep extending the shelf-life of the stockpile we maintain through testing. This is a major cost saving to the taxpayer -- millions of dollars saved by not throwing out the stockpile every year. Let me share some results with you.

So, some results from that program. We have analyzed more than 1,000 lots of 96 products. What do we see? For 84 percent of the lots, the shelf-life was extended, on average 57 months past the original expiration date. About 14 percent were terminated due to failure, but many are still active. 22 products showed no signs of failure at all -- these are essentially solid as a rock; nothing happens to them. But about 10 percent of the products are unstable and have difficulty meeting even the original expiration date, so extensions are not feasible.

But what is striking -- and I'll show you some data on this, which in my mind supports why we have to be very conservative with shelf-life and why our stability requirements are the way they are -- is that the stability period is highly variable from lot to lot. I cannot predict what the shelf-life of a given lot is going to be. Let me show you some examples.

Here is an example of an injection, diazepam injectors. On the y axis, you're looking at the length of extension beyond the established shelf-life -- months beyond the original expiration date. On the x axis, you're looking at different lots of the same material; nothing else is different. You'll see that it's highly variable. Can a given lot be extended to 120 months, or 96 months? We don't know until we do the testing. So, this is an ongoing program in which we routinely test the stockpile material.

One of the things you'll see is chemical degradation, and physical changes too -- pH changes, chemical, physical, or a combination. But look at the recrystallization and precipitation problems with these injections. That is the reason certain lots cannot be extended at all. And it's so unpredictable that we have to test every lot to maintain this program.

Why are there such big lot-to-lot differences? I don't have an answer for that, but I would like to seek some answers for that.

Here's one more example, tetracycline capsules. The lot designated H is still ongoing; it's beyond 120 months past its expiration date.

Just to let you know, these expiration dates apply under controlled storage conditions. This is not in-use testing.

And dissolution failure in this case is also quite apparent on occasion. There are certain products in the stockpile right now that we know will fail dissolution. So, we actually have the ability to identify products that might fail dissolution -- they could become models for understanding the mechanisms.

I was going to hold the questions until after you have heard from the other two speakers, so I'll come back to them later on.

DR. RHODES: Thank you very much, indeed. It's a great pleasure to be here. I greatly enjoyed the discussion we had before break. I have promised not to mention the near IR.

Basically, the first point I want to make, looking back, is that I think we can rightly congratulate ourselves -- and by ourselves, I mean industry, regulatory bodies, and academia -- on the general progress we have made in stability testing. Of course, stability testing is one of the areas where harmonization has been remarkably successful.

However, unfortunately, in my work as a consultant I still meet many people in industry who believe that the only role of stability testing is to test potency. If the drug meets label claim with respect to potency, that's their only concern. That is something all of us have got to do something about.

This morning we heard -- and I, as an EU pharmacist, was flushed with pride to hear -- that the EU is somewhat in advance in some areas. I spend about three or four months a year in Europe, and I think -- I haven't got hard data -- that there is probably rather more understanding there of the problems that physical instability causes than there is here. I'm hopeful that one result of this morning's discussions will be to raise the level of awareness of potential physical stability problems and then perhaps to decide what kind of action is required.

So, stability testing should take as its purview the quantification of any functionally relevant attribute that can change with time and that may modify the safety, efficacy, or patient acceptability of the product. It therefore certainly includes physical stability problems.

As a consultant, I can tell you that some of the worst stability problems I have ever had to deal with or attempt to deal with are physical stability problems, and I'd like to endorse very strongly what Ajaz has said about batch-to-batch variability.

We all know that batch-to-batch variability can be a problem with chemical stability. Again, without hard data, my personal impression is that it is a much more serious problem with certain types of physical stability. I've worked with suspensions where 12 or 15 batches in a row run very well, and then one batch fails for some reason. And when you have that problem, I regret to say that there are some who would like to use the SUC approach -- "sweep it under the carpet" -- and forget about it. I strongly believe that physical stability problems are, for a number of reasons, less well studied; in some instances we have no understanding of the mechanism, and it may well be that all we see at the moment is the tip of the iceberg. The problems may be more significant than we realize.

Some of the possible adverse effects of physical instability are clearly modifications in release rate -- and that could be an increase or a decrease. We've already heard that this is relatively common. It can certainly lead to class 3 and in some cases class 2 recalls.

Aggregation of proteins, or of dispersed material in emulsions and suspensions, can be very important. Adsorption onto packs or infusion sets can certainly be a clinically significant problem. Deliquescence can lead to such problems as content uniformity and tablet weight difficulties. Migration of one or more molecular species -- in a drug delivery system, in a pack, or in a label -- can cause problems. You can get, for example, loss of adhesion in a transdermal, loss of label adhesion on a plastic bottle, or simply ink running because of migration. Obviously, if the patient can't read what is on the label, it's very hard in my opinion to argue that the product is safe and effective.

Some of the other effects you'll see are loss of back-off torque on a plastic bottle with a plastic cap. In most cases, when we put tablets in bottles, the resin we use for the bottle is not the same as the one we use for the cap; therefore, when the temperature rises, either the bottle or the cap expands more than the other component. Under that stress, eventually you can lose your back-off torque.

Similarly, certain physical changes, aging on plastics, can lead to loss of package integrity, and that of course, can affect the microbiological status of the product.

And then we come to this lovely term, loss of pharmaceutical elegance. Now, you might say, is that really important? Yes, it is. If patients see or smell a perceived difference between one batch of tablets and the previous batch, very often they will not use it, they will take it back, they will miss doses. Therefore, when we talk about the quality of products, safety, efficacy, and patient acceptability are all important.

Now, why don't we give sufficient attention to physical stability? Quite frankly, I think that in many cases ignorance is bliss: we haven't looked to see if there are any changes, and we assume that everything is okay. The scope of the problem and its mechanisms are, in many cases, quite unclear.

One of the areas I have published on is change in dissolution, and I suspect there are a number of different molecular mechanisms that can lead either to premature release of drug -- in other words, dissolution that is too rapid -- or to a slow or incomplete release.

There are techniques that can be used to investigate this problem. One technique in particular -- some companies are now developing equipment for it for pharmaceutical purposes -- has, I think, very considerable potential for evaluating prolonged-release pharmaceuticals, including some of the new, complicated dosage forms that have been referred to.

Unfortunately, I think insufficient attention has been given by regulatory bodies to physical stability. Too often the test methods used for physical stability vary from company to company, and you don't really know whether the data are comparable.

Universities are at fault too. We already heard this morning about the decline of programs in industrial pharmacy. Steve, I'm now going to give a commercial, since I am also a Boilermaker. There are still some universities giving this type of training, and they are to be commended, but in many cases it isn't getting the attention it deserves.

Of course, as I've already said, very often these problems are intermittent in nature and we hope they will go away.

What are some of the common misconceptions about physical stability testing and physical stability problems? There are still some people who, when I ask, "I looked at your protocol for evaluating tablets -- why are you only doing hardness? Don't you realize that FDA is interested in dissolution?", will quite confidently assert that if you measure tablet hardness and there's no change in tablet hardness, the good Lord said there can be no change in dissolution. I've got news for them. That ain't true. Sometimes you get a change in hardness and no change in dissolution. Other times you get a change in dissolution and no change in hardness.

And then something else that has been referred to: many pharmaceutical scientists implicitly or explicitly assume that since the rate of a simple chemical reaction is governed by the Arrhenius equation, there must be some comparable equation for any physical stress. For example, in looking at the rate of aggregation of proteins, the rate of coalescence of emulsions, or the rate of sedimentation, many people try applying very high g forces on the implicit assumption that you can use an Arrhenius-type approach to predict stability at normal g forces from behavior at high g forces.

I know of no experimental, pragmatic data showing that this works, and I know of no theoretical argument requiring that it should work. Perhaps the companies that carry out this type of work really have an eye to the future, and when they market material to be sold on Saturn or Jupiter, they will already have stress-tested their products. But it is very dangerous to assume that physical processes obey an Arrhenius-type relationship similar to the one that can be used in chemical testing.

What we need is additional attention to developing validated methods of physical testing, so that different units, different companies, and different research labs can come up with data that are comparable.

We need additional attention to the stability of products in the channel of distribution. Many of you know that USP and FDA quite properly have been giving attention to this important matter. What is particularly important for physical stability is that very often a product will show no physical change when stored at controlled room temperature -- in other words, isothermally.

Now, I am aware, of course, that the USP definition of controlled room temperature allows for occasional excursions to 30 degrees. That's a lovely expression, ladies and gentlemen. It makes me think that the stability manager takes the little samples out for a walk once a month so they can benefit from the sunshine.


DR. RHODES: But seriously, I think there are many physical instability problems that you don't see with retained samples kept at controlled room temperature. It's only when products get out into the channel of distribution and experience substantial vibrational or temperature stress that these problems develop.
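[The "excursions" Dr. Rhodes jokes about are handled quantitatively in practice through the mean kinetic temperature (MKT) concept used in defining controlled room temperature. The following is a minimal sketch using the heat-of-activation value of 83.144 kJ/mol conventionally assumed for MKT calculations; the temperature series is invented purely for illustration.]

```python
import math

GAS_CONSTANT = 8.3144e-3   # kJ/(mol*K)
DELTA_H = 83.144           # kJ/mol, the heat of activation conventionally
                           # assumed for mean kinetic temperature

def mean_kinetic_temperature(temps_celsius):
    """Mean kinetic temperature (returned in degrees C) for a series of
    storage temperature readings. MKT weights warm periods more heavily
    than a simple arithmetic mean, reflecting Arrhenius-like degradation."""
    temps_k = [t + 273.15 for t in temps_celsius]
    exps = [math.exp(-DELTA_H / (GAS_CONSTANT * t)) for t in temps_k]
    mkt_k = (DELTA_H / GAS_CONSTANT) / (-math.log(sum(exps) / len(exps)))
    return mkt_k - 273.15

# Illustration: a month mostly at 25 C with a few "excursions" to 30 C.
readings = [25.0] * 27 + [30.0] * 3
mkt = mean_kinetic_temperature(readings)
# mkt lands slightly above the arithmetic mean of 25.5 C, because the
# warm days are weighted exponentially.
```

Note that, as the speakers stress, this weighting again rests on an Arrhenius-type assumption, so it addresses chemical degradation far better than the physical changes under discussion.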

I want to emphasize the point made earlier: as long as our approach is purely empirical and we don't understand mechanisms, we will, to some extent at least, be groping in the dark. We need to identify root causes. We need to recognize that many of these problems show significant batch-to-batch variability. We need to look at factors such as temperature stress; water activity is probably also extremely important.

We need to be more knowledgeable about drug-excipient interactions. Certainly some of the techniques we use in preformulation are very helpful, but some may be excessive. For example, mixing a drug in a one-to-one ratio with an excipient when you know the tablet will contain only 1 percent of each of those two components may be overkill.

But the point I'd like to make this morning is that this is an area that does require more attention. It requires more standardization, and what I would like to see is a triangle of forces. Many of you will recall the triangle of forces from physics 101; the triangle I see here is academia, regulatory agencies, and industry working together so that we can move forward.

Thank you.

DR. LEE: Thank you very much.

Are there any questions?

(No response.)

DR. LEE: Thank you. A very clear presentation.

Dr. Chen?

DR. CHEN: Good morning. I would like to spend the next 10 to 15 minutes giving you an overview of our current practice, which is mainly ICH Q1A, or Q1A(R), as it relates to the physical stability of drug products.

ICH Q1A was originally published in the U.S. in 1994, and it was recently revised, expanded, and republished just three weeks ago, on November 7th. It's a guidance that provides recommendations on how to design and conduct stability testing of new drug substances and products, and it tells you how to assemble the core stability data package, so to speak, to support the approval of an original application for a new drug product.

The kinds of physical attributes already mentioned by Dr. Rhodes are appearance; particle size; polymorphic form, as appropriate; dissolution, as applicable to the dosage form; resuspendability of products that are to be constituted; viscosity; homogeneity for semi-solids; and so on. What I would like to do is give you an outline of how Q1A(R) addresses these issues.

Q1A(R) starts with what is the first and foremost issue: stress testing of the drug substance. This is one-time testing of typically one batch of drug substance, subjecting it to very extreme conditions above and beyond the so-called accelerated conditions, which I'll touch on later. These conditions are elevated temperature, elevated humidity, light, oxidation, and extreme pH -- i.e., acid and base hydrolysis.

The purpose of this kind of stress testing of the drug substance is to get a handle on how the drug substance behaves under these extreme conditions, and thereby on what may be expected, in the worst-case scenario, of how the drug product may behave. It also helps the development of stability-indicating methods.

Where ICH Q1A(R) isn't very helpful is stress testing of the drug product. It's vague on what the scope of such testing should be. It refers only to photostability, which is in turn covered by another ICH document, Q1B, dealing with photostability. Other than that, there's no detail.

So, I'm going to jump to the so-called formal stability testing of drug products. There are three basic elements that constitute a formal stability study. Number one, the batches: the selection, number, and type of batches. Number two, the tests, test attributes, or specifications. And number three, the storage conditions and the amount of data considered acceptable to support the application.

In terms of batch selection, Q1A(R) calls for a minimum of three batches, two of which should be of pilot scale; the third can be smaller. These batches should have the same formulation and be packaged in the same container/closure system as proposed for marketing. They should be made using a manufacturing process that simulates the process to be used for commercial batches. Lastly, they should meet the same specifications as proposed for marketing. In other words, under the Q1A(R) recommendation, the batches put on stability to support marketing approval do not have to be production-size batches, but they do have to represent the to-be-marketed production batches.

The second element of formal stability testing deals with the tests or attributes. Q1A(R) says that the stability of the product should be monitored for those attributes that are susceptible to change during storage and are likely to influence the product's quality, safety, and/or performance. These attributes encompass chemical, physical, microbiological, and biological attributes, as well as functionality tests for products that involve a delivery system.

But Q1A(R) does not address exactly what those attributes are. Q3A and Q3B provide more guidance on how to select or exclude impurities and degradants. Q6A further expands on test attributes, including how to establish acceptance criteria for degradants, impurities, and physical attributes such as polymorphic form, particle size, and dissolution.

The third aspect of formal stability testing is the storage conditions and the amount of data. For products intended for "room temperature" storage, which is the most common situation, and not packaged in semi-permeable containers, the conditions are as follows. For long-term testing, the temperature should be controlled at 25 degrees C plus or minus 2, with humidity controlled at 60 percent. An accelerated condition is also expected: 40 degrees C and 75 percent RH.

If the product fails at that condition after 6 months, it's expected that the intermediate condition testing is also carried out. This is to cover the upper bound since, as Dr. Rhodes indicated, the USP controlled room temperature definition allows excursions to 30 degrees. So, should the product fail at 40 degrees after 6 months, then demonstration of stability for 12 months at 30 degrees C is expected.

In terms of the amount of data at the NDA submission, for long-term testing, we expect 12 months from an ongoing study. For accelerated, it's 6 months from a 6-month study. Intermediate would be 6 months from a 12-month study.
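The conditions and data expectations just described can be summarized in a small sketch. This is a hypothetical illustration of the logic only, not the guidance text; the tolerances on the accelerated and intermediate conditions and the intermediate humidity are assumptions, since the transcript states only the values quoted by Dr. Chen.

```python
# Hypothetical summary of the Q1A(R) conditions described above for
# products intended for room-temperature storage.
Q1AR_CONDITIONS = {
    "long_term":    {"temp_c": 25, "tol_c": 2, "rh_pct": 60, "months_at_nda": 12},
    "accelerated":  {"temp_c": 40, "tol_c": 2, "rh_pct": 75, "months_at_nda": 6},  # tolerance assumed
    "intermediate": {"temp_c": 30, "tol_c": 2, "rh_pct": 60, "months_at_nda": 6},  # tolerance and RH assumed
}

def required_conditions(significant_change_at_accelerated: bool) -> list:
    """Intermediate testing is triggered only by significant change
    at the accelerated condition, per the discussion above."""
    conditions = ["long_term", "accelerated"]
    if significant_change_at_accelerated:
        conditions.append("intermediate")
    return conditions

print(required_conditions(True))
```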

I hope I'm not plagiarizing the definition of excursion. I don't know if my colleague, Carol Easter, from Merck, who worked with me on the ICH expert working group, is here, but she once defined excursion as a pleasant trip.


DR. CHEN: So, we can complement Dr. Rhodes' description of how one conducts excursion outside of the normal controlled storage condition as a pleasant trip.

I would like to skip this one. It basically shows the other two sets of conditions, for products intended for low-temperature storage, including refrigerated and frozen products.

What I would like to draw your attention to is two aspects of the so-called significant change, which I mentioned two slides ago. There are five criteria under this definition of significant change, which is a new concept introduced into this ICH document. It's mainly a trigger for intermediate testing for those products intended for room temperature storage, but it also has ramifications for extrapolation and so on.

The criteria that have relevance to our discussion today are the fourth and fifth bullets on the slide. One is failure to meet acceptance criteria for, generally speaking, appearance, physical attributes, and functionality tests. The other, which stands by itself even though we've often considered it to be part of the physical attributes, is dissolution; for dissolution, the criterion for judging a significant change is failure to meet the acceptance criteria for 12 units.

What is the consequence of significant change according to Q1A? If significant change occurs at an accelerated condition, then there's a need to conduct intermediate testing, as I explained earlier. And extrapolation of shelf-life beyond the real-time data range may not be appropriate.

If significant change also occurs at intermediate, then the applicant needs to consider one of the following options. Perhaps there's a need to reformulate. Perhaps there's a need to qualify a higher impurity or degradant level, or maybe a more protective container/closure system. What I have neglected to include on the slide is the consideration that perhaps the product is not suitable for room temperature storage and should perhaps be refrigerated.

Q1A(R) further discusses how you evaluate your stability data once they are collected. Among the provisions is the allowance for extrapolation of shelf-life beyond the real-time data range, but that's predicated on no significant change at accelerated, and on the condition that you have either relevant supportive data from developmental batches that don't quite fit the primary stability batch definition and/or a statistical analysis. But, again, as Dr. Rhodes pointed out earlier, most physical properties and attributes don't really lend themselves to the assumption that they will follow the Arrhenius equation, i.e., linear regression, which is the type of analysis that industry and FDA typically resort to.
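As an illustration of the kind of statistical analysis mentioned here -- linear regression of long-term data with a confidence bound, in the spirit of ICH Q1E -- the following is a minimal sketch using entirely hypothetical numbers. A real evaluation would test poolability across batches and, as noted above, would not extrapolate this far beyond the real-time data range.

```python
import math

# Hypothetical long-term data: months on stability vs. assay (% label claim).
months = [0, 3, 6, 9, 12]
assay = [100.1, 99.4, 98.9, 98.2, 97.6]
limit = 90.0  # acceptance criterion: not less than 90% of label claim

n = len(months)
mx = sum(months) / n
my = sum(assay) / n
sxx = sum((x - mx) ** 2 for x in months)
slope = sum((x - mx) * (y - my) for x, y in zip(months, assay)) / sxx
intercept = my - slope * mx

# Residual standard error of the fit
resid = [y - (intercept + slope * x) for x, y in zip(months, assay)]
s = math.sqrt(sum(r * r for r in resid) / (n - 2))

t95 = 2.353  # one-sided 95% t critical value for n - 2 = 3 degrees of freedom

def lower_bound(t):
    """One-sided 95% lower confidence bound on the mean assay at month t."""
    se = s * math.sqrt(1 / n + (t - mx) ** 2 / sxx)
    return intercept + slope * t - t95 * se

# Walk forward until the lower bound crosses the acceptance limit.
t = 0.0
while lower_bound(t) >= limit and t < 60:
    t += 0.5
print(f"Estimated shelf-life: about {t:.1f} months")
```

The point of the sketch is only that the shelf-life estimate comes from where the confidence bound, not the fitted line itself, meets the acceptance limit; with real data, the extrapolation limits discussed above would cap the result.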

This kind of shelf-life, if approved -- i.e., if it's extended beyond the real-time data range and/or if it's based on less than production-size batches -- is something we call tentative. Q1A(R) also recognizes that and recommends that, post-approval, this kind of tentative shelf-life should be confirmed with data from three full production-size batches.

In the U.S., we also expect firms to put at least one batch annually on stability as further confirmation monitoring.

Lastly, I'd just like to sum it up. If we follow the Q1A(R) guidance, can we reasonably predict shelf-life for all future batches? Well, we can look at this question from two aspects.

One is that we are relying on accelerated testing or less than full shelf-life long-term data to grant shelf-life, and is that predictive in all cases or for all attributes?

Secondly, we oftentimes rely on pilot batches not made at the intended commercial site. Are those data predictive of future production batch behavior? Well, I think a lot of that would depend on a combination of factors, and probably not all factors apply in all cases.

But at the top of the list, I think, is the complexity of the formulation and/or dosage form, because that affects the reproducibility and perhaps the robustness of the manufacturing process, which is what I list as the second factor. The robustness of the manufacturing process has an impact on how successful you will be in technology transfer and/or scale-up.

I think both Ajaz and Dr. Rhodes touched on the sampling plan. How sound the sampling plan is may affect the outcome of the stability testing results. You may question how representative it is when you see the result.

Reproducibility of the analytical procedure will certainly play a role in how reliable the data are and how you project from pilot batches to future batches.

And acceptance criteria need to be meaningful. If they're too tight, you run the risk of failing lots right from the beginning; if they're too loose, they don't really serve the quality control purpose. So, from the regulatory perspective, we need to consider that.

Lastly, there is the experience that the firm has with the product beyond the stability batches alone. This includes the number of batches that have been made for the primary stability testing purposes as well as developmental batches. Obviously, the more experience you have, the more you can speak to the other factors: you have more knowledge about the product as a whole and the various aspects going into it.

So, with that, I hope this will be helpful in our discussion later of how these features relate to the concerns that Ajaz raised earlier. Thank you.

DR. LEE: Thank you very much.

Are there any questions for Dr. Chen before she leaves the podium?

(No response.)

DR. LEE: Thank you.

Ajaz, would you like to --

DR. HUSSAIN: I had some questions on my last slide, if we could put those back up again.

Vince, while that is happening: from Dr. Rhodes' presentation, one aspect which I think adds to the concerns that I have is physical attributes -- how we measure these attributes and what we are actually measuring. There's a gap there.

Dr. Rhodes mentioned hardness. How do we measure it? We just put a tablet in the instrument, and it's the crushing strength. That doesn't capture the fundamental attributes, from a material properties perspective, that we really would need to understand what's happening on a mechanistic basis. Tensile strength would be a better approach, but we don't measure tensile strength; we use hardness values.

But that points to a very fundamental aspect: the method used to characterize the attribute. Transdermal adhesives -- how do we characterize them? We don't have good methods even to say that the system would maintain its adhesiveness in different populations or at different times and so forth.

I just wanted to point out that aspect, which I found very interesting, of Professor Rhodes' presentation.

As for the questions I wanted to pose to you, in a very broad way: as I said, we didn't get data and so forth to present to you at this time, and there is difficulty in getting that information to you. We have failure rates and this and that, but we don't have any explanations.

So, one aspect that I would like you to consider is should this topic be developed for a more detailed discussion by ACPS. Is there a need for that? I heard from Dr. Rhodes. I think he felt there was one. But in order to do that, we really need to have some database, some understanding of some mechanistic basis. I don't have that data right now that I can bring to you for a very rational discussion.

So, should FDA labs develop a research project to elucidate certain mechanisms so at least we have some examples?

The goal of that program would be essentially to provide information on how to prevent stability problems. My focus is prevention. We're doing extensive work there. But there are still pockets of problems that we see which need to be prevented. So, how do I push the focus on the prevention mind set?

My concern comes especially from parenteral dosage forms, for one reason. I don't want to be in a situation where I have to face a decision that there's a failure. Do we recall that product? And if we recall it, do we take that product out of the patient when it's already implanted? Things of that nature. There's a potential for those questions to be posed in the near future.

The aspect which I think I would like to add to the parenteral dosage forms is we actually don't have good dissolution test methods. If we look at the test methods we use for dissolution for parenteral controlled-release products, even liposomes, for example, we have in many cases adopted what we have learned from oral tablets and capsules. Those methods are getting translated into that area. So, we lack fundamental methods that we would need in that area.

But right now the initial next step is we have a research unit in our Office of Testing and Research. They have actually started collecting all the tools that we need to establish physical characterization and so forth. We participate in the Shelf-Life Extension Program to maintain the national stockpile. We have a recall database. I think a lot of those elements that we need to identify problems and learn from those are happening. So, I think we have an opportunity to actually start a research project, and would that be the right thing to do at this time would be my question to you.

Complex dosage forms. I had invited Diane Burgess to spend her sabbatical, and she went through and spent three months with us identifying some of the problems. There are certain high risk issues in parenteral controlled-release dosage forms that she identified for us.

So, this is sort of brewing. This will come up, and how should I bring this back to you if you want this back for discussion?

DR. LEE: Thank you very much.

Any response from the committee? I think Ajaz wants us to work harder.

DR. BOEHLERT: Certainly for complex dosage forms -- sustained and controlled-release parenterals -- there is no guidance now, to my understanding. The companies that are developing those products are pretty much developing their own in-house methods based on some in vivo/in vitro correlation. It would be helpful to have some guidance in that area, because nothing exists now, to ensure some consistency going forward. Right now everybody is doing their own thing, and a lot of these products are fairly new. There aren't a lot of old-time products.

When it comes to conventional products, I think you talked about 22 recalls this year. Is that right?

DR. HUSSAIN: 22, right.

DR. BOEHLERT: That's not very many.

DR. HUSSAIN: No, it's not. It's a small number.

DR. BOEHLERT: So, I'm wondering what the problem is. I agree absolutely with Dr. Rhodes. There are a lot of issues with physical stability, particularly dissolution. The companies don't understand the mechanisms very well, but guess what? Very often they're able to produce a product that is safe and efficacious and does not fail.

So, I think you need to outline, for me at least, just what the problem is. 22 recalls out of all the products that are on the market is not very many.

DR. HUSSAIN: No. As I stated, in many ways, for the conventional tablets and capsules and conventional dosage forms, I think we have been able to solve the problem. It's not a major issue. But there are lingering problems in that area. For example, I mentioned carbamazepine. Failure of dissolution is linked to biofailure and seizures and so forth. That problem has never gone away. We saw it for the first time 20 years ago, and it's still coming back again and again. So, it continues.

And some of them are older products. There's not much opportunity to change that, but there needs to be some improvement.

The problem is this. I think the mind set that is occurring is that what we have learned from oral tablets and capsules gets translated into other dosage forms, parenteral dosage forms. That's where the mind set is. The oral stability problems are small in number. The total number of quality recalls we had last year was 243. Compared to all the products out there, that's a very small number. So, we're doing a good job from that perspective, and I agree with that.

But the basic process by which we are achieving that is through testing, testing, testing. You saw Dr. Chen's presentation. We have extensive testing. But as we move towards the more complex, with liposomes and parenterals and so forth, we cannot translate the same system that we have learned from tablets and capsules to these. That's the problem I'm trying to illustrate. We have to start somewhere in terms of understanding the mechanisms.

For liposomes, we had a discussion with you some time ago. We still use dissolution testing for liposomes. What does that mean? What will that tell us?

One aspect of Dr. Rhodes' presentation was this: unless we have good physical measurement methods, what we see as not a problem right now may not actually be so. There may be problems out there that our physical methods are simply not catching.

DR. LEE: I would like to go off line and to see whether or not Dr. DeLuca is still there. Pat, are you with us?

DR. DeLUCA: Yes, I'm still here.

DR. LEE: Pat, you've been very patient. I would like to make sure you have a chance to participate before something happens to you.

DR. DeLUCA: I've enjoyed the presentations. Certainly an area that we have been devoting a lot of time to is controlled-release parenterals, and we are directing a lot of attention to in vitro release methods. As we look at these long-term, prolonged formulations -- depot formulations where we're looking at even longer than 30 days, going out to 6 months and probably even longer -- we need accelerated methods, for quality control purposes, for in vitro release and in vivo performance: to be able, after a batch is produced, to test it and then within a short period of time -- let's say, within a week -- to be able to say, well, this is a reliable form for a 6-month dosage form. So, I think there's a need to come up with the methods to allow this to take place.

We're working on and looking at accelerated methods, Arrhenius treatments, and other mathematical treatments to allow one to predict this. So, it's not so much the stability standpoint for these forms, but mainly the release characteristics, the release performance, and then, of course, the in vivo performance. So, I think this is an area that certainly needs a lot of attention.
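The Arrhenius treatment Dr. DeLuca mentions can be sketched as follows. This is a minimal illustration with entirely hypothetical rate constants, assuming simple first-order chemical loss; as other speakers note, many physical changes do not obey this model at all, which is precisely the concern raised elsewhere in the discussion.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

# Hypothetical first-order degradation rate constants (per month)
# measured at two elevated temperatures.
k_50c, k_60c = 0.020, 0.045
T1, T2 = 323.15, 333.15  # 50 C and 60 C in kelvin

# Arrhenius: k = A * exp(-Ea / (R * T)); two points determine Ea.
Ea = R * math.log(k_60c / k_50c) / (1 / T1 - 1 / T2)

def k_at(temp_c):
    """Rate constant extrapolated to temp_c, assuming Arrhenius behavior."""
    T = temp_c + 273.15
    return k_50c * math.exp(-Ea / R * (1 / T - 1 / T1))

# Time for potency to fall from 100% to 90% under first-order kinetics.
k25 = k_at(25.0)
shelf_life_months = math.log(100.0 / 90.0) / k25
print(f"Ea = {Ea / 1000:.1f} kJ/mol, predicted shelf-life ~ {shelf_life_months:.0f} months")
```

The prediction is only as good as the assumptions: a single rate-limiting chemical mechanism that holds across the whole temperature range, which is exactly what physical attributes such as dissolution often violate.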

DR. LEE: Thank you. Can you see us?

DR. DeLUCA: Yes.

DR. LEE: You can.

DR. DeLUCA: Yes. I can see you well.

DR. LEE: But we cannot see you.

DR. DeLUCA: No, you can't see me.

DR. LEE: Anyway, I think that Dr. Rhodes' presentation was designed for you with that in mind.


DR. BYRN: I'd like to go on with what Pat was saying. Even with simple dosage forms, based on what Chris was saying about the potential failure of the Arrhenius equation and so on, it's very difficult to make predictions or do accelerated studies. So, a better understanding of that would be of significant assistance. I think we know more about chemical degradation, although there's always controversy even there; physical changes, and the predictability of accelerated methods for them, are a major issue that I think would be worth understanding more.

And then the second thing, just to continue with Dr. Rhodes: in the experiences I've had consulting, problems are much more widespread than would be reflected by the number of recalls. My understanding is that people solve them prior to getting on the market. But it's not a well understood area, I don't believe.

DR. LEE: Let me interject here. Did I understand you correctly that the definition of shelf-life is mostly with respect to the chemical species?

DR. HUSSAIN: No. I think the shelf-life, as we define it, is based on all attributes, the entire specification that we would have. But I think what Professor Rhodes was mentioning is that the mind set, in some areas, tends to be potency only. But, no, our shelf-life would be based on the entire --

DR. LEE: So, let me talk about something that is very close to me: the ophthalmic area, with its suspensions. What if the particle size changes and there's no sign of chemical instability? What do you do with that?

DR. HUSSAIN: If there is a specification for particle size, that would be part of the stability program. Chi-wan can correct me if I'm wrong, but I think we would go through all the attributes which are critical for that product, and they would be part of that shelf-life program.

If I may, there were several points made. One was that Professor DeLuca mentioned the need for an accelerated dissolution test. I wanted to share with you some information on that. That is a very significant challenge for us right now. Some of the parenteral dosage forms are designed for release over 6 months, a year, and so forth, and the release is extremely slow. So, how do you even establish a quality control tool? Are you going to run a dissolution test for a month, or 6 months, before you can release the product? There is a need to accelerate that test.

And my concern was with the methods being used to accelerate. One example I'll give you is using 0.1 normal HCl as the dissolution medium. It has actually happened. That's one way of accelerating, but the conditions for accelerating the release are nowhere linked to mechanism. So, what we have done is place a research contract with Diane Burgess on that very issue: a more mechanistic basis for accelerated dissolution tests for quality control, which eventually will be linked to in vivo performance. So, we are exploring that as a research project right now.

That's what I was hoping to sort of mention to you is when I get back, we'll have some data that we can present to you in that regard.

But just to continue that thought, we have test conditions for parenteral dosage forms with 0.1 normal HCl as the dissolution medium. What will that tell me in terms of shelf-life? What will that tell me in terms of changes? I'm not sure. So, that's the underlying concern throughout my presentation here: yes, we have tests which are measuring something, but what are we measuring?

DR. LEE: Yes. It seems to me that you are just stressing the system in a different way. I think we've been tied to using temperature as a way. There might be others.

DR. DeLUCA: Yes. Just to add, I think we have to be careful, when we're stressing, whether we're producing a chemical change that could affect the stability or a physical change, in trying to accelerate whatever we're trying to accelerate -- whether it's the stability testing or the release from a prolonged-release form. I think that's another key issue. You certainly can accelerate the release chemically, but I'm not so sure that's a plausible way of going.

DR. LEE: So, Ajaz, I sense that you perceive there's a gap and you'd like us to address that.

DR. HUSSAIN: Well, I perceive that as a gap. I think the system is working fine, and I think we have a good system. But if there are such gaps as I perceive, if we don't fill those gaps, there is a potential for problems that we'll face, and we will be in a reactive mode. And we can prevent this from happening, especially for parenteral controlled-release for long term and so forth. In my opinion the risks are higher for failure and we want to avoid those failures.

Chi-wan had something to say.

DR. CHEN: I would just like to add to Ajaz's answer to your question, Dr. Lee. Yes, we do place as important an emphasis on physical attributes as on chemical attributes when establishing shelf-life, but it presents a great deal of challenge to FDA, as well as the industry, because of all the problems we outlined earlier -- for example, the non-predictability of accelerated data and of less-than-full-shelf-life data, which is a recurring occurrence.

As the industry has compressed its drug development time line, there is no chance to really work out all the bugs in the manufacturing process or to have enough long-term data for us to set a reasonable shelf-life and acceptance criteria such that everything that is approved is predictive for future production batches. So, that's a real challenge.

DR. LEE: The committee has been pretty quiet so far. Art, do you have any comment to make?

DR. KIBBE: I have something completely off the subject.

DR. LEE: Leon?

DR. SHARGEL: Given the fact that the number of recalls on solid oral dosage forms is very small, it is apparent that the stability program, at least for those products, seems to be working in general, although in discussion --

DR. RODRIGUEZ-HORNEDO: I have a comment.

DR. LEE: Nair, would you please wait a second?

DR. SHARGEL: Given the fact that we have a small number of recalls, although they are there, and many companies have experience -- one of the last points on Dr. Chen's slide was on the predictability of physical stability in terms of experience, the number of primary and supporting batches. So, there are companies that have experience with the product. We've talked about risk management and other items under which we may not need to do stability on every batch, and yet there are apparently some issues even for a company that has experience with a product. So, I am interested in finding the mechanism of the instability, of why we're having failures.

On the other hand, knowing that the resources of FDA are not as large as we would like, would it be better to identify those products where you would feel it most important to investigate mechanisms of stability failure, rather than having the very broad question that you have here? Is there some area that we really should be addressing, as opposed to so wide an area?

DR. LEE: Ajaz, would you like to respond to this before I go off line?

DR. HUSSAIN: One aspect which I will share from my previous presentation this morning was on the time to release a batch. Dr. Raju's MIT data set shows that exceptions lead to significant extensions of release time. You know what happens most of the time? They throw it away. So, one way of keeping the recalls to a minimum is that you don't even release a lot of those batches. So, that's sort of built into it. Anyway, I just wanted to make that point.

DR. LEE: I understand that we have Nair on the line. Dr. Rodriguez? Dr. Berg? Nobody is there. This is carried by what phone company?



DR. LEE: Yes, Nair. Would you please introduce yourself?

DR. RODRIGUEZ-HORNEDO: Yes. This is Nair Rodriguez from the University of Michigan, and I have been with you since the beginning of the meeting.

DR. LEE: You were pretty quiet. Can you see us?

DR. RODRIGUEZ-HORNEDO: Yes. I can see you. Realize that there is about a 5-second delay from when I talk to when you can hear me. So, if you ask if there are any comments and you wait 2 seconds, you don't give us a chance.

Anyway, my comment goes along the lines of physical stability, where I think the main focus of the guidelines has been on physical stability during manufacturing and storage. And I am very glad to see that there is a recommendation, or at least a concern, regarding possible physical changes occurring during dissolution. I think right now we have basically relied on endpoints that measure dissolution rates of batches, not realizing that there are some active ingredients that may be more vulnerable to physical transformations during dissolution, such as the example that Ajaz gave us this morning on carbamazepine.

So, I think it's very important to consider maybe eventually classifying some of the active ingredients that we're dealing with, and the dissolution media that we develop in the dissolution methods, to identify which products are more vulnerable to transformation during the dissolution process.

DR. LEE: Thank you.

Anybody else? Yes, Marv.

DR. MEYER: Ajaz, what's being done currently with those products that are implantable for many days or months that are already marketed in terms of stability testing, physical characterization, et cetera?

DR. HUSSAIN: Shelf-life is established the same way as we would do for any controlled-release dosage form. For many -- or at least four or five -- of those products right now, we have what would be considered an in vitro/in vivo correlation. So, we know that through the entire duration of intended use the system was stable, and that information -- not only the safety and efficacy, but the pharmacokinetic information -- has been translated into a correlation with an in vitro test. So, probably 50 percent of the products have that. The rest do not. There's a handful, about six or seven products out there. There are not many.

DR. MEYER: Do people who use animal models, implant these CR dosage forms into animals?

DR. HUSSAIN: The workshop we had on this very topic about six or seven months ago made that a recommendation. Right now we do not really use animal data in that way. I think there's an opportunity that we might want to look at animal data.

DR. LEE: Ajaz posed two questions to us and I would like to address them. The first one is, should this topic be developed for a more detailed discussion by ACPS? And the answer is? Yes.

And then the follow-up question is, what? Would you like -- I'm talking to the committee -- to spend some time talking about that? Or should we bounce the ball back to Ajaz and say what are some of the problems that you anticipate, you perceive?

DR. HUSSAIN: The focus right now -- we have started a pilot project on a controlled-release parenterals with Dr. Diane Burgess. And we are planning to have some of that work in our labs too.

There has been, that I'm aware of, one recall situation in that area where things had actually already been implanted and we had to deal with that question. The number of products is small; we are looking at fewer than 10 products. It's a small area, but a potentially high risk one, and a potential area where you will see growth in the number of products because protein and peptide molecules are being developed more and more. So, it is a small pilot project that we have ongoing right now.

So, my thought was to use that as a focal point and sort of build around it from a mechanistic perspective in general -- not dosage form specific -- looking at what sort of changes might occur that might lead to changes in dissolution. But my use of dissolution is just an example; as Dr. Rhodes pointed out, there are many physical attributes that may not even have the right test methods.

So, what I would propose to you is this: looking at the list, looking at the information on all physical attributes that might be critical, we could do a survey of current methodologies -- whether they're validated or not validated, in terms of what we are truly measuring. Maybe we can create that database for you and bring it back with an analysis saying that for these physical attributes, although we measure them, there may be some gaps present. That then becomes a focus for saying, all right, which areas of focused research should we have?

The current approach for research that we have was driven by the recall databases that we have and so forth. 80-90 percent of the products are solid. So, although the recall numbers are small, we still work with those dosage forms more so than the parenteral dosage forms.

So, an examination of current physical methods, the potential gaps that may exist in those methods, and the potential risks that may exist even though recall numbers may not be indicative of them -- and then potentially a program that would emerge from that: maybe a small update on that next time, and then develop a more detailed program.

DR. LEE: Dr. Rhodes, would you like to add some comment?

DR. RHODES: I think this area is so important that wherever you start is going to be useful.

However, having said that, I think that it probably would be a very good idea to concentrate very largely on factors affecting release of drug from drug delivery systems. Even that is a huge area. The factors that can cause a change. Well, firstly, it could just be the drug substance changing its polymorphic form. It could be as simple as that. That's not all that simple. It could be an interaction between the drug and an excipient, or it could simply be a case hardening on the surface of the tablet or physical degradation of a liposome with no obvious effect upon the drug. So, even looking at release specifications is a big task.

May I make a plea? And that is, if you do decide to go ahead in this way, don't forget some of the simple tests. When I go in as a consultant looking at stability protocols, I see lots of lovely charts with the list of tests they carry out, and the first one is appearance. And I always have a big laugh because there's always a check mark on appearance -- or 99.9 percent of the time. How is appearance done, I ask. Oh, we look at it. Do you have an SOP on appearance? Oh, no. So, one day they're doing it with one type of light, another day someplace else. One day you're using someone who has good eyesight; the next day you may be using a colorblind person. So, there are some physical tests, very basic ones, where a little bit of guidance is needed.

And one last point. Mention has been made that the number of recalls in some of these areas is relatively small. That is absolutely correct. What you don't know about is the number of cases where the product hasn't gotten out onto the market because the company has detected a problem and called in someone -- in some cases me -- and the product never got on the market. But it has been a problem. And if we had known more about the mechanism, we would have been able to deal with it.

The number of cases I've been involved with, with very conventional tablets using that vile material shellac as a controlled-release coating -- the number of intermittent batch failures I've seen with that is awful.

So, I do commend this idea. I would suggest that, even though you shouldn't forget simple things like appearance, perhaps factors causing a change in release rates would be the most fruitful area and the one which is probably most clinically important.

DR. LEE: So, along the same lines, let me make one follow-up point. Perhaps what you are going after, Ajaz, is some kind of a modified checklist for the reviewers as to what to look for from this point onward.


DR. BOEHLERT: Yes. I was just going to comment. You might also ask those folks who review stability submissions to take a look at the actual data, because while you don't have a lot of recalls, what you do see are a lot of OOS results on stability -- products that are tested until they just barely pass. So, you have a lot of marginal values, and that would give you a clearer picture of just how big the problem is.

DR. LEE: Let me speak with Pat and Nair. Feel free to interject at any time. Just give me a signal. I don't know what that would be.

DR. DeLUCA: Pardon?

DR. LEE: If you would like to make comments, don't be shy.

DR. DeLUCA: Yes. Well, I'd like to emphasize and carry a little further what Chris was talking about. I think one must pay more attention to the surface properties and surface phenomena here, and how the surface is changing with time, even during dissolution testing or during storage. So, I think that's an area that maybe has been overlooked and needs to be considered. It came up even in this morning's discussion about sensor technology: we've got methods to determine composition, but surface properties I think play a significant role here in stability and performance, the dissolution performance. So, that's another area we have to concentrate on: methods that can really distinguish differences in the surface properties of a dosage form, of a batch.

DR. LEE: Thank you.

So, the answer to the first question is that we commend you for raising awareness of this particular issue, but we look to you for further guidance as to how to proceed.

DR. HUSSAIN: If I may summarize, essentially what I've heard is I think it's an important issue. I think there needs to be more information developed for more discussion. But the focus I think, in my mind, is trying to learn so that we can prevent these problems from occurring and focus on understanding the mechanisms.

To give you an example, although the recall numbers are small, even in that small database, I've been looking for patterns of failure. Can I identify something that's going wrong repeatedly? If more companies did that, we could prevent these problems and flag the high-risk practices.

One thing that sort of pops up is when you have a combination, regardless of the drug, whatever the drug might be, if you have dicalcium phosphate dihydrate and Explotab as the two ingredients -- Explotab is a disintegrating agent -- there tends to be a higher percentage of dissolution failures if that product is packaged in a blister pack. What you're looking at is movement of water within the system, hydrating the Explotab. The tablet is intact, hardness is fine, everything is fine, but the tablet doesn't disintegrate. You have lost the functionality of that disintegrating agent. It doesn't happen in a bottle. That sort of mechanistic understanding leads you to say we can avoid this.

DR. LEE: Yes.

DR. MEYER: It seems to me it will be hard for an FDA lab to investigate mechanisms because it's to some extent certainly active-ingredient-specific, source-of-raw-material-specific, other excipients that are present, and the dosage form, et cetera. So, they could do a lot of artificial things but not really gain the insight into company X's problem.

In my view, they're not really responsible for solving company X's stability problems. If they repeatedly put those ingredients together in a blister pack, well, tough. You guys will know that they're going to fail. But it's not your responsibility to solve that problem for them.

I think what is important, from a research aspect perhaps, is how to evaluate things like the controlled-release parenterals, the liposomes. What kind of physical and chemical tests are appropriate that will really discern what's going on with that dosage form and not so much try to fish around and find a laundry list of mechanisms that may not apply in the real world.

DR. LEE: Very good point.

Other comments?

DR. HUSSAIN: In my mind, when I come back to you next time, probably with an update, my proposal is, since we have started work in collaboration with the University of Connecticut with Diane Burgess, why don't I update you on that. The focus there is a meaningful, mechanistically based accelerated release test for long-acting parenteral dosage forms. And then start the discussion from that aspect.

DR. LEE: To set the stage for this afternoon's discussion, I wonder whether this would be something appropriate for PQRI.

DR. HUSSAIN: Yes. With PQRI, what we have been trying to do is define the research project very carefully. Here is the thought process leading to a definition right now.

DR. LEE: So, I think that we more or less have migrated into the second question, and I think that Marv may have answered it on behalf of the committee.

DR. LEE: Would the rest of the committee agree?

Well, I think that we should focus attention on how to evaluate. I think that by addressing that particular issue, we raise the awareness of the companies as to what you might expect. I think that would be the major benefit to the American public.

Yes, Efraim.

DR. SHEK: If I might just say, talking about mechanism and the complexity, we raised already how complicated it's going to be. But the issue with mechanism, at least from my experience, especially when you develop a sophisticated drug delivery system, you look at the mechanism. You have a theory there why it works.

The path is how do you translate it now to a QC test where you're assured that this product consistently will behave. At least from my experience, every dosage form might have a different mechanism. With liposomes, it depends on how you make them and how they're going to behave. One liposome might behave different from another. So, it's a combination of the drug substance and the composition.

So, mechanism-wise, maybe this information is being developed during the development of the product. The question is, now when you've got to register the product, to have some requirements and expectations. And then you come out with tests, like saying, okay, you don't wait six months to find out the dissolution. So, you try to compress it: even for a solid dosage form that lasts for a day, you want to find out over two or three hours whether it's being released. So, this aspect would be of interest to deliberate a little bit. It's not so much the mechanism, but adopting appropriate methods to test it.

DR. HUSSAIN: I just wanted to respond to that. I hope I didn't leave the impression that the 0.1 normal HCl medium for the parenteral dosage form was not justified in the NDA process. There was a justification before we accepted something like that. But when Diane Burgess spent the time going through some of that and provided the report to me, she said, you're missing the mechanism. Yes, it is accelerating, but it's not mechanistically based. That microsphere undergoes erosion, and 0.1 normal HCl is not inducing the same mechanism. So, what are you learning? Those are the sorts of things I just wanted to point out.

DR. LEE: Yes. I think it's a very important topic. In fact, I learned this many years ago and I think for the first time I began to think about, when you said stability, what are you talking about. I think that the committee commends you for raising this issue, and I think that the committee would like to hear more about that, some examples. And we'd like the agency to focus on the evaluation aspect.

I think this might be a fine example about how the regulatory agency can move and elevate its standards.

Other comments to make? Yes, Bill.

DR. JUSKO: There's been mention several times of the Arrhenius equation which is based on fundamental thermodynamics, and as you develop these evaluation methods, one should keep in mind that all sorts of chemical/physical and physical/chemical things are based on a lot of fundamental laws of nature. So, as you characterize these mechanisms, you should couple them with additional mathematical thinking as to how things relate to basic processes that govern everything that happens in life.
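Dr. Jusko's Arrhenius point can be illustrated with a short sketch. The activation energy, pre-exponential factor, and first-order degradation kinetics below are hypothetical values chosen only to show the shape of an accelerated-to-label-storage extrapolation, not data from any product discussed here:

```python
import math

# Arrhenius: k = A * exp(-Ea / (R * T)); degradation speeds up with temperature.
R = 8.314  # gas constant, J/(mol*K)

def rate_at(T_kelvin, A, Ea):
    """First-order degradation rate constant (1/day) at temperature T."""
    return A * math.exp(-Ea / (R * T_kelvin))

# Hypothetical activation energy (83 kJ/mol) and pre-exponential factor.
Ea = 83_000.0
A = 5.0e11  # 1/day

k40 = rate_at(313.15, A, Ea)  # accelerated condition, 40 C
k25 = rate_at(298.15, A, Ea)  # label storage, 25 C

# Time for potency to fall to 90% under first-order kinetics: t90 = ln(10/9) / k.
t90_accel = math.log(10 / 9) / k40
t90_label = math.log(10 / 9) / k25
print(f"t90 at 40 C: {t90_accel:.0f} days; extrapolated t90 at 25 C: {t90_label:.0f} days")
```

The same two-temperature comparison is what lets an accelerated study bound a shelf life, provided the degradation mechanism does not change between the two conditions.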

DR. DeLUCA: I support that.

DR. LEE: Thank you, Pat.

Anything else?

DR. DeLUCA: I didn't have a chance to make some comments this morning because I wasn't on the phone, but I just wanted to interject this point.

We talked about the technology for process controls and that sensor technology, and we talked about education in a sense pertaining to dissemination of knowledge. But I really want to emphasize here that academe needs to be more involved in the research and development of these technologies. Ajaz had mentioned in his talk the downsizing of industrial pharmacy programs in academe, and that's happening. So, I think there needs to be more of an emphasis on looking to academe for research and development in these areas, and probably more connection with pharmaceutical engineering programs in academe as well.

Somehow I think we need to stress that. Most of our graduate students -- probably 80 percent of the graduate students -- out of pharmacy schools go into industry. I think there needs to be concern that there is a downsizing of industrial pharmacy programs in the colleges.

I think maybe that's an issue the subcommittee they talked about this morning needs to address too.

DR. LEE: Good. Thank you very much.

DR. MEYER: Vince, one comment to Pat. I applaud that and I certainly support it. We have some industrial activity at the University of Tennessee, but you have to realize that there are a number of chancellors and provosts who say if you don't have NIH money, you don't get promoted and tenured. It's very hard to get NIH money for this kind of work. So, I think that's a battle that will rage for some time.

DR. DeLUCA: And that exists at my institution here too. I think what we have to do is we have to look at pharmacy schools as maybe being a little different in that regard. We have responsibility to produce practitioners as well as graduate students that are going out for the industry. So, I think that, one, when it comes to support and funding, I really feel that the chancellors will be receptive if they understand that the industry, the regulatory agencies have some emphasis that they have to look to pharmacy schools for that kind of research and development. Maybe they'll be more amenable to accepting industry funding as being on the same basis as the NIH funding. So, I think it's a battle we have to continue to fight together.

DR. BYRN: Vince, I have to just comment.

DR. LEE: Is it the same issue?

DR. BYRN: It's been my experience that it's the overhead issue. I don't want to get way off the subject, but now the issue has been raised. What we've done at Purdue and the reason our program is strong -- and actually we're going to try to expand by at least three faculty in this area and probably at least 10 grad students over the next few years -- the reason we've been able to do this is because we are paying overhead on our industrial grants. Maybe people thought it wasn't a very good idea, but I could see this because I've had NIH funding. So, when we started building up this program, we said any industrial money of large quantity has to pay overhead just like an NIH grant. Since doing that, we haven't had any problem. In academe, it's really the overhead dollars that people worry about, not where they came from. So, if you can pay those, you can build up your program. I don't want to reduce everything to economics.

DR. LEE: I have more to say about that, but I don't want to prolong the agony.

DR. LEE: Anything else aside from funding?

(No response.)

DR. LEE: Well, if not, we had a very productive morning. We talked about the up and coming Process Analytical Technology Subcommittee, and we also touched upon what appears to be an old subject, but actually a very important subject that needs a fresh look.

On that note, I will thank everybody for participating. Pat and Nair, you can go ahead and do whatever you're doing and please rejoin us at 1:30 with the open hearing. Thank you.

DR. DeLUCA: Okay, thank you.

(Whereupon, at 12:22 p.m., the committee was recessed, to reconvene at 1:30 p.m., this same day.)

(1:30 p.m.)

DR. LEE: Welcome back.

Let me introduce two guests in front of us. Gary Boehm?

DR. BOEHM: My name is Garth Boehm. I'm from Purepac Pharmaceutical Company, and I'm a member of the Blend Uniformity Working Group.

DR. GARCIA: My name is Tom Garcia. I'm from Pfizer, and I'm the Chairman of the Blend Uniformity Working Group.

DR. LEE: Welcome. Thank you.

The next agenda item is the open public hearing. We have three individuals who signed up to speak. I think all have been told they have 5 minutes each to make their case, and 1 minute to respond to questions.

So, the first person I would like to call is Christopher Ambrozic from Umetrics.

MR. AMBROZIC: Thank you very much, Mr. Chairman.

I'd like to thank members of the committee, some of the directors especially, for allowing us to come in and present some of the work that is being done at Umetrics. I think over the course of this morning's discussion, we really looked at a lot of what's coming to fruition in terms of process analysis technology, and I think some of the information that I'm going to show over the next few slides can be very interesting to you.

If you're interested in the company, of course, we have a web site. There are also slides for those in the audience who wish to take a copy of this home.

The background is that obviously process analysis technology provides valuable process information. I think this was clearly defined this morning. We'd like to focus on the concept and the idea that these new opportunities to monitor the evolution of the batch, or the evolution of your process, can take advantage of this PAT data and other information as well. So, one of the issues that I'd like to talk a little bit about is that we can take not only near infrared information, we can combine that with process information -- temperatures, flows, et cetera -- and GC analysis, and bring that all together in the form of a summary which allows us to make the best estimate of your production in real time. This is a very important point.

This, of course, necessitates summarizing, and that really becomes a modeling of the data. There are statistical methods that allow us to do this. The resulting model parameters provide an improved interpretation of the process. So, in terms of monitoring your batch, you no longer look at the spectral analysis of an NIR, which has 4,000 individual digitized wavelengths; you're actually looking at a summary of the individual batch. We'll see that actually in a few slides.

One of the nice things about the software, of course -- and these techniques allow you to display this information in terms of control charts. This is obviously very advantageous to the people in our plants and the people doing the work.

Today, right now, what we are faced with in production facilities is batch data that is summarized on one level only, in the sense that we do quality control only after the batch has been completed. Some of the work that we and others are obviously looking at right now is clearly identifying how we take this information as it exists transiently across the batch -- within-the-batch information. So, you not only get batch-to-batch information, which is very useful, but you get within-batch information, which is obviously very crucial.

This is kind of a cornerstone slide here, and this is what we're presented with today, and I see this a lot. We have this information right here. This is data. Everybody has got it, and what this type of technology does is summarize it into this one chart right here. What that is, if you were to understand it, is a summary of the entire batch from start to finish, dynamically as it changes from the beginning to the end. In some cases, we might call this the golden run. That would be the green line that we have here.

If we move on to the next slide, we can see here that we have summaries of this batch. We have summaries of our golden batch as they exist from the start of the batch through to the finish of the batch. At every single time point, we can also identify what kind of variability is acceptable and what kind of variability is not acceptable in terms of production, in terms of quality control, in terms of validation.

Let's take an example of this. I'd like to mention that some of this data, some of this analysis is being done in a number of pharmaceutical companies from AstraZeneca, GSK, Pfizer. All of these companies are definitely leading the charge with working with this type of data and this type of analysis.

When we put something on line and we are trying to monitor for a batch upset or some sort of upset, this is what happens. You can see here on this slide that our current batch that we're running right here has, for some reason, gone out of control. This is real-time information, as we bring it down into the system. What we're able to identify is not only that a fault has occurred. We see that the fault has occurred. But we're also going to do the root cause analysis. So, it's fault detection and root cause analysis on top of that.

Here's the root cause analysis. It's called a contribution plot. It's to the right of your screens. Essentially what that does is identify clearly, with one single mouse click, going from our black line, which is out of control, to the root cause, which is the green bar. In this case, it turns out that the level of this batch ramped up prior to when it should have. You can see here that this batch ramped up very early, whereas it should have maintained a much more steady state through the beginning of that particular batch. So, this is how simple it is. This is how easy it is for our operators to execute this type of analysis and for us to correct batches as they occur in line.
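The fault-detection-plus-contribution-plot workflow described above can be sketched in a few lines. The numbers are simulated, and the diagonal-covariance T2-like statistic is a deliberate simplification of the multivariate (PCA-based) batch models such software actually uses; it is meant only to show the two steps, detect the upset, then attribute it to a variable:

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 "golden" batches: 50 time points x 3 process variables (e.g. temp, level, flow).
good = rng.normal(0.0, 1.0, size=(20, 50, 3))

# Reference trajectory: per-time-point mean and spread of each variable.
mu = good.mean(axis=0)            # shape (50, 3)
sigma = good.std(axis=0) + 1e-9   # shape (50, 3)

# A new batch whose "level" (variable index 1) ramps up too early.
batch = rng.normal(0.0, 1.0, size=(50, 3))
batch[5:15, 1] += 6.0

# Squared standardized deviations; summing over variables gives a
# Hotelling-T2-like statistic per time point (diagonal covariance for simplicity).
z2 = ((batch - mu) / sigma) ** 2
t2 = z2.sum(axis=1)

# Fault detection: worst time point. Contribution plot: which variable is to blame.
worst = int(np.argmax(t2))
culprit = int(np.argmax(z2[worst]))
print("fault near time point", worst, "- culprit variable index", culprit)
```

The contribution step is just the per-variable breakdown of the same statistic, which is why the "one mouse click" drill-down from out-of-control line to culprit variable is cheap to provide.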

I don't think I need to discuss too much about the opportunities for our companies because the advantages are obvious, being able to correct batches and reduce batch scrap, reduce batch variability, not to mention the advantages for the FDA in the ability to monitor these fingerprints because this really becomes what it is. It's a footprint. It's a fingerprint of your batch. It allows you to, in one snapshot, identify whether or not that particular batch has been processed according to regulation.

From this point here, we continue to drill down. I come from a 10,000 foot level almost to a 1,000 foot level, if you will, this being my 10,000 foot view where I'm looking at the entire summary of the batch. I go through where I identified the individual problem. I can then go a level further and look at the actual variable itself. That really shows up right here. We can see that, in fact, this level really was the variable that caused this. This is a batch pharmaceutical process. It's actually a mixing stage that's going on. The company in question was having difficulties with their agitation, and that actually was due to level changes.

So, really, we introduced this concept and this idea, which is very prevalent in a lot of different industries, whether it's chemical engineering, semiconductors and so on, this idea of real-time quality control. Being able to take the evolution of representative good batches and then monitor all of your information -- we take all of the data. And I really want to stress all process analysis technology data, whether it's from in-line sensors or on-line sensors. We take flow information and so on and so forth. We saw there the example was an agitator and a level.

Obviously, we use the control charts to display this information. So, we represent it in a very simple way that makes it very easy for us to make conclusions when we have problems and difficulties.

Obviously, we monitor new batches as they are evolving, as opposed to just doing batch to batch. We once again introduce this idea of within-batch or within-run. And we detect problems and interpret the solutions on the fly. That really is the advantage of this type of technology.

The culprit variables in the problem batches are clearly identified -- I think we saw that -- very easily with the green bar of our contribution plot.

And the quality of the whole batch is predicted as it is evolving and at completion. This, of course, then allows us to implement possible 6 sigma control. Being able to implement this type of analysis is going to bring us to that level.

The technology, as it is, is based on multidimensional, informative data measured during the batch evolution and on multivariate analysis. If you're interested in more of that, I can talk off-line with you, absolutely.

Finally, just some conclusions. Really, we're introducing not only multivariate statistical process controls, well-known with the SPC idea, but batch SPC where it exists on two levels. It exists within the batch and it exists on an upper level.

One of the advantages is that we can not only get information about the batches, but we also start predicting the batch qualities halfway through the batch completion. Let me just say that again. We actually start predicting the batch quality data halfway through the batch completion. The technology allows us to say, okay, our density is going to be this, our viscosity is going to be this, as we trend through the batch. Very useful information, of course, because then if we can see it's beginning to go out of control, we can then go back down to the 1,000 foot level and direct it towards being within target specifications.

Once again, I mentioned already really reducing the scrap rates. Having to throw out batches I think was well demonstrated in Dr. Hussain's talk today. Then there is this whole idea of facilitating compliance inspection: the ability to monitor batches and in a snapshot look at this fingerprint, this footprint, and be able to, in one clear picture, identify whether the batch was made properly and in control.

I'll take any questions at this point. I'd like to thank especially Dr. Hussain for allowing us to come in and the members of the committee as well.

DR. LEE: Thank you. We have time for maybe two questions. Steve?

DR. BYRN: Can you comment on the extent of this kind of data analysis being used in Europe? Does your company originate from Europe?

MR. AMBROZIC: No. Our company, yes, is originally from Europe. I have no background in that location.

DR. BYRN: Right. Do you know how much of this kind of analysis, what we were talking about earlier, is going on in the pharmaceutical industry in Europe compared to here?

MR. AMBROZIC: I would say that in most cases we're pretty much on the same level. This was something Steve and I talked about over the lunch break, this idea that maybe the Europeans are ahead a little bit in some areas. I think that conceptually we're at the same location. The idea is that it has to be implemented -- we want to get there eventually. But I wouldn't say that they are, in fact, running real-time models in place.

DR. LEE: Marv?

DR. MEYER: This is probably terribly naive, but on the bad batch plot, it looked like, after some period of time, it converged and became a good batch.

DR. MEYER: Did it matter, therefore, that it diverged at the beginning?

MR. AMBROZIC: Well, that's of course going to depend on what ends up happening in terms of the quality. This is the lower-level analysis. You're right. What happens is that the batch goes out of control to begin with, and then the operators, of course, realize that it, in fact, has done so. We can see that by this slide right here, where they realized, in fact, they had raised the level too early.

The analysis at the lower level, we then take that to the upper level and identify whether that has a clear, distinct impact on the quality of the batch. If it does, something like this would be unacceptable. If it doesn't, then something like this would be acceptable. That is the definition that comes out when we start doing the analysis with the data.

DR. LEE: Thank you very much.

MR. AMBROZIC: Thank you very much.

DR. LEE: Next, I would like to invite Nancy Mathis from Canada, and she knows what she's going to talk about.

DR. MATHIS: Good afternoon. I'm here this afternoon to put together for you the morning session that you heard, as well as the afternoon session. What I'm going to be talking about is on-line techniques for blend uniformity and specifically a technique that our company represents called effusivity.

If we agree that blend uniformity needs to be monitored and we agree that the best way to do this is on line, then this afternoon's presentation is going to be valuable for you.

Since we've all just had lunch and our bellies are full and we're getting a little bit groggy, I'm going to have you do an experiment. I'm going to have you do an on-line effusivity measurement with your hands. These are very accurate little sensors. I want you to reach under the table, especially for this group because I've already checked out your tables, grab the metal leg of the table, put your other hand on top of the table. For those of you sitting in the chairs, you grab the leg of your chair and it will also work. Tell me which thing feels colder. The leg feels colder. The metal feels colder to your touch.

You've just done an effusivity measurement. The legs of the chairs and of the tables are both at room temperature. What you've done is an interfacial, nondestructive measurement of effusivity which allows your hand to not detect the temperature of the item it's in contact with, but rather the rate of heat flow. The thermal conductivity and specifically the effusivity of the table, the metal, is higher and it draws the heat away from your hand.

We have sensors that allow that to happen. Those sensors not only work with metal, wood, and solids, they also work with powders.

Effusivity. What is it? It's a combination of thermal conductivity, density, and heat capacity, the square root of their product, and it's the governing property when two semi-infinite bodies come in contact. It's the property that drives the interfacial temperature, and that's what you just felt.
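The table-and-chair demonstration can be reproduced numerically. In the sketch below, effusivity is computed as sqrt(k * rho * cp), and the interfacial temperature of two touching semi-infinite bodies is taken as the effusivity-weighted average of their temperatures, the standard result for ideal contact. The material property values are rough handbook-order figures chosen for illustration, not data from the presentation:

```python
import math

def effusivity(k, rho, cp):
    """Thermal effusivity e = sqrt(k * rho * cp), in W*s^0.5/(m^2*K)."""
    return math.sqrt(k * rho * cp)

def contact_temperature(e1, T1, e2, T2):
    """Interfacial temperature when two semi-infinite bodies touch:
    the effusivity-weighted average (e1*T1 + e2*T2) / (e1 + e2)."""
    return (e1 * T1 + e2 * T2) / (e1 + e2)

# Rough property values: k in W/(m*K), rho in kg/m^3, cp in J/(kg*K).
skin = effusivity(0.37, 1100, 3400)
steel = effusivity(45, 7800, 500)
wood = effusivity(0.15, 600, 1700)

# A 34 C hand touching room-temperature (22 C) steel vs. wood.
t_steel = contact_temperature(skin, 34.0, steel, 22.0)
t_wood = contact_temperature(skin, 34.0, wood, 22.0)
print(round(t_steel, 1), round(t_wood, 1))
```

Because steel's effusivity dwarfs the skin's, the contact temperature sits near the metal's 22 C and the leg "feels cold," while low-effusivity wood leaves the interface near hand temperature, exactly the sensation the demonstration exploits.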

This is a commercially available instrument. It's been available for six years, right now private labeled through Perkin Elmer Instruments for the non-pharmaceutical application. So, this has been out there and the efficacy of thermal conductivity and effusivity has been proven.

This instrument works for solids, liquids, and gases.

The way the system works. Picture a sensor coming in contact with powder. There's a heating element that heats roughly 5 degrees Celsius, and during that heating period, the rate of the heat flow into the material is what's detected.

So, schematically you've got a sensor. What shows in the red arrow is the heat flowing into your sample. The more conductive or the higher the effusivity of that sample, the more heat flows into it, and the less heat is left behind. We measure the relative rate of temperature rise at that interface and produce the effusivity value.

Now, of interest in unit dose sizes, the longer you test -- and I'm talking the difference between 2 seconds, 3 seconds, 4 seconds -- the further the heat wave penetrates into your sample. So, for a typical 2-second test, you'd be penetrating 0.6 millimeters into a particular powder bed, giving you roughly 150 milligrams of material evaluated. If you wanted a larger sample size, you'd simply test longer with the same hardware.

This is something that can be retrofitted onto existing blenders. A hole, not a window, but a hole can be placed in a piece of blending equipment. The sensor can come down and come in contact with that and be retrofitted in. This schematic, this graphic has motion to it, which is not actually working. So, picture eight different sensors. I heard this morning, I think from Dr. Hussain's presentation, that they're envisioning six sensors for one technology. We're envisioning eight for this. Picture eight sensors at various locations all over a blender.

What we're doing when blending starts, it feels like this. When the sample is uniform, it feels like this. So, we're not measuring the absolute value so much as we're measuring the relative value of the effusivity.

Results. Eight measurements, 3 minutes into blending. You're going to see wide variation in the results because the effusivity for different powders varies. At some point, as blending continues and uniformity is reached, there's going to be a minimization of those results, and that's the tightest location indicated on this graph. We can actually see de-blending as well.

What you see in front of you is the schematic; the next one is actual results. This is an eight-component, commercially available formulation, and the active that was assayed in this case was under 1 percent. You see a 4.5 percent variation at the beginning and, at the end, a 0.3 percent variation.

Now, to clarify, these samples that you see tested, each of these dots, were actually thieved and tested off line. The on-line version will not be available until the spring.

So, what I want you to think about is this is a relative measurement, not absolute. The absolute effusivity will depend on the excipient mix, the active mix, and the particle size. But what we're hoping for is looking for that optimum value, which is the minimization of the relative standard deviation between multiple readings.

Our challenge is validation, and this morning's conversation was very interesting to me. How do we validate that this is actually measuring uniformity? To do that, we've started the process by doing side-by-side comparisons between thieved samples tested for effusivity and thieved samples tested by current assay techniques for percent label claim. On this graph, although we don't have the early data for this set, you'll see the same clear de-blending trend in the percent label claim results that we could also see in the effusivity results, and in both cases this produced an optimum blend time of 10 minutes.

Issues addressed. As I've done these presentations over the last year to different pharmaceutical organizations, they've presented different challenges. Some of them are listed here. There's technical documentation available on our website that addresses each of those and how they've been solved.

We've had a group of participants, including GlaxoSmithKline in two locations and also Merck. Together we've worked collaboratively with our organization, as well as Patterson Kelly, to investigate effusivity as a blend uniformity monitoring technique.

These results were presented in Denver last month, and some of the results are shown here. We can differentiate between powders. This does have potential as an ID technique as well.

2-second testing gives us 1 percent precision.

Insensitive to pressure after a certain threshold point, which was one of the things on the table of different techniques that was brought out. There is a sensitivity to pressure, but above a certain threshold point it levels off.

The sample size is appropriate, 150 milligrams and scaleable, and the benefit here is that we can retrofit it onto current equipment without the need for new capital equipment.

We're now in a phase of BUG 2. BUG 1 stands for Blend Uniformity Group, and that's an internal group that we've put together with the members I mentioned earlier. We're now forming BUG 2 as a second phase, and our goal in that is to build our portfolio of examples where effusivity has been compared to percent label claim so we can do that validation of this technique.

As I said, current members, GSK, Patterson Kelly, and Merck.

For more information, there's my contact data, and www.blend-tech -- with a dash -- .com. I do have the technical literature that I've kind of alluded to on that site, and I would be more than happy to take questions and also work with people after the fact and involve them in BUG 2.

Thank you.

DR. LEE: Thank you very much. We have time for maybe one question.

DR. ANDERSON: How large is that sensor?

DR. MATHIS: The sensor right now is 1 inch by a quarter of an inch. I'm Canadian, 25 millimeters by 5 millimeters. I talk both languages.

The sensor is roughly the size of the end of your thumb, and that can be scaled if people want larger or smaller unit doses, along with the time of penetration into the sample. That can be adjusted based on the needs of the user.

DR. ANDERSON: How do you know that the uniformity doesn't apply to the outside of the sensor when you're putting pressure on it with the sensor?

DR. MATHIS: You'll have to clarify that.

DR. ANDERSON: If you put the sensor in there, everything outside may be uniform and because you're putting pressure there, there may be a difference between -- you understand what I'm saying?

DR. MATHIS: I understand what you're saying, and part of when we bring this on line, we'll have to have a determined homogeneous, uniform, single-phase material that we would place in the blender, and then you can basically baseline or tare out that effect.

DR. BYRN: Obviously, the blender is moving and so things are changing. How do you envision that? Are you just averaging over the 2-second time? You're averaging what's in the area? Is that the general thinking?

DR. MATHIS: That's where we're going to head. In April, at Interphex, we hope to introduce a system where you actually blend, stop the blending, tie an umbilical cord back to the instrumentation, take a measurement, collapse that, and blend again. The eventual version would be a moving system with radio transmission.

DR. BYRN: Right now you're doing static.

DR. MATHIS: That's right. We're heading there, but we want to do this in steps because we think it's important to get a solution out there as quickly as we can.

DR. LEE: Thank you very much.

DR. MATHIS: Thanks very much.

DR. LEE: The last one is going to be by Steve Lonesky on behalf of GPhA.

MR. LONESKY: Good afternoon. My name is Steve Lonesky. I work for Teva Pharmaceuticals USA, and our Vice President Chris Palone was not able to be here this afternoon, so I'm going to try to fill in for him.

Teva Pharmaceuticals is a member of the Generic Pharmaceutical Association, or GPhA, and I'm going to speak on the association's behalf this afternoon. GPhA would like to thank the FDA for the opportunity to contribute to the dialogue concerning the issue of blend uniformity.

Briefly, the GPhA endorses the PQRI's blend uniformity proposal except for the 4 percent RSD compliance requirement. We believe that this requirement is unnecessarily limiting and will result in unwarranted investigations and testing of actually compliant product.

The generic industry views blend uniformity as a good tool for the development and validation phases of manufacture, but it must be carefully considered in light of well-documented problems associated with sampling phenomena of powder blends. We must have a way to deal with the occasional sample result that does not quite make sense or fit the data set, which we know is most likely due to sampling. We can pick up a tablet and assay it. There's no question what the sample is or what the result represents. This is not true with a sample pulled from a powder blend that is in constant motion. To this end, we must have a two-tiered approach. The investigators should also take this into account when reviewing product data and investigations performed by a firm when a result does not conform to an intended specification. Because this is only one tool to determine the quality of a product, and there's a significant flaw associated with the process of obtaining reliable and consistent blend data, this method should not be applied to routine production of commercial product.

In addition, we are concerned with the unequal application of blend uniformity requirements by the agency. If in fact blend uniformity is, indeed, so important in the manufacture of quality drugs, it would seem prudent that the rules would apply to the submitters of NDAs as well as ANDAs.

Thank you very much for the opportunity to contribute to the generic industry's views on this issue.

DR. LEE: Thank you very much.

Are there questions?

DR. GARCIA: I have a question. I'm sort of confused here. You say that the GPhA is objecting to the 4 percent RSD for the cGMP requirement during routine manufacture. In the next paragraph, you're talking about blends. Are those two points related or --

MR. LONESKY: The 4 percent --

DR. GARCIA: You realize the 4 percent is for dosage units not blends.

MR. LONESKY: I thought it applied to the blends.

DR. GARCIA: No. We're getting into my presentation, but for readily complies versus not readily complies, that's dosage units.

DR. LEE: Maybe we should wait until --

DR. GARCIA: Yes. It will become clearer in a little bit.

DR. LEE: Thank you. So, please don't go away.

MR. LONESKY: I'll be here. Thanks.

DR. LEE: Thank you.

That's all the open hearing speakers there are, and we now move into the next session.

By the way, for those of you who are expecting a break at 3 o'clock, there won't be one. There will be one later on.


DR. HUSSAIN: Let me sort of introduce this topic and the questions posed to the committee.

But two things before I give the introduction. One is this is a 100-year-old unit operation that we're dealing with. We're struggling with this. So, it's an interesting reflection on -- I don't know what.

The point, just to clarify: I had referred to putting six different windows or things for near IR on a blender. That's not what I'm saying. It was meant to reflect the publication in the Journal of Pharmaceutical Sciences by Jim Drennen. I think just one window is enough. We have data. There are technical aspects to that, but let me just clarify that and move on.

What are we talking about here? Background. Blend uniformity analysis, the way we use it is not a control. It's an in-process test. What I mean by that is you will blend, stop the blender, collect 6 to 10 samples from different locations in the blender, assay, and then determine whether the blend is homogeneous. And if it's not, if you have a reprocessing, you'll blend for more time, or if you don't have a reprocessing protocol, you might have to start again. So, it's not a control. It's a test.

The way blend samples are collected. The picture there is from Sonja from Pfizer. She had provided that. In a lab scale, you poke a thief in different parts of the blender and try to collect small samples which are representative of the final dosage unit. Generally 1 to 3X is what we recommend.

What that picture reflects is it's probably easier to do that in the lab, but imagine some of the blenders are the size of the room. Collecting those samples is not an easy task in many cases.

The subject has been intensely debated for the last 10 years. There was a court decision that triggered this. I'm not going to get into that court decision. But debate has focused on sample size, what is the right sample size. Should it be equal to the final tablet weight or should it be smaller, larger, and so forth? That has been a source of debate. Sampling errors are a source of debate.

When you collect blend samples, other processing steps follow. Segregation can occur after blending. We may not be controlling that by simply focusing our attention on the blend itself. And there are positions expressed that there's lack of correlation between the tablet content uniformity and blend samples. So, these have all been debated for the last 10 years, and in my presentation to the Science Board I said we probably have spent a couple of million dollars just talking about this and not getting a solution to the situation.

The story is an old story but was brought into focus with the issuance of a draft guidance for the generic applications, draft ANDA guidance on blend uniformity in August of 1999. That became the focus of research under the PQRI. You'll hear from that, but the story on blending -- the debate goes much beyond. It's older than the draft guidance itself.

Very quickly, I'm not going to summarize the guidance. You have already received that guidance. But I just want to share with you some of the motivations. Some of these motivations are not listed in the guidance, but are underlying concerns that are being expressed in this guidance.

One reason for the draft guidance was to address some of the inconsistencies in the review practices with respect to supplements requesting deletion of blend uniformity testing. It was a minor administrative issue.

But the underlying concerns, the way I am expressing these concerns based on the discussions with the review chemists and so forth, is concern regarding drug content uniformity. Looking at the warning letters and so forth, you'll see a trend. There are cases where blend uniformity might be an indicator of content uniformity problems. A small number of examples but there are some examples.

But the point here is we have insufficient information to ensure quality is by design. I think that in my opinion is the fundamental cause. When an application comes in, we have one batch. We have information on one batch, and we have to make a decision on that batch. We have no other information, literally no other information.

What is in that submission? With respect to this unit operation, we'll describe a blender type. We'll describe a capacity, and we'll describe an operating speed and maybe a time for blending. Generally, the information is the same for the proposed scale-up. The time would be the same. The blender capacity would be different and so forth.

The scope of this guidance was for products which require USP content uniformity test, and that is tablets or capsules which have 50 milligrams or less of drug or 50 percent or less of drug. For dosage units that have more than 50 milligrams or more than 50 percent, USP does not require content uniformity. It's just on the basis of weight. So, we don't do content uniformity tests for those. The guidance did not recommend blend uniformity testing for those.

For complex dosage forms, yes, we recommend it, but we request that sponsors speak to the division to get more information.

And also the guidance recommends not to submit a supplement to delete a blend uniformity analysis when it's also used for compliance with cGMP. I think that is also a source of discussion. Is this a cGMP issue or is this a review issue?

Sampling size and procedure are briefly described, and acceptance criteria and analytical procedures are described very briefly.

The point I want to make here is this. Performance of a solids processing unit, or any processing unit, depends on the underlying mechanisms. In the engineering world -- this is again a publication from the American Institute of Chemical Engineers -- how would an engineer go about ensuring the right performance? Keep in mind what I just mentioned before: what information is available in the submissions, what the reviewers have to make a decision with. It's the time, blender type, and so forth. The critical attributes -- material characteristics, particle attributes, equipment design, operating conditions -- how these impact the forces on the particles, and how the bulk mechanical properties are involved: none of these scientific aspects of blending, or of any other unit operation, are discussed.

In many ways, I would say today trial and error is the norm. Reviewers have to look at one batch, two batches, three batches at the most, and make decisions. In the absence of a clear understanding, with trial-and-error approaches, one has to ask the question: do the standard operating procedures that we have in place reflect even the basic heuristics that underlie some of these processes? The answer is no.

To give you an example, in your handout packet I have a publication by Tom and Garth which has discussed the root causes of blending issues and so forth. They have tried to address that in many different ways.

Some of the heuristic rules that come into play that I've listed here -- I'm not going to read every one of those -- would have to be associated with an SOP. None of this, generally, is in any SOP.

In many ways, the question that we're dealing with is a question of representative sample, and let me give you an example. A major pharmaceutical company, in order to support the PQRI effort, started developing databases to submit to PQRI, and they shared this with me. I haven't had a chance to look at the PQRI data, so I'm not sure what data Tom is going to present, but this was submitted to me directly at FDA.

Here is a commercial product on the market. The company wanted to provide information to PQRI, and they did the proposed stratified sampling of it. Using blend sample analysis, beautiful results. Percent RSD is less than 1; we generally say less than 6 percent is homogeneous. USP content uniformity passes beautifully -- all you do is take 10 tablets, and that's your basis for that. But when you do stratified sampling the way PQRI has proposed, where you take samples repeatedly throughout the run, this is the problem. The company actually had to go back and correct the problem. It would never have been detected until the PQRI stratified sampling came about.

So, the question in my mind is, is it a representative sample? I'll pose the questions to you and then invite Tom and Garth to make the presentations.

I have not seen the data, so I'm going to be looking at some of the data Tom is going to present for the first time with you. So, I have an overall impression of what the recommendations are likely to be, and that was the basis for these questions.

Is the current PQRI proposal appropriate for inclusion in the planned revised guidance? If no, we request you to provide suggestions so that Tom and others can work on those suggestions before the final recommendations come to FDA and we can have that accomplished in one cycle.

If yes, should the proposed stratified sampling and analysis plan be applied only for the bioequivalence batch and the validation batches? The validation batches are three batches at the commercial scale that people have to manufacture before they get to go on the market. And bioequivalence batch is the only batch our reviewers will get to see when they make a decision on approval.

DR. MOYE: Excuse me. Can I ask one question?


DR. MOYE: I'm sorry to interrupt.

What's the alternative for the answer to question 2? If the answer is no, then what other batches --

DR. HUSSAIN: Yes, I was getting to that.

DR. MOYE: Okay.

DR. HUSSAIN: If the answer is no -- if the proposed stratified sampling and analysis is limited to those batches -- then how does one assure adequacy of mix for routine production batches? That's the question: whether you would do it routinely on every production batch.

So, that's the question, and I think what I would request Tom and Garth to do is to make their presentations and then we can open the discussion. Thanks.

DR. LEE: Let me interject. Who is on the phone?

DR. DeLUCA: I'm on the phone. Pat.

DR. LEE: I just wanted to make sure because I was told that one person is on line, and I don't know which one. Glad that you're here.

Please go ahead.

DR. BOEHM: Good afternoon and thank you for allowing Tom and I to come and present the work of the Blend Uniformity Working Group this afternoon.

DR. RODRIGUEZ-HORNEDO: I am on the phone.

DR. LEE: Nair, you're on the phone too. Great.

DR. BOEHM: While we're waiting for the overheads to come up, the presentation this afternoon has three parts. The first part is a brief description of the background of the work of the Blend Uniformity Working Group, which I'm going to present. The second part is going through the draft recommendations, which the Blend Uniformity Working Group have come to. The third part is having a look at the data we have so far on the data mining exercise that was undertaken to challenge the recommendations that we made, and both of those parts will be presented by Tom.

At the start, it's reasonable to ask the question, why test blend uniformity? If blend uniformity is such a hot topic, you can avoid all of this aggravation by not testing at all.

The answer to why test it is found, I think, in two documents. The first and older of these is the section of the so-called GMP regulations, 21 C.F.R. 211.110, which reads in part, "to assure batch uniformity and integrity of drug products, written procedures shall be established and followed that describe the in-process controls, tests, or examinations to be conducted on appropriate samples of in-process materials for each batch." And sub (3) under that introduces a term, "adequacy of mixing to assure uniformity and homogeneity."

There are two things in this that you need to take special note of. The first is that this is referring to an in-process test or control of some sort, and the second is the use of the term "each batch." It doesn't say validation batches or 10 a year; it says each batch.

The second document to look at is the Office of Generic Drugs draft guidance, issued in late 1999, on routine blend uniformity analysis. Now, it's important to note that this was not a new requirement from the Office of Generic Drugs. They had been requiring for some years that generic drug sponsors commit to performing blend uniformity analysis on routine production batches. However, the application of when to do that, and the acceptance criteria that should be met, was not even across firms. This guidance was issued to, as it were, level the playing field and let everybody know what was required. It had three main parts.

The first is that it's required on solid dosage forms with less than 50 percent active or less than 50 milligrams active; that is, those the USP would require content uniformity testing on.

The second was a suggestion to use 6 to 10 samples of blend, and they should be 1 to 3 unit weights per sample. That's weight for the dosage form.

And finally, the data that you generate must meet a mean of 90 to 110 percent of label claim with an RSD of not more than 5 percent.
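For reference, the arithmetic behind that acceptance test is simple to sketch. The function name and the assay values below are illustrative only, not from the guidance; assays are expressed in percent of label claim:

```python
def blend_uniformity_pass(assays):
    """Check blend assay results (each in percent of label claim)
    against the draft ANDA guidance limits described above: mean
    within 90-110% of label claim and RSD of not more than 5%."""
    n = len(assays)
    mean = sum(assays) / n
    # sample standard deviation (n - 1 in the denominator)
    sd = (sum((a - mean) ** 2 for a in assays) / (n - 1)) ** 0.5
    rsd = 100.0 * sd / mean  # relative standard deviation, in percent
    return 90.0 <= mean <= 110.0 and rsd <= 5.0

# six blend samples, percent of label claim
print(blend_uniformity_pass([98.2, 101.5, 99.7, 100.3, 97.9, 102.1]))  # True
```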

The Product Quality Research Institute is a collaborative effort -- you've heard about it before -- between FDA, industry, and academia. It's intended to provide a platform where participants can set aside their rhetoric and some of their distrust of one another and actually get down to looking at the basic science behind some issues. Its mission is to provide a scientific basis for developing regulatory policy, and one of its initiatives was to set up expert working groups to look at particular issues and analyze those issues with a view to potential future regulatory policy.

I think the first working group set up was the Blend Uniformity Working Group, which was established in late 1999. The group is chaired by Tom and has members from academia, FDA -- that's both from CDER and the Division of Manufacturing and Product Quality -- and from industry from both innovator and generic companies.

The group is charged with making scientifically based recommendations on suitable procedures for assuring batch homogeneity.

PQRI is a public effort. What it does is meant to be publicly available. So, I'd like to run briefly now through a list of the actions that the Blend Uniformity Group has taken from its formation to get to this point.

It has conducted an industry practices survey, which I'll talk about briefly. It has published the Uniformity Troubleshooting Guide in Pharmaceutical Technology. It has held a public workshop on blend uniformity testing issues. It has held numerous working group meetings and teleconferences. The group has written a draft proposal on the use of stratified testing of dosage units as an approach to batch homogeneity, and we have sought data from industry with which to challenge our proposal.

The industry practices survey was conducted to find out what was actually going on in industry. In order to have people give us honest answers, we conducted this survey in an entirely blinded manner. We have no idea who replied and who did not reply. It was sent to all solid dose sponsors with at least one approved NDA or ANDA that could be located. That's a poorly worded sentence. It's the sponsors we had to locate, not the applications. And it was designed to elicit information on general practices regarding blend uniformity sampling and testing.

134 surveys were sent out. We received 28 replies, approximately 20 percent, which was somewhat disappointing given that this was an issue that generated some heat in industry. Most of the replies came from large manufacturers. That should be borne in mind since most of the sponsors, in fact, are small manufacturers.

The survey asked questions on demographics, what sort of company replied in general terms; on blend sampling, what was done for routine testing, what was done for validation testing; on causes of failure for blend uniformity testing; on costs associated with the test; and on new technology.

The full survey with the results filled in can be found at PQRI's website, and a summary was published in the August 2001 Pharm Tech, and I believe you have a copy of that article in the handouts that you have.

The picture that emerged from the survey was one of a conservative or perhaps very conservative industry, that samples with conventional sampling thieves, taking 1 to 3 unit dose sample sizes. It tests those with conventional wet analytical methodology, HPLC type methods, and it uses established acceptance criteria to test the data with.

About two-thirds of those who replied for testing of routine production batches were prepared to defeat failing blend uniformity testing results with some form of enhanced testing. There were many different variations of this, but it amounted to enhanced testing. About a half of those who replied were similarly prepared to defeat failing blend uniformity results that were found in validation batches the same way.

Most respondents reported having trouble with about 10 percent of the products they manufacture and that that trouble was apparent right from the start, right from the point of validation. Or to look at that the other way around, 90 percent of the products they deal with give them no trouble.

Most of them think failures are due to sampling or analytical error. Very few people, apparently, think their failures are due to nonuniform blends, which is interesting.

And virtually all of them have not adopted any technology. They cite various reasons, among them that there is a fear of regulatory acceptance.

So, that was the picture that we got from the industry practices survey.

Fairly early on in the discussions that we had as a Blend Uniformity Working Group, I think it became apparent to us that there was no concise guide available for diagnosing blend or dosage form uniformity problems. There were some publications which addressed one situation or another, but nothing was pulled together.

Jim Prescott and Tom Garcia took on the task of writing the guide, which they did, and of designing a companion chart, which you can get from Jim and use as a very useful tool to diagnose uniformity problems. That was published in the March 2001 Pharmaceutical Technology.

The public workshop was based around the theme: is blend uniformity testing a value-added test? It was intended to be somewhat controversial, since the purpose of holding the workshop was to draw out information from the participants and not to hear ourselves talk. It was held in September of last year, and approximately 200 people attended. Its form was several presentations on aspects of blending, blend sampling, acceptance criteria, and new technology, plus a report on the progress the working group had made to date. A summary of the workshop was published in the September 2001 Pharmaceutical Technology.

The presentations that were given to set the theme for the workshop were based around the following: that blending of solids is a poorly understood process -- unlike blending of liquids, it's very poorly understood; that it's very difficult to sample a static powder bed with conventional sampling thieves; and that sampling errors, when you do try to sample powder beds, are common and can occur both ways. What I mean by both ways is that the familiar case is when the sample indicates that the blend is not uniform and you're convinced that it is. However, it's easy to show the reverse: you can take a deliberately nonuniform blend and pull a sample out of it which indicates uniformity, which is perhaps the more dangerous issue. And post-blending segregation can be a serious problem, particularly for some of the newer types of bin blenders.

The major part of the workshop involved breakout sessions to elicit feedback from the attendees, and each attendee was able to rotate around three of these breakout sessions. Those three were based on the following. Is blend uniformity testing on every batch a value-added test? How do you validate a process when you have a sampling problem? And what new technologies are available to assess blend uniformity?

The conclusions that the workshop reached were as follows. I think it was unanimous, or almost unanimous, that blend uniformity testing on every batch is not a value-added test. It was also, however, almost unanimous that appropriate and meaningful blend uniformity testing should be conducted during development and validation. So, the workshop doesn't conclude the test is of no value at all; it's not a value-added test in routine production.

Lastly -- probably because we had nobody at the workshop from any QC lab -- we all decided that higher costs are acceptable if they yield meaningful results, although nobody has asked anyone who works in the lab whether they think that's true.

So, we've reached the point of having written our draft proposal. We decided in heading into this that it should have the following three attributes. The test should be simple to perform and not involve any complicated equipment, and it should maximize the use of the data that's gathered. Acceptance criteria to be applied should be easy to evaluate and interpret. And finally, acceptance criteria should demonstrate when lack of homogeneity is suspected.

I'll now hand over to Tom who will discuss the recommendation in detail.

DR. GARCIA: What I'd like to do now is just go over the recommendation that we're getting ready to finalize and pass on to the steering committee for their review and eventual forwarding on to the FDA if they approve it. This is more or less the culmination of all the preparatory things that the group did over the last almost two years now into our final approach that we think is reasonable.

First of all, I'd like to start with saying that we do use stratified sampling. Stratified sampling is really a statistical term that refers to selecting your sample points, whether it be in a blender or during a compression or filling operation. You select distinct points in that blender or that run that will target problematic areas. For example, if you've got a compression run, you'll probably want to take samples at the very beginning of the batch, as well as the end of the batch. If you have multiple bins or hoppers that are being emptied onto the press, you'll want to catch the changeover there because that's where you can typically get segregation.

It does not necessarily mean that you take evenly spaced samples throughout the batch. In fact, what we tend to advocate is that you want to probably target more samples around these changeovers at the beginning or the end of emptying a hopper, to pick those areas where you're most likely to find a problem.
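As a purely illustrative sketch of that idea -- the working group's recommendation does not prescribe specific positions, and the function below is hypothetical -- a stratified plan for a compression run might cluster sample points at the start, the end, and around each hopper changeover, with only a few evenly spaced points through the body of the run:

```python
def stratified_locations(batch_size, changeovers, n_edge=3, n_body=4):
    """Illustrative stratified sampling plan for a compression run.
    Clusters extra sample points at the start and end of the batch and
    around each bin/hopper changeover, plus a few evenly spaced points
    through the rest of the run.  Positions are tablet counts."""
    step = batch_size // 100  # spacing within a cluster (1% of the run)
    points = set()
    # cluster at the start and the end of the batch
    for i in range(n_edge):
        points.add(i * step)                    # near the start
        points.add(batch_size - 1 - i * step)   # near the end
    # cluster around each bin/hopper changeover
    for c in changeovers:
        for offset in (-step, 0, step):
            points.add(min(max(c + offset, 0), batch_size - 1))
    # a few evenly spaced points through the body of the run
    for i in range(1, n_body + 1):
        points.add(i * batch_size // (n_body + 1))
    return sorted(points)

print(stratified_locations(100_000, changeovers=[50_000]))
```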

The recommendation applies to process validation and routine commercial batches for solid oral dosage forms. It applies only to those products where the active ingredient or ingredients are added into the blend. For example, if you are adding an active ingredient into the film coating suspension or solution, spraying it onto tablets, this recommendation does not apply to that particular drug. It would apply to the drug that's in the core, but not the one in the coating.

It does not apply to those instances where you could use weight uniformity to demonstrate content uniformity per the USP.

The advantages of the approach that we are advocating are that it's much more accurate and more representative of the true uniformity of both the blend, we feel, and the dosage units that are going out the door.

It eliminates all blend sampling errors, especially when you start monitoring for routine production.

The third thing is it will detect segregation, and the slide that Ajaz put up a couple minutes ago shows that exact thing. By targeting more samples toward the end of the batch, you're more likely to pick up those outliers that are probably the result of segregation of the drug.

Finally, it eliminates those instances where you've got to break containment. If you've got a highly potent or toxic drug, you could take the tablet cores out of there rather than cracking open the blender and exposing your operators to the toxic effects of the substance.

The disadvantages are some people say, well, it's too late. Once you compress the batch or fill the batch, how are you going to adjust to improve your uniformity? Others have said it's not consistent with quality by design or parametric release. This one I have a little issue with because I think it really is. The other thing is, is it a control or is it a test? If it's a control, you should be able to make some adjustment during the batch. If it's a test, it's more of a pass/fail thing.

The actual recommendation itself is split up into three parts. The first one addresses process development. We want to make it clear that the stratified sampling approach is not an excuse to do poor development, particularly when assessing your blends for uniformity. You should be defining your sampling techniques and the equipment that you use to sample. For example, you want a very thorough scheme, so you map that blender to make sure that you cover all the dead spots. You want to look at multiple sampling devices, because there are indications in the literature where one thief can pull samples on the same blend and get an RSD that's twice as high as samples obtained with a different thief -- for example, a plug thief versus a grain thief.

Your sampling technique. How do you insert it? Do you spin the thief around? Do you wiggle it? All these things need to be defined before you go in and start your validation.

Finally, one big thing that we wanted to make sure we covered: the Blend Uniformity Group acknowledges that sometimes you cannot sample 1 to 3X dosage unit weights. Therefore, our approach is that you should start at 1 to 3X, but if you cannot get representative data there, you should go up in weight until you can identify the smallest sample weight that is truly reflective of the blend.

The next thing is the process validation approach that is in the guidance document. We start out by sampling at least 10 locations from your blender and taking triplicate samples from each location. I just want to add a little thing here. 10 locations is for tumble mixers such as a V-blender, a tote, things like that. If you get into a convective mixer such as a ribbon blender, where you have more dead spots, we actually advocate that this number be increased to 20 locations, just because there are a lot of dead spots in those blenders.

You assay one sample per location. The acceptance criteria are that the RSD is less than or equal to 5 percent and that all individuals are within plus or minus 10 percent of the mean, absolute. This is another little change we made here. We are not saying 90 to 110 percent here. The reason is we acknowledge that blend sampling bias can occur in a very constant, consistent, reproducible manner, either inflating or deflating the mean. The true measurement of uniformity of the blend is the RSD. The blend uniformity test is not the time to determine potency. So, we have incorporated this. All individuals are within plus or minus 10 percent of the mean, and that's an absolute number. For example, if your mean is 90 percent, your range is 80 to 100 percent, not 81 to 99. We don't calculate it based on that exact mean.
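The distinction Dr. Garcia draws here -- an RSD limit plus a window of plus or minus 10 absolute percentage points around the observed mean, rather than around 100 percent of label claim -- can be sketched as follows (a hypothetical helper, not working-group text; assays in percent of label claim):

```python
def stage1_blend_pass(assays):
    """Stage 1 blend criteria as described: RSD <= 5 percent, and every
    individual result within +/- 10 absolute percentage points of the
    observed mean (e.g. mean 90% -> allowed range 80% to 100%,
    not 81% to 99%)."""
    n = len(assays)
    mean = sum(assays) / n
    sd = (sum((a - mean) ** 2 for a in assays) / (n - 1)) ** 0.5
    rsd = 100.0 * sd / mean
    within = all(mean - 10.0 <= a <= mean + 10.0 for a in assays)
    return rsd <= 5.0 and within
```

Note how the window floats with the observed mean: a consistently biased thief shifts the mean but, by itself, does not fail this criterion.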

DR. MOYE: Excuse me. Just so I can be clear. I'm sorry.

You are suggesting that precision should take precedence over accuracy here? Is that what you're suggesting?

DR. GARCIA: We're saying that basically it's the RSD. We're not looking at the absolute values because those could be consistently biased, high or low.

After testing, one sample from each location, if you fail, we ask that you test the second and the third samples from that location. Basically now what you're doing is an out-of-spec investigation. If you look at this data and you identify that it is truly related to a mixing problem, then your blend is not uniform and you've got to go back to development and figure out what went wrong.

However, if your investigation points to sampling bias, which could be demonstrated through component variance analysis or some other attributable cause not related to mixing, then you go over to stage 2 testing of the dosage units.

If you pass these criteria, you proceed to stage 1 dosage unit testing.

The big thing here is you don't want to go down this route and do a lousy job on your blend uniformity sampling techniques because the number of samples you're going to test here are a lot greater than here. So, there is a penalty to pay. But at least we have identified a means to get around the classic case where you have poor blend uniformity but great cores.

This is the second half for validation. This addresses the content uniformity of dosage units. You can see how it ties in.

During a compression or filling operation, we advocate that you take 20 locations throughout that batch, once again stratified locations. From each location, you take at least 7 dosage units. Now, stage 1 is right here where you assay 3 dosage units per location. So, you're looking at a total of 60 for stage 1.

The acceptance criteria are: the RSD of all individuals is less than or equal to 6 percent, and each location mean must be between 90 and 110 percent of label claim. We're absolute here now. No more plus or minus 10 percent of the mean.

Finally, all individuals have to be within 75 to 125 percent.

If you pass this criteria, then congratulations. That batch is validated.

If you fail it, assay the other 4 dosage units, and this is stage 2 right here. So, you're looking at a total of 7 units for each of the 20 locations. So, you can see if you do a lousy job on your blend uniformity development work, you're going to pay the price in assaying 80 more samples when it comes to validation. So, it's in your best interest to get the blend down.

You assay them, and the acceptance criteria are the same as up above. Pass, and you're okay; that batch is validated. If you fail, then the blend is not uniform, or segregation or something else is happening during the compression run.
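
The two-stage dosage unit criteria described above could be sketched like this (an illustrative check with hypothetical names; stage 1 runs 3 units per location and stage 2 all 7 units through the same criteria):

```python
# Dosage unit acceptance criteria as described: overall RSD <= 6%,
# each location mean within 90-110% label claim, and every individual
# within 75-125% label claim. The same check is applied at stage 1
# (3 units x 20 locations = 60) and stage 2 (7 x 20 = 140).
import statistics

def dosage_unit_criteria_pass(by_location):
    """by_location: dict mapping location -> list of assays (% label claim)."""
    all_vals = [v for vals in by_location.values() for v in vals]
    mean = statistics.mean(all_vals)
    rsd = 100.0 * statistics.stdev(all_vals) / mean
    loc_means_ok = all(90.0 <= statistics.mean(v) <= 110.0
                       for v in by_location.values())
    individuals_ok = all(75.0 <= x <= 125.0 for x in all_vals)
    return rsd <= 6.0 and loc_means_ok and individuals_ok
```

A batch with one low-potency dead spot fails on the location-mean criterion even when the overall RSD is acceptable, which is the behavior the stratified scheme is designed to produce.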

Briefly, how do we justify the number of samples here? The 10 locations for the tumbling blender, as I said before, the Blend Uniformity Working Group felt that that was adequate to map the blender. But notice below that when you get into the convection mixers, we advocate going to 20 locations. As I said earlier, you need to take replicates so that if you do fail the first step of the blend evaluation, you could do your analysis to see if you have sampling error or bias in there.

The number of dosage unit samples during the compression or the filling operations, the 20 locations and the 3 or 7 dosage units to test. These all came through operation characteristic curves that were generated using Monte Carlo simulations. What we did when we generated those OC curves is we looked at things like weight variation, assay variability, between-location error, and within-location error for each one of your sampling points. We also used the USP content uniformity test as our benchmark for reference.
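
A minimal sketch of how such an OC curve point could be generated by Monte Carlo simulation, assuming normally distributed between-location, within-location, and assay errors (the parameter values, names, and code here are illustrative assumptions, not the working group's actual simulation):

```python
# For a given true between-location RSD, simulate many batches of
# 20 locations x 3 units, apply the stage 1 criteria, and record the
# fraction of batches that pass. Sweeping the RSD traces an OC curve.
import random
import statistics

def simulate_pass_rate(between_rsd, within_rsd=1.5, assay_rsd=1.5,
                       n_batches=2000, n_loc=20, n_per_loc=3):
    passes = 0
    for _ in range(n_batches):
        vals_by_loc = []
        for _ in range(n_loc):
            loc_mean = random.gauss(100.0, between_rsd)  # mean 100% assumed
            vals_by_loc.append([random.gauss(loc_mean, within_rsd)
                                + random.gauss(0.0, assay_rsd)  # assay error
                                for _ in range(n_per_loc)])
        all_vals = [v for loc in vals_by_loc for v in loc]
        rsd = 100.0 * statistics.stdev(all_vals) / statistics.mean(all_vals)
        ok = (rsd <= 6.0
              and all(90.0 <= statistics.mean(l) <= 110.0 for l in vals_by_loc)
              and all(75.0 <= v <= 125.0 for v in all_vals))
        passes += ok
    return passes / n_batches

# The pass rate falls off sharply as the true between-location RSD grows,
# which is the steep-curve behavior described for the PQRI criteria.
```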

This is an example of one of the OC curves that we use. This particular one is looking at within-location RSD; in other words, how do those 3 or 7 tablets vary within a given location. Basically, if you look at our criteria for PQRI, you can see that the curve starts breaking at about 5 percent, I think it is, and starts going down pretty steeply. Whereas the USP test breaks at about 6 percent. So, the PQRI criteria are more discriminating than the USP test.

The other thing is you notice that this is a pretty steep curve, which is good. It says as soon as you hit some sort of a threshold, you're going to start failing batches. So, that's another indication of the discriminating power of our test.

This particular one as well assumed the population mean was 100 percent, and we added a 1.5 percent RSD for our weight.

The next slide I want to put up here looks at between-location variability. In other words, you've got 20 locations throughout your, say, compression run. How does the data vary from location to location? Once again, we're assuming a mean of 100 percent. The weight variability is still at 1.5 percent, and we also threw in an assay variability of 1.5 percent here. On the bottom is your between-location RSD, ranging from 1 to 10 percent.

What you can see here is that if you have between-location variability, you're going to start rejecting batches a lot quicker. It's a much more severe penalty than within-location variability. At about 3.7 percent, I believe, is the exact number, you're at the 95 percent probability of passing the acceptance criteria. So, roughly around 4 percent you're going to start sliding down. Once again, you can see we are more discriminating than USP.

This goes back to Steve's question. Where did the 4 percent come from? It's right here. This computer simulation is what we will use later on to say whether you readily pass the validation criteria or marginally pass them. But here it is. Actually, 3.7 was the exact number; we rounded it up to 4 percent. As soon as you go above 4 percent for your RSD, you're going to start failing batches. So, that's where it comes from.

Justification for our dosage unit acceptance criteria. The RSD of 6 percent is consistent with stage 1 of USP.

The one that you're going to fail on is the requirement that all location means be between 90 and 110 percent. This is for each of the 20 locations. What you're basically going to detect here is drifting in the process, dead spots, or segregation in the batch, either at the beginning or the end of it. This is the one that's probably going to have the most impact of all the criteria.

We also added the 75 to 125 percent criterion in there just in case we should detect a stray outlier, a superpotent or subpotent tablet. We felt that if you did have one of those and by some miracle you were still able to pass on the mean, that batch doesn't have any business being accepted.

For the dosage unit test, we also use a two-stage test which is consistent with the USP. You notice that stage 1 and stage 2 criteria are the same. Basically what we're doing is if you have an 89 percent mean, we're giving you one more chance to get it right and salvage the batch.

The final part of the document addresses routine manufacturing and primarily the cGMP component that Garth mentioned earlier. The dilemma we had in PQRI is that we're supposed to be reducing regulatory burden. So, how could we incorporate the USP test and the cGMP test without any real additional testing?

So, after some thought, what we ended up doing is we said, could we pull the sampling procedure for the USP test as an in-process test? It looked pretty good until we figured out what happens if you've got a coated tablet. You're going to do the in-process test on uncoated tablet cores, and USP says it's got to be done on finished dosage forms. So, we had to get around that, and I think we have.

Basically, we're advocating pulling 30 tablet cores in process from 10 different locations, at least 3 per location. For cGMP compliance, you assay those 30 tablets and normalize the data for weight. Why normalize for weight? Because you're looking for uniformity of the blend here; we're not interested in weight variability.

To satisfy the USP test, you don't normalize for weight.

So, you see what we're doing? We're testing the same 30 dosage units and performing two calculations on them to satisfy two tests. So, the additional analytical testing work and sample preparation is zero. Granted, you've got to do two calculations, unless you want to roll the dice and just try to satisfy GMP compliance without normalization.
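
The two calculations on the same in-process samples might look like this (a sketch; the exact normalization formula used by the working group is an assumption here):

```python
# Two calculations on one set of 30 in-process cores: the cGMP
# calculation corrects each assay for tablet weight, so only blend
# (concentration) variability remains; the USP-style calculation uses
# the assay results as-is, so weight variability is included.
import statistics

def rsd(values):
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def dual_calculation(assays, weights, target_weight):
    """assays in % label claim; weights and target_weight in mg."""
    normalized = [a * target_weight / w for a, w in zip(assays, weights)]
    return {"gmp_rsd_weight_corrected": rsd(normalized),
            "usp_rsd_as_is": rsd(assays)}
```

For tablets whose potency tracks their weight exactly (a perfectly uniform blend with weight variation), the weight-corrected RSD collapses toward zero while the as-is RSD does not, which is why the GMP calculation isolates the blend.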

Here's the key thing. I've got to read this because I want to get it straight because it was worded really carefully. If the in-process sample is not the finished dosage form -- i.e., a core for a coated tablet -- you must demonstrate during validation that the in-process results provide the same or better control as the content uniformity data generated during release testing of the corresponding finished dosage form, i.e., the film coated tablets. If you could demonstrate this relationship, you could do this up on top. So, there's how we took care of two birds with one stone and met our requirement of minimizing regulatory burden.

Now, in routine manufacturing, you're going to see on the flow chart the term "readily complies" versus "marginally complies." Products that readily comply are those where, for your ANDA exhibit batches and/or the validation batches, the RSD is less than 4 percent for the dosage units, not for the blend, all the mean results are within 90 to 110 percent for those batches, and nothing is outside the 75 to 125 percent range. If you readily comply, you go to stage 1 testing.

Now, for products that do not readily comply -- i.e., marginally comply -- this is where your RSD is between 4 and 6 percent. You have to go into stage 2 testing, where you test 30 dosage units.
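
The readily-versus-marginally decision just described can be sketched as follows (hypothetical function and inputs; the thresholds are the ones stated above):

```python
def compliance_class(batch_rsds, batch_means, individuals):
    """Classify a product from its exhibit/validation batch data:
    'readily' if every batch RSD is < 4%, all batch means are in
    90-110% label claim, and no individual is outside 75-125%;
    otherwise 'marginally' (RSDs running 4-6%)."""
    if (all(r < 4.0 for r in batch_rsds)
            and all(90.0 <= m <= 110.0 for m in batch_means)
            and all(75.0 <= x <= 125.0 for x in individuals)):
        return "readily"    # routine stage 1: 10 tablets, 1 per location
    return "marginally"     # routine stage 2: 30 tablets, 3 per location
```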

So, here it is, the flow diagram for your batches. You make your decision, do these products readily comply. If so, come down to stage 1, test one tablet sample out of 10 locations, 10 tablets. If they do not readily comply, you go to stage 2 where you test 3 samples per each of the 10 locations. So, you're looking at 10 versus 30 for stage 1 and 2.

Obviously, if you pass stage 1 and adequacy of mix is demonstrated, you then perform your second calculation, this time without the weight correction, to verify that that particular batch meets USP criteria.

If it doesn't pass stage 1 -- and notice that your mean is between 90 and 110 percent and your RSD is 5 percent -- you go to stage 2. You test all 3 samples per location. Your acceptance criterion is still 90 to 110 for the mean, but your RSD limit has gone up to 6 percent. Then if you pass, you go to the same box as over here. If you fail, then adequacy of mix is not demonstrated and the batch is rejected.

Now, if you come down this route, you got a product that marginally complies, and you do 5 batches in a row where you pass, then you could revert to stage 1 testing and reduce the burden.

The sample size and the number of locations for routine manufacturing are based on USP tests. We're trying to keep the 10 plus 20 approach.

The GMP acceptance criteria of an RSD less than 5 percent and a mean between 90 and 110 percent are consistent with the validation approach, although for validation they want individuals. We talked John Dietrick into letting us get away with just a mean between 90 and 110.

That concludes our recommendation. But I want to just put up the one last slide. This is just one way to demonstrate that the blend and the dosage units are uniform. There are other means out there, and in particular, the on-line monitoring, NIR, those new techniques that are coming out. That's the ultimate that we should be striving for. This is more like a band aid that will take care of the problem at hand right now.

We also had a number of individuals on the Blend Uniformity Working Group that were carryovers from PDA 25. PDA 25 is a very, very good, very strict means to also look at this problem, and there are no reasons why you shouldn't be able to use that either. It's a very good way to do it.

Of course, for the brave ones out there that want to continue sampling every blend, go ahead. But as Fernando Muzzio said, when you fail, don't come hollering at us.

So, this concludes this particular section on the actual recommendation that is coming out.

The next thing I want to talk about is the results of the PQRI data mining effort. This information is really only about a week old. Actually two of the slides in the packet that have been handed out are already out of date. I actually made the adjustments right before I left for the airport yesterday, so I'll point those out.

The objectives of our data mining effort were really threefold. First, we wanted to test the hypothesis that blend uniformity testing is not value-added testing for the products.

The second thing is we wanted to test the assumption we made during the Monte Carlo simulations that the means both within-location and between-locations were normally distributed because basically that's how we establish our acceptance criteria.

And finally, we wanted to compare the various criteria that are out there ranging from our criteria to the OGD, the FDA, the USP, and the modified USP, and see how they stacked up when comparing the same sets of data.

A call for data went out. I think it was in July. We solicited companies to send us solid dosage form information in a number of categories. We wanted to get products that had an active ingredient of less than 5 percent and those between 15 and 25 percent to see if low potency products performed any worse than the higher concentrations of drugs.

The other thing that we wanted to look at was products made by various processes, namely direct compression, wet granulation, and dry granulation.

We also wanted to look at both capsule and tablet dosage forms, and if we could get any sachets or powder fills, that would have been nice too.

Finally we wanted to look at large and small batches.

We had a total of eight companies submit data to us. We got 149 batches. For those members of the audience whose companies submitted data, thank you very much. We would have liked to have seen more, but we feel we had a fairly good representation to give us some confidence.

This slide is one of them that I replaced yesterday. We had 149 batches for tablets, 0 for capsules. So, we missed that objective.

The number of direct compression products out of 149 was 12 batches. We had, I think it was, 67 batches that were made via wet granulation and 70 batches that were made by dry granulation.

I don't have this data for potency or the batch sizes summarized yet. As I said, we're still in the middle of finalizing the data and information from the study.

Ignore this slide; I'm just going to go right on to the next one and read that one.

The test for the normality of means -- as I said earlier, we wanted to test both within-location and between-location means. The consultant ran a Shapiro-Wilk test for normality. For the between-location means, to see if those were normally distributed, we found that about 11 percent of the 149 batches had at least one value that statistically deviated from normality. Most of the batches that had this problem had that point either at the beginning or the end of the batch. So, you see the power of stratified sampling to detect these changes.

Now, for within-location differences, about 15 percent of the batches had at least one value that was statistically different.
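
A sketch of how such a per-batch normality screen could be run, assuming SciPy's implementation of the Shapiro-Wilk test (the consultant's actual procedure may differ):

```python
# Flag a batch whose location means statistically deviate from
# normality, using scipy.stats.shapiro (the Shapiro-Wilk test).
from scipy import stats

def deviates_from_normality(location_means, alpha=0.05):
    """Return True if the Shapiro-Wilk p-value falls below alpha."""
    _, p_value = stats.shapiro(location_means)
    return p_value < alpha
```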

The conclusion for both of these, though, is that most of the data out there was normally distributed. Even though some of it wasn't, the computer simulations that we used will estimate rejection rates slightly smaller than the rejection rates based on the actual data. For example, we may say that 3.2 percent of the batches are going to be rejected, when in reality it's going to be about 3.5 percent when you look at the data. So, it's slightly different. The take-home message here is, yes, we're off by a little bit, but in general using the computer simulations is legitimate, and the acceptance criteria that were identified are going to be sound.

The second thing is to compare the blend and dosage unit content uniformity data. Really what we're doing here is we're testing the hypothesis that blend uniformity is not value-added testing. The plots I'm about to show you are really interesting.

First of all, we compared them by plotting blend RSD on the x axis and dosage unit RSD on the y axis, and we did it for all 149 batches.

Here's the plot right here. Notice we have a line going up here at a 45-degree angle. If the blend truly predicts how the dosage unit is going to be, the points are going to fall on a 45-degree line. In other words, if your blend RSD is 5, your dosage unit RSD is 5, and similarly up the line. What you can see is that we've got a lot of points off the line.

The second thing I want to point out on this plot is we divided it into three distinct areas: RSDs less than 3 percent, RSDs 3 to 5 percent, and then RSDs greater than 5 percent. That's what I'm going to go into now.

If the RSD is less than 3 percent for the blend, we got a decent correlation in the data. I think we had something like 112 data points here; I can't remember exactly what it is. For about 100 of those, we did see a fairly decent correlation -- probably within statistically acceptable limits -- between the actual blend RSD and the dosage form RSD. You can see a lot of points are very close to this line.

We do have, I think, 10 or 12 points up here where the dosage unit RSD is higher. So, what could be the possible cause of that? One thing that came up is that you've got weight variability in there now. If it's a tablet, how much weight variation is included in this RSD? The second possibility is, is this particular product segregating? So, you can see that there is a little bit of value in further analyzing these particular points.

Now, when we go to 3 to 5 percent RSD, we start to lose that correlation. Everything should be bunched around a line right up here, but what you see is that the blend RSDs are a lot higher than the corresponding dosage form RSDs -- roughly 1 to 2 percent higher for the blend. So, we're starting to lose that meaningful correlation and starting to question the value of the blend data.

Now, when we go above 5 percent, everything blows up. Basically, if you've got a blend RSD greater than 5 percent, you have no correlation to what you're going to get in the dosage unit. These are the products that are very prone to sampling bias.

So, if you put it all together, unless you've got an RSD less than 3 percent, your blend uniformity data is of no value for predicting what the uniformity of the final dosage form is going to look like. So, we did meet our objective of testing that hypothesis based on this data.

Finally, the last thing I want to talk about is the comparison of the acceptance criteria. We put all 149 batches up against the PQRI validation criteria, the OGD criteria, and FDA.

The FDA validation criteria were the most restrictive, and the reason for that is, remember, you had to have an RSD less than 5 percent, but also all the individuals had to be between 90 and 110 percent for the blend. If you had any bias in there, you're going to have individual values less than 90 or greater than 110, and you're going to start failing batches. So, that's the cause of this right here.

The OGD and the PQRI validation approaches -- really, there's probably no statistical difference between those numbers there. So, they were on a par when it came to passing.

For the PQRI routine, USP, the ICH, and PDA 25, we only tested 88 batches of the 149. The reason for the smaller number of batches is that only 88 of them had at least 10 sampling points during the compression run. The others only had, like, beginning, middle, and end. Even though we advocated 20 sampling locations in our recommendation, we felt that we needed at least 10 to perform this analysis. So, it's a lesson you learn when you do data mining after you set the number of sample locations and tablets you want: we're at the mercy of what we got. So, these 88 batches had at least 10 sampling locations.

Basically for the first three, you see there's really no difference in the percentage of batches that were passing it. However, you can see PDA 25 is much, much more discriminating and will reject about 30 percent more batches.

One other thing I want to put up finally goes back to the marginally versus readily complies data. Of the 83 batches that passed the PQRI validation acceptance criteria on the previous slide, 79 readily complied and 4 marginally complied. So, that will give you a flavor for how many tablets you're going to have to test in routine production.

Finally, I just wanted to acknowledge a number of people. From here on up is the Blend Uniformity Working Group, a great bunch of guys and girls. They worked really hard. It was really nice to see people from various aspects of the industry come together in a united way to come up with this.

Finally, Laura Foust, who is not on the Blend Uniformity Working Group, probably did more work towards this proposal than anybody on it. So, this is actually all the brain power behind the final recommendation.

That's the last slide.

DR. LEE: Thank you.

Any questions for the speakers? Because we do have a couple of questions to address.

DR. VENITZ: In your data mining efforts, were all those batches ones that actually passed? Because it appears to me, if you look at your overall plot, that all the dosage form RSDs are less than 6 percent. Right? So, you didn't include any failing --

DR. GARCIA: No. All the dosage form RSDs were less than 6 in 149 batches.

DR. VENITZ: Do you think that your interpretation, in terms of the predictiveness of the blends, would change if you had included failing batches? In other words, right now you know a priori that all your batches are going to pass your dosage form requirements, but if you had included the ones that failed, would that change your interpretation?

DR. GARCIA: Yes, probably. I couldn't see how it wouldn't.

DR. VENITZ: You're arguing that the blend RSD predicts the dosage form RSD only for the low RSD. Would that be true if you included your failing ones?

DR. GARCIA: What's the hypothesis we're testing, though? That blend uniformity is not value-added. Look at the number of batches that had RSDs greater than 5 percent, some of them up around 20 percent for the blends. If you were going to say that there is a correlation there, then that batch is not uniform. The data fit the hypothesis we're testing because we had blends that definitely failed, and some grossly failed, and yet the dosage forms were uniform.

DR. VENITZ: And if you had included the failing one, I think that would have been even more apparent. I think it would have even more confirmed your hypothesis that your blend does not predict your dosage form performance.

DR. GARCIA: Possibly.

DR. LEE: Nair, do you have any questions for the speakers? Dr. Rodriguez? Dr. DeLuca?

DR. DeLUCA: No. I'm okay.


DR. LEE: So, Nair, go ahead.

DR. RODRIGUEZ-HORNEDO: I have a question for the speakers. My question is the relative standard deviation on the blend in all these studies that have been reported may very well be reflecting the error in sampling. Am I correct in that?

And if that is so, we need to be careful because if the sampling technique is really not representative of the whole sample, that is really not a good test for whether blend uniformity would be a good endpoint for dosage form uniformity. So, I'm wondering if any of these were done with in-line or on-line monitors.

DR. GARCIA: We don't have that information whether or not the companies that submitted the data were also using on-line monitoring. I doubt it, though.

DR. LEE: Nair, are you satisfied with the explanation?

(No response.)

DR. LEE: Judy?

DR. BOEHLERT: My understanding is the eight companies from whom you received data were mostly large companies -- or were they smaller as well? My concern, always, when we change a standard is: what is the impact on previously released product, product that met the old standard, and is there going to be an adverse impact for the large variety of products that are out there?

DR. GARCIA: To answer your first question, we don't know the size of the companies. The companies that submitted the data were totally blinded.

DR. BOEHLERT: Totally blinded.

DR. GARCIA: Right. The way we did it is they submitted the data to Sylvia Ganton, who is our executive secretary of PQRI. She entered it into a database after acknowledging that it was from a legitimate company, removed any reference of product name, company name from the data, and then forwarded it on to the statistician and subsequently to the working group.

DR. BOEHLERT: Did you encourage companies to submit batches that weren't so good?

DR. GARCIA: We tried.

DR. BOEHLERT: Or is it likely they sent their best?

DR. GARCIA: We tried but that was a question that came up.

DR. BOEHLERT: Yes, it's always a question. I'm going to send you my best data. I don't want my company to look bad even if you don't know who I am.

DR. GARCIA: But if you look in the slide where you got the blend RSDs greater than 5 percent, obviously somebody had some guts to send us that.

DR. BOEHLERT: Well, but USP -- the current limit on content uniformity is 6 percent for RSD on the first 10.

DR. GARCIA: There are some 12, 15, 20's in there too.

DR. BOEHLERT: That would be my concern. If they're currently close to that 6 percent, what's the impact of going down in RSD in the future?

DR. GARCIA: One question I think that came up at AAPS that may be related to yours is, is this going to be applied to some of the older products where we don't have blend uniformity? Is that what you're getting at?

DR. BOEHLERT: Yes, absolutely. What's the impact on old products? New products is something else. You validate them using these standards, but old products were validated many years ago in some cases.

DR. GARCIA: Do you want to handle that one, Helen or Ajaz?

DR. HUSSAIN: I'm here to seek the recommendation from the committee.


DR. BOEHM: Well, I'll have a go at it. We did discuss this and the representatives of DMPQ indicated that the current rule would still apply. If it's an old product, you leave it alone. If you don't make any changes, you don't do anything, then you don't need to produce any more information. But as soon as you touch something to improve it or shift it, then they have the right to ask for today's standard.

DR. BOEHLERT: Another reason for not going to new technology I guess. Right?

DR. LEE: Kathleen, you have comments to make?

DR. LAMBORN: I guess I have sort of a follow-up to some of the things that are being said about the basis on which these batches were coming forward, because you could argue that if you wanted to try to convince people that the blend uniformity standard was not useful, the first thing you would do is give some examples that looked just like the graph that we saw.

And then the results that you're getting -- I'm assuming that you're recognizing the biases that come into the sample. I mean, there's no way that this necessarily describes the frequency with which things would pass if you were to get a "random" sample of things that come in from the field. I think you recognize that.

The other question I have is, could you have predicted pretty well the order in which you would have seen the passage rates just by knowing the differences in the criteria that were set? You said, for example, that the FDA validation criteria result in fewer acceptances. Then you said, well, of course, that would be expected because they have a narrower range.

So, I guess my question to the group is, have you learned anything that you would not have really known already just by contrasting the differences in the criteria as you knew that they were? I mean, you knew that the FDA criteria on that component of it was stricter. So, anything that passed the FDA is by definition going to pass the other ones.

DR. GARCIA: Right. What we were trying to get at, though, is what are meaningful specifications. That's the thing. Now, the FDA specification is for individual dosage units; all of the other ones are for means. That's why you get more selectivity. If you've got an 89 percent blend sample, on the FDA criteria you're going to fail; whereas, with PQRI, if you've got an 89, a 90, and a 91, you're going to pass.

DR. LAMBORN: I realize that's what you're saying. All I'm saying is that you didn't need data in order to conclude that.

DR. GARCIA: Well, all of our acceptance criteria are based on Monte Carlo simulations, and when we originally went down that road, the steering committee was not comfortable with us using computer generated data. The results of that data were the OC curves that I put up there. So, yes. Could we predict how many were going to fail? Absolutely. The OC curves did it. But what we wanted to do, per the DPTC and the steering committee's request, was to get a reality check on what is actually out there and how was it going to conform.

Does that answer your question? I don't think it does.

DR. LAMBORN: Partially. That's okay.

DR. BOEHM: Perhaps I could add one more thing. The FDA validation criteria, as it's being called here, comes from an old compliance document, and it has blends only. It has no stratified sampling criteria associated with it. So, it just sits out there alone as a blend uniformity criteria in the middle of nothing else.

DR. LEE: Ajaz?

DR. HUSSAIN: Vince, a couple of comments and corrections. Garth in his presentation said CDER and Division of Manufacturing and Product Quality. That's part of CDER. The Office of Compliance is within CDER. The Office of Regulatory Affairs is probably what you were confusing.

Also, I'm seeing the struggle here, I think, that you will have to face in terms of providing recommendations to the questions, because you're looking at a traditional approach to validation: test, test at every stage. That's the traditional mentality, and I think the recommendations that Garth and Tom have provided are, in a sense, essentially keeping track of where the samples are coming from in a larger way. That's what is being reinforced here. In terms of the number of samples and so forth, I think you still see approaches very similar to the traditional ones. The number of samples is essentially fixed, not based on the batch size, not related to the process, and so forth. So, that's the traditional way of thinking about this.

As you start deliberating, I think keep that in mind. In a sense, here we have removed the emphasis from the sampling thief, taken the emphasis to end product testing, although increasing the number of end products more so than we generally might be doing. And at the validation stage, you have a means of providing justification that the thief is giving you the wrong answers. That's in a nutshell the proposal here.

DR. LEE: Thank you very much.

Let me give you some idea of where I'd like to take it. I think that this committee is ready for a timeout, and what I propose to do is hold all the questions, take about a 15-minute break, and return for another 30-minute discussion on answering these questions. The questions are posed very clearly here and I think that we might need some time to clear our heads and come to some sensible answers. So, let's reconvene in about maybe 10 minutes, about 3:15. Thank you.


DR. LEE: We are ready to continue.

Based on our conversation during the break, I think it is very clear that we need to continue with the questions before we address the questions posed to us. Leon, you were about to raise a question before the break.

DR. SHARGEL: Yes, I had a question. It sort of continues what Dr. Boehlert said about old products. I wasn't clear what your answer was on that, whether blend uniformity was needed on products that have been manufactured for a number of years. So, if you can answer that, then I'll go to my next question.

DR. BOEHM: I'm not sure if I can answer whether it's needed or not. My point was that it was my understanding that compliance's view of old products is that as long as they remain exactly as they are, that they will not ask for additional information. If old products didn't have any blend uniformity, I interpret that to mean that they wouldn't be asking for it. But if any change is made, including a change in manufacturing site or equipment, then the product needs to meet current requirements.

DR. SHARGEL: The follow-up is, I checked with a number of manufacturers on the generic side who go along with the conclusions of your public workshop: one, that blend uniformity testing is not a value-added test. That seems to be the consensus I got, and also that blend uniformity testing was more for validation and development.

Now, the sense that I get also from my colleagues is that if you're able to reproduce your batch in manufacturing and you eventually get a body of knowledge, does this new product become eventually an old product that you're very confident in making and do you really need to continue with blend uniformity testing forever and ever, or is it possible to get a body of knowledge -- I'm pulling 10 out of the sky because it's a nice number -- and maybe do it on every tenth batch or some other approach?

So, the first question I really have, which differs maybe from Dr. Hussain, is not on here. Is there a time or a place where we can not do blend uniformity on every batch once we've manufactured it successfully for some time?

DR. HUSSAIN: Vince, let me just share with you some information that might be helpful here. The "c" in current good manufacturing practices stands for continuous improvement and keeping current with technology and standards. That often becomes a roadblock, for example, to bringing new technology in; old problems become invisible. The "c" in cGMP is that argument. And to extend this to on-line technology: if two companies do blending on line, for example, does that become the current standard for the rest of the industry? That's the debate here.

Now, with this proposal, how do we address older products which have been on the market? I don't have a firm answer for that, but I think we are looking at it. At least my personal approach has been: let's look at improving without penalizing, as much as feasible. If there are associated problems, then I think we have to correct those problems. But if those standards have been used and applied for the last 20-30 years, there has to be a rational reason for updating them. So, that's the internal struggle. I don't have the official answer right now, but I think we will carefully look at it and make sure we address it right.

Leon had suggested that -- and I think the proposal here is -- we do it for product development. We do it for validation. But why continue doing it for routine production? There are two aspects to that. One interpretation of the regulations is, yes, that it has to be done for every production batch. The first or second slide said that. So, that's one interpretation.

But what is the scientific basis for that? I think I just want to share with you my interpretation of the underlying science or gaps in the science which would say that probably should be done.

What is validation? Validation is a series of qualifications of the equipment, the process, and so forth that culminate in the three commercial batches. A product essentially is validated when you successfully demonstrate that three commercial batches meet the specification, plus the supporting development data behind that. So, that's what validation is.

In the absence of a clear understanding of the mechanisms of each unit operation, the discussion and the debate focuses on three batches. All the information you have or the manufacturing history are those three batches before you allow market access.

What would be the problem in the current system? I'll give you two examples. One would be excipients. Excipients in the USP are dictated totally by chemical purity. Lactose NF from different sources could differ considerably in its physical attributes.

To give you an example, magnesium stearate. Magnesium stearate is a very significant challenge in terms of its physical attributes, and we still don't know how to really do a functionality test for it. I'll quote something from Dr. Kibbe's handbook. One of the culprits with magnesium stearate is the impurity sodium stearate, which defines the hydrophobicity and lipophilicity of that material, and that is so critical for dissolution. We don't even have a test for that in the monograph. So, different sources of magnesium stearate will carry the same NF stamp but have very different physical and functional attributes.

Now, in practice, what validation has become, in my opinion, is that you do everything as homogeneously as possible to prove that three batches will work, because that's your ticket to commercialization. That should not be the case, but in fact in some cases it is. So, you are using the same raw material for the three batches, and then subsequently the raw materials might change. That would be a sort of scientific argument that raw material attributes are changing during subsequent manufacturing and we have no way of assessing whether that had an impact or not.

The release tests are very much limited in terms of sample size. For content uniformity, 10 tablets is the basis of releasing a product that could be 1 million, 20 million, or 30 million tablets. That's sort of the built-in dilemma that we face all the time.
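The dilemma Dr. Hussain describes can be illustrated with a back-of-the-envelope calculation (an editorial sketch; the 1 percent defect rate and the function name are illustrative assumptions, and only the 10-tablet USP stage-1 sample size comes from the discussion):

```python
# Probability that a random sample catches at least one out-of-spec
# tablet, for a very large batch with defect fraction p.
# Binomial approximation: P(detect) = 1 - (1 - p)^n.

def detection_probability(defect_fraction: float, sample_size: int) -> float:
    """Chance that at least one out-of-spec unit shows up in the sample."""
    return 1.0 - (1.0 - defect_fraction) ** sample_size

# With a 10-tablet sample, a hypothetical 1% defect rate in a
# multi-million-tablet batch goes undetected roughly 90% of the time.
p10 = detection_probability(0.01, 10)   # roughly 0.10
p60 = detection_probability(0.01, 60)   # roughly 0.45
```

Larger end product samples, as the PQRI proposal suggests for validation, raise the odds of detection considerably, which is the statistical point behind the exchange that follows.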

DR. SHARGEL: I agree with you that when you do validation, you have a limited body of knowledge. Then as you go into commercial production, you begin to gain a lot more knowledge with making the process over and over again. The issue is not in change of excipients or such. If I'm making it the same way with the same excipients, using the same raw materials, is there a time and place where I no longer have to do this particular test? Can I be assured? If I'm changing raw materials and then I get into a SUPAC type of issue or some other annual report or something of that sort, I am assuming that I have to make a statement. Then I might --

DR. HUSSAIN: Leon, the argument I've placed is you're using the same monograph material, but it's changing. You don't even know it's changing. That's the point.

DR. SHARGEL: If I'm using the same supplier.

DR. HUSSAIN: Even if you're using the same supplier, because the specifications on raw materials don't address physical attributes.

DR. BYRN: Yes. In mag stearate, I know that the same supplier doesn't control the physical attributes, hydration, other things. So, company X's mag stearate is not a constant thing.

DR. KIBBE: Nor is it depending on where they shipped it to you from.

DR. BYRN: Right. I've even heard that certain companies that make raw materials, when they're approached with this problem, say, well, we'll ship you a drum. You can test it. If it's what you want, you can manufacture with it. If you don't, just ship it back to us. We'll ship you another drum. And it's continued through that process until you get a raw material that works. All these raw materials that were shipped to you meet USP, but they won't manufacture.

I know ahead of time I need a mag stearate that has a certain property. I can't guarantee that that's what is shipped to me. So, the raw material supplier says, I'll ship you a drum. You test it. If it's the way you want it, you can keep it and make it into a product.

DR. KIBBE: And different products require different degrees of strict control of the mag stearate. With some products, it doesn't matter as much. So, then the company isn't going to put the energy into keeping track of that.

DR. SHARGEL: I'd just like to reply to that, if I may, on the excipient differences.

DR. KIBBE: Go ahead.

DR. SHARGEL: Again, if I'm doing a couple years or five years or 10 batches or whatever it takes, then the excipients, as you say -- I just learned something, that the mag stearate I'm getting is not exactly the same every time I get it for those batches. But then I know that my method is robust enough that it really didn't make much of a difference because my end product has tested very well all the way through. So, that starting material, as far as the mag stearate or whatever I'm using, didn't really make much of a difference. I'm still getting the same answer. So, I haven't made any major changes in process. I have only ordered from the same supplier what I think is the same excipient. I just learned it's not quite the same excipient, but my end product is still the same end product by all my tests. So, does it still make a difference?

DR. BYRN: You're saying you have an established, robust product.

DR. SHARGEL: I think I have if I'm making it for 10 years and whatever mag stearate you send me, you send me.

DR. LEE: I think we are beginning to drift.

DR. KIBBE: Let me get back onto dissolution and batch selection and what have you.

One of the things I've noticed from all the data you gave us is that poor uniformity in the batches that you had information on didn't predict poor uniformity in the tablet product that you made. Then I'm left with one of those wonderful theoretical conflicts, you know, where you have a beautiful theory that a uniform powder will make a uniform product, and then you have a wonderful fact that says that a nonuniform powder will still make a uniform product. So, I'm struggling with whether my theory is no good -- that you have to have a uniform blend in order to make a uniform capsule or tablet -- or that there's something else going on.

I'm a little concerned that one of the problems we continually face is that we are not expert at sampling blends, for a lot of reasons. I don't know whether you feel that those blends that were at 15 and 20 percent, quite large compared with the ultimate tablet, were that way because we don't know how to sample blends in general or because the companies that did it had an old sampling method and stuck with it.

DR. GARCIA: We don't know the answer to that question. This is not my data. It's not Garth's. It was just submitted blind.

DR. KIBBE: I just wanted to get a sense of where you were on it.

DR. GARCIA: Right. All that we do know is this is the blend RSD. They were high. We don't know the reasons they were high. If you go back to our validation flow diagram, did they even perform some sort of investigation into the cause of the RSD to determine is it sampling error, is it segregation, is it remixing further on down the process? Those things we don't know the answer to. To do that is beyond the scope of this particular exercise. But, yes, I acknowledge your point.

DR. KIBBE: A theoretical question then. If that data is real, then the agency can't depend on blend uniformity data to predict anything. So, why capture the data? And if that data is real, why do blend uniformity? Which to me flies in the face of what we were talking about this morning about trying to have in-process validation of all our things and quick turnaround time and quick release of batches. So, I'm wondering how we're going to resolve that.

DR. GARCIA: My own personal opinion on this -- and this does not necessarily reflect PQRI or the Blend Uniformity Working Group -- is based on the data you saw right up there. Whether you think that's enough batches or not, this is the data we have to work with, and I'm going to make the statement based on it. It's clear that blend uniformity data is useless. It does not represent what is really going on in a number of cases where you have sampling errors. The sampling technology today is not capable of extracting small quantities of blend. When you get below 200 milligrams, you get all sorts of problems if you're in that 1 to 3X range. That's fine if you've got a 500 milligram tablet, but when you start getting down to a 50 or 100 milligram tablet, you're in some trouble.

So, based on your question, why are we doing it -- good question. That is why we are testing the hypothesis, though: blend uniformity is not value-added.

But in the interim, we also released this guidance document. Actually, the guidance document was done before the data mining.

One of the things that we did feel, though, is the company should put forth some effort to show that your blending process is under control. And if you notice in the acceptance criteria, we said that we wanted individuals to be within plus or minus 10 percent of the mean, rather than 90 to 110 percent.

What we're basically saying there is that the true measure of uniformity of a blend is an RSD, not potency. Once again, you're getting into the sample bias. If it's centered around 100 percent, great. You've got really fantastic sampling. But if you have a mean of 120 percent and an RSD of 3 percent, and subsequent tablets made from that batch are centered at 100 percent, obviously something is wrong: you have a sampling error.
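The logic just described -- judging uniformity by RSD and by each individual's deviation from the sample mean rather than from label claim -- can be sketched roughly as follows. This is an editorial illustration; the function name and the numeric limits are placeholders, not the official acceptance criteria.

```python
import statistics

def assess_blend(results, rsd_limit=5.0, window_pct=10.0):
    """Judge blend uniformity by RSD and by each individual's
    deviation from the sample MEAN (not from 100% of label claim).
    The numeric limits here are illustrative placeholders."""
    mean = statistics.mean(results)
    rsd = 100.0 * statistics.stdev(results) / mean
    individuals_ok = all(abs(x - mean) <= mean * window_pct / 100.0
                         for x in results)
    return {"mean": mean, "rsd": rsd,
            "uniform": rsd <= rsd_limit and individuals_ok}

# A blend centered at 120% with a tight RSD passes as "uniform" --
# but if tablets from the same batch center at 100%, the 20-point
# offset points to sampling bias, the scenario described above.
biased = assess_blend([119.0, 120.5, 121.0, 120.0, 119.5, 120.0])
```

The design point is that a potency window of 90 to 110 percent of label claim would wrongly flag a biased-but-uniform sample set, whereas a mean-centered window plus RSD separates the sampling question from the uniformity question.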

Up until this document that's been proposed and until it gets incorporated into a guidance document, you're basically stuck because you cannot check off that blend uniformity box during your validation exercise. So, we're trying to take all these things into consideration.

But is it worthwhile during process development? Yes, I think it is. But on a routine basis? It's got some serious flaws.

DR. BOEHM: Perhaps I could just briefly add to that. The survey suggested that manufacturers have trouble with about 10 percent of their products, about 1 in 10. We haven't been through and looked at that data to see if that is what we're looking at here, but it looks by eye to be pretty much what we are looking at.

DR. GARCIA: 16 out of 149.

DR. BOEHM: Yes. It's about 1 in 10 that give them trouble, which is what they reported in the survey. So, we're looking at a picture where they use the same old-fashioned ways of sampling blend, and 9 times out of 10 that's fine. 1 time out of 10 it doesn't work.

DR. BYRN: I can't capture all of this and some of it is not published and so on, but at Purdue we've done a lot of comparison of on-line data versus thieving. Maybe not a lot but a significant amount. There's no question that the errors are much higher in thieving, and the errors are like Tom is talking about, the amount you're thieving, how they're handled, how they're transferred. All of us know of consulting situations where electrostatics of the active cause it to not be at chemophore. And there are all these stories. But on-line data is generally much better, way, way better, than thief data.

So, my thought of all this is that thieving is always going to be problematic. I'd like to see us go to on-line data. The main barrier is that we're going to have to validate the on-line data with the thief data, which may be a complete result of artifacts. I'm not sure it's complete, but there could be quite a few artifacts. I think that's what you're saying. I don't know whether you want to jump in here.

DR. GARCIA: You may be able to validate the on-line data and get a correlation with the dosage form data.

DR. BYRN: Yes. That may be the solution.

So, ultimately my view is that this is actually a big advantage of on-line data and that the more we can go on line, the more we'll really know "what's happening."

And then another major factor of going on line is going to be that we can troubleshoot when something goes wrong. That's another advantage of monitoring every lot: when something goes wrong, we can troubleshoot.

Ajaz didn't get a chance to go into this. He just mentioned it, but there's a lot of data showing that part of the major cost of pharmaceuticals is the warehousing of samples, as Ajaz said, the OOS or the nearly OOS. If we have all this on-line data, we may be able to say, oh, yes, something happened in that sample that we don't know about now because of the problems that we're all talking about in thieving.

So, that's my optimistic view of the whole thing.

DR. KIBBE: If thieving is this problematic and we're not ready for everybody to go on line, why are we still collecting thieving data?

DR. BYRN: I don't think we can completely prove that it's completely problematic, but certainly it doesn't sound very good. Maybe Ajaz wants to comment.

DR. HUSSAIN: I agree with what Tom and Garth have presented in many ways, but I think I would state it a bit differently. When Tom says blend uniformity is not a value-added test, in my way of looking at it what he's saying is blend uniformity testing the way we do it with a thief is not adding any value. That's what my interpretation of that is.

But Dr. Kibbe expressed some dichotomy of what we talked about in the morning and what we are saying right now. I don't see it that way, and let me explain why.

The regulatory concern that we were trying to overcome with the blend uniformity data was the limited end product testing for content uniformity. I think the limited end product testing was the motivation behind all of this exercise for the last 10-15 years, that being the 10 tablets that is the basis of releasing a batch, and those 10 tablets may not represent the 20 million tablets that they're coming from. So, that's a fundamental concern. The approach that was used was to say that every unit operation has to be controlled precisely for us to rely on those 10 tablets.

In reality, I think that in one way the 10 tablets is not the true concern. The concern truly is getting a representative sample. The PQRI proposal essentially addresses that in a more formal way by expanding or increasing the number of end product tests. If we had made that proposal from FDA, I think we would probably be in front of Congress explaining why we are increasing the number of tests. Having PQRI makes our job a bit easier.

But to go back to the issue of dichotomy between this and what we talked about with on-line technology, I could make the case in many different ways. The current PQRI proposal still advocates using blend uniformity analysis for development and validation. Right now that means sampling thieves. The MIT data, which we presented on July 19th -- I did not summarize it again -- do you know how long it takes to validate just one unit operation? On average, 20 days to do thief analysis and validation, and the range could be 1 day to 30 days because of the sampling errors that come in. So, going on line, you improve that process efficiency itself. You do it in a day. But that's not the only point.

All the focus has been on one component of the complex mixture. That's the drug. What about magnesium stearate? I showed you an example of what non-homogeneous distribution can do to dissolution. Guess how many tablets we test for dissolution before we release. Six tablets, fewer than for content uniformity.

So, building quality in starts at every step, and I think going to on-line will tremendously, in my personal opinion, improve our understanding of the processes and the quality.

I could easily extend the blend uniformity discussion to say, all right, when you validate, I would like to see dissolution data for those many tablets. 6 tablets may not be sufficient. It's every attribute that comes in. All we have talked about is content uniformity today.

So, in my opinion, there's no dichotomy. Tom said this correctly. This is a band aid right now. It's correcting a problem that we have debated for the last 15 years. It's a band aid. It's not a fundamental solution to the overall problem because as we go to the more complex dosage forms, excipient homogeneity becomes critically important for many controlled-release formulations.

DR. LEE: Thank you.

I think Marvin wants to say something.

DR. MEYER: Naively, because this is not my area, it seems to me -- and I was glad to hear you mention that PQRI has come up with increased end product testing -- that if your real issue is that end product testing is inadequate, going to an even less adequate test to support your end product testing doesn't prove anything. What you ought to do is simply do more end product testing. Dissolution might be more difficult, but for content uniformity, if you took a sample somehow at the beginning, the middle, and the end of a run and had 30 or 50 tablets, with today's modern analytical capability you could run 60 tablets fairly quickly compared to 10. The difference is negligible. And that would be easier than doing blend uniformity, because that's a second step, a second process that's different from the tablets themselves. So, it seems like the solution is to increase what really counts and put less stock, if any, in the blend uniformity.

DR. HUSSAIN: Marv, I just want to make sure I clarify the situation. When I said 10 tablets, the stage 1 USP testing is the 10 tablets. The key question there is representative samples, and I think the proposal of PQRI addresses that. It focuses on collecting a representative sample. All of our GMP guidelines, even USP, state it has to be a representative sample. But in practice we may be missing some of that.

DR. GARCIA: First of all, I want to address the question, are we adding more testing into release of the product. For validation, yes. For routine production, no.

I also want to answer your question from about 10 minutes ago; I didn't get a chance to chime in. When do you stop testing? With the approach we're putting forth, it really doesn't matter, because we have not added to the burden for release testing. All we're doing is pulling those samples in process. You're going to have to do USP content uniformity release testing anyhow. The cGMP requirement is what mandates the blend adequacy-of-mix component. According to our proposal, we're not adding any additional testing. So, given that, it's more or less a moot point. You've got two calculations, possibly, but hopefully that clarifies it.

I just lost my train of thought.

DR. BOEHM: Perhaps I could just also clarify. The Blend Uniformity Working Group and the outcome of the workshop -- people believe that blending operations should produce uniform blends and that situations where nonuniform blends are made uniform by something like a tablet press are inherently dangerous and should be avoided. That's why we advocate doing the blend uniformity testing in validation but then switching. So, we do not favor situations where potentially nonuniform blends produce more uniform dosage units.

DR. GARCIA: I just remembered the third thing I was going to say. By going to stratified sampling of dosage units, we are putting the emphasis of the testing where it gives you the most value and the true read of the uniformity of the product.

The other thing is if you have a uniform blend going onto your compression machine or filling machine and it segregates, you have a lot greater chance of catching that problem, which is just as bad as having a nonuniform blend coming through. You have a greater chance of catching that using the stratified sampling approach. That's another plus of pulling your USP samples in process.

DR. LEE: Kathleen?

DR. LAMBORN: I wondered if you could just go back and walk us through precisely what your proposal is because I've gotten confused. I'm looking at attachment 1 which looks like one of your slides. But that specifically says mix and content uniformity for ANDA and talks about validation. Now you said validation is different from routine batches, but I was having trouble finding the slide that described routine batches. So, could you just sort of take us back through that and also specifically how it has changed from the current?

DR. GARCIA: Do you have the presentation fired up over there? Yes, put it up.

DR. LEE: You're not going to run through that again, are you?


DR. LEE: I can see that when Helen was introducing the meeting, she mentioned this seems like a piece of cake. It took years for PQRI to come to today, and I think that we're witnessing the same phenomenon here. We need to come to closure. But I think Kathleen's question is very important.

DR. LAMBORN: I think we need that in order to address the question that's been asked of the committee.

DR. LEE: That's right.

DR. GARCIA: Attachment 1 is this slide, and this addresses the blend portion of it, the top half of attachment 1. By attachment 1, I'm referring to the actual proposal. The bottom half of it is in this slide. That's over two slides in my presentation.

DR. LAMBORN: But this refers to validation blend.

DR. GARCIA: Right. This is for validation.

DR. LAMBORN: So, both of these are recommended for validation only.

DR. GARCIA: Right.

Then you'll notice we're advocating -- well, first of all, we don't advocate blend sampling for routine manufacture.

The second thing is we are saying you have to have 20 locations here, and you test either 3 per location for stage 1 or a total of 7 per location for stage 2. So, you're looking at 60 or 140. But this is validation. You're supposed to be stressing the product to make sure that the unit operation is producing consistent quality product.

Now, attachment 4 is this slide right here and the recommendation, only not as colorful.

DR. LAMBORN: You're saying it's for routine manufacture.

DR. GARCIA: This is for routine manufacture, right.

DR. LAMBORN: And yet, it says "or validation batches."

DR. GARCIA: No. The first step is for the ANDA or exhibit validation batches, all the data you generated per attachment 1. If you've got an RSD less than 4 percent and all those other things, this is where you determine readily comply versus not readily comply. You're only looking at 10 tablets for stage 1 or 30 for stage 2, versus 60 and 140. Is that clear?
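The sample counts being compared here reduce to simple arithmetic (an editorial sketch; the variable names are ours):

```python
LOCATIONS = 20                       # in-process sampling locations

# Validation (attachment 1): 3 or 7 dosage units per location.
validation_stage1 = LOCATIONS * 3    # 60 units
validation_stage2 = LOCATIONS * 7    # 140 units

# Routine manufacture (attachment 4): USP content-uniformity sizes.
routine_stage1 = 10
routine_stage2 = 30
```

The heavier sampling is deliberately front-loaded into validation, where the process is being stressed, while routine release falls back to the familiar USP sample sizes.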

DR. LEE: Thank you.

DR. BYRN: One thing that Tom and I discussed relates to all this, because on-line validation is going to be completely different from this. So, that's a whole new problem. Maybe Tom wants to expand on this, but I'll try and then you can correct me. It's stated in the proposed guidance that you can use on-line methods, but you'll have to develop your own validation package, because obviously, especially on the previous slide, 20 samples -- one interpretation of that would be 20 sensors, and that's a lot of sensors by any criteria. So, people just need to realize that if we're thinking about on-line validation, it's going to be significantly different from this.

DR. GARCIA: Yes. We state that on-line monitoring is actually the way to go, we feel. But we're not going to tell anybody how to do that. It's up to the firms to figure out how they're going to sample, where they're going to sample, where they're going to put the sensors. And as you said earlier, how are you going to validate that? What are you going to use as your benchmark?

DR. LEE: Nair, are you with us?


DR. LEE: Are you ready to take us through this series of questions?

DR. RODRIGUEZ-HORNEDO: Yes, I can, but realize that due to the connection on my end apparently, there is a delay. So, we can try but it may be difficult to have an ongoing discussion.

DR. DeLUCA: I don't get a delay from my end here, Nair.

DR. RODRIGUEZ-HORNEDO: Okay. Well, let's try.

DR. LEE: Why don't you give it a try?

DR. RODRIGUEZ-HORNEDO: Okay. The questions I believe are the ones that Ajaz mentioned at the beginning. Am I correct?

DR. LEE: That's correct.

DR. RODRIGUEZ-HORNEDO: Which are the issues for discussion. First, is the current PQRI proposal appropriate for inclusion in a planned revised guidance? The first one is, if no, please suggest modifications for improvements that would be necessary prior to any regulatory application.

So, is there any discussion?

DR. BOEHLERT: Can I just ask a question?

DR. LEE: Go ahead.

DR. BOEHLERT: Is it the intent on the revised guidance to put that out as a draft?

DR. HUSSAIN: Yes. Also, just to make a point, sampling in many cases works right now in the sense we have a lot of data which says for many products the thief samples also work well. So, our intention is, as we go forward, you have many choices now. If you have a problem, you have an alternate way of doing that. So, it doesn't mean that everybody has to do it this way.

DR. LAMBORN: Can I ask a point of clarification? For the routine process, if I understand it correctly, the proposal, as you're doing it, is deleting an existing requirement? Because currently there is an existing blend requirement or is there not an existing blend requirement?

DR. BOEHM: For most ANDA applicants, there is an existing requirement that they conduct blend uniformity testing on routine batch manufacture. It would be the view of the Blend Uniformity Working Group that substituting stratified in-process testing would be a better solution.

DR. LAMBORN: Thank you.

DR. BYRN: I think that the committee put a lot of work into it. So, it seems like a reasonable thing to go forth with this proposal. It's out for comment. There would be a comment period and then there would be additional time to review those comments.

DR. BOEHLERT: I think that would also give all of the companies that didn't submit data an opportunity to look at the impact on their product lines.

DR. LEE: So, what I'm hearing is that there is some -- I'm reading the minds of the rest who didn't speak. Is there some consensus on this? So long as this is the draft guidance.

DR. VENITZ: I second.

DR. LEE: Then let's get it out there and stimulate, motivate discussion and learn in the process. So, the answer to the question is yes.

DR. MEYER: Vince, which is quite different than endorsing the proposal or the proposed guidance. Simply get it out there, let's hear what comes in, and then review that. Is that correct?

DR. LEE: So, Dr. Meyer is going to have a friendly revision.

DR. MEYER: No. I don't have any revision right now. I would hate to try, as an untrained person in the space of an hour, to overturn what well-trained people have done over a period of months. But I think it's worthwhile to have it out there because they've obviously put a lot of work into it, and it seems to make sense.

DR. BYRN: And comments will come in and we'll have another meeting, and there will be a public hearing. Right, Helen and Ajaz? There could be.

DR. HUSSAIN: It depends in the sense --

DR. BYRN: There could be. It depends on what they are.


DR. BYRN: And it could end up like dermatopharmacokinetic that continues for a very long time.



DR. BYRN: They're assuring us not, but I'm just saying that's a possibility. So, deliberation could continue for a very long time. It may not, but we're just starting the process. Right? That's our proposal. By answering yes, we're just starting the process. We're going to have plenty of input. We're not going to have a lack of input into this process.

DR. HUSSAIN: No. Don't associate DPK with this. We want to have a different process, a more efficient process, and we are process mapping everything we are doing inside too.

DR. MOYE: One question, if I could. Is there any way that the committee could vote an answer to this question that portrays the reservations the committee has about this process? I'm just not sure how to do that based on the phrasing of the question. If the committee has reservations about the implications of the proposed plan, I'm just not sure how they would express those reservations in the answer to that question.

DR. LEE: I think that Steve Byrn more or less summarized the sentiment. Here's a proposal. Let's put it out there and stimulate input, have another discussion, and go from there.

DR. MOYE: Well, it just seems if we put the proposal out -- again, I'm naive about this. I'm just not sure how putting the proposal out would be separate from endorsing it. That's what my concern is.

DR. KIBBE: You want us to approve it with reservations.

DR. MOYE: Well, I was wondering whether we could vote to approve or not.

DR. LEE: I don't think we have to vote on this.

DR. MOYE: Okay.

DR. LEE: I think we're just expressing our opinion.

Kathleen, you have a point to make?

DR. LAMBORN: I think I was just following up on the same concept which I think is that we are, in a sense, not answering the question as posed. We are simply recognizing the amount of effort that's gone in and encouraging everyone to get this out for public comment and then come back. I think beyond that, all the discussion that's gone on so far gives some sense to the people about some of the questions we have, and I don't know that the committee is even ready to say exactly what their concerns might or might not be. But I think the key thing is not to -- we're not saying yes, the proposal is appropriate. We're saying, yes, this proposal is appropriate to get more input on, and in fact, it's been well formulated in terms of getting the discussion started.

DR. LEE: Therefore, the implication is that it's premature to answer the rest of the questions.

DR. HUSSAIN: That's not a problem at all. The process that we will follow is as follows. The official recommendations from PQRI would come in. We wanted to have this discussion up front so that if there are any reservations or concerns, they are expressed now, so that Tom and others can go back and incorporate and address those reservations. So, as we go through the process of getting the PQRI official recommendations ready, those are already incorporated. So, when these come to FDA as official recommendations of PQRI, we already have the input in that.

What that does is it helps us to move forward quickly. It incorporates the proposal into our draft guidance, which will come out as a draft and go through the process of public comment before it gets final.

So, if answering the question yes or no is difficult, that's not the major concern that I have. I think if there are reservations that are expressed now, they get incorporated. So, it helps the process.

DR. KIBBE: I think my reservations are for those 10 percent of products where batch uniformity is not predictive for tablet or product uniformity. I think we need somehow to stimulate a different testing method for those kinds of batches and in-process testing or something because I'm reluctant to say that as long as it works 90 percent of the time, it's a good tool. Do you understand?

If the current methodology of sampling powder batches with thieves fails 1 out of 10 times, then there ought to be something in the guidelines about those conditions when it's no longer an appropriate sampling method, and the company ought to look for a way of solving that problem on their individual batch somehow. If I had an analytical method that correctly assayed a tablet 9 times out of 10 and the 10th time got it wrong, I don't think anybody would like my analytical method, and that's where I'm struggling.

DR. HUSSAIN: Let me sort of answer that. I actually went through the same deliberation in my mind in the memo. In fact, in the memo there was one more question, which I left out in my presentation, and that was that question.

In looking at an alternate method, we do have an opportunity to incorporate some aspects of on-line technology in the revised guidance. Let me expound on that. Data that has been collected with MIT, Steve, CAMP, using near infrared, as well as laser-induced fluorescence, data that we have seen from Pfizer, data we have seen at AstraZeneca -- there are at least six different sources of good data on how to use on-line process for blending. We do have a sub-working group in PQRI which is supposed to be working on that aspect. We could accelerate and actually get those data submitted so that as the revised draft comes about, we have a suggestion of how to do on-line blending as a part of that guidance itself. So, there is a possibility.

But the reason I pulled that question back was not much progress has been made in PQRI on that front. I didn't want to hold the draft guidance just for that. That was the reason I pulled that question back.

DR. GARCIA: I'd like to just address your point. We had 16 batches with an RSD between 3 and 5 percent. It's like the third slide I put up there in the series. Out of those, 12 of those 16 batches did not have a correlation between the RSD of the blend and that of the dosage form. In other words, the blend RSD was 1 to 2 percent higher than the tablet dosage form RSD was. Then, of course, at the end there were 13 blend RSDs that were greater than 5 percent, and for all of those 13, the dosage forms were 5 percent or less. So, really what you have is a total of 25 batches out of 149 in the data where we do not have a correlation between blend data and dosage form data.

I'll go back to the hypothesis: is blend testing value-added? About 80 percent of the time, yes; 20 percent of the time, no. Given that 20 percent of the time it fails, is that a value-added test? In other words, it is failing because of false negatives, I guess. My answer to that is no. So, with these data I think we have successfully tested that hypothesis.

DR. KIBBE: I'm not arguing that you tested the hypothesis. What I'm saying is if we're going to put out a criterion for a manufacturing process and we have an in-process measure that's supposed to give us an understanding of the quality and it's not predictive 20 percent of the time, then it's not a good measure.

DR. GARCIA: Okay. But our whole proposal is based on that. We are putting the emphasis on dosage form content uniformity and downplaying the effect of blend uniformity. So, I don't see how we're putting out a recommendation that's going to fail 20 percent of the time. Our recommendation is being put out to ensure that you're not going to have false failures 20 percent of the time, and it also will add further confidence that the batch is good 80 percent of the time where you would do blend and dosage content uniformity. So, I think we're really addressing what your concern is.

DR. LEE: So, are you saying that the PQRI has reservations about this?

DR. GARCIA: No, no. Not at all. I'm saying PQRI's proposal is -- we acknowledge that there are problems with blend uniformity testing, and the data mining showed that 20 percent of the time that is occurring. We took that into account in our whole approach by removing the emphasis for passing or failing validation batches, let's say, by looking at both the blend and the dosage uniformity data in conjunction, putting further emphasis on testing the dosage units, because let's face it, that's what's going to the patient, not the blend -- we don't have sampling errors with dosage units. We do have them with blend samples. So, we are putting the emphasis on testing where it belongs, on the actual dosage unit that's going to the patient. That's what I'm saying.

DR. KIBBE: So, you're eliminating an unreliable test by eliminating powder batch uniformity testing.

DR. GARCIA: What we're saying is the best way to assess the adequacy of mix of a blend is indirectly by measuring the dosage units made from it. Does that clarify it?

DR. LEE: I think that is the point that Art was going after.

DR. GARCIA: The best way is an indirect measurement of it.

DR. LEE: Judy?

DR. BOEHLERT: But during process validation, testing of the blend is still in there, is it not? And if indeed you've got all of these sampling problems, you're not going to be able to validate your process.

DR. GARCIA: Go back to the first part of the document, though. It says you should put adequate development into your sampling technique.

DR. BOEHLERT: Where I was leading to is perhaps there should be some language in the document that gives allowance for using alternate technologies when you run into that roadblock because you may not be able to readily find a good alternate sampling technique and maybe going to some on-line kind of test or some other means of testing would be preferable.

DR. GARCIA: We do address that. We say in the end other technologies are acceptable. So, we do address that.

DR. BOEHLERT: We need to emphasize that so it's clear.

DR. GARCIA: In the flow diagram, we say if the blends fail the criteria we had listed there, perform an investigation into why they failed. If it is a sampling bias, then proceed on, whereas at the current state of affairs, you don't proceed on if it is.

Now, we do feel that it is worthwhile to do some sort of blend assessment during validation because we don't want somebody just putting out -- you know, throw it in a bag, shake it up a little bit, and then compress it. We don't want anything poor going through.

DR. BOEHLERT: I absolutely agree. You need to determine how long to mix and all of those good things.

DR. GARCIA: Right, but on a long-term routine monitoring, no, you're better off testing the dosage units.

DR. LEE: I believe that the committee has expressed a consensus, which is that we need to incorporate the findings into a revised guidance, put it out there, and hopefully invite more data submissions. In the meantime, we have a few months to understand the implications of this document, and by the time we meet next, we should perhaps be able to answer these very important questions.

Pat and Nair, do you have anything to add?

DR. DeLUCA: No. I think I agree. Blending is going to be part of the validation. It's a very important aspect I think of understanding the mixing process.

DR. LEE: Okay. Nair?

DR. RODRIGUEZ-HORNEDO: Yes, I agree with what has been said, particularly the fact that during the blending operation, we may need to correlate the factors that affect the blending when buying the materials from a manufacturer. What are the specifications? I think eventually the on-line monitoring is [inaudible].

I think what's confusing us throughout these exercises is the term blending, standard deviation, RSD, and the fact that at the current state of affairs, we really are using very problematic sampling techniques.

DR. LEE: Thank you very much. I'll expect both of you to blend in with the rest of us next time.

DR. DeLUCA: It will be a lot easier.

DR. LEE: Well, I'm glad that we changed the agenda a little bit. In any event, I think that was a very good discussion, and let's move on.

The next agenda item is to receive an update from two subcommittees. The first subcommittee is the Nonclinical Studies Subcommittee, and John Doull is going to present the report. John, I think that you have about 5-10 minutes. Is that right?

DR. DOULL: Well, some of you have in your agenda that Jim MacGregor is going to present this report. He called me on Monday and said he would be at NCTR for some other meetings and couldn't do it. But he told me he was sending me a group of slides. Unfortunately, my e-mail was sick all day Monday and all day Tuesday. But Nancy has put together the slides, and so I think we can do this fairly briefly.

Some of you may recall that in the July meeting of this committee, we had invited Dr. William Kerns and Dr. Gordon Holt to come and tell us what progress was being made with the two working groups that have been appointed under the subcommittee, the Cardiotoxicity Working Group and the first one which we called initially Vasculitis.

After that presentation, there was some discussion, led primarily by Jim MacGregor and Helen Winkle and some members of the committee, concerning the role and the management of the Nonclinical Studies Subcommittee. And today I'm really here to bring you a progress report on what has transpired since our July meeting, and I'm pleased to tell you that both of these committees have been very actively working since that time and also that they have been involved, both of them, in inviting outside groups, industry groups, professional groups, trade associations, and other regulatory groups and so on.

I would point out that that is one of the goals of the subcommittee, to find biomarkers that are useful for preclinical evaluation of adverse effects. The second goal is to look for biomarkers which are also useful clinically. And the third goal is to encourage cooperation between industry, academia, and the regulatory agency.

I should say that the subcommittee met a couple of weeks ago, and what these slides are essentially is a summary report which came from that meeting. This is the one from the Cardiotoxicity Group that Dr. Ken Wallace presented. It's also been reproduced in your handout.

The main activity of the Cardiotoxicity Group is that they organized a symposium. That symposium was held at the American College of Toxicology. It was very well attended and from everything I heard was very well received. And it was on troponins, troponin T, troponin I. Because of that then, the Cardiotoxicity Group is looking forward to doing other symposia and joint workshops with various other groups. They're talking to ILSI and some other groups, as I understand it.

They would like to put together a proposal to use troponins as a measurement, a biomarker for drug-induced cardiotoxicity, and they're in the process of doing that.

Now, they are, of course, looking at other biomarkers for cardiotoxicity, but the initial focus was on troponin. I think Helen described that as available fruit from the tree.

MS. WINKLE: Low-hanging fruit.

DR. DOULL: Low-hanging fruit. That's it.

That's where they are essentially.

I thought there was a list of the members in there, but I guess that's on a transparency. Why don't you go ahead and show that. That's the members of the Cardiotoxicity Group.

The other group then is what we called initially the Vasculitis Group. One of the first things that group did was to change its name to the Vascular Injury Group, and the reason for that is they felt it gave a broader scope to finding biomarkers to detect vascular injury. Vasculitis is one type of vascular injury, but by broadening the scope of looking for biomarkers, they felt they would be able to move along more rapidly.

The difficulty is that we really don't have any good biomarkers for drug-induced vascular injury, and it's going to be a research effort. Therefore, they clearly needed, they felt, more flexibility in this approach.

Why don't we go back then to the Vascular Injury Group.

I've already mentioned the fact that they've changed the name, but because they changed the name, they appointed a sub-working group of the working group to look at the terminology involving vascular injury. Essentially that's what this says there.

They're also going to look at the value of these biomarkers for both clinical and preclinical, and as I indicated before, they want to combine the injury mechanism with the repair and inflammatory responses.

This simply points out the fact that they are looking to cooperate with other agencies. Since this is going to be a research effort, funding is clearly a very important part of it. They are looking at NTP, ILSI, NCTR, and various other places where they might obtain funding.

This defines some of the biomarkers that they're considering. They're going to look at the injury biomarkers, apoptosis, inflammatory biomarkers, the cytokines, CRP, and then what they refer to as a fishing expedition, which is like genomics and proteomics and metabonomics, and so on in which they will try and find biomarkers which in fact are useful.

These are some that the committee thought initially held some promise, and those are the ones that Bill Kerns talked about essentially at our last meeting.

This simply says what I've already said, that they're going to try and identify a source of funds to carry out these studies. They'll put together a research protocol. If they can get that funded then from someplace, they will then go ahead and do those studies. Out of that, hopefully will come some useful biomarkers.

The final slides here have to do with timing. They had hoped to get this protocol put together by February, and if they've identified funding by that time, then they would hope to go ahead and issue this RFI or research protocol. They figure it will probably take a year to get these studies done. That indicates that a year later they would then begin to look and make some recommendations regarding biomarkers for vascular injury.

The final two slides are the members of this committee. You can see that they represent industry, academia, and the regulatory agency.

As I indicated, our committee met a couple weeks ago, and we reviewed briefly what they've done. But we spent most of the time looking at ways in which the subcommittee could support the momentum which had been generated by both of our working groups. We talked about funding. We talked about publication. We talked about the management, the support areas. We talked about data confidentiality. Although we didn't really solve any of those problems -- in fact, we haven't really solved them yet -- we are working on them, and I think many of the answers, hopefully, that come out of our discussion will be useful to other committees.

The issue of funding. We need to be able to identify funds that could be used to support research. If we have additional committees that have to do with imaging or that have to do with genomics or proteomics, funding would also be an issue for them.

The publication issue. Some members of the working groups were concerned that if a committee with the Food and Drug Administration name on it makes a recommendation about troponins, for example, what does that really mean in terms of guidance, and I think we have to deal with that.

The third issue was the data confidentiality issue, and that had to do with the working of the expert working groups. In order to discuss confidential data within those groups, they needed to make some internal arrangements, and I think they've dealt with most of those.

The final issue had to do with reporting. Right now the subcommittee reports to this committee and the working groups report to the subcommittee. Once the link to NCTR is developed, then the question is do we have dual reporting or do we continue to report primarily to this group. I think the members of both working groups felt it's very advantageous to be able to report to this group. We want this group to know if we are close to developing biomarkers which would be useful in cardiotoxicity or vascular injury.

Gloria, you were at our meeting. Did I miss anything? We'll be glad to answer questions.

DR. LEE: We're approaching the time of official adjournment.

Dr. Himmel, you wanted to ask a question?

DR. HIMMEL: No. I thought we were going on to the next agenda item.

DR. LEE: Any questions for John?

(No response.)

DR. LEE: Very interesting. Biomarkers. I think that's something on the horizon.

Dr. Himmel is going to report on behalf of the Drug Safety and Risk Management Subcommittee.

MS. WINKLE: Can I say a few words before Marty talks?

DR. LEE: Sure.

MS. WINKLE: When I showed the slide this morning, I told you that this was a subcommittee that was currently in the making, this Drug Safety and Risk Management Committee. It's still up in the air, and there's a lot of discussion in the agency on where this committee will reside or whether it would be a subcommittee out of this committee. I thought it would be important for you all to hear about this group, though, and what they plan to do, so if the decision is to move it under this committee, we will already know the information and then we can discuss it further if need be. So, I'll turn it over to Marty with that, but I just wanted to give you an idea of where we were, which is nowhere. They've got a good idea, but we haven't made the final decisions on where it will reside.

DR. LEE: Thank you, Helen.

DR. HIMMEL: Well, I think we're a little beyond nowhere, but we're not there yet.

DR. HIMMEL: My name is Marty Himmel, and I'm the Deputy Director for the Office of Postmarketing Drug Risk Assessment. That's also called OPDRA, which I think is easier to say, and which is part of CDER. What we're responsible for in CDER is post-marketing drug safety. So, a lot of our work focuses on the post-marketing arena, looking for signals of safety problems and working with the review divisions to develop risk management plans and risk management approaches around drugs. Medication errors are a big area as well, both in the pre-market, looking at trade names, and in the post-market, looking at safety signals.

Over the past months to maybe the past year or so, we've certainly recognized, and the center recognizes, that there are a lot of complex issues surrounding the types of things that we've been grappling with, both drug-specific types of issues -- how you develop the best type of risk management program, how you deal with some of the risk communication and risk perception issues -- as well as some of the methodologic issues we're grappling with, ranging from what should be appropriate metrics for risk management, what should be appropriate trial designs for anticipating medication errors, a research agenda, and the like.

So, because those issues are quite complex, we've recognized that there is value and there would be need in getting outside expert advice in the context of an advisory committee meeting or an advisory subcommittee meeting where we could have some open public discussion of that.

So, over the past few months, we've been working at trying to craft just such a group. Again, as Helen mentioned, we're not sure where that group is going to reside just yet, but wherever it resides, the issues that it's going to be dealing with and the type of structure that it should have would be the same. So, what we did was try to divide out the areas of expertise that we think we'd particularly want to have on such a committee, which is very new and unique to the center. So, we targeted experts in pharmacoepidemiology, experts in risk communication, risk perception, risk management, medication errors, clinical trial methodologies, evidence-based medicine, clinical pharmacology, and tried to identify leaders in the field in each of those areas so that we would have a couple of representatives that could speak to each of those areas and help us as we grapple with a lot of the issues.

At the present time, we've pretty much identified the roster of people that would be on the subcommittee. The paperwork is still in the works at this point in time, so I can't really go into the names and who is going to be on the subcommittee and so on.

Actually we were very encouraged and very excited about the fact that everybody we spoke to out in the community and out in academia was very enthusiastic and excited about having this type of a committee or a subcommittee that could start to grapple with a lot of these issues in a very open and public way.

Again, we're waiting to get final decisions on where the committee is going to reside. We do anticipate that wherever it does reside, probably towards the beginning of 2002, maybe the end of the winter or early spring, we would anticipate having the group, subcommittee/committee, up and running and dealing with issues.

Again, as I mentioned, I would anticipate that as we use the committee -- and we're going to have to learn how to -- it will probably focus on two general areas: drug-specific areas, where the review divisions in the center have some drug-specific safety issues that they're grappling with, the design of a risk management program, and the like, so that we can combine with some of those disease-specific advisory committees and help them deal with those issues, as well as broader-range methodologic types of issues, which may not be drug-specific but which will give us input on how to devise appropriate methodologies and approaches to some of the safety issues we're grappling with.

So, that's pretty much where we are right now. We're pretty close to having the full complement of people. I think we have a nice, broad range of expertise and are anticipating being up and running in one form or another in the beginning of the next year.

DR. LEE: Thank you very much.

Are there questions for Marty?

(No response.)

DR. LEE: I think you made an excellent, clear presentation.

Well, that concludes today's discussions, deliberations. Are there any other questions that anybody wants to raise before we adjourn?

(No response.)

DR. LEE: Well, if not, tomorrow is going to be another interesting day. We are going to be starting the day with dermatopharmacokinetics in the morning, and in the afternoon individual bioequivalence.

Thank you. The committee members are now invited to stay for the training session.

(Whereupon, at 4:34 p.m., the committee was recessed, to reconvene at 8:30 a.m., Thursday, November 29, 2001.)