
Transcript: Design Considerations for Pivotal Clinical Investigations for Medical Devices

Moderator: Heather Howell
December 13, 2013
12:03 pm CT


Coordinator: Welcome, and thank you for standing by. At this time, all participants are in a listen-only mode. After the presentation, we will conduct a question-and-answer session. At that time, to ask a question, dial star 1 on your touchtone phone.

Today's conference is being recorded. If you have any objections, you may disconnect at this time.

I would now like to introduce your host, Heather Howell. You may begin.

Heather Howell: Hello, and welcome to today's FDA Webinar. I'm Heather Howell, the Deputy Director of the Office of Communication and Education in FDA's Center for Devices and Radiological Health.

Today, Dr. Greg Campbell, the Director of the Division of Biostatistics here at CDRH, will be presenting information to further clarify FDA's final guidance document titled "Design Considerations for Pivotal Clinical Investigations for Medical Devices," which was issued on November 7 of this year.

Following Dr. Campbell's presentation, we will open the call to questions and will do our best to provide answers. Please limit today's questions to general questions about the guidance document. If you have questions about your specific clinical study, contact your review division directly.

If you have further questions about the guidance following the Webinar, contact information will be provided on the final slide of today's presentation. Please submit your questions to either email address or phone number provided there.

And now I give you Dr. Greg Campbell.

Greg Campbell: Thank you, Heather, and welcome, everybody. So today I'm going to talk about the guidance document "Design Considerations for Pivotal Clinical Investigations for Medical Devices," and I want to remind everyone at the start that this is FDA guidance, which represents FDA's current thinking on this topic; people are free to use an alternative approach as long as it satisfies the applicable statutes and regulations.

So I need to advance this. All right. So the document is a 58-page document, and I'll be able to touch on many of the highlights but not in any great detail. What you do need to know is that it really represents the efforts of a number of people within the center, about 30 people in the workgroup, including representation from the Office of Device Evaluation, the Office of In Vitro Diagnostics and Radiological Health, the Office of Compliance, the Office of Surveillance and Biometrics -- where I am -- and the Office of the Center Director, and it also included representatives from CBER, the Center for Biologics Evaluation and Research.

So this is guidance for industry, clinical investigators, institutional review boards, and FDA staff. It was issued in draft form on August 15, 2011. We received 34 comments, took them all very seriously, and tried to incorporate the many good suggestions into the final version, which was issued in November of this year.

Okay. So the guidance should help manufacturers select the appropriate pivotal study design for submission to FDA, and this should result in better trial design and improve the quality of the data that support the demonstration of safety and effectiveness for a device.

And the rationale here is that we believe that better quality data will lead to timelier FDA approval or clearance of premarket submissions and speed US patient access to these new devices. In particular, what we hope is that there will be fewer requests for additional data, for additional analyses, or perhaps even for a new study.

So next slide. This slide reflects the outline. I'll talk a little about the regulatory considerations that are important to consider regarding this topic, then about the principles for the choice of clinical study design, and then about clinical outcome studies, which are one kind of clinical study design. I'll talk about randomized clinical trials, or RCTs, about various kinds of controls including non-randomized controls, about observational studies and one-arm studies, and address in particular objective performance criteria and performance goals, and also say something about diagnostic clinical outcome studies. Then we'll switch gears and talk about diagnostic clinical performance studies.

There's a section in the guidance about sustaining the quality of the clinical studies, another section on the protocol and the statistical analysis plan, and then some closing remarks.

So there are three stages of medical device clinical studies identified in the guidance. What we were trying to do was capture the clinical aspect of device development and divide it into three stages. The first stage is the exploratory stage, which includes early feasibility studies as well as what some people call traditional feasibility studies or pilot studies; it captures the iterative learning that goes on in the exploratory phase.

In that stage, the device may change over time as people learn more and more about the device and how it interacts with the patients and the subjects, and product development continues. This guidance addresses the second stage, the pivotal stage. This is the stage of the definitive study or studies that would be used to support the safety and effectiveness evaluation of a medical device for its intended use.

The third stage, which we will not talk about at all today, is the post-market stage, and that includes studies intended to better understand long-term effectiveness and safety, including rare events. And the bullet at the end is the reminder that it's not necessarily the case that all products need to go through all stages; in particular, if a product is well understood, an exploratory stage might not be needed, although for most novel products that's usually not the case.

The next slide, six, talks about the importance of exploratory studies in device development. Although one could in theory skip or treat lightly this stage, it's really important to understand how the device may perform and what its intended use would be, and a good exploratory stage is often very helpful in selecting an appropriate pivotal study design.

So there are various kinds of devices that the document outlines. The first is therapeutic: a therapeutic device is a device that's intended to treat a specific condition or disease, so that's very patient-oriented. Aesthetic devices are devices that provide a desired change in a subject's appearance through physical modification of the structure of the body. And the third type of device identified in the guidance document is the category of diagnostic devices. The idea is that a diagnostic device provides information that, when used alone or in conjunction with other information, can be used to assess a patient's condition: to diagnose, to monitor, or to screen a particular individual.

And the important thing here is although there are these three kinds of devices, there are also devices that are combinations of two of these. So, for example, there are devices that monitor patients, and so they have a diagnostic component, but when a patient is in a situation where they need a therapy, that device can also deliver the therapy. So it's a combination of both a diagnostic device and a therapeutic one.

So next slide. There's a section in the guidance that sort of underlines the unique features of some medical devices. The first is defined there to be device complexity, but it's really how the device works: in contrast, for example, with many pharmaceutical drug products, where we don't really understand fully the mechanism of action, in many device situations we do understand how the device works. We understand that it has local effects, and that's pretty unique to medical devices.

The second category is user skill level and training. Devices, unlike, for example, pharmaceutical drugs, usually require a particular skill level for the user that might involve training, and that turns out to be important. And the third feature is related to the second: for many devices there is a learning curve associated with the device. In particular, if there's an implant, the surgeon learns over time how to successfully do that implant, and that turns out to be an important component in thinking about how to evaluate these products and how to design studies to make sure that one is paying attention, for example, to something like the learning curve.

And the last feature is the human factors consideration. That's very important because it has implications in terms of how the device is designed, and it sometimes results in the device being refined over the course of development because of human factors considerations that might not have been originally anticipated.

So a little bit about the regulatory framework in terms of valid scientific evidence. I should point out that the focus of this guidance document is on premarket approval applications, and while the principles can and often do apply to 510(k)s, or premarket notifications, and also to de novos, the focus here is primarily on PMAs.

And so the statutory directive is that there be valid scientific evidence to determine whether there's reasonable assurance that the device is safe and effective, and this valid scientific evidence can come from all of the bullets listed there, all five of them: from well-controlled studies to partially controlled studies, objective trials without matched controls, well-documented case histories, and reports of significant human experience.

And so if you just step back for a minute, what you see is that in terms of design considerations for pivotal studies, we're talking about a much broader universe than one would see, for example, in drug regulations in the Center for Drug Evaluation and Research at the US FDA.

The next bullet is a little different in that there is, in 21 CFR 860.7(e)(2), a reminder that valid scientific evidence of effectiveness should consist principally of well-controlled studies. That will have implications for our discussion later on, because if we're interested in demonstrating effectiveness, we should think about using controls, and so we'll have a discussion about the different kinds of controls that do exist.

So there's now an outline of principles for designing a clinical study, and each of these represents a different subsection within the guidance. The first is the types of device studies. The guidance document describes two different kinds: what are called clinical outcome studies and what are called diagnostic clinical performance studies.

And clinical outcome studies encompass every kind of therapeutic device, every kind of aesthetic device, and some diagnostic devices, whereas the clinical performance studies for diagnostic devices are just limited to some diagnostic devices.

So the next major category is a little more statistical: it's bias and variance. On the next slide you'll see a definition of the word bias. Here, bias doesn't mean prejudice. This is the statistical idea, statistical bias: the systematic, nonrandom error in estimating, for example, a treatment effect. Bias turns out to be a big thing to worry about, and we'll talk about it throughout the rest of the presentation.

And in particular, one of the things to keep in mind is that bias is not your friend. You want to try to eliminate it if you can, reduce it if you can't eliminate it, or learn to live with it if you can estimate it and then take it into account. The important thing is that you want to be able to characterize the performance of your device in an accurate manner in terms of safety and effectiveness.

The second major concept is variability or variance. And variance, of course, refers to how much variation there is in a particular estimate, and statisticians then can do a calculation that describes how much variability there is in a particular kind of clinical study design.

One way to reduce the variance is to just have a larger sample size, but it's not the only way. Another way is to pick a design that's more efficient: there are choices of designs, some designs are more efficient than others, and the idea is that for the same number of patients you could pick a design that would provide you more accuracy.

Now, the one thing that's really kind of interesting here is that if you have bias and you can't eliminate it and can't estimate it, then you can have a very large study, but all you're doing is accurately estimating the wrong thing. So it's important, at least for the purposes of this presentation, to think mostly about bias and trying to reduce it.
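
To make that bias-versus-variance point concrete, here is a minimal Python sketch (illustrative only; the effect size, bias, and noise values are made-up numbers, not from the guidance) showing that a larger sample size shrinks the variance of an estimate but leaves a systematic bias untouched.

```python
# Illustrative sketch: why bias, unlike variance, is not reduced by sample size.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 1.0   # hypothetical true treatment effect
bias = 0.3          # hypothetical systematic error, e.g. from lack of blinding

for n in (50, 500, 5000):
    # Each observed outcome = true effect + systematic bias + random noise.
    estimates = [np.mean(true_effect + bias + rng.normal(0, 1, n))
                 for _ in range(2000)]
    print(f"n={n:5d}  mean estimate={np.mean(estimates):.3f}  "
          f"std error={np.std(estimates):.3f}")
# As n grows, the standard error shrinks toward 0, but the mean estimate stays
# near 1.3 -- the study estimates the wrong quantity ever more precisely.
```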

So the next slide describes four other principles in terms of designing a clinical study. The first is the study objective, which of course is the scientific rationale stating why the study is going to be performed, and that needs to be carefully laid out in figuring out what kind of design to choose.

There's the issue of how to select the subjects or the patients in a particular study, and what you want to do is select them in such a way that they reflect the target population, the group of individuals that you hope your device would be appropriate for.

The third is stratification for subject selection. In a multicenter clinical trial, for example, it's usually the case that there is some effort to stratify separately within each of the centers. You could also make an effort to enroll an appropriate number of women in a trial, and so you might stratify by gender or sex. You could stratify by how healthy patients are, or by their age, and all of that, I think, is important when you think about designing a clinical study.
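
As one concrete illustration of the stratification idea, here is a minimal Python sketch (the strata and block size are hypothetical, not from the guidance) of permuted-block randomization carried out separately within each stratum, such as site and sex.

```python
# Illustrative sketch: permuted-block randomization within strata.
import random

random.seed(0)

def blocked_assignments(n, block_size=4):
    """Generate n treatment assignments (A/B) in shuffled balanced blocks."""
    out = []
    while len(out) < n:
        block = ["A", "B"] * (block_size // 2)
        random.shuffle(block)
        out.extend(block)
    return out[:n]

# Keep a separate randomization list per stratum so each stratum stays balanced.
strata = [(site, sex) for site in ("site1", "site2") for sex in ("F", "M")]
lists = {s: blocked_assignments(8) for s in strata}
print(lists[("site1", "F")])
```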

And the last underlined one there is site selection. You want to select sites that in total will be representative of the target population. That doesn't mean every site has to be representative, but you would hope that in combination these sites would be a good representation of the target population.

Okay. So now let's talk about comparative study designs, and there are really three different kinds. One is called a parallel design, another is called paired, and the third is called a crossover.

In a parallel study design, when you have more than one treatment to compare, each subject gets only one of the treatments. So they operate in parallel, if you want. In contrast, in a paired design, every subject gets both treatments if there are only two, or all of the treatments if there are several. And the third type is a crossover. We don't see this as much as our friends in the pharmaceutical world do. This is the design where a subject gets one treatment, then perhaps a washout period, and then another treatment; they eventually get them all, in a crossover fashion. The first two are usually much more relevant for most device studies.

I should hasten to add that for the evaluation of many diagnostic products, in particular in vitro diagnostic products, the paired design is much, much more efficient, and that's because in many cases you have the ability to subject a sample, for example, to more than one diagnostic test, and the advantages of that are enormous. We'll talk about that more later.
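
A small simulation can illustrate why the paired design is so much more efficient when the two test results on the same subject or sample are positively correlated. This is an illustrative Python sketch with made-up parameters, not an analysis from the guidance.

```python
# Illustrative sketch: parallel vs paired design efficiency under correlation.
import numpy as np

rng = np.random.default_rng(1)
n, sigma, rho, true_diff = 100, 1.0, 0.8, 0.2
reps = 5000

parallel, paired = [], []
for _ in range(reps):
    # Parallel: independent subjects in each arm.
    a = rng.normal(0.0, sigma, n)
    b = rng.normal(true_diff, sigma, n)
    parallel.append(b.mean() - a.mean())
    # Paired: both tests applied to the same subjects (correlated results).
    subject = rng.normal(0.0, sigma * np.sqrt(rho), n)   # shared component
    noise = sigma * np.sqrt(1 - rho)
    x = subject + rng.normal(0.0, noise, n)
    y = subject + rng.normal(true_diff, noise, n)
    paired.append((y - x).mean())

print("parallel SE:", np.std(parallel))  # ~ sigma*sqrt(2/n)
print("paired   SE:", np.std(paired))    # ~ sigma*sqrt(2*(1-rho)/n), much smaller
```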

So in terms of clinical outcome studies, the guidance mentions the double-masked, or double-blinded, trial. Actually, I should hasten to add that one of the changes in the final version versus the draft was the use of the word masked or blinded: when we were writing this document, we used them both all the time, and some people thought that too cumbersome.

But it turns out that since the Food, Drug, and Cosmetic Act uses the word blinded to describe studies, we decided to opt for using that although this presentation will use both of those terms.

So the so-called gold standard for clinical outcome studies, certainly in the drug world, is the double-blinded or double-masked, randomized, controlled, multicenter clinical trial. Interestingly enough, although that may be true for therapeutic products, that's not necessarily the case for diagnostic clinical performance studies, where the paired design that I mentioned before is actually a very good design.

What I should also say is that the guidance makes the point that randomized clinical trials are generally preferred for comparative studies because they tend to minimize bias, but the guidance goes on to note that sometimes they're impractical, sometimes they're unethical, and sometimes they're infeasible in certain situations. For example, it may not be possible in a device study to mask or blind anyone to the treatment. It might be very apparent to everybody in the study, from the subjects or the patients to the investigators to the third-party evaluators, exactly who got which treatment.

So what the guidance does do is encourage manufacturers to come and talk to FDA before finalizing a study design, and it's usually a good idea to have a rationale for why that particular proposed study design was selected and not other ones.

So what if a randomized clinical trial is not masked or blinded? In that case, there is a statistical bias that is going to need to be addressed, and it can arise in lots of different situations. In particular, and we'll talk about this in a little while, if the control is no treatment, then people may well know whether they're being treated or not, and that could introduce a bias which may be very difficult to estimate, could be quite large, and can extend over a long period of time, even if the endpoints that you're studying are not soft ones but hard, very objective endpoints.

So when the study is not doubly masked or doubly blinded, there is the worry that there may be a subconscious or unconscious influence on the patient and/or the investigator that could affect the outcome of the study.

So we're now going to switch gears and talk about the different kinds of controls, and remember that valid scientific evidence for effectiveness should come principally from well-controlled studies. So I'll talk about placebo controls, active controls, and no-treatment controls, and then we'll talk about a non-randomized but still concurrent control, which is an observational study, about historical controlled studies, which are also observational, and also about the patient as his or her own control.

Okay. So a reason to worry about the selection of the control, and why one might want to select a placebo control, is something called the placebo effect. The guidance goes out of its way to use the terminology placebo control and not call it a sham control, partly because the word sham has some derogatory meaning whereas placebo control does not. A placebo control is a totally ineffective treatment, and it's almost always administered in a blinded or masked fashion so that the subject doesn't know whether they are receiving a totally ineffective treatment or an investigational, experimental treatment.

Placebo controls for devices are sometimes much more difficult to devise than for drugs, where our colleagues can formulate a pill that looks and smells like the pharmaceutical product but has no effect.

So the placebo effect: this is the response of patients or subjects to the ineffective treatment, which arises because people don't know whether they're receiving an effective treatment or an ineffective one. This effect is well known in the literature, and it's particularly a problem if one is looking at endpoints that measure pain or function. People used to think, "Well, this could only last for a short amount of time."

In fact, it's been demonstrated that it can last for months, and sometimes longer, and there are people who've worried about enhanced placebo effects for devices versus drugs and the notion that the more complicated the procedure is, the more likely it is that the placebo effect might be large.

So as for the sources of the placebo effect, the guidance mentions three. One is that there is an expectation of benefit. If someone is in a study, even if they don't get an effective treatment, they might have the expectation of benefit, and that may result in their performance being better.

There's also the statistical notion of regression to the mean. If subjects are selected who are, for example, very sick, they may improve somewhat even if nothing is done or only an ineffective treatment is given.

And the last bullet here is the attention that subjects in a clinical study get, a showering of attention: people are watching them and making sure they are looked after, so there is always attention focused on them. This is called the Hawthorne effect in education, and it's been shown to have a positive effect even if no effective treatment is being offered.
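
The regression-to-the-mean source mentioned above can be illustrated with a minimal Python sketch (all numbers hypothetical): subjects enrolled because their baseline measurement is extreme look "improved" at follow-up even with no treatment at all.

```python
# Illustrative sketch: regression to the mean with no treatment effect.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
true_severity = rng.normal(50, 10, n)            # stable underlying severity
baseline = true_severity + rng.normal(0, 5, n)   # noisy baseline measurement
followup = true_severity + rng.normal(0, 5, n)   # noisy follow-up, no treatment

sickest = baseline > 70                          # enroll only the "very sick"
print("mean baseline of enrolled :", baseline[sickest].mean())
print("mean follow-up of enrolled:", followup[sickest].mean())
# Follow-up mean is noticeably lower (apparent improvement) purely because
# extreme baseline values were partly measurement noise.
```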

So the second bullet makes the point again that this placebo effect occurs not just for subjective endpoints, for which it's well known, but also for objective endpoints. And the guidance does, in one sentence, encourage the sponsor to consider asking the subject and the physician -- the investigator -- during the study which arm they think the subject is in. Part of the reason for this is that subjects may guess that they're in the experimental arm because they're doing so well, so it's actually a surrogate for what could be a good effect.

So the second kind of control that's discussed in the guidance document is the no-treatment control. This is sometimes called standard of care or best medical management. Usually it's used in an unblinded or unmasked study.

The patients get no experimental treatment, and they know they're not getting anything, so they have no expectation of benefit. If you had one arm with no treatment as a control and another arm with a placebo treatment as control, you would expect the placebo arm possibly to do better because of the placebo effect, whereas the no-treatment arm would have no expectation of benefit.

So in unmasked or unblinded trials, there can be a measurable difference between placebo and no treatment, and because of that, this control usually isn't preferred. It does create a bias compared to, for example, a placebo control.

The guidance then shifts to different kinds of non-randomized controls. I realized just now that I haven't talked about active controls; I should've had a slide on those. An active control, which is discussed in the guidance document, is a comparative treatment that is itself effective, and so one might, for example, do a superiority study for a new device, or merely demonstrate that the device is non-inferior to the active control.

Okay. But let's talk about non-randomized controls. The guidance distinguishes two types: one is a concurrent non-randomized control, and the other is a non-concurrent, or historical, controlled study. For both kinds of non-randomized controls, there's a worry about how comparable the groups are when you don't randomize. The groups may look similar, but you really don't know if they're sufficiently similar, or if there's the same expectation of benefit, and that creates the concern for a bias.

So these studies do create some concerns in terms of bias. The other thing is that randomization is a very useful tool for statisticians. It provides the basis for a lot of comparative statistical inference, and when studies are observational, this comparative statistical inferential machinery is compromised. And so although one can report confidence intervals and p values, they don't usually mean the same thing.

From a scientific standpoint, the document does make the point that the randomized controlled trial, the RCT, offers the strongest form of evidence and the least amount of bias and is generally preferred, but it does acknowledge that there are situations where this is neither feasible nor practical. In those cases, one might, for example, use a non-concurrent historical control or even a concurrent non-randomized control.

So it is possible to do some statistical machinations to try to address this, to look at how comparable the two groups are, the experimental group versus the non-randomized control. There are things called propensity scores, which companies coming to FDA with historical controls or non-randomized concurrent controls use to try to address this.

And what you can do with that is look at all of the observed measures and try to match them in some way between the control group and the investigational arm. What you cannot do, and what randomization does, is balance not just what you observe but also the things you don't observe, and that's why randomization is such an important tool in the design of many clinical studies of medical devices.
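
For illustration, here is a minimal Python sketch of the propensity score idea (the covariates, the assignment model, and the use of scikit-learn's logistic regression are my assumptions, not the guidance's): the scores summarize the observed covariates so treated and control subjects can be compared or matched, but, as noted above, they cannot balance covariates that were never observed.

```python
# Illustrative sketch: propensity scores from logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1000
age = rng.normal(60, 10, n)
severity = rng.normal(0, 1, n)
# Hypothetical non-randomized assignment: older, sicker patients are more
# likely to receive the investigational device.
p_treat = 1 / (1 + np.exp(-(0.03 * (age - 60) + 0.5 * severity)))
treated = rng.random(n) < p_treat

X = np.column_stack([age, severity])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Check overlap: both groups should span a common range of scores before any
# matching or stratification on the propensity score is attempted.
print("treated propensity scores:", np.percentile(ps[treated], [5, 50, 95]))
print("control propensity scores:", np.percentile(ps[~treated], [5, 50, 95]))
```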

An additional concern is the last bullet. Historical controls, unlike concurrent non-randomized controls, have the additional problem of temporal bias: you are measuring patients or evaluating subjects at a different point in time. So the effect could be due just to the fact that you're measuring them at a different time, or it could be due to the fact that the investigational device is really doing better.

Okay. So a study with a historical control is a one-arm study. It's observational: it's not a trial, nor is it an experiment, and there are problems with the statistical inference. I think I probably said most of this already, so I'll go on.

The guidance now talks about one-arm studies and distinguishes two types, and we'll talk about each of these separately. One is the objective performance criterion, or OPC, usually a very well described and publicly available control used to set a criterion for success. We have examples of these OPCs in CDRH, including intraocular lenses and heart valves.

The second type that I'll talk about is performance goals. Performance goals are a different type of criterion from the objective performance criteria, and I'll distinguish them in a couple of slides.

In both cases, it's often the situation that the comparison with the performance goal or the OPC is based on a statistical confidence interval: the criterion could be, for example, that the lower bound of a confidence interval for an effectiveness endpoint exceeds the goal, or that the upper bound of a confidence interval for a safety endpoint falls below it.
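
As a toy example of that kind of criterion (the counts, the 80% goal, and the use of statsmodels are hypothetical, not from the guidance), one might require the lower one-sided 95% confidence bound on a success rate to exceed an effectiveness performance goal:

```python
# Illustrative sketch: one-sided confidence bound vs a performance goal.
from statsmodels.stats.proportion import proportion_confint

successes, n, goal = 175, 200, 0.80
# Clopper-Pearson (exact) interval; alpha=0.10 two-sided gives 95% one-sided bounds.
lower, upper = proportion_confint(successes, n, alpha=0.10, method="beta")
print(f"observed rate = {successes / n:.3f}, lower 95% bound = {lower:.3f}")
print("meets goal" if lower > goal else "does not meet goal")
```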

Okay. So more detail about OPCs. An OPC is a numerical target value, derived usually from historical data from clinical studies and/or registries, that can be used as a comparison for a safety endpoint or an effectiveness endpoint. These turn out to be very difficult to develop. For example, the OPCs for heart valves required a lot of statistical analysis to develop, and over time those OPCs need to be updated. So it's very work intensive.

It is possible to construct an OPC carefully from all available patient-level data on a particular type of device, and this is usually done neither by FDA nor by a particular company.

Performance goals are very similar to OPCs in that they are numerical values, usually point estimates, that are considered sufficient by FDA for use as a comparison for an endpoint, whether a safety endpoint or an effectiveness endpoint. Generally, performance goals are an inferior level of evidence to OPCs because they usually don't rely on large databases and sophisticated statistical analyses of those databases.

The guidance recommends that, like OPCs, performance goals not originate from FDA and not originate from a particular sponsor; it is most helpful when they come from a scientific body or a medical society. The fundamental regulatory question that remains, though, is this: when a device satisfies a performance goal for effectiveness, or an OPC, does that provide evidence that the device is effective? That basic regulatory question needs to be in full view when people think about performance goals.

Okay. So the guidance then shifts to diagnostic devices. Most of the diagnostic devices that FDA regulates are in the Center for Devices and Radiological Health, but some of them are in the Center for Biologics Evaluation and Research. Some of those diagnostic devices are evaluated with clinical outcome studies, which we talked about in the previous few slides.

Here, the difference is that instead of delivering a therapy, or improving one's appearance with an aesthetic device, the intervention is the information that the diagnostic test provides. It's the information from the diagnostic test, saying perhaps that a patient is at risk of a particular disease or has been diagnosed as diseased, and the actions are what happens clinically as a result of that information. For outcome studies, those endpoints are clinical.

So, for example, large screening trials would be clinical outcome studies where you would see if the information provided by the screening test helped, for example, to reduce the number of cancers. It's usually the case that these are unmasked to the investigator; the investigator has to have the information from the diagnostic test. But in many situations there's a third-party evaluator, and that person could remain masked or blinded.

So section eight of the guidance document talks about diagnostic clinical performance studies, and here the objective is to characterize the performance of the diagnostic device based on the test results from the subjects, and to provide evidence to be used in the assessment of the benefit-risk associated with the use of that diagnostic device in its intended population.

So here the idea is that performance measures could be sensitivity and specificity, could be the area under the ROC (receiver operating characteristic) curve, or could be agreement of a new diagnostic device with an old one.
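
Here is a minimal Python sketch of those performance measures computed on simulated data (the disease prevalence, test score distributions, and cutoff are made up for illustration):

```python
# Illustrative sketch: sensitivity, specificity, and ROC AUC on toy data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
disease = rng.random(500) < 0.3                                 # reference standard
score = rng.normal(loc=np.where(disease, 2.0, 0.0), scale=1.0)  # test output
positive = score > 1.0                                          # dichotomized result

sens = np.mean(positive[disease])    # P(test positive | diseased)
spec = np.mean(~positive[~disease])  # P(test negative | not diseased)
auc = roc_auc_score(disease, score)  # area under the ROC curve
print(f"sensitivity={sens:.3f}  specificity={spec:.3f}  AUC={auc:.3f}")
```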

And so in this section, various subjects are addressed: the important focus on what the intended use of the device is, what the study population is, and how to select subjects and collect the specimens; comparison studies using what some people would call a gold standard but the guidance refers to as the clinical reference standard; masking or blinding, and whether that can be accomplished in the different diagnostic performance studies; worries about the skill and behaviors of persons interacting with the device, in the context of something called the total test concept; and the section concludes with a discussion of a number of other kinds of biases that are fairly unique to diagnostic devices.

So in the last couple of sections: section nine addresses how to sustain the quality of clinical studies in terms of handling clinical data, study conduct, how to analyze the studies, and anticipating changes that might occur during the pivotal study.

And the next section addresses the protocol: the scientific rationale for why the study is done and the objective, usually with a statement of hypotheses if that's appropriate, which it is in most therapeutic situations and for aesthetic devices.

The protocol defines the target population, the population of interest, the proposed intended use, and the study endpoints. Also in the protocol, or sometimes developed later, is the statistical analysis plan, which is the topic of the next slide.

So the idea is to provide as much detail as possible about how to analyze the data in the protocol before any data have been collected, and this should identify the analysis for the primary endpoint. Usually there's a sample size calculation, and in order to do that calculation one needs to know how the data are going to be analyzed in terms of the statistical analysis for the primary endpoint. One also needs to plan for what happens if, for example, the protocol assumes a particular statistical procedure can be done but its assumptions turn out not to be met; that would need to be addressed in the statistical analysis plan, as would the plan for what happens if there are missing data.
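
As a toy example of the kind of pre-specified sample size calculation described here (the assumed success rates, alpha, and power are hypothetical, and the statsmodels functions are one way to do it, not the guidance's prescription):

```python
# Illustrative sketch: sample size for a two-arm comparison of proportions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_control, p_device = 0.70, 0.82                 # assumed success rates
h = proportion_effectsize(p_device, p_control)   # Cohen's h effect size
n_per_arm = NormalIndPower().solve_power(effect_size=h, alpha=0.05,
                                         power=0.80, alternative="two-sided")
print(f"approximately {n_per_arm:.0f} subjects per arm")
# The SAP would also pre-specify the analysis method, the checks of its
# assumptions, and the plan for missing data, as described above.
```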

The final bullet here is the point that the final statistical analysis plan needs to be submitted before any outcome data are available. The full version can come after the protocol, but before any outcome data are available or before the data are unblinded or unmasked.

So in closing, what I'd like to say is that I hope you have gained an appreciation for the fact that device studies allow for a very broad range of designs, much broader than one would find, for example, in the world of pharmaceutical drugs. Finally, companies are encouraged to come and meet with FDA in the planning stage, through a pre-submission meeting or an IDE meeting. Come and talk to us before you settle on your proposed pivotal clinical study design.

Thank you very much.

Heather Howell: Okay. That concludes our presentation today, so we will open the call to questions.

Coordinator: If you'd like to ask a question over the phone, dial star then 1, please unmute your phone, and record your first and last name clearly when prompted. Once again, to ask a question over the phone, dial star then 1. Just one moment, please.

And as a reminder, to ask a question over the phone, dial Star then 1. I do have a question queue. Just one moment, please.

And our first question comes from (Will Mar). Your line is open.

(Will Mar): My question is do all endpoints need to be pre-specified in the statistical analysis plan or would CDRH also consider retrospective analysis to determine endpoints?

Heather Howell: I'm sorry, we're going to have to have you repeat your question, please.

(Will Mar): So for purposes of (unintelligible) or promotion, would CDRH consider retrospective analysis data?

Heather Howell: I'm sorry, sir, there's some background noise and we can't actually hear you. It sounds like there's a radio or something in the background. Can you try it again?

(Will Mar): So my question is, for purposes of labeling, does CDRH only consider prospective data, or would it consider retrospective data as well, like a (unintelligible) analysis?

Greg Campbell: So the question is about labeling and whether the labeling could include data that was developed prospectively during the study or data that could have been retrospectively acquired as well.

This is a good question. I think that the guidance document doesn't particularly address the labeling issue. There have certainly been situations where labels have included data that are not just from a single pivotal clinical study, and in some situations there's more than one pivotal clinical study and the label could reflect all of those.

I should hasten to add that this guidance document is not just from CDRH but also from the Center for Biologics Evaluation and Research.

Heather Howell: Thank you for your question. We'll take the next question, please.

Coordinator: Our next question comes from (Anthony Aselo). Your line is open.

(Anthony Aselo): Thank you very much for the presentation, just a very quick question. Does the guidance document provide any insight as to how to calculate a sample size when the condition being detected by the diagnostic device has a very low incidence?

Greg Campbell: So that's a great question and since I'm a statistician, I'm the person that's going to answer that. I do have in the room Dr. (Markam Luke) who will be able to field some of the other questions, but this one is definitely in my court.

So the guidance document is not a statistical guidance document. It doesn't talk about sample size estimation at all, but the question you ask is a very good one because when you're dealing with a rare disease, that creates some challenges in terms of, you know, how to design the studies and the document doesn't talk about enrichment studies, but that's certainly a kind of study that one might consider in terms of how to size the trial.

Heather Howell: Thank you. We'll take our next question, please.

Coordinator: Our next question comes from (Amy Walback). Your line's open.

(Amy Walback): Hi. Thank you very much for the presentation. I had a question regarding the SAP, specifically the last bullet point. Typically in our study protocol we include, you know, an overview of how we're going to do the statistical analysis and that's provided as part of the IDE, and the SAP is a, you know, much more detailed document that's generated later. Is it now a requirement that the SAP be submitted to FDA in advance of analyzing the data or was the bullet meant to say that the SAP needs to be, you know, approved - you know, an established SAP needs to be in place before you do the data analysis?

Greg Campbell: (Amy), that's a great question. What we tried to clarify in the document was that the completed statistical analysis plan did not have to be part of the protocol necessarily, that it could be a draft, but that what I think FDA would like to see though is before the data are analyzed, before the - anyone has any access to any of the outcome data, we would like to see the complete statistical analysis plan. That is usually a different document than the protocol and it often is developed later, but it's important to have that nailed down before outcome results are known. So that's a very good question.

(Amy Walback): Absolutely agree that the - it needs to be established before the outcome results are analyzed. I just wanted to clarify, is it now a requirement that that's submitted to FDA? And if that's the case, is there a requirement to get, you know, approval of that or is the requirement now just that it be submitted before you do the analysis?

Greg Campbell: We would strongly encourage companies to submit that as an amendment to the IDE and the...

(Amy Walback): Okay.

Greg Campbell: ...so that there aren't any questions later on about, "Well," you know, "we want to analyze it one way and someone else wanted to analyze it another way." If everything is agreed upon in advance before any data are revealed, that that's really the best scientific approach.

(Amy Walback): Good. Thank you.

Greg Campbell: You bet.

Coordinator: Our next question...

Heather Howell: Okay. We'll take our next question.

Coordinator: Our next question comes from (Chris Miller). Your line's open.

(Chris Miller): Hi. Thank you very much for the thoughtful presentation. So FDA has recognized that sham controls can have much larger effects than we observe, you know, in pharmaceutical placebo controls. Given that endpoints are designed to be clinically meaningful and relevant, what are FDA's thoughts on the use of superiority margins in the context of randomized sham control trials since one could argue that the efficacy of a sham isn't inherently clinically meaningful? And what are cases when the use of the superiority margin is appropriate versus just testing for a simple (unintelligible)? Thank you.

Greg Campbell: Okay. So I think what you're asking -- and correct me if I'm wrong -- is you're saying if in a situation you're doing a two-arm study where the one arm is the placebo control because we don't want to use the word sham, and the other is the investigational device, one could...

(Chris Miller): That's - yes, that's correct.

Greg Campbell: Yes, so one could talk about a superiority margin and say that you would like the new device to perform at least that much better than the placebo control. In certain situations, there could be a good reason for doing that, in that you might want to have some assurance that it performs quite well, but in others it may not be so clear.

I'm going to ask actually Dr. (Markam Luke) who's here and he's the Deputy Director of the Office of Device Evaluation to address this as well.

(Markam Luke): All right. Good afternoon. Greg, thank you. The question of comparing yourself to a placebo and whether there should be a superiority margin: I would say it's a very situational thing. It depends. Some products may have specific guidance on this, and I would rely specifically on the division that you're talking with for your clinical study, because that margin, or whether or not there is a margin, may vary depending on which area of devices you are discussing.

(Chris Miller): Thank you.

Heather Howell: We'll take our next question.

Coordinator: Our next question comes from (Sonya). Your line's open.

(Sonya): Hi, Dr. Campbell. Thank you for your presentation. My question is about site selection. You mentioned in your presentation that you should select sites to cover the population of interest. By doing that, I may end up with sites that are not comparable. For example, if I select sites in different parts of the US, I may end up with different rates. What is more important, to cover the population or to have the sites comparable so I can pool my data?

Greg Campbell: (Sonya), that's a great question. The way I would answer it is that it would be very difficult to find a lot of sites that each exactly model the target population. A better way to think about it is that you might be quite aware that some sites perform differently than other sites, and as long as together they represent the target population well, that should be the goal.

So for example, you have urban sites versus suburban sites, or academic sites versus sites in the inner city. One would want to have a variety of sites rather than all the same homogeneous sites, even if you could find them.

So the point in the presentation and in the guidance is that, as a group, the sites should be representative of the target population, even though no site by itself may be representative.

(Sonya): Thank you.

Heather Howell: We'll take our next question.

Coordinator: Our next question comes from (Luke Vanhose). Your line's open.

(Luke Vanhose): Thank you for this good overview. My question is about in vitro diagnostic devices and bias. Could you elaborate on what we should put into the submission package for the different biases that could influence an in vitro diagnostic device clinical study, particularly if part of the study includes a retrospective analysis?

Greg Campbell: Right. So that's a really good question, because the diagnostic device arena, and in vitro diagnostics in particular, worries quite a lot about bias. There are many, many different kinds of biases that can arise in a diagnostic study, and one of the biggest that I did not mention is selection bias.

So it matters how you select the samples or the subjects in a diagnostic device evaluation study. In particular, the last part of section eight of the guidance talks about lots of different kinds of biases, which I didn't go into in this short presentation. Selection bias is the bias associated with selecting subjects or samples for the study population.

This is actually a problem in the therapeutic and aesthetic realm as well, but there's also something called spectrum bias, wherein you're not representing the entire spectrum of individuals to whom the diagnostic device pertains.

There's something called verification bias. This is the notion that you may not be able to verify the disease status of everyone in the study; for example, only some people may have the follow-up tests against the reference standard that would verify, say, that a positive test result is a true positive.

There's lead-time bias. There's length, or survival time, bias. There's extrapolation bias. People write articles about this, and there are many, many different types; I don't have time here to go into all of them, but one needs to be aware of these when one's planning a study of an in vitro diagnostic device. And it's usually a good idea to think through those when preparing your application.

So I hope that helps.

(Luke Vanhose): Thank you. Yes.

Heather Howell: Thank you. We'll take our next question, please.

Coordinator: Our next question comes from (Will Mar). Your line's open.

(Will Mar): Hi. My question is if the standards (unintelligible) claims, the data to support that, is it different for the data that CDRH expects for label and claims?

(Markam Luke): So the question is whether the study requirements for a marketing study for a device are different from those for a study designed to support an application for approval or clearance of the device? Is that the question?

(Will Mar): No, it's more towards making claims about the device. So if I want to make a claim about a device in a marketing and promotional piece, are the study requirements, the level of evidence, going to be different for claims that I may make about the device versus the product label?

(Markam Luke): I...

(Will Mar): So if I make, like, a TV commercial and I start making claims about a device, can I use a different level of evidence to substantiate my claims in the TV commercial versus claims I'm expected to put in the directions for use, for example?

(Markam Luke): So again, this sort of thing depends. Are you talking about a 510(k) device versus a PMA device...

(Will Mar): Yes.

(Markam Luke): ...claims or...

(Will Mar): A PMA device.

(Markam Luke): Claims are grounded in the clinical study itself. The content of the evidence derived from that clinical study drives what claims you can make in the labeling, and then the labeling drives your marketing, and our compliance folks look at that. So ideally they are not different with regard to the level of information and the level of certainty with which you're able to make those types of claims.

(Will Mar): Okay. Thank you.

Heather Howell: We'll take our next question, please.

Coordinator: Our next question comes from (Reuben). Your line's open.

(Reuben): Hi. The question is regarding clinical trial sites outside of the United States and what the guidance there is especially around IVD versus medical devices and regulatory pathway 510(k) versus PMA.

Heather Howell: I'm sorry. Can you repeat the question, please?

(Reuben): Certainly. The question is around any guidance surrounding use of clinical trial sites outside of the United States, how that might vary by IVD versus medical device and or regulatory pathway 510(k) versus PMA trial.

(Markam Luke): First of all, I'm going to just say that information sent to us in the context of a PMA should follow the criteria for a PMA; those sites are subject to inspection, etcetera. They're submitted as evidence for a marketing application, and so the informational pieces sometimes will be driven by local jurisdictional regulations, but at the same time they should be of a sufficient standard of quality that FDA can assess them with a degree of confidence.

(Reuben): Okay. Thank you.

Heather Howell: Thank you. We'll take our next question, please.

Coordinator: And as a reminder, to ask a question dial Star and then 1. Our next question comes from (Munish). Your line's open.

(Munish): Yes, hi. Thanks for the guidance and for an excellent presentation. My question relates to interim analyses that may not have been planned but might be necessitated based on extraneous information that becomes available. So a couple of questions.

One question is can interim analyses be done purely for business and planning purposes as long as you can show that the information was constrained to certain individuals?

And second, if you didn't plan an interim analysis, can an amendment be done to the protocol and an interim analysis be added even when you're already well into conducting your trial?

Greg Campbell: Okay. So I think that's a good question, which I'll try to answer. The guidance doesn't really address the notion of interim analysis to any great extent. We are aware that from time to time there might be changes to the pivotal clinical study that might necessitate a change in how the data are analyzed.

One of the reasons people might do an interim analysis is that they're anticipating that they might stop early for effectiveness or stop early for safety, in which case that's usually planned.

The question you asked is whether you could do an interim analysis with no intention of stopping at all. (Markam Luke) is going to address this in a minute, but let me just say, with regard to your second point, that there are certainly situations that arise that might cause a change in when, for example, an interim analysis is done, in which case a company would be encouraged to submit a change to their IDE and let us know about that, and we'll then respond to the company, but Dr. (Luke)...

(Markam Luke): Hi. You raise an excellent question, and this is a question that within our center we've discussed across our offices quite a bit, as well as with our folks in the Center for Biologics. You may be interested in reading through specifically section 9.4 of the final guidance, on anticipating changes to the pivotal study.

I am going to quote from the guidance, which says, "Adaptations that are not preplanned can severely weaken the scientific validity of the pivotal study." We would encourage planning, as you said. And you're asking, what if you don't have that planning? Can you then do some analyses that you had not anticipated and planned for?

I think you can do analyses and provide them to us as sensitivity analyses, and FDA will be happy to look at those in the context that they were not preplanned, as long as you're transparent with how those changes were made and also provide the analyses that were planned, so that we can compare them in a judicious manner.

Does that answer your question?

(Munish): Partially. My question was, you know, as long as you've adjusted for a statistical penalty, even though you didn't plan it, can one do an amendment to the protocol and submit it to you and then get feedback on whether it's okay to do an interim analysis?

(Markam Luke): We do address that in the paragraph after that, about the study design needing to make accommodation for (unintelligible) interim safety prior to allowing any expansion of enrollment, etcetera. Usually there's a reason for your interim analyses...

(Munish): Right.

(Markam Luke): ...and if you have a sense that your study may not be going as well as you'd like and you say, "Oh, let's do an interim analysis to really look at this and see how we can rescue it," well, the intent of this guidance is to make sure you understand that it's important to do those kinds of analyses as planned analyses. It's sort of an insurance policy that you take out: you write into the protocol, "We're going to do an interim analysis at such and such a point," and it can coincide with, say, certain decision points within your company's business model, for example.

(Munish): Thank you.

Heather Howell: Okay. We'll take our next question, please.

Coordinator: Our next question comes from (Jake Bankhead). Your line's open.

(Jake Bankhead): Thank you. Some of our partners weren't able to be here for this presentation, so I just have a general question. Is there any way we can get a copy of the PowerPoint presentation that you just presented?

Heather Howell: Yes, this presentation will be available on the CDRH Learn section of FDA.gov under the Medical Device section and I believe in the announcement that came out for this Webinar there's actually a link to that section, but...

(Jake Bankhead): Okay.

Heather Howell: ...if you don't have that, just go to FDA.gov, click on Medical Devices, and look toward the bottom and you'll see a section called CDRH Learn.

(Jake Bankhead): Okay. Thank you very much.

Heather Howell: Next question, please.

Coordinator: Our next question comes from (Steven Kay). Your line's open.

(Steven Kay): Yes, hi, I very much appreciate your presentation and I have one question. In the setting of a currently cleared 510(k) Class II non-significant-risk device that has a specific indication as a therapeutic device, if we wish to expand that indication and do the clinical trials to prove out the additional indication, is there any guidance as to whether a randomized controlled trial versus a randomized controlled double-blind study would be required to prove out that additional indication?

(Markam Luke): Hi, this is Dr. (Luke). That will again depend on the specific nature of the investigation and the disease that you're studying. Often a specific disease will have specific requirements within a division that may be beyond the use of a 510(k). Having a cleared device does not necessarily mean the new indication can be cleared; it could be a PMA type of indication, in which case there may need to be an IDE investigation to look at how that device performs for that particular indication. So I'm going to refer you to the specific division that regulates your device.

(Steven Kay): I see. So depending upon the nature of what we're doing, that would somewhat dictate whether it would fall into a de novo 510(k) as an example versus a PMA?

(Markam Luke): Correct. Or whether it could be just another 510(k) or an amendment to a 510(k).

(Steven Kay): Or an amended 510(k). Yes, we have not been able to find a predicate device that has this indication, and we're trying to expand from a pain management indication to pain management and reduction in post-operative edema; that's really the only expansion we're looking for.

(Markam Luke): And we have a number of modalities by which you can discuss a specific device or studies with the division and you can request a pre-submission meeting or...

(Steven Kay): Yes.

(Markam Luke): ...send in a document for us to review...

(Steven Kay): Okay.

(Markam Luke): ...and we will respond in an efficient manner.

(Steven Kay): Okay.

(Markam Luke): The other possibility is that you can contact the specific lead reviewer for your product, or a branch chief, and see what the best way is for them to handle this with you, but ideally it's done in a discussion where we have time to talk about the specific devices. This course here is intended to be very general.

(Steven Kay): I understand. Well, thank you for your help. I think approaching the appropriate branch would make the most sense.

Heather Howell: Thank you. Okay. We'll take our next question.

Coordinator: Our next question comes from (Sonya Sosa). Your line's open.

(Sonya Sosa): Yes, if I have a Phase III trial with a primary endpoint and it turns out that that primary endpoint is making it nearly impossible to enroll patients in the study, is it viable to switch the primary endpoint to another endpoint and still continue the trial, knowing that we can convert the results from the first piece of the trial to the new endpoint, or do I have to restart the trial?

Greg Campbell: So, (Sonya), you're referring to a Phase III study, which is of course the drug term. We would call it a pivotal clinical study.

(Sonya Sosa): I'm sorry. I came from the pharmaceutical industry.

Greg Campbell: Oh, that's fine, that's fine. You must understand, then, that the notion of changing an endpoint during the course of a study is usually problematic. It tends to weaken the scientific validity of the study, and we're certainly aware that in some situations it may be very difficult to conduct the study even though it's well planned.

There may be some enrollment problems, and I would encourage you to come and speak to the particular review division and work with them to see if you can find some way to accommodate your needs, given that you would essentially be doing a different study with a different primary endpoint and a different plan for enrolling patients. That's a very good question, and it is one that we are aware companies do face from time to time.

Heather Howell: So, this is Heather. I'm going to do a time check. We have about five minutes left and several callers still in the queue. We're going to take as many questions as we can, and if we do not get to your question by 3:00, please contact one of the mailboxes or the phone number on the slide that is on your screen right now.

So let's go ahead and take the next question right now.

Coordinator: Our next question comes from (Amy Walback). Your line's open.

(Amy Walback): Hi, thank you. I just had a follow-up question to the discussion regarding the interim analysis. What if it isn't the typical interim analysis but more of an administrative analysis for business planning purposes? Say a small company working on a PMA study needs an early look to see if the efficacy is there to drive business decisions: do we need to hire a team of regulatory people to put the PMA together, or do we need to start trial number two, hypothetically?

If it's truly just for administrative purposes and it's not going to be used to change the conduct of the study at all, can you comment on the acceptability of these administrative analyses, and whether that needs to be covered in the SAP or in a protocol amendment?

Greg Campbell: So, that's a very good question and I do understand that in many situations companies would like to be able to examine the data partway through the study in order to perhaps plan the next study or to make some business decisions based on the results. Those kinds of looks can be problematic because they can introduce different kinds of biases.

So, for example, even if the company takes a look and doesn't plan to stop, that information could affect the conduct of the study and could threaten its scientific validity. So I think it's very important that information about an ongoing trial be closely guarded to prevent what is called operational bias, which we didn't talk about here: knowledge of interim results could affect the ability to recruit patients in the future, the ability to retain investigators, and so on.

So I would discourage companies from doing this in many situations because it does create threats to the scientific validity of the study, although I do appreciate why people might want to do it. In some cases a data monitoring committee could be helpful, if the company has set one up. That committee would have access to the results, but it would not release those results to the company during the course of the study; in most cases, the study would continue without that information.
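As a rough illustration of why even a "harmless" unplanned look is statistically risky (the sample sizes and number of looks below are hypothetical, not from the guidance): if a sponsor peeks at accumulating data after every block of patients and would act on any nominally significant result, the chance of a false-positive finding when the device has no effect grows well beyond the nominal 5%.

    import numpy as np

    rng = np.random.default_rng(1)
    n_sims, n_looks, block = 50_000, 5, 40  # 5 looks, 40 subjects per arm per block

    # Simulate a continuous endpoint with no true treatment effect.
    x = rng.standard_normal((n_sims, n_looks, block))  # treatment arm
    y = rng.standard_normal((n_sims, n_looks, block))  # control arm

    false_pos = np.zeros(n_sims, dtype=bool)
    for k in range(1, n_looks + 1):
        n = k * block  # subjects per arm accumulated so far
        diff = x[:, :k].reshape(n_sims, -1).mean(axis=1) - y[:, :k].reshape(n_sims, -1).mean(axis=1)
        z = diff / np.sqrt(2.0 / n)  # unit variance assumed known in each arm
        false_pos |= np.abs(z) >= 1.96  # nominal two-sided 5% test at every look

    print("false-positive rate with 5 unadjusted looks:", false_pos.mean())
    # Comes out near 14%, almost triple the intended 5% of a single final analysis.

Pre-specifying the looks and widening the boundaries, as in the earlier sketch, is what keeps that rate at the planned level; an undocumented administrative peek offers no such protection.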

Heather Howell: Okay. Thank you. I'm afraid we only have time for one more call. If you are on the line and you have a question, please contact those numbers and we will get back to you promptly. But we'll take one more question.

Coordinator: Our final question comes from (Rachel). Your line's open.

(Rachel): I was just wondering, in general, is the level of evidence needed to obtain clearance of a 510(k) device different from what would be required for approval of a PMA device?

Greg Campbell: So, that's a good question. I should remind everyone on the line that this is a guidance document targeted toward premarket approval applications, not toward premarket notifications, or 510(k)s. The regulatory standard for 510(k)s is different and addresses something called substantial equivalence. In some cases the same principles can certainly apply to 510(k) studies that are designed to provide evidence of substantial equivalence, but in other cases there are other ways of addressing that.

So maybe (Markam) - Dr. (Luke) (unintelligible).

(Markam Luke): Okay. I think the information needs will vary even within PMAs or 510(k)s. Some PMAs will require an amount of information that could potentially be less than some 510(k)s. But in general, because of the regulatory paradigm for 510(k)s, where you're looking at substantial equivalence, the information submitted in a 510(k) application is different from that in a PMA.

I'm not going to get into the level of evidence because that will vary depending on the specific device.

(Rachel): Okay. Thank you.

Heather Howell: Thank you. Okay. This is Heather, and this does conclude our Webinar for today. I want to thank Dr. Campbell and Dr. (Luke) for being available for questions, and I do want to remind you to please contact those mailboxes with your questions; we will get back to you today.

This presentation, the audio recording of today's session, and a written transcript will be available on the site I mentioned earlier: on FDA.gov, in the Medical Devices section, under CDRH Learn. If you have any problems finding it, you can also contact the CDRH questions mailbox and we'll help you there as well.

So thank you very much.

Coordinator: And this concludes today's conference. Thank you for participating. You may disconnect at this time.


END