UNITED STATES OF AMERICA
DEPARTMENT OF HEALTH AND HUMAN SERVICES
FOOD AND DRUG ADMINISTRATION
+ + +
CENTER FOR DEVICES AND RADIOLOGICAL HEALTH
+ + +
USING SCIENTIFIC RESEARCH DATA TO
SUPPORT PEDIATRIC MEDICAL DEVICE CLAIMS
+ + +
December 5, 2011
FDA White Oak Campus
10903 New Hampshire Avenue
The Great Room (Room 1503)
White Oak Conference Center, Building 31
Silver Spring, Maryland 20993
MODERATOR: SUSAN K. CUMMINS, M.D., M.P.H.
Chief Pediatric Medical Officer, CDRH
MARY BETH RITCHEY, Ph.D.
Associate Director for Postmarket Surveillance Studies
Division of Epidemiology, Office of Surveillance and Biometrics, CDRH
JOY SAMUELS-REID, M.D., FAAP
Office of Device Evaluation, CDRH
MARKHAM C. LUKE, M.D., Ph.D.
Deputy Director, Office of Device Evaluation, CDRH
PATRICIA BEASTON, M.D., Ph.D.
Medical Officer, CDRH
LAURA A. THOMPSON, Ph.D.
Division of Biostatistics, Office of Surveillance and Biometrics, CDRH
LAURA ADAM, CDRH
BARBARA BUCH, M.D., CBER
JANA DELFINO, Ph.D., CDRH
NADA HANAFI, CDRH
ANGELA JAMES, RN, CDRH
TESSA LEBINGER, M.D., CDRH
CRYSTAL LEWIS, RN, CDRH
STEVEN OSBORNE, M.D., CDRH
SUZANNE RICH, RN, CDRH MedSun
HILDA SCHAREN, M.S., CDRH
JANESIA SIMMONS, M.P.H., CDRH
JOAN TODD, RN, CDRH
VICTORIA WAGMAN, CDRH
WELCOME: STRUCTURE OF THE DAY AND LOGISTICS -
Susan K. Cummins, M.D., M.P.H.
OVERVIEW ON THE USE OF EXTRAPOLATED DATA TO ESTABLISH PEDIATRIC DEVICE EFFECTIVENESS - Susan K. Cummins, M.D., M.P.H.
TOPIC 1: DEFINING THE USEFUL RESEARCH DATA LANDSCAPE: AVAILABILITY OF DATA SOURCES
Mary Beth Ritchey, Ph.D.
BREAKOUT SESSION QUESTION #1
BREAKOUT SESSION #1 REPORT BACK
TOPIC 2: DEFINING THE SCIENTIFIC AND REGULATORY CHALLENGES AND LIMITATIONS WITH USE OF EXISTING RESEARCH DATA AND PUBLISHED LITERATURE
Joy Samuels-Reid, M.D., FAAP
Markham C. Luke, M.D., Ph.D.
BREAKOUT SESSION QUESTION #2
BREAKOUT SESSION #2 REPORT BACK
TOPIC 3: WHAT ARE THE SCIENTIFIC AND REGULATORY CHALLENGES WHEN USING THESE DATA TO EXTRAPOLATE OR ESTABLISH PEDIATRIC EFFECTIVENESS FOR VARIOUS MEDICAL DEVICES
Patricia Beaston, M.D., Ph.D.
Laura A. Thompson, Ph.D.
BREAKOUT SESSION QUESTION #3
BREAKOUT SESSION #3 REPORT BACK
M E E T I N G
I'm Susan Cummins. I'm the Chief Pediatric Medical Officer for the Center for Devices and Radiological Health, and I want to welcome you to our public workshop entitled Using Scientific Research Data to Support Pediatric Medical Device Claims.
As a friend of mine once said, please take all those fruity things you have in your pockets and turn them off or turn them down to silent so that we aren't interrupted. If you need to use your cell phone and you need to talk to someone, please go outside. Thank you.
This is our disclaimer. It's a long disclaimer. I'm not going to read it all to you. Aren't you glad I'm not going to do that? It just says that nothing said here today implies that we endorse any comments that are made. Indeed, what we're going to do today is a lot of brainstorming, so don't be surprised if conflicting ideas, or ideas that don't seem to agree with each other, come up in the process of this planning effort, and that's okay. We actually look forward to that and want to see lots of ideas come out, including ones that may not agree with each other at first glance.
So, first, a little bit about what we're doing and I want to acknowledge that we have two audiences. We have those of you here in the room, and we also have an audience on the web that are watching our presentation by webcast.
Much of the work that we do today is going to be in small, breakout groups at the tables you see in the back of the room. Because the work is done in breakout groups, it's not really possible to telecast that in a way that people can understand what's going on. So our web audience will be able to see our plenary sessions, but will not really be able to participate in the roundtables.
I want to say that we have opened a docket and the web address for the docket is here. We really encourage comment, particularly from those who can't be here in the room, but also from those of you who may likely leave and wake up the next day or next week and have another insight about the work that we've done here. We would like to hear from you about that. So please feel free to share it with us through the docket. We'll have this docket open until January 5th, when it closes.
This workshop is established and designed to enable an ample opportunity for public comment and engagement in a way that allows for interactive dialogue among those of you in the room to problem solve. We're problem solving to support the process of establishing device claims for pediatrics.
If you look at the agenda, each breakout session is preceded by one or two introductory plenary talks, and those will be given by FDA employees. Then we'll go into roundtable discussions that will be facilitated by FDA staff. Then we'll have a series of report-backs from those breakout sessions to the larger group.
Each of the breakout sessions will follow this process: The context for the session is framed by a single focus question, so that's the question that you'll use and think about and contemplate and discuss at your breakout. The purpose of giving you that focus question is to help center your thinking and brainstorming efforts to identify ideas to address that focus question. Through your work with your facilitator, you'll organize those ideas into clusters and name those clusters and then you'll sit back and reflect on your results. That process of brainstorming, organizing, and naming provides a lot of useful information about where we need to go in this effort within the Agency.
This is what the product will look like at the end of each of your breakouts. You'll be asked in your breakout session, first, to think about the question at hand yourself and then to break into smaller groups at the table to discuss your ideas together. We'll then ask you to write those ideas down on Post-its, using a small number of words, three to five, written big so other people at your table can read them. Then we'll go in a series of round robins around the table to get those ideas out on flip charts. Then we're going to look at them and cluster them. What you'll see is that there will be thematic areas within those clusters. So lots of ideas, some of them conflicting, but they'll start to fall into logical groupings. Together, you'll organize those groupings and name them.
This is just an example I did to give you an idea. You'll see that each of those little white blocks represents an idea. Some of them are overlapping because they're similar, and you'll see that the clusters have each been named. These are just titles I made up to match the color of the circles I put on here. You won't have colors for your circles up there. So you'll organize them in these groups, sit back and look at them, and give them a name. Play with the name, but make the name meaningful. Have your name address the concepts that you're trying to bring together in the group.
Now, why is this a valuable way to go? It's useful because you'll see that there's not equal weight to each of these clusters. That's usually the case when we do this kind of facilitation process and kind of planning process. Usually there's one, maybe two themes that come out and are really dominant. That's very important information because it helps us understand where the energy is, where the biggest problems are, where the investment of time and work is most worthwhile. So that's why we're going to go through this step-wise process, and that's the learning process you'll get from clustering these ideas and naming them.
So here, we've named the first, biggest cluster "Let's Innovate." Innovation is a word we hear every day at CDRH; greening the future, solving with solar. And then you'll see there are a couple of little outliers here. Outliers are ideas that don't seem to fit, and that's important information too. We just capture those on the side, kind of like a parking lot.
I'll also ask you that when we're done, that you move quickly to your small group session. I encourage you not to sit with your friends, to sit next to someone you don't know, to build collegial relationships and to share your knowledge with people who may not be as familiar with it as people who you've worked with before.
So now I just want to give you a brief overview on why we're here today, why we're talking about extrapolating data to establish pediatric device effectiveness claims. This is just my title slide, so we'll move on. I want to first say that FDA is committed to achieving on-label use of medical devices in children, just as we're committed to achieving on-label use of all medical products that are used in children. We've had a very long effort in the Agency working on labeling drug products for use in children. We're committed to achieving the same robust labeling for medical devices as well.
We have the authority to do extrapolation of effectiveness that was established through the Pediatric Medical Device Safety and Improvement Act of 2007; the PMDSIA is its acronym. The act explicitly gave us the authority to extrapolate effectiveness if the course of the disease or the condition or a similar effect of the device on adults may allow us to make a determination of effectiveness. So, if the disease course or the effect of the device in adults is likely to be the same in children, then we can extrapolate effectiveness from adults to children. We can also extrapolate across pediatric subpopulations.
Now, a couple of comments about the statute. The statute allows us to extrapolate effectiveness. It does not allow us to extrapolate safety. That's been the general practice on the drug side: safety data have always been collected, while effectiveness has frequently been partly or fully extrapolated, with only limited studies done to establish effectiveness. We're using a similar model here for devices: we'll always collect safety data.
This statute is also limited to approved devices, which means PMA devices and HDE devices. We're not talking here today about the 510(k) arena, which is a huge discussion in and of itself, but extrapolating effectiveness was not authorized for 510(k)s.
The other component I wanted to mention is on the safety side, the Act also gives us the authority to conduct longer safety studies post-approval. This is the language that establishes that authority. I'm not going to read it to you, but we can think about using that authority to conduct longer post-approval studies for safety.
So why extrapolate effectiveness? It's important because it helps foster pediatric labeling and establish on-label use of approved medical devices. As I said, we want to foster on-label use of medical devices. Why? Every time a device is used off-label, it's an experiment of one, one child from whom we usually get absolutely no data. With on-label use, and moving towards on-label use, we get good and useful information about how the device has performed, what the problems may be, and how effective it's been, and it allows us to move forward in a systematic way so we can better use devices in kids and know how well they're performing and what safety issues can arise.
The other reason for extrapolating effectiveness, and this is a really important one, is that it is an efficient approach which allows us to make use of all available data, data that may not be a perfect fit, but it means that we can minimize the experimental studies done in kids to only those studies which are necessary to establish a claim.
I want to just give a nod to our colleagues in CDER. CDER just published a review of their extrapolation efforts for medical claims for drug products in kids. It was published in the November 2011 issue of Pediatrics, and this is a wonderful study. If you've not read it, I encourage you to read it. They showed that extrapolation of adult effectiveness was used for 82.5% of all studied drug products, for 137 of 166 products for which there was a written request issued. There was complete extrapolation in nearly 15% of studied products, partial extrapolation in 68% of studied products, and for 61% overall, that extrapolation led to a new pediatric indication in labeling or an extension of an indication to a new pediatric age group. It's a concrete example of how CDER was able to take available data and use it and fill in holes to establish effectiveness claims for children.
Our next steps today, Mary Beth Ritchey is going to introduce a discussion of defining the useful data landscape and we'll then address afterwards, after our first breakout, the scientific and regulatory challenges, and then we're also going to have a discussion at the end about modeling and other approaches we can use to fill in data gaps.
So I want to thank you all for being here, and I look forward to a very productive and fun day together.
In thinking about this, we know that there's a need for data. We know that we know some sources that are available for this data. We know that when we combine some of those sources, we can create new data sources, and we also know that there are some things that we don't know that we don't know, and so there's unknown data sources. In our guided discussion this morning, we would really like to think about all of these things.
As Dr. Cummins said, our goal right now is to minimize the need for new studies, and with that goal, we want to move toward on-label use for pediatrics. So here we want to extrapolate the data we have for adults to pediatrics, and we want to do this for effectiveness because, for safety, we can look later in the post-approval period for additional data.
So our data source inventory is particularly important. We want to capture all of the data that's available, and our goal today is not to just have this list of this is the source, this is the contact, this is the source, this is the contact, but to really think through, what are the strengths of the data that's available and what are the limitations of these data sources.
So there's a lot of data right now. We have data at FDA. There's data available to companies. There's data that other governmental agencies hold. There are also professional societies which have data, and insurance providers. There's published literature and many, many other things.
At FDA, we have premarket data, which is non-clinical, as well as clinical. We have post-market data, which often includes both clinical and non-clinical as well. We also have collaborative efforts both with companies and other entities. Companies hold similar premarket and post-market data. Then, with corrective and preventative action plans and their own post-marketing surveillance, companies also have additional information about both on-label and off-label uses of their devices. And companies work worldwide. They don't just talk with FDA. They talk with other entities throughout the world, and so they have a robust experience of data.
One example would be the IMPACT registry, IMproving Pediatric and Adult Congenital Treatment. A post-market study is embedded into it, and it's a registry that's held by the American College of Cardiology. The goal here is to capture prevalence, demographics, management, and outcomes of catheter-based interventions. So that's one type of data source that's held through one of these collaborative efforts.
There's also data that's available through government agencies. The CDC does many surveys, and NIH sponsors many trials and cohorts. The Agency for Healthcare Research and Quality also has some sponsored studies, including its Healthcare Cost and Utilization Project. This is a series of healthcare databases, one of which is the Kids' Inpatient Database, a database of hospital inpatient stays for pediatrics.
This particular database is used by AHRQ specifically for analyzing the cost of hospital services, but here, for extrapolation, we might be interested in the fact that it holds a lot of information about medical treatment and effectiveness, as well as quality of care and other access-to-care issues. So we may be able to start there and move forward for extrapolation.
Then professional societies hold registries. In addition to the IMPACT registry, societies hold other registries that may just be getting started but might have some information about a procedure onto which the device is added, something that we could leverage. And professional societies also hold a wealth of clinical expertise that we could potentially tap into.
Other data that's available might be from insurers. There may be administrative data or claims data. In many areas of devices, there's RCTs and cohorts that are published, as well as some off-label use for pediatrics that may be captured in literature via a real-world practice scenario.
An example here could be the General Practice Research Database, an electronic health record out of the U.K. that holds information from 660 practices and about 5 million people right now. This includes adults and pediatrics, and it captures all of the devices as they're used within primary care. So we may be able to tap into data like this from outside the United States as well.
Then there's also device-specific facility data. This might be something like ventilation, where the ventilators are captured within the hospital data, and we might be able to use single-center data to start our process there.
We also know that there's the possibility of linking data sources. We may have device-specific data with minimal outcome information from a registry that might be linked to Medicare or Medicaid data, or other claims data with longer-term outcome information, and we may be able to leverage that type of combined data source. There may also be device areas where there's been a cross-design synthesis, where lots of different types of data have already been combined to evaluate the expectation of outcomes, and if that has happened in any particular area, we would like to know about that as well.
So our unknown data sources can exist on two levels: there are types of data sources that I may not have mentioned and that we may not have thought of, and there are also data sources that are specific to a device area. We might know in general that something exists, but not that it exists for a specific device area. So as you walk through, in the next little while, the data sources that are available, thinking about their strengths and limitations, please consider what's presented within each data source, what may be missing from it, and how all of the available data sources may be useful for specific types of devices.
So here's our discussion question: How might the available research data resources be used for extrapolation of effectiveness for pediatric use of devices? So if you would, we'd like you to move back to the tables, and in two minutes we're going to begin our discussion.
(Off the record.)
(On the record.)
DR. CUMMINS: Could all our moderators please come up to the front?
DR. CUMMINS: Well, thank you all for your time this morning. It sounded like you were having pretty good conversations as the din raised as time passed and you started to get into the work, so we really appreciate that.
What we're going to do now is our moderators are just going to walk down and talk about the thematic areas that came up, not the details, but the thematic areas that came up in your discussion, and we'll just go quickly down the line and then we'll move on to the next segment.
Just so you know what's going to happen, at the end of the day, after we have all these little Post-it notes and all these comments and all these chunks of work, that will be put together in a detailed summary and we hope to post that summary very quickly within the next week or two so that there will be ample opportunity to review and comment on it. Because we would like your comments after it's posted, and I'm sure you will leave with thoughts and ideas that didn't get brought up today and we want to hear those from you.
So why don't we just start down at the end of the row? Angie?
MS. JAMES: So we basically came up with two categories: what data we have and how to use the data. We looked at the data to see if it is representative enough to allow us to extrapolate the adult effectiveness to the pediatric subset, by looking at the current data that we have, and then how we are going to use it. So that's a summary of ours.
MS. TODD: Well, in our breakout session, we had a total of five people: one from academia, one from industry, and three FDA staff. Our ideas fell into three categories. One was collection and consolidation of data and the importance of timely consolidation of raw data. The next category was to improve access to available data. And the last one was a new idea for data generation, which was computer modeling and simulation.
MS. SCHAREN: We had three categories, actually. The first category was incentives, and one of the things we came up with was a safe harbor for reporting and capturing pediatric use. The second category was data sources; one example would be that insurance company payer data may be useful. The last category was barriers; one of the things that came up was the sample size of pediatric studies. So those are our three broad categories.
DR. CUMMINS: Can I ask each of you to introduce yourselves so that we have it for the record? I'm sorry. I should have done that in the beginning.
MS. SCHAREN: I'm Hilda Scharen, and I'm with the CDRH.
MS. JAMES: Hi. I'm Angela James. I'm with CDRH as well.
MS. TODD: I'm Joan Todd, second speaker, CDRH.
DR. CUMMINS: Thank you.
MS. ADAM: Good morning. I'm Laura Adam. I'm with CDRH. Our group came up with about five general topics. The largest was identifying data sources, and, of course, we talked about published literature. We talked a little bit about looking at data sources for drugs where devices were already used, like to deliver the drugs, and about international data, since things might be used differently in other countries and other settings. We also had an idea to compare data from general hospitals that treat children versus pediatric hospitals.
The other big topic idea was kind of the utility of the data: was it valid; how is it formatted; is there a degree of bias? Some ideas for that were, are the device types current? There might be data available, but the device has now changed, and so the data was collected with a device that isn't current.
We also talked about the interplay of safety versus effectiveness and that sometimes you could look at safety signals and that might inform some of your ideas on effectiveness, so that looking at data from like post-marketing safety recalls, MDRs, or HHEs, health hazard evaluations, that might help you get some information.
There were also premarket issues: what's the market size, and how to get companies interested in helping with this if it's not a huge market; and unmet pediatric needs, looking for patterns to help develop registries. Then we also talked about the age of the subpopulations, the idea of age versus size, long-term device use versus growth, and the ease of use of data for certain subpopulations, like adolescents or neonates. That's all of them.
MS. SIMMONS: Hi. My name is Janesia Simmons. I'm also from CDRH. My group came up with three different categories for all of our ideas. The first category is existing data sources. The second category is creating new collaborations; and the third category is data quality concerns.
MS. WAGMAN: Hi. I'm Victoria Wagman. I'm with CDRH. We came up with about five different areas. The largest is to have collaboratives for gathering the data, having organizations, particularly specialist organizations, help with gathering the data, and making sure that there is input from the specialists on the literature and the data.
Then, looking to the future, making sure that what NIH does is done collaboratively, with FDA's end product in mind. And then make sure that the federal funding within NIH is better coordinated; the funding for pediatrics is all over the place, so where they have funding internally, it should be better coordinated through all their centers.
Then, finally, to look at the model of Medicare, how they approve devices in terms of getting coding, how we can sort of mirror some of that in terms of financial incentives for pediatrics.
DR. BUCH: Hi. I'm Barbara Buch. I'm from CBER. Our general categories were understanding the data collection and the sources available, and developing an infrastructure for standard use; in addition, being able to use those things for analysis. There were a couple of examples, including modeling, standardization of data sources and reporting, and understanding how databases can come together for information purposes.
Then, on a completely different side, we talked about good PR, about possible pathways for data collection and towards regulatory understanding, as well as possibilities that are available that may not be well known or conventional.
The other thing we talked about was, in general, endpoint development and understanding how that's different for pediatrics, considering how biomarkers may be useful, looking at the natural history of pediatric diseases, and informing how we should look at post-market studies in that way.
The last one we talked about -- well, we talked about different modes of analysis, as well as some regulatory identification of priorities based on pediatric disease processes and how we can inform through regulation and provide guidelines that everybody could understand.
DR. LEBINGER: Okay. Thank you. Tessa Lebinger from CDRH. Our group focused mainly on potential sources of information. I thought one of the most innovative suggestions was using social media and parents' groups to find out when devices have been used off-label and to take that information from the recipients of the device. There were a lot of suggestions about using databases and registries, including looking for interesting funded programs; for example, states will have certified congenital heart disease centers that collect data, and different states have different programs and collect different data.
We also had a lot of suggestions about using information from FDA advisory panels and information from SSEDs, the summaries of safety and effectiveness data on approved devices. And we had a lot of suggestions to try to do a kind of meta-analysis, combining data from similar devices, even if they aren't the same, to get an idea of how they're working.
We also talked a little bit about process; we're going to talk about it more at the next session. For instance, take a device approved to do something in one setting -- we had a large cardiovascular representation -- so if it can open a vessel for one indication, can it open another vessel for a different, pediatric indication? What can you take from what it's done in adults in a different setting to extrapolate?
Also, we talked about getting the European data and animal data also.
DR. DELFINO: I'm Jana Delfino, also from CDRH. Our group also talked a lot about sources of data. We thought there might be a role for data sharing, either between FDA and industry or within industry. We talked a little bit about whether that should be blinded or un-blinded or both, but since the numbers for pediatrics are so small, we thought there might be some benefit in finding some way to pool things. Then we talked a lot about, now that you have this data, regardless of the source you got it from, how do you evaluate it, and can we come up with some standardized ways, or can professional organizations or others help us wade through the data we've gotten from all these various sources.
DR. OSBORNE: I'm Steve Osborne from CDRH. Our group had similar themes to what you've heard already. We had four categories. The largest category was usability of data, by far, and that was followed by relevance or factors, and then standardization or validation of data, and limitations.
We had two themes that, perhaps, have been mentioned somewhat, but that our group was interested in. One was using global data, not just U.S. data, to help extrapolate for effectiveness; and the second idea was, of the data that is currently available, if it could be stratified for age, perhaps pre-specified, and the age group, the young adults, those who are closest to the pediatric age group, perhaps effectiveness from that age group would be more easily extrapolated to pediatrics than the overall database including older individuals. Thank you.
DR. CUMMINS: Any questions or comments from the audience? All right.
DR. CUMMINS: Well, we're not doing lunch yet -- okay, here we go. We're going to do Plenary Session Topic 2, which, we're early. We're ahead of time. That's a great thing. I always love to try and work through issues early.
What we're going to do now is talk about defining the scientific and regulatory challenges, and we will have two speakers. First will be Joy Samuels-Reid, who is a pediatrician with the Center for Devices and Radiological Health. She's our Senior Pediatric Medical Officer in the Office of Device Evaluation. Our second speaker will be Markham Luke, who's the Deputy Director for that office.
DR. SAMUELS-REID: Thank you, Dr. Cummins, and thanks for the introduction.
So let's look at the pediatric device landscape as it exists right now. We have devices that are used in both adults and children, devices such as syringes and otoscopes, and then there are those devices that need special sizing, as you've heard throughout the breakout sessions, devices such as heart valves, spinal rods, cochlear implants. Then, of course, there are the devices that are indicated for pediatric use only, devices such as phototherapy units, hydrocephalus shunts, and newborn screening diagnostic tests.
So what are some of the device considerations? Well, as we've heard, devices are varied. They may be simple, such as a tongue depressor, or complex, such as a ventilator. Then there are those devices that fall under the category of combination products. So a device that is paired with a biologic is considered a combination product and, similarly, a device that is coupled with a drug may be considered a combination product as well. Such an example would be an antimicrobial-coated catheter.
Then, of course, there are the implantable devices that we've talked about. These fall into the category of invasive, as opposed to noninvasive. Then there are devices that are only used once, single-use devices, or they may be reusable. So as you go into the sessions that follow and deliberate, consider whether a device is used short-term or long-term and what those exposures are. And we must not forget the biocompatibility issues, the environment of use and, most importantly, as you will see, the user-device interface. How does the device interface with the child and vice versa?
Let's look at some pediatric-specific device use issues. We often say size matters, but it may not always matter; it may not always be the arbiter of what's appropriate for a particular pediatric subpopulation. So we have to ask: does one size fit all, or most? Will the function of the device be different? Do we need different pediatric settings? Are the algorithms robust enough? What are the software issues? Are they different from the adult population? For instance, do we now have a device that was a minor level of concern for the adult population and is now a moderate or serious level of concern for a particular pediatric subpopulation?
We know the adverse events are going to be different, potentially, depending on the subpopulation. How do we do risk mitigation; what are the strategies? For each subpopulation, the risk mitigation strategy may be different. Human factors, as we mentioned, in terms of a device-user interface is an important part of all of this.
Turning to the definition of pediatrics as it pertains to CDRH and the medical device arena, I think it's clear that we have a pediatric spectrum, all the way from birth through 21 years. So there are a number of subpopulations within that continuum and each subpopulation is different: different growth and development, different milestones, different characteristics for each subpopulation.
For an example, take the neonatal period, the first 28 days of life. We actually have different subsets just in that period of 28 days. We may have a normal-for-gestational-age baby or a small- or large-for-gestational-age baby, and we have normal healthy babies versus premature babies who have all of the attendant issues pertaining to prematurity. So just in those 28 days, you see the variation.
Then there are maturational changes. Each organ system will reach maturation at a different time, whether it's the skeletal system or the cardiac system, and this will vary across all of the different subpopulations.
So, case in point: You have a 14-year-old boy who presents to the pediatrician's office in January. He's 5'4". He's on the same height level as his mother. About 10 months later, he comes back. He wants to be evaluated for sports activity. He's now towering over her. He's 5'11" and it's only been 10 months. Naturally, his body morphology is different. The anthropometric measures are different, and his gait and other body mechanics will be different. And we can't forget behavioral and psychosocial factors which will vary depending on the subpopulation.
What about understanding the device? Each subpopulation will have different neurocognitive abilities. So an adolescent patient may need to understand how to use a pump, be informed about the effectiveness of a pump and understand the risk, as opposed to a neonate, who needs a parent or guardian to operate the device.
We know children tend to be more active than adults, if they're normal, and therefore the wear and tear on a device has to be considered and the level of activity of each of the subpopulations should also be considered.
We also know that the pathophysiology of various diseases varies depending on what the underlying problem is. So, for instance, cardiac diseases may be different for the neonate than for the adolescent, and definitely different from adults.
So I came up with a mnemonic. And, of course, I'm a pediatrician, so I like to use these things. I thought this was a good way for you to frame the day's activity and your deliberations. Any of these letters could mean different things, but this is how I grouped them. I thought one of the things you must consider is the clinical difference, the clinical difference between adults and children and between subpopulations. So when we say pediatrics, we must understand that there are several subpopulations in that continuum.
Human factors, we've talked about, and the human factors folks will tell you that it's not just ease of use, it's not just dexterity, not just handedness, and we have to also think about human factors engineering. Was the device engineered for a subpopulation such as a pediatric subpopulation? What about age subgroups? We've talked about those differences.
One of the things I think gets left out a lot is the learning curve. Devices used in adults may have a short learning curve because they're used all the time, but as you get down to various subpopulations, the learning curve may be different. You may need particular expertise, maybe even subspecialist expertise.
We've talked a lot about the literature in terms of the literature sources. I'm sure you'll hear more about those, so I won't dwell on that.
Endpoints we talked about in the breakout sessions as well, but we need to consider whether or not the endpoints that were derived for the adult population now support safety and effectiveness for the different pediatric subpopulations.
Natural history of the disease: a disease might be more prevalent at one end of the spectrum than the other, or it may span the entire gamut, starting in childhood and continuing through adulthood. We have to consider that.
Growth is always the theme. Can the device grow with the child? Do we need subsequent interventions to allow for safe and effective use of a device? So growth is central.
And, of course, we've talked about effectiveness, which is sort of the central theme here. And last, but not least, is safety.
I think one of the things that I wanted to do to assist in the breakout sessions is give sort of a pulse of what FDA thinks about. Some of the frequent FDA questions that we deliberate in-house, as well as pose to sponsors, are the following: Is the device the same? By the time the literature is published or the gathered source data are examined, the device may have changed. What iteration has been studied? Are there new technological advancements that we need to consider? Are the indications for use the same? Is the disease the same? As we've mentioned over and over, what about short-term, long-term, repeated and cumulative use, depending upon the device? Is the target population one that is easier to extrapolate to -- for example, as has been mentioned, adults to adolescents? Maybe that's a shorter leap than all the way down to neonates. What about the anatomical site? For the adult population, the device may have been used in a particular anatomical site and is now being indicated for another. So, do the data support safety and effectiveness?
As has been the theme throughout this morning, we must consider the differences in the target population, the indications for use, the manner of use, single-use versus re-use, cumulative effect, and whether the clinical settings are different. A device used in a home setting may need more oversight and more training, as opposed to a device used in a healthcare setting, where a patient is hospitalized and there is greater oversight and more specific attention to the potential for adverse events and outcomes.
The types of users are going to be different, naturally. If they're children, they themselves may interface with a device, and depending on their ages, different kinds of training will ensue. So the level of oversight must be a part of the equation. And different adverse events may require different risk mitigation strategies.
What does FDA want? We get that question a lot. So, here are a few highlights from review categories that we've addressed for different devices. What is the population? Are these healthy children or is there an underlying medical condition? What about the age subgroup; is it appropriate? Do we want the device to work across all subpopulations or a specific subgroup? And, of course, the growth and developmental effects.
We can't leave out exposures. We need to consider the biomaterials and how they interface with the different tissues in the different subpopulations. For a neonate, the exposures we are concerned about will be different than, say, for adolescents. Adolescents tend to get short shrift sometimes because people sort of lump them in as little adults, but in truth, there are hormonal issues that may influence exposures for the adolescent population, and we may have to be concerned about estrogen disruption, for example, for some of the populations, depending on the biomaterial. So as you think about exposures and deliberate this afternoon, I ask you to not forget that category.
We've talked a little bit about whether or not subjects are healthy or unhealthy and, of course, whether an illness is acute or chronic. So long-term effects versus short-term effects will be part of the discussion.
Then, although we're talking about data sources, we must go back to what has already been evaluated. What do we know about the device? Even though it may have been tested and evaluated for an adult population, we still need to figure out whether those performance data support a pediatric subpopulation. Are the animal models appropriate, for instance? Do we now need to look at juvenile animal models, because maybe the adult animal model was not appropriate for all subpopulations? Biocompatibility issues may include cytotoxicity, irritation, and leachability of biomaterials. All of those will affect the different subpopulations in different ways.
The organism profile may be different. Issues related to cleaning and disinfection may be different. There may be new methodologies since the last published data that introduce different kinds of concerns, and cleaning and disinfection issues for adults may be different from what's needed for pediatric use.
And then software we mentioned earlier. What about the software-user interface; how the user interfaces with the software and how the software interfaces with the user?
Just changing the size of a device doesn't mean that it's now a pediatric device, because we may raise new issues in terms of the physics of the device. Take the flow dynamics in a catheter, for instance: just because we made it smaller, maybe we have now made it more prone to, say, clotting or dislodgement; different issues will pertain. We talked about wear and tear in terms of physical activity.
Something we haven't talked about much today is the interplay between devices. We assume that when we put a device in, it's the only device, but typically, there are lots of other devices. So maybe we have an antimicrobial catheter in, but we may have another device that has some other kind of antibiotic or antimicrobial, and what is the cumulative effect of all of this in terms of, say, a vulnerable population, maybe a particular pediatric subpopulation. That may vary from the adult population. Of course, as you can see, there's a running theme of human factors.
When looking at the study considerations, what is the study's design? In terms of the breakout session, a lot of this came up. What is the source? Was this a randomized clinical trial? How robust are the data? Were there enough pediatric subsets or subjects?
What we see a lot is that adult data can inform some pediatric questions because of variation in the definition of adults -- Europeans, particularly, may use a younger age cutoff. So they may have some data that would ordinarily be considered adult in their country but that actually covers a pediatric subpopulation when you look at it. So how representative are the data we're looking at of the various pediatric subpopulations?
Then, of course, we have to consider the underlying conditions. Are there variations in how they're being treated? What are the differences? Did we use different assessment measures in the adult population that are not valid for the pediatric subpopulation? In terms of clinical outcomes, we talked already about endpoints and the different measures and how endpoints in adult studies may be different depending on what you're trying to achieve for the pediatric population. Of course, there's always the balance of risk versus benefit. All of this informs labeling and the labeling will help ensure that the device is safe and effective for the pediatric population.
My colleague, Dr. Markham Luke, will talk about the regulatory pathway and the challenges, but I want to say that typically what we see is that maybe an adult device may have gone through one pathway and maybe we need to assess whether or not this pathway is still valid for the pediatric application.
So, I've identified a few deficiencies, and deficiency is sort of a kind word. It's not meant to mean that there is something wrong, but I think it's important to convey that, when FDA looks at certain submissions, these are the kinds of concerns. This is sort of a summary. Are we talking about the same device? Has it changed? Is there a new iteration? Again, are there robust algorithms that have been shown to be effective and can then be extrapolated from adults to the pediatric population? Maybe the algorithms only support the adolescent population or some subpopulation therein.
Typically, we are unable to extrapolate safety, but we may have information from the adult population that could inform safety for the pediatric population. We may need pediatric expertise. We have to examine whether or not the level of expertise needed far exceeds what was used in the adult population. Long-term effects should be sort of a central theme in all of your deliberations.
So, I didn't want to be too catchy, but it does take a village. Similar to practice in a pediatric clinical setting, where you need a multidisciplinary team to assess challenges, within FDA and, of course, outside it, here during the workshop, we need different experts to address the challenges: physicians of many different subdisciplines, and we can't forget the veterinarians. We need engineers; we talked about the software and hardware issues, and we need their perspective. We need the microbiologists as well, and the biochemists for the different kinds of formulations that may apply to different kinds of devices, plus physicists, epidemiologists, and statisticians.
I haven't listed the ethicists, and I want to apologize ahead of time if I've left anybody out, but I think central to all this discussion is the issue of the bioethics of extrapolation of adult data to pediatric population.
So with that, I thank you for your attention.
(Off the record.)
(On the record.)
DR. LUKE: Joy, I think you left out the lawyers also.
Good morning. It's nice to see all of you folks here, coming together to have a discussion on a topic that we all share a common passion for, and that is, developing good devices for our nation's next generation and looking at the best ways to do that in that setting, in our regulatory and scientific setting.
So before I move on, I just wanted to have a profound quote. The science informs regulation and regulation guides science, and you can put adjectives like good or best in front of all of those. These things are interwoven and FDA is where the two meet, the science and regulation. We look at how best to regulate new products coming on the market in the setting of the science that's there.
There's old science for medical devices and there's new science. The devices are constantly evolving, so we're always looking at how we regulate these devices in the setting of evolving science. Then the other piece is the regulations. Our Center, the Device Center, is a relatively new center compared to drugs or biologics, and regulation for medical devices tends to run about a decade apart -- not necessarily behind, but apart -- with regulations coming out for devices after regulations addressing the same specific area have already come out for drugs, say, for NDAs and BLAs. So it's an interesting area that we can have more conversation about.
I'm going to start talking a little bit about extrapolation, which is the main topic of this meeting, and how we might be able to extrapolate data that we already have for adults and use it in the pediatric environment.
Our medical device regulations allow flexibility. They're written to allow some flexibility, and that flexibility is placed in a setting that allows us reasonable assurance of safety and effectiveness. There is some judgment call, but a lot of it is based on science: what do we already know that can help us make the jump to extrapolation, versus what we don't know?
So, our clinical data. The labeling is the piece that we communicate -- manufacturers communicate to the practitioners, FDA communicates to the practitioners and to the patients -- on how to use a medical device and what the risks and benefits are. What's the tradeoff when a practitioner, whether a surgeon in the operating room or a pediatrician in their office, is guiding the patient or the patient's parents in making a decision whether some device needs to be implanted in the child, or in the operating room is facing a choice between, say, two different procedures, one involving one device and one involving another; which direction do you go?
So the labeling, which guides marketing, which hopefully guides the talks in the professional societies, that labeling is how we put our information forward. And we think that new clinical data, while helpful to have, isn't always needed to demonstrate safety and effectiveness for devices intended for pediatric populations, but that is in the context that there is already use of that device. There's information to inform use of that device in the patient population, whether it be in the literature or some background use of that device.
It's a measured call that is made on one area to another area. There are risks involved in some of these devices. There are already procedures that may have better benefit and so, if you're putting forward a new device that you think is better, then you should have some data to say my device works well in this population because, and you have the clinical study maybe to back that up. So it's not always necessary, but it's something that is important to inform the practitioner, inform the patient and their parents.
Our medical devices, I guess we're generous. We're more generous than the other centers in that we allow pediatric age to go all the way up to 21. CDER has a cutoff of 17, so a little more stingy, I guess, but we do break these out. If you note, it goes from birth, as Joy mentioned, all the way to 21 years of age. So they range from the smallest of our children to children that are adult size and are even of legal age and are out there fighting our nation's wars; we have devices that address those needs as well.
So when we talk about extrapolation, some of the extrapolation may be easier in some populations -- extrapolating adult to adolescent, or older than 21 to older than 18. That extrapolation may be easier for both safety and effectiveness. But extrapolation of adult to toddler or adult to infant can be more difficult to rationalize and justify, and it may be less reasonable, "reasonable" being the parlance of our regulatory language.
It's important for sponsors, when you submit your application, if you're going to extrapolate some of your data, to provide the rationale for why that's the case: why you would want to extrapolate that part of the data and why it's reasonable and, perhaps, some additional information that would help in making that argument for reasonableness.
Susan mentioned this earlier, the approval versus clearance issue, and we're here talking about approval. The term approval, in its regulatory definition, applies to IDEs, PMAs, and HDEs. Of note, the subset of marketing approval applies to PMAs and HDEs; you can't market a device that's under IDE. Clearance applies to 510(k)s. I recognize that some of the elements of our discussion, and the discussion from Joy's talk, apply to perhaps both. You could make a broader application of the scientific discussion and ask why it wouldn't apply to a next generation of a device coming in under 510(k) with a substantial equivalence evaluation. It could be applicable, but for the purpose of discussion today, we're going to focus on Class III devices and premarket applications, and a little bit on IDE studies as well.
So when we talk about extrapolating data, I'm going to broaden that to both non-clinical and clinical data. You can extrapolate some non-clinical data, but Joy also mentioned the scientific questions about downsizing devices to fit pediatric populations. While some elements of that data might be extrapolatable, some may not and may call for additional bench testing, though some of the bench testing may be extrapolatable as well.
Appropriate animal models. It's hard sometimes to find good animal models to model growth or other issues that are related to human pediatric patients. In some cases, you might be able to find that, and let us know what those are that you think might be applicable in our review, as we have veterinarians, we have non-clinical reviewers who are happy to weigh in on whether it's appropriate or not.
There are some specific issues, as I mentioned, pertinent to pediatric patients: the accommodation for growth and pediatric physiology and the lifespan of the device, because a relatively healthy kid is going to live longer than a relatively healthy adult. So the lifespan of the device, if it's an implanted device, is important to take into consideration, and there might be some additional non-clinical information needed for those devices, especially if a device is not intended to be explantable.
So a little bit about significant risk. IDEs are how we do studies of significant risk devices, and how do we address risk for our pediatric patients, again, in the guise of a discussion of extrapolation? Are we able to extrapolate a significant risk determination to pediatric patients? I would say in some cases, perhaps, depending on the population, but in general, pediatric populations are -- most recently, the ISO 14155 standard, which just came out this year, declared pediatric patients a vulnerable population, and we know they're a non-consenting population when less than 18. Our regulation, 21 C.F.R. 812.3 -- at the risk of some eyes glazing over whenever a C.F.R. number comes up, don't let that happen -- is intended to address the potential for serious risk to the health, safety, and welfare of a subject, and that defines what a significant risk is.
An IDE may be needed to study the use of the device in pediatric patients even though it's labeled for use in adults with that indication. So that gets at the issue of how we might build in the notion of iterative development: you develop a device, perhaps, for adults, where you define how that device is used in adults first, and then you go to a non-consenting population and say, okay, my device might work in children, maybe in a smaller size or even the same size, and I recognize that because it's a non-consenting population, I'll need some data from the adult population before I move into children.
Keep in mind, you don't always need to have an approved or cleared device before you move into the pediatric population. But it's something to consider: why not test the device first in a consenting, less vulnerable population? Then you can extrapolate from the data you learn in the adult population.
Informed consent. The age of consent is a legal issue, and it can vary from one country to another. Please submit a copy of the informed consent for our review and, when appropriate, the informed assent as well. We're interested in whether the language is appropriate for the population that you're treating and whether there is the potential for cognitive understanding by both the patient and the guardian, specifically of procedural surgical risk and device risk, and also in discussion about the permanence of the procedure or implant that the patient is getting.
So moving on to PMAs. PMAs are how we regulate Class III devices. For Class III devices, we want to make sure, in your application -- for the sponsors in the room -- that you provide what you deem to be an appropriate age range for the medical device. That needs to be proposed with the application, and that proposed age range should be supported with the data. You can extrapolate some of the data -- I'll get into some specific examples later, but you may have data for mostly adults with a few outliers in the pediatric age range. You can talk about why the adult data might be relevant to the pediatric population and why FDA would want to put the younger age group into labeling.
So, I guess, we should move into the definition of what is valid scientific evidence, and this is something that we talk about over and over again. In fact, there's recent draft guidance on clinical trials that came out in August of this year. Make sure you look at that. It talks about how clinical trials should be designed for medical devices and what the overarching issues are with regard to clinical investigations. Its comment period has finished and we're currently looking over the comments. For those in the room who submitted responses to that, thank you so much.
Valid scientific evidence can stem from a variety of sources: well-controlled investigations, partially controlled studies, studies and objective trials without matched controls, well-documented case histories conducted by qualified experts, and reports of significant human experience with a marketed device. So, there's that range.
Where we want this evidence to come from with regard to driving a pediatric indication depends on the device. Remember, these definitions of valid scientific evidence apply, in the language of the regulations, to both Class III and some Class II devices. So depending on what type of device it is and what the benefit/risk calculations for that device are, we would focus in on where on the spectrum of scientific evidence we would want the study to come from.
There are also some practicality issues of how that study is best conducted. So, those are factors that come into play. Remember, these are tailored, a la carte type of discussions for every application that comes in. So it does require a lot of discussion, and hopefully you've provided your best argument and we've provided our best reviewers who can think about this and have that discussion.
So if I could mention, valid scientific evidence depends on:
The device characteristic, is it an implant; what is the risk?
Conditions of use, for example, by whom, where is it being used? Is it used in a home-use setting? Is it used in a clinic or is it being implanted in a hospital in ICU?
Existence and adequacy of warnings and other restrictions. You can label the device with sufficient restrictions for use, say, restricting use to a certain body of experts, which would then forestall a specific type of concern.
Extent of experience with use, whether it be outside the United States or in the United States. So those are all questions that come up.
Some examples of how we can include clinical scientific evidence apart from extrapolating from adult patient populations: if you have some pediatric data, that would help guide and help with the extrapolation, so you can have a partial extrapolation. It does help to include some pediatric patients and build that into your protocol. I've seen various approaches through the years on how pediatric patients are included in clinical studies, anywhere from a priori stratification with randomization of the patient population, to an a priori subgroup analysis strategy described in the protocol -- a statistical analysis plan for pulling out those pediatric patients and saying this is what we expect to see for the subgroup, where there is no stratification and no randomization for that subgroup.
There's also post hoc analysis, which is after the fact: oh, we ran the study and it so happened that we included a few patients in the teenage group, so let's do a little analysis to see how well it worked. FDA might ask you to do some of that even though it wasn't pre-specified, because, as I mentioned at the beginning of the talk, we are interested in developing good pediatric devices for the pediatric population -- pulling out that subset of data and doing a little post hoc analysis to see how well it performed.
Then we get into the benefit/risk in determining amount of evidence needed. Say that there's a device that's a cosmetic implant and we have concerns about some safety in the adult population, we might need a little bit more information before we put our nation's kids at risk to allow labeling for use in the pediatric population versus a life-saving remedy that's going to help with prolonging the life of a child. Those might have very different benefit/risk and would help determine the amount of information that we need for evidence.
Then there's this: many of the procedures done in children are rare, with fewer than 4,000 patients per year. This is an option that many device companies are thinking about, and the Humanitarian Use Device designation is applied for through the Office of Orphan Products. Since we've removed the no-profit provision for pediatric HDEs, we've seen an uptick in the number of Humanitarian Use Device applications in the Office of Orphan Products and, hopefully, that will translate into more humanitarian device exemptions. I know we've approved about, I think, three devices now with the Humanitarian Device Exemption for pediatric patients. Again, the regulatory hurdle for this is "probable benefit." We would be a lot more willing to extrapolate for these devices some information from, say, adult testing or models, et cetera.
Then also, someone had asked me to speak about combination products, so I said, okay, I'll throw a slide in there. But I think this is important. We do have colleagues here from biologics, and we have many devices that rely on drugs to help achieve their primary mode of action. So these device-drug or device-biologic products do have additional informational needs. For drugs, it's pediatric metabolism and excretion of those drugs, especially when they're not covalently attached and can be metabolized. For biologic products, we are interested in pediatric immunogenicity, which might be a little different from adult immunogenicity.
On to some examples. This is fun and, I think, just put them out on the table, but -- for the folks, when you get to your discussion groups this afternoon, you can feel free to use some of these examples or something like these. I think we're a little early, right, so we can spend a little time talking about these?
DR. CUMMINS: Go ahead.
DR. LUKE: Okay. So, first example, PMA for an in vitro diagnostic product for measuring viral titer in a blood sample. What's the difference between a pediatric blood sample and an adult blood sample? So you folks in the diagnostic arena might think about that, maybe some difference in plasma proteins, maybe some difference in how much blood can be obtained from a patient.
I remember when I was doing reviews on pediatric patients, we were always -- and this is for PK analysis -- we would always say, okay, does the patient have that kind of blood to give for these kind of analyses? We've seen companies come in and they're vampires. They want to draw lots of blood from these kids because they're interested in -- but these kids are small. They don't have that much blood. So that's a practicality that someone who designed a study for an adult population might not have factored in. It's a common sense thing, but every now and then you notice that. So, these are key pieces for looking at some of these kinds of applications.
Next example: a PMA for neonatal use of a ventilator utilizing respiratory rates that greatly exceed those of normal breathing. Neonates are small; they have small tidal volumes. Some of the extrapolation for, say, the hardware for the ventilator component, you can extrapolate that. But for the clinical use in neonates, how extrapolatable is that? That's something you need to have a conversation with our anesthesiologist or our critical care person about.
Third example: an HDE for a bone replacement that is length or size adjustable. I think there might be a couple of orthopedic folks here in the audience. This is an interesting example. It gets at whether we can extrapolate some of the data for, say, the non-size-adjustable pieces to the size adjustable, and that's something that we can talk about: maybe the materials. The longevity of the materials could be extrapolated. The actual use of the product itself, you would want some information, I think, especially since you're raising questions about how adjustable is it and does it really grow with the child, that sort of thing, and how reliable is that?
I think we can come up with any number of additional examples. I left out the aesthetic class. I guess we have a diagnostic and we have two potentially therapeutic devices. I did mention the possibility of aesthetic devices as well, so the three major categories of devices that we see in our Center.
So, in summary, to wrap things up for my talk, at CDRH, we have the regulatory latitude for some flexibility in determination of what data could be extrapolated from adults to pediatrics. The regulatory flexibility depends, in large part, on the scientific and clinical rationale for the extrapolation. With that, I think there's some food for discussion.
Should we take questions about the talk specifically or we're going to focus in on the discussion?
DR. CUMMINS: We would like to not take questions about the talks, but have people move to discussion. We're ahead of time. It's about 10 after 11:00. What I would like to suggest is that we move to our second discussion group now and then take lunch afterwards, at about 12:15. If there's no objection to that, why don't we go ahead and do that, and then when we come back after lunch, we'll do a report back from this discussion section?
This is our question: What are the scientific and regulatory challenges we will have to consider when using these data to extrapolate or establish pediatric effectiveness for various medical devices? We've heard an overview of this, but I would like you to delve deeper into it in your discussion groups and bring up issues that come up for you as well. Thank you.
(Whereupon, at 12:15 p.m., a lunch recess was taken.)
A F T E R N O O N S E S S I O N
DR. CUMMINS: Anyone at the podium who has an iPhone -- you think it's an iPhone, correct? Could you please turn it off and move it away from the podium? Because, apparently, it -- not a BlackBerry, an iPhone specifically. It was reacting. It made it difficult for us to pick up all the report back in the recording.
All right. To go over the schedule, this is now the report back for Breakout Session 2, which will go from about 1:15 to 1:45. We will then have our third plenary session about 1:45 to 2:30. Then we'll take a break. Then we'll have our third breakout session and a report back and a wrap-up. So that's the plan. I anticipate that we'll be out about an hour early, which is really nice. Great work, everyone.
Should we start at this end of the table first? Okay. Great.
MS. HANAFI: Hi. My name's Nada Hanafi.
DR. CUMMINS: If you could introduce yourself before you report back for your group?
MS. HANAFI: Okay. Hello. My name's Nada Hanafi, and I'm at CDRH. Welcome back. Hope you had a good lunch.
Our group came up with six different categories. The first one was growth and development, which also included things like the prevalence of the disease and condition in the population, the different impact on different body parts in pediatric population, immunogenicity, and so forth.
Our second category was the role of assumptions, which is basically about the verification of the data, the reliability of that data and how can you actually pool it as well.
Our third area was actually called training and communication efficacy, and it's adequately communicating risks and benefits of the device for that population, as well as -- what was really interesting was the design evolution and how you can take into account the rapid pace that devices change, specifically for that population.
Fourth group was financial considerations, which included the fact that there was a lack of financial incentives, issues of reimbursement, as well as the limitations in finances for these types of devices for this population.
The fifth group was level playing field and landscape fairness. And I won't get into that discussion because it was quite an interesting discussion, but we can see if other groups came up with that.
The last group was the public health impact, which also included discussion about the ethics of studying devices for this population and identifying the adequate or appropriate stakeholders.
DR. DELFINO: Hello. I'm Jana Delfino, also from CDRH. Our group came up with four very broad categories. The first had to do with study -- we called it broadly, study design issues. So the fact that there were not separate safety and effectiveness studies, that it was sometimes difficult to randomize such small patient populations, and some other examples I won't go into, but mainly having to do with just study design in general.
Then our second very broad category was the issue of off-label use and that that makes it difficult to collect data and that, perhaps, off-label use coupled with -- or off-label use can provide an insufficient incentive for seeking pediatric-specific indications.
Then we have this very broad issue of -- sometimes it's not clear what a pediatric population even is. I know we went through this morning about the age-based categories and the subgroups of those categories, but the issue was that, really, the question has to be asked in a device-specific arena, in that the issues and the definitions and the applicable subgroups are really device-specific and one age or size doesn't fit everything, so it has to be done on a device-by-device basis.
Then, finally, we called this category sort of the transition from adult to pediatric, or those relationships, and that sometimes it's even difficult to figure out if the safety or effectiveness issues are different, because for some devices they are and for some maybe they aren't. Grouped in there are long-term plans for pediatric development of devices and just general questions about what makes a device a pediatric-use device.
So overall, those are our four groups.
DR. LEBINGER: This is Tessa Lebinger. I am also from CDRH. I would say that our major barrier focus was regulatory issues. We talked about statistics, having more use of Bayesian statistics and other statistical methods. We had a lot of discussion of various issues regarding endpoints, that an appropriate endpoint for an adult use of a device may be different for pediatric use of the same device.
We talked about also -- there was a lot of discussion on whether or not FDA would consider approving devices as, quote, "tools," without having specific indications for the initial approval. For instance -- we have a lot of cardiovascular people at my table. So, for instance, if you have a device that opens or closes holes in the heart or anywhere, can you get an approval of this device as a tool to open or close holes without initially having approval to open or close a specific hole, a VSD or a valve; then if a company wants to come in saying, well, I want to say that my device has a specific indication for pulmonary valves, they can come in separately for that.
They talked about, also, what's the appropriate endpoint in the sense that, for instance, apparently there was a device to close a patent foramen ovale in the heart that had an increase of emboli, causing strokes. However, since children have more plastic brains than adults, it turned out that the functional deficits were actually less than in the adults, at least in the short term. But they talked about a lot, also, do you have short-term endpoints; do you have long-term endpoints; are the long-term endpoints post-marketing studies? So for a foramen ovale device, do you look at the incidence of strokes or do you look at the functional deficit?
We talked about reliability and adequacy of data sources, are adult data applicable to children? A lot of sources have expert opinion and sometimes there's a lot of conflict of interest from the experts, and how you take that into account.
We also talked about design issues, maybe different for pediatrics versus adults, that you might need to design a device that is smaller than the adult device initially, but you may need to let it grow.
Then there was a lot of discussion on patient centeredness and risk versus benefit. Are we taking into account how the patients weigh the risk versus the benefit of a device? If you have a device that is less invasive than a surgical procedure and it's slightly less effective than the surgical procedure, and the patients would prefer having slightly decreased effectiveness, but decreased invasiveness, are we taking into account the patient's opinion of the risk versus benefit?
MS. ADAM: Hi. I'm Laura Adam. I'm with CDRH. Our group came up with kind of four main topics under regulatory issues and four more focused on scientific issues.
Under regulatory, we talked about the need for guidance and whether there should be a novel regulatory pathway for things where we're using a data extrapolation. We also talked about the need for a regulatory definition of adverse events.
Another regulatory issue is labeling, if it should be pediatric-specific, if it should be specific to extrapolated situations. Talked a little bit about longitudinal care and just things specific to children growing up, being more active, how should that be addressed in the labeling.
We talked a little bit about lack of standards and standardization issues with the device and also standards of care between adults and children. Then the last regulatory issue we looked into was post-market issues, sort of follow-up, compliance, and adherence. If a device is approved based on a data extrapolation, should there be tighter post-market surveillance? We also brought up liability issues for the FDA or the firm if it's just done with extrapolated data. So those were the regulatory issues.
Under the scientific ones, we had acceptability of the data and data quality, so issues related to data quality. We also discussed that there was different kinds of data: preclinical, clinical, post-clinical. We talked about human factors, the difference between, you know, what you would need to do with a child, how would you measure it.
Somebody brought up aesthetics and acceptability with kids. We talked about some devices, they try to make it, for an adult, less obvious, but kids sometimes like it more obvious, like colored ear molds or colored bands on braces or things like that. You think they want to stand out instead of trying to be more discreet about it.
Then we had some device-specific issues that really need to be handled case by case: differences in disease pathology and the extent of the disease in a normal or healthy child versus a child with a disease, and what the extent is. We had kind of a lot under that. So I think that was all. Okay.
DR. CUMMINS: Great.
MS. ADAM: Thank you.
DR. CUMMINS: Tori?
MS. WAGMAN: Hi. I'm Victoria Wagman. I'm with CDRH. We had, basically, five different areas. Our biggest was about collecting data to have appropriate controls, which might be different for adults and children. Then the age groups: obviously it's easier to extrapolate from adolescents to adults, and then, maybe from adolescents down to infants and beyond, how can we start extrapolating?
The other one is how do we use existing knowledge? That is, for instance, what's going on already at NIH, CDC, industry, FDA, and how we can look at public health in terms of what is the greatest need.
Then, funding, which is a huge issue, is how do we get companies to look at pediatric devices and to want to go into that area. Then, humanitarian use, should there be in terms of regulatory, what should be the pediatric age for humanitarian use?
The other area is early thinking, that we should do that up front, like when the idea comes out, rather than at the end, looking back and extrapolating. How do you encourage people, when they start with an idea, to start to think? And that leads into the final area, which is pediatric experts and the need for pediatric experts. Our group recommended expanding the network of experts which CDRH just came out with to make sure that when we use that mechanism, we include at least one pediatrician so that we have that included up front. That's it.
MS. SIMMONS: Hello. I am Janesia Simmons from CDRH. Our group came up with four different categories, the first category being regulatory hurdles, which included access to pediatric expertise, a lack of regulation and guidance to both industry and FDA and specific laws/regulations protecting children.
The next category we talked about was applicability of data, where we were trying to figure out how to evaluate off-label use data and how to deal with smaller data sets. Then applicability of methods, and that included the lack of test methods specific to pediatric populations and the applicability of study designs.
Then the last one was pediatric-specific factors and that included the impact of behavior and psychosocial factors, as well as the differences in physiology and sizes and various ages.
MS. SCHAREN: Hi. I'm Hilda Scharen, and we came up with four categories, the first one being the risk/benefit paradigm for peds is different. So one of the things that we discussed extensively was the difference between peds and adults and recognizing the fact that a device may fail more in peds than adults because of the increased activity. That's just one example.
The second category was financial issues, the cost of the studies, which possibly could be alleviated through the right type of partnerships. Our third category was the need for regulatory flexibility premarket, possibly through legislative changes. Our fourth category was effective collaborations. One example would be to use nationally recognized IRBs instead of hospital-based IRBs.
MS. LEWIS: Hello. I'm Crystal Lewis, also from CDRH. Our group came up with four categories from our input. One was, we were most concerned with the obstacles and challenges of study design, and that included the ease of use of the device for the caregiver and the patient, ethical considerations, and assessment of the device with non-verbal patients.
Our next area of greatest concern was the limited availability of quality data. That included human testing of the device for the pediatric population, as well as limited publications related to the device.
Our third area was the physiological differences between the adult and the pediatric population, as well as the subsets of the pediatric population, not to mention the various disease processes.
Lastly, we were concerned with the indications for use for the device, changes in indication for the device, as well as disease progression.
MS. RICH: Hi. I'm Suzanne. I'm with CDRH's MedSun Program, and a lot of our ideas clearly mirrored what's already been expressed, but I would like to just build on the last couple speakers.
Our biggest concern was that this question is framed against the backdrop of an evolving timeline of technology and standards of care. Often, we, as regulators, are running to try to catch up with what's really going on out in the clinical world. What we want to see is, since we don't have a lot of guidance -- I don't think we have any, no standards -- that we would want to go ahead and utilize or find a way to incorporate current clinical practice guidelines as part of our decision algorithm.
To that end, we then moved into this lack of long-term exposure data. We thought, well, let's take a look at some other venues by which we can get some data. A couple of ideas were: let's take a look at some of the post-market study data that we're getting and incorporate those findings; take a look at collaborating with clinicians on their off-label use. I know that there's a fine line there, but there may be some lessons learned. And look at the HUD/HDE experience as viable data by which we can take a look at pooling all of this data to come up with a clinical picture and a regulatory picture that might be able to dovetail and give us some information from which to proceed.
Last but not least, once we get a pool of data, we need to take a look at some new regulatory pathways. Maybe we need to look at changing the paradigm to allow for more flexibility -- I've heard this all throughout -- but also allowing access by all to some of the de-identified data that are being used in this decision-making process.
Finally, last but not least, is who will do this?
DR. LEBINGER: I forgot to mention that my group also asked that we have less request for randomized controlled trials in pediatrics. I forgot to mention that. I'm sorry.
DR. CUMMINS: Okay. Excuse me. Thank you all.
DR. CUMMINS: So we're now going to go into our third plenary. This is our last plenary of the day. Let's wait a few minutes, let people get adjusted and seated.
The focus of this plenary is, often, you'll have a situation where you might have some data from which you can extrapolate, but there are gaps in the data that you want to fill in. One of the ways you can do that is with a smaller study, but there are also statistical methods and computational modeling methods and other approaches that can be used to address data gaps and pitfalls in trying to extrapolate effectiveness from adults to pediatric patients or across pediatric age groups.
We're going to have two plenary presentations on this topic. First will be from Patricia Beaston. Dr. Beaston is an endocrinologist who is with FDA and is going to talk about computational modeling for the artificial pancreas as an example. The next speaker will be Laura Thompson, who is a statistician with the CDRH Division of Biostatistics and who will be talking about modeling, in particular Bayesian methods, for making the most use of available data.
DR. BEASTON: Well, thank you for coming back after lunch and participating in this. I'm only going to give you a very brief talk. Unlike all the other discussions where it's focused on the approaches for using data for adults from already approved or in the process of being approved devices, we're actually using this approach in the development of the artificial pancreas.
For those of you who aren't familiar with this system, the artificial pancreas is made up of a number of devices that will include a blood glucose measuring device, a continuous blood glucose monitor which has a small sensor that's inserted into the interstitial space and measures interstitial glucose, a pump that delivers insulin into the subcutaneous space, and a complex computer algorithm that takes all of the information from the CGM and then guides how much insulin should be delivered over time.
So, as you can see, this is a very complex device system, and the patient actually affects how the device system works and the device has an effect on the patient. This is a constantly changing dynamic device system. Investigators had used animal models for years, and you can imagine that the animal models, while giving information on the devices, aren't really perfect for modeling what the human effect is going to be. But also, they take a long time; they're very expensive, and there was a perception that the process could move along faster.
So, they came up with an in silico population of patients. This slide was provided to me by Dr. Boris Kovatchev at UVA who developed a very complex algorithm system for describing what would happen to these virtual patients. In silico just means that it's done on the computer chip. So, they've made this patient population and then you can run your algorithm using this patient population and see if the algorithm will give you an expected result and whether it runs smoothly.
What they've developed is an adult, an adolescent, and a children population with which they test different algorithms, and this has helped us move from the development of the algorithm into the CRC, or clinical research center, tests very quickly. We have had a number of studies that have been approved and there are patients being tested in the CRC. What we've learned is that the modeling gives us a lot of information about how well the systems work, but they still have to make some changes to the algorithm once they have done some dynamic testing. But as you can see from the slide, these populations are not interchangeable, and this sort of lends itself to the idea that we have to have additional information.
I just want to tell you what we've learned from the in silico models. The benefits are: We can explore a wide range of theoretical subjects: pediatrics, adolescents, adults. We can explore different devices: infusion pumps and CGMs. Let's say that the CGM has a different way of reporting the information; can the algorithm still function with this different input? If the infusion pump delivers insulin a little differently, can the algorithm accept these changes? It also allows us to explore multiple algorithms with very little time. Some of these can be done within 24 hours or a few days depending on the size of the algorithm and the number of patients that are run through it. Then, most importantly, it doesn't do any harm to a patient, and it allows them to cull through big problems with algorithms and move forward.
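As a toy illustration of the kind of in silico testing Dr. Beaston describes, the sketch below runs a hypothetical dosing algorithm against a few "virtual patients." The linear glucose model, the controller, and every parameter value are invented for illustration; this is not the UVA simulator mentioned in the talk.

```python
# Toy, hypothetical sketch of in silico testing: run a candidate dosing
# algorithm against simplified "virtual patients." The linear glucose model
# and all parameter values are invented, not taken from any real simulator.

def simulate(insulin_sensitivity, algorithm, steps=288):
    """Simulate one virtual patient at 5-minute steps (288 steps = 24 hours)."""
    glucose = 180.0  # starting glucose, mg/dL
    trace = []
    for _ in range(steps):
        dose = algorithm(glucose)                    # algorithm reads the "CGM"
        glucose += 0.5 - insulin_sensitivity * dose  # toy drift + insulin effect
        glucose = max(glucose, 40.0)                 # floor to keep the toy model sane
        trace.append(glucose)
    return trace

def proportional_controller(glucose, target=120.0, gain=0.05):
    """Toy dosing rule: insulin proportional to glucose above target."""
    return max(glucose - target, 0.0) * gain

# A tiny virtual "population": insulin sensitivities chosen arbitrarily to
# stand in for adult vs. pediatric differences.
for label, sensitivity in [("adult", 2.0), ("pediatric", 3.5)]:
    trace = simulate(sensitivity, proportional_controller)
    time_in_range = sum(70.0 <= g <= 180.0 for g in trace) / len(trace)
    print(label, round(trace[-1], 1), round(time_in_range, 2))
```

The point mirrors the talk: swapping in a different controller or a different virtual population is a one-line change, so many algorithm variants can be screened quickly and with no risk to a patient, while device failures and sensor errors remain outside what the model can tell you.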
The limitations are that the model only tests the algorithm. It does nothing to tell us if the device fails, what the issues are, or if there are inconsistencies or errors. Those are the things that we have to learn in the testing.
There's only a limited number of in silico patients right now. The artificial pancreas is still in development, so we really don't know, at the end of the day, how well this will have worked. What it is allowing us to do is, not only get into the CRC with adult patients, but we're moving more quickly to allow adolescent patients to be enrolled in the study than probably we normally would have allowed.
We do the in silico testing with the adults. We do the in silico testing with the children. We have the CRC with the study with the adults, and after a certain number of adult patients have successfully run through the study, then we can start enrolling younger and younger age range of pediatric patients. That's allowing us to include pediatric subjects before completing all of the studies on the adult subjects. Right now, we only have one in silico model.
The potential is that we're continuing to gain experience with this model. We are adding new patients. As each study center is using the in silico model, they are adding to the population. It's hopefully encouraging new investigators to develop other in silico models so we can maybe test algorithms across a number of models and see if we get the similar information, just as if you would test different patient populations at different study centers. If we have different models, we might have more information. Then, as we move down with the development of the artificial pancreas, hopefully we can look back at the final device and see how that informed our decisions and whether we can refine the approach better.
DR. CUMMINS: Dr. Laura Thompson, again, is a biostatistician with our Division of Biostatistics.
DR. THOMPSON: Okay. Thank you.
So I'm going to be discussing some challenges in pediatric study design and analysis and some potential solutions. Here's an outline of what I'm going to be talking about today. I would like to start first with some unique issues that affect the design and analysis of pediatric clinical device trials, and then I would like to discuss some potential solutions for them using statistical modeling, including Bayesian hierarchical models and some special study designs, and then I'll conclude.
So some issues that affect the design and analysis of pediatric clinical trials are included on this slide. We've heard before that pediatric clinical trials often have small sample sizes. For example, diseases can have low incidence in pediatrics, so it might be hard to find pediatric subjects. Also, informed consent might be more difficult in pediatrics. Small sample sizes are problematic. They're more prone to variability, leading to more uncertainty in the treatment effect estimate, and they lack statistical power.
Another issue in pediatric clinical trials is that there might not be a suitable control group. For example, an approved active control might not be available for pediatrics and a placebo or surgery group might not be ethical. Without a comparator, we have nothing to compare a device group with, and it also precludes us from using that all important randomized controlled clinical trial.
What I would hope to convey in this presentation is that there are design and analysis methods that can be used to deal with the consequences of small n trials and/or lack of control group, and I would like to get your thoughts on some of these ideas during the breakout session.
So the following slide describes what might be considered a typical design for a pediatric device study of effectiveness. We might have a randomized controlled trial that enrolls both adult and pediatric subjects in order to get the sample size to be sufficient. We might design the trial assuming a similar treatment effect in both adults and pediatric subjects, then at the time of the analysis test for differences in clinical outcome between adults and pediatrics.
If there is a considerable difference, then we would do a separate study in pediatric subjects. If there isn't a considerable difference, whatever the definition of that is, then the Pediatric Medical Device Safety and Improvement Act of 2007 might apply. So I question whether that typical framework is ideal.
First, we have to determine what a considerable difference means, and preferably before running the study. We don't want to determine what it is after we've already seen the data. If there is a considerable difference, we must evaluate pediatric data by itself, even if the sample size is very small.
I would like to propose an alternative option, which is the use of Bayesian hierarchical models. With these models, there's no need to determine what a considerable difference is between adults and pediatric subjects. We can borrow strength from the adult data to help us make that evaluation. And borrowing strength is not a yes or no decision. It's a continuum, rather, how much do we borrow or how little do we borrow, whereas considerable difference seems to be like a dichotomy.
Now, a disclaimer before I get into the discussion of these models is that not all devices can use this method and not all studies can borrow strength. Later, I'll be discussing a concept called exchangeability, and to the extent that studies are exchangeable, they can borrow strength.
First, I want to discuss the first challenge of small sample sizes in pediatric studies. So as I mentioned, a potential solution is to borrow strength from previous studies to make inferences about the pediatric population in a current study. By strength, I mean information from the results of previous studies. Information is kind of equivalent to patients, so when we borrow information, we end up with a sample size boost.
The extent of borrowing depends on the similarity or closeness of prior results with the pediatric population, and most important, the previous studies can primarily be on adults where we have the most data.
So to enable borrowing, we could use Bayesian hierarchical models. These models allow a sample size boost by borrowing strength from prior studies. What's important is that the model, via the current data, determines how much to borrow from prior studies. Essentially, the more similar the results, the more we borrow, but we don't have to decide beforehand how much we're going to borrow.
So I've mentioned the term Bayesian, and I realize that not everyone is completely familiar with the Bayesian approach, so I have a couple of slides to kind of bring you a little bit up-to-date.
The Bayesian approach describes a method for learning from evidence as it accumulates. The method combines prior information with current study information on an endpoint of interest (for example, a success rate from using a device) in order to form conclusions about the endpoints. Prior information typically comes from the results of previous studies and this describes what we might want to do with pediatric data. We would like to use prior adult data with current pediatric data to make a decision about pediatric effectiveness.
So, in short, the Bayesian approach describes a way to combine the past, the prior, with the present, the current study, to make decisions about the future, or posterior conclusions. I will be using the term posterior in future slides. I also want to mention that the Center for Devices published a guidance for the use of Bayesian statistics in medical devices, which was released in final form in February 2010.
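As a minimal numerical sketch of that prior-plus-current-data update, the conjugate beta-binomial model below combines a prior, here loosely standing in for summarized (and possibly discounted) adult results, with a small current pediatric study. Every number is hypothetical and chosen only for illustration.

```python
# Hypothetical sketch of a Bayesian update with a conjugate beta-binomial
# model. The prior loosely stands in for discounted adult results; all
# numbers are invented for illustration.

def beta_posterior(prior_a, prior_b, successes, failures):
    """Combine a Beta(prior_a, prior_b) prior with binomial study data."""
    return prior_a + successes, prior_b + failures

# Prior: roughly a 70% success rate, worth about 20 patients of information.
prior_a, prior_b = 14.0, 6.0

# Current small pediatric study: 8 successes in 10 patients.
post_a, post_b = beta_posterior(prior_a, prior_b, successes=8, failures=2)

posterior_mean = post_a / (post_a + post_b)  # point estimate of the success rate
print(round(posterior_mean, 3))  # prints 0.733
```

Note how the "sample size boost" shows up directly: the posterior behaves as if it had seen 30 patients rather than 10, with the adult-derived prior pulling the pediatric estimate of 0.80 down toward 0.70.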
What's nice about borrowing strength with hierarchical models is that, not only can we borrow strength to estimate individual study means, but we can also estimate a predicted mean from a new study using what's called a Bayesian predictive distribution. I will illustrate such prediction later.
Now, in order to apply the hierarchical model to borrow strength among studies, we need to make the assumption of exchangeability. Roughly speaking, exchangeability of study results means that knowing a result would not divulge which study it came from. Practically, it translates to comparability of studies or similarity of studies. If studies were very similar to one another and you knew the result of one study, exchangeability means you wouldn't be able to attribute that result to study X or study Y; they're just too similar.
Ideally, exchangeability is decided upon prior to actually seeing any study results, but practically, of course, this often doesn't happen. Typically, at FDA, we get a request from a sponsor to use prior studies within a hierarchical model where the current study and, of course, the prior studies are already completed and the results are known. So, sometimes we have to make the decision of exchangeability using some type of imagination.
What's important to emphasize is that when we do decide upon exchangeability of prior and current studies, it's really a clinical decision. It's not a statistical decision or a mathematical decision. So what happens is that CDRH clinicians and engineers will compare the previous studies with the proposed study for similarity in relevant factors, including those listed here, like the device used, the protocols in the studies, patient populations across studies, inclusion/exclusion criteria, patient management, proximity of the studies in time, et cetera.
So, you might be wondering -- you know, we're talking about two different patient populations, so are studies done on adults and studies done strictly in pediatrics exchangeable? For all sorts of reasons, you might immediately answer that no. Enrollment might differ between adult and pediatric studies. Enrollment in pediatric studies might be more through hospitals, whereas adults often volunteer themselves. Informed consent, of course, differs between adult and pediatric studies and treatment or handling of the patients in the trial might differ between adult and pediatric studies. With these dissimilarities, how can you still borrow from adult studies?
Well, I propose a three-level hierarchical model, which does allow us to borrow between adult studies and pediatric studies, but it does so sort of indirectly. So if you look at this tree, I have two subpopulations: adults and pediatrics. In the model, I'm going to assume the adult studies are exchangeable and the pediatric studies are exchangeable. I do have a branch called future study which I'll get into in a minute.
The adult studies and the pediatric studies are not directly exchangeable, but I make them exchangeable through the patient population. So, the populations themselves are exchangeable. And what this might translate to is this: suppose I had a treatment effect from a device. Just knowing the result, I couldn't tell whether that effect came from an adult population or from a pediatric population.
Now, I understand that this might not apply to all devices and it might not apply to all studies, but to the extent that it can, we can make use of it. What's nice about this model is that the borrowing will adjust itself based on what the data show. So if the data show a lot of dissimilarity, there's not going to be a lot of borrowing.
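To give a feel for that self-adjusting behavior, here is a small sketch, not the actual model from the talk: in a normal hierarchical model, each study's estimate is shrunk toward the overall mean with a weight that depends on the between-study variance, and estimating that variance from the data (here by a crude method of moments, a simplification of the full Bayesian machinery) makes the borrowing weaken automatically when studies disagree. All numbers below are made up.

```python
import statistics

# Sketch of "self-adjusting" borrowing: the weight a study's estimate
# places on the pooled mean is sigma^2 / (sigma^2 + tau^2), where
# tau^2 is the between-study variance. When studies disagree, the
# estimated tau^2 grows and the borrowing weight shrinks.
def borrowing_weight(study_means, sigma2):
    # Method-of-moments estimate of between-study variance (floored at 0).
    tau2 = max(statistics.pvariance(study_means) - sigma2, 0.0)
    return sigma2 / (sigma2 + tau2)  # weight given to the pooled mean

similar = [10.0, 11.0, 10.5]      # studies largely agree
dissimilar = [10.0, 25.0, 40.0]   # studies disagree
sigma2 = 4.0                      # assumed within-study variance

print(borrowing_weight(similar, sigma2))     # 1.0: heavy borrowing
print(borrowing_weight(dissimilar, sigma2))  # near 0: little borrowing
```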
Then I have a branch for a future study, and this is where the predicted distribution will come into play and I hope to illustrate that in an example shortly; actually, next.
So here, I wanted to examine the hierarchical model. I simulated data for this. This is completely hypothetical. So I made up a device called QuickFix Device for Pain. Our research question is what is the device effect in the pediatric population. Our primary endpoint is percent of patients without pain.
Suppose we have two prior randomized controlled trials in adults. In the first study, we had 125 patients per arm; in the second study, 75 per arm. So those are fairly large randomized controlled trials, but then we have one small randomized controlled trial in pediatrics with only 10 subjects per arm. The study hypothesis is that the device is superior to a control in reducing pain in the pediatric population. Of course, with just the pediatric study by itself, the sample size is too small to make a precise conclusion, so we want to try to borrow some information from the adult studies.
There are two questions of interest. First, can we make an inference about the device effect in pediatrics using the small study, but also borrowing from the adult studies? Second, can we make an inference about the device effect in a new pediatric study exchangeable with the previous pediatric study also by borrowing from adults?
So here are the observed differences in percentages, device minus control, across the three studies: Adult Study 1, Adult Study 2, and the Pediatric Study. A 16.6% difference between device and control in the pediatric study may seem like a difference of quite high magnitude, but if you look at the standard error, it's 30%, so it's almost twice as big as the treatment effect. So if you were to run your run-of-the-mill statistical test, like a t-test, you wouldn't be able to reject the null hypothesis. The borrowing from the adult studies, as you'll see, will end up reducing that variability so that we can make a more precise conclusion.
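That standalone-test point can be sketched in a couple of lines, using the hypothetical numbers from the simulated example:

```python
# Hypothetical numbers from the simulated QuickFix example:
# pediatric study alone, effect = 16.6 percentage points, SE = 30.
effect = 16.6   # device minus control, percentage points
se = 30.0       # standard error of that difference

z = effect / se          # test statistic for H0: no difference
print(round(z, 2))       # about 0.55, far below the roughly 1.96
                         # needed to reject at the two-sided 5% level
```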
First, I wanted to show what happens if we don't borrow from the adult studies. You can imagine, with only 20 subjects in the pediatric study, we wouldn't get a very precise conclusion. In fact, that's what happens. Here's our posterior distribution -- so there's the word posterior again -- for the percent difference, which was our primary endpoint in Study 3. So the height of the graph kind of conveys the relative probability of the result.
So we see there's a relatively high probability for 17% effectiveness, but then there seems to be a bit of variability because that difference could go -- there seems to be some relative probability for an effect as low as 0% and for something as high as maybe 40%. So we still see some variability.
For a predicted distribution, we see quite a bit of variability where it's almost meaningless. If we were to predict a percent difference in a new population, well, anything from -100%, which would be a complete control effect, to +100%, which would be a complete device effect, you know, it's basically like not running a study at all.
Now, let's compare, first, numbers in a table. When we do borrow from adult studies, what do we get? Well, we reduce the variability considerably and one measure of how much we are borrowing or how much variability we reduce is what's called the effective sample size. It's what I have at the bottom of the slide.
Our effective sample size in Study 3, after borrowing, becomes 180. We started with 20 subjects. We're effectively borrowing 160, call them typical subjects, from the adult studies, so that when you increase the sample size, you reduce variability. So, out of the total number we could have borrowed, which was 400 (250 plus 150), we borrowed about 40% of the possible amount.
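The effective sample size bookkeeping in that paragraph is just arithmetic:

```python
# Effective sample size bookkeeping from the talk's simulated example.
n_pediatric = 20               # 10 per arm in the pediatric RCT
ess_after_borrowing = 180      # effective sample size after borrowing
n_adult_total = 250 + 150      # two adult RCTs: 125/arm and 75/arm

borrowed = ess_after_borrowing - n_pediatric
fraction = borrowed / n_adult_total
print(borrowed, fraction)      # 160 "typical subjects", 40% of the pool
```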
If you look at the numbers in red, our standard deviation went from 9% to 3%. Now we probably could reject a null hypothesis. Graphically, you can see this even more clearly. On the left side, I have the posterior distributions for the percent difference, device minus control: no borrowing in black and with borrowing in red. You can see -- I showed you the black curve before. It was kind of wide, running from maybe 0 to 40, not a lot of certainty. The red reduces considerably. So now we can get a nice credible interval, or confidence interval, if you will, on our device effect. Precision increases in the predicted distribution as well, comparing the black curve, which was meaningless, giving about the same probability to anything you could possibly get, to something a little bit narrower.
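One way to see why borrowing shrinks the standard deviation is precision (inverse-variance) weighting. This is a sketch of the idea, not the actual hierarchical computation, and the standard deviation of the borrowed information is an assumed number chosen for illustration:

```python
import math

# Sketch: combining two independent sources of information by
# precision (inverse-variance) weighting. Numbers are hypothetical.
sd_pediatric = 9.0     # posterior SD with no borrowing (percentage points)
sd_borrowed = 3.2      # SD of the information effectively borrowed (assumed)

precision = 1 / sd_pediatric**2 + 1 / sd_borrowed**2
sd_combined = math.sqrt(1 / precision)
print(round(sd_combined, 1))   # about 3.0, in the ballpark of the example
```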
To sum up the example, with borrowing, when it's appropriate, we obtain not only a more precise treatment effect estimate for the current pediatric study, which is usually a small study, but also, more precise prediction for a new pediatric study.
I want to briefly discuss the second challenge, which is the lack of a suitable control group and discuss how we might deal with it using potential study designs, for one, and also kind of another idea of an application of the Bayesian hierarchical model.
Although, as was discussed in the last moderated session, we would like alternatives to the randomized controlled trial, I do want to reiterate that it is the gold standard for all the reasons given here. It effectively minimizes bias if the trial is masked. Probably most importantly, it balances demographics and baseline characteristics, as well as unknown patient idiosyncrasies, across arms, which you don't get with a non-randomized trial, and it attains an unbiased estimate of the treatment difference. Whether Bayesian or not, the randomized trial is the gold standard.
However, I understand, as well as the rest of the FDA, that there can be practical challenges to doing a randomized controlled trial in pediatric device trials. One special reason is that there may not be an approved active control for a pediatric population, though it might be approved for adults, and just that fact might make it impossible to conduct a two-arm comparison because a sham or surgery might not be ethical.
In this slide, I've given some potential alternative designs, and these may be ones you might want to discuss in the breakout groups. Of course, there's just enrolling the pediatric subjects in an investigational device arm and then getting an estimate for the treatment mean. We could also enroll a randomized controlled trial for adults, get the treatment effect estimate in adults, and then infer that it might be the same in pediatrics.
Second, there might be a suitable historical control group. It's possible that even though some of the sham options and surgery options are not ethical now, there might be some historical control data available for those options that can serve as comparators. Historical control groups imply non-randomized designs, and so there has to be some kind of adjustment to equate the two groups so that you can do a comparison across groups.
Finally, the last option goes along with what I've been talking about before. It involves using a Bayesian hierarchical model and, in particular, the predictive distribution. So, in this case, instead of predicting an effect in a new trial, you would be predicting the control group response in the pediatric population. So there might be some avenues for using that type of modeling. Of course, when you are predicting, it means you have more uncertainty. So the drawback of option number 3 is that you wouldn't get as tight a confidence interval, and so it may be a little bit harder to reject the null hypothesis.
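One way to see why prediction carries more uncertainty than estimation: the predictive variance for a new study adds the between-study variability on top of the posterior uncertainty about the mean effect. A sketch with made-up numbers:

```python
import math

# Sketch of why a predicted control response is less precise than an
# estimated mean: the predictive SD for a NEW study adds between-study
# variability (tau) on top of posterior uncertainty about the mean.
sd_posterior_mean = 3.0   # posterior SD of the mean effect (assumed)
tau = 6.0                 # between-study SD (assumed)

sd_predictive = math.sqrt(sd_posterior_mean**2 + tau**2)
print(round(sd_predictive, 2))   # 6.71 > 3.0: wider interval,
                                 # harder to reject the null
```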
Okay. Finally, I would like to summarize what I've talked about. Small trials with pediatric subjects do not have to impede making precise inferences about device effectiveness. If adult studies are available that can be considered relevant, pediatric studies might be able to borrow information from them using Bayesian hierarchical models. Although the randomized controlled trial is the preferred study design for device trials, if a control arm cannot be used for pediatrics, but is available for adults, there may be study design options available.
Should I read the discussion question?
DR. CUMMINS: Sure.
DR. THOMPSON: Okay. Our discussion question for the breakout groups is: How might we overcome identified challenges and limitations by these and other approaches? I guess the statistical and the computational are the approaches that were discussed today.
DR. CUMMINS: But there could be others. Another approach could be to do even a small trial to fill in some of the gaps that exist. So if you could go into your sessions, and it will be for an hour. Then if we'll take a break -- maybe we should just go into sessions and we'll take a brief break after the breakout and then come back and report back and wrap up.
(Off the record.)
(On the record.)
DR. CUMMINS: Joan, do you think you can start? Okay. Great. So if people could take their seats, and then we'll wrap up and I'll tell you what's going to happen next, and then we can all go home early.
MS. TODD: Okay. Hi. I'm Joan Todd from CDRH. We had six in our group: one from industry, one from academia, and four FDA folks. One category was ethical concerns, which included possibly having a global consent form; also, discussion with parents who have had experience with multiple device failures, because they might be more willing to look at a high-risk device, especially when multiple devices have failed, so it might be easier to get consent when you bring them into the discussion; and less stringent institutional requirements.
Another category was alternatives to the randomized controlled trial: having smaller subsets of studies; use of historical controls and treatment arms for rare diseases; having smaller clinical trials and shorter studies depending on the device; defining missing information in the current data pool; looking at published data on off-label use; and presenting more of the post-market concerns with devices.
Then we had another category called creation of a pediatric review team. This would incorporate multidisciplinary teams of clinicians for input: basically, tap the clinicians that are using these high-risk devices and have them give input to our advisory team. Have a separate peds section to obtain clinicians' expertise and also to increase the guidance that we don't have, increase our guidance on the uses of devices in pediatric patients. And collaborate among industry, academia, and clinicians to create this pediatric review team for all devices. Thank you.
DR. RITCHEY: Hi. Mary Beth Ritchey. We had three different topics that we came up with in our group. The first was changing regulatory framework. Within this, we talked quite a bit about the need for a culture change to change the mindset of various stakeholders within this community. Then we also spoke quite a bit about the need to develop regulatory incentives and to limit the liability profile within this topic.
The second was more about collection and infrastructure of data. We talked about the use of registries, as well as what to do about off-label data for this.
Then the third was design alternatives, and here, we talked about different options for RCTs, including small trial designs and adaptive trial designs. We also talked about use of Bayesian techniques and looking for alternatives from other fields, and NASA came up with their small trials there.
MS. SCHAREN: Hilda Scharen, CDRH. We came up with four categories, the first one being that the trials need to be less burdensome, different options and incentives for data collection. The other category was the benefit of using historical controls when control trials are not possible for pragmatic reasons.
The next category -- actually, I'm sorry, five categories. The three last ones are the challenges and the benefits of the Bayesian approach, so we discussed that. The other one is data appropriateness, so that was kind of one of our other big categories. Is it appropriate to borrow data when it is available for adults to extrapolate to peds?
Our last category was trying to come up with other approaches as sources of data. One example we came up with was simulated clinical computational model and if that could be used as well.
MS. SIMMONS: Hi. My name is Janesia Simmons, also from CDRH. Our group came up with two categories. The first one is broadening the acceptable sources of data. So, we considered age barriers and definitions, infrastructure database to collect data on off-label use.
Then the second category was reducing the burden on companies. So with that, we talked about using patients as their own control so we have data for patients before they had, maybe, like an implantable device. Why can't we use that data? Then also, reconsider the hurdles to multicenter trials and then smaller studies.
MS. ADAM: I'm Laura Adam, and our group came up with three main topic areas. One thing we talked about was more collaboration and more sharing of resources, possibly with hospitals or end users with industry, outside organizations, and trade groups. We talked about the lack of funding and lack of resources, and maybe the need to think of new ways to be creative and new ways to leverage each other, with a little more sharing toward a common goal.
We also talked about ways to get missing data. So, use more modeling for effectiveness; people liked the in silico modeling described in the session. We talked about possibly developing virtual clinical trials. Also, there was an idea to develop more tissue banks: when a surgeon is implanting a device, take a sample of the tissue, sort of an initial sample, to be able to say whether it degrades or what the effect is over time. Then a public access database where there's just more sharing, where people could see the data or enter data.
Then the last area was new ways to analyze or reanalyze existing data, and that was the idea to look at data in a fresh way. So, talking about there's a lot of maybe combination products between CBER and CDER, so drugs or biologics that are also used with a device that might be a good data source.
Let's see. Again, off-label data maybe where they're already being used off-label for pediatrics, and see what's going on with that. Also, there was a little talk about different stages of the disease when an intervention happens, if the disease was just diagnosed or if it's had time to progress, and look at that data, kind of get an idea. So those were the three main areas.
DR. CUMMINS: Great.
MS. ADAM: Thanks.
DR. LEBINGER: I'm Tessa Lebinger. I'm from CDRH also. One of our big topics was planning ahead prospectively for pediatric use of devices. One of the challenges was that there's a lack of pediatric norms for many things, and if you want to study a device in an ill population you have to know what certain normative values are, and there was a suggestion to have various pediatric organizations collaborate to establish pediatric norms.
There was also a suggestion to develop partnerships with appropriate academic and advocate groups, and to encourage communications to develop earlier stage in the device to plan to collect data in adults in a manner that would make it easier to use the data to aid in pediatric approval in the future. There was a suggestion to use data from other countries and I believe that was it for the planning ahead. Excuse me.
Then we had a lot of statistical issues. There was a suggestion for improved communication and understanding about the requirements for Bayesian trials. There was also the suggestion that we need to translate these documents that we produce into practical terms. There was a request for greater availability of statisticians. There was a feeling that one challenge was there's lack of definitions of what is needed for exchangeability. There was a suggestion to narrow and refine definitions of effectiveness. Again, we also had the suggestion that patients we approach should be allowed to be used as their own control and a request for more pediatricians at CDRH.
We had post-marketing issues: tracking off-label use of devices, and present off-label use, without threatening the manufacturers. Because the manufacturers aren't allowed to promote off-label use, they're sometimes afraid to collect data on it, so how do they acknowledge and collect data on post-market use?
There was also a suggestion to collect post-market data from payers like insurance payers of off-label use -- apparently this is going on a lot in the drug world -- and to track off-label use so this can be used in program and study development. We talked about barriers to conducting studies, that there's a challenge in getting institutions to commit to conduct studies and, as has been previously mentioned, the ability to get informed consent from patients is often difficult. If you want a blood test, as was mentioned, parents are often so upset about what's going on with their children, they don't want to give an extra cc of blood even though the risk is probably negligible.
We need to have ways to overcome the barriers for marginal profits in companies when they have pediatric devices. Kind of related to that, we went into legislative issues for financial incentives or legislative incentives for companies to develop pediatric devices. We suggested that we need more regulations to improve the incentive to develop various programs and studies.
It was also suggested that a scale be developed to rate how close various devices may be to adults, often based on age, and that depending on that scale -- you know, if it's very easy to extrapolate from adults to teenagers, but difficult to extrapolate from adults to infants based on that scale of how difficult it is to extrapolate because of how close or far away it is, that incentives could be based on this scale and study requirements could be based on this scale.
DR. OSBORNE: Thank you. Steve Osborne, CDRH. We had four categories, and I'll give just two from each. First is study design and control. Echoing what was mentioned earlier, we should consider appropriate historical controls and the value that they could add. Also, consider something called a cluster design control. There may be other names for this, but you may have one large institution that's willing to use a particular device in a certain way and another institution that's not willing to do that, because they see a different path for a device for that indication or something similar, and you may be able to use one of those institutions as a control for the other.
Another was pre-study concepts, and that we should have an early identification of challenges whenever possible because the challenges, when you look at them in retrospect, can sometimes be insurmountable. There may be other FDA input on study design where FDA is perhaps a little more specific on the types of designs that look to be acceptable and will be accepted. Then there is data collection, and this is echoing something that was just mentioned about off-label use of a medical device, and that is to consider the value of off-label data that's positive, the positive outcome.
We may also consider the value of off-label data where the outcome was not a positive outcome. There's a challenge with that in that sometimes negative data's not published or available, but that if it were, we should look at both of those for off-label use.
Then, to look at validating this imputed control concept that was mentioned in one of the lectures, where you project what a control group might look like based on statistical means. Somewhere in extrapolating data, we're going to have to take a chance in doing that extrapolation where we don't know the outcome. This may be an example where we could do a study based on one of these imputed controls and see if that model is validated by the outcome of the study.
Then, as was also mentioned, financial incentives to include an exclusivity period for a firm that's willing to take a monetary chance on doing some form of study to extrapolate effectiveness of the device from adults to the pediatric group. That's it.
DR. CUMMINS: Thank you all. So the next steps are for us to take all this input, summarize it in a workshop summary, and my goal is to have that up and posted within 2 weeks.
I want to thank you all very, very much for coming and for participating. We really appreciate the time that you've given us today and your ideas and energy. I also want to remind you that there will also be a meeting transcript posted. Of course, the meeting transcript is pretty limited because so much of the discussion went outside of the ears of the microphones for the transcript. Especially once the meeting summary is posted, if you have thoughts or even before then that you want to share with us, we really want to hear from you. Please feel free to submit early and submit often to our docket, and the website address for the docket is right here. Thank you so much for being here today. Have a great afternoon.
C E R T I F I C A T E
This is to certify that the attached proceedings in the matter of:
USING SCIENTIFIC RESEARCH DATA TO
SUPPORT PEDIATRIC MEDICAL DEVICE CLAIMS
December 5, 2011
Silver Spring, Maryland
were held as herein appears, and that this is the original transcription thereof for the files of the Food and Drug Administration, Center for Devices and Radiological Health, Medical Devices Advisory Committee.
TIMOTHY J. ATKINSON, JR.