Transcript for Public Workshop - Design and Methodology for Postmarket Surveillance Studies under Section 522 of the Federal Food, Drug and Cosmetic Act, March 7, 2012

UNITED STATES OF AMERICA

DEPARTMENT OF HEALTH AND HUMAN SERVICES

FOOD AND DRUG ADMINISTRATION

+ + +

CENTER FOR DEVICES AND RADIOLOGICAL HEALTH

+ + +

DESIGN AND METHODOLOGY FOR POSTMARKET SURVEILLANCE STUDIES UNDER SECTION 522 OF THE FEDERAL FOOD, DRUG AND COSMETIC ACT

+ + +

March 7, 2012

8:00 a.m.

FDA White Oak Conference Center

10903 New Hampshire Avenue

Silver Spring, MD 20993

CDRH: WILLIAM MAISEL, M.D., M.P.H.
Deputy Center Director for Science, FDA

MODERATOR: MARY BETH RITCHEY, Ph.D.
Associate Director, Postmarket Surveillance Studies
Division of Epidemiology
Office of Surveillance and Biometrics, CDRH, FDA

TOTAL PRODUCT LIFE CYCLE APPROACH AND ITS APPLICATION TO POSTMARKET SURVEILLANCE STUDIES

ANITA M. RAYNER, M.P.H., Associate Director for Policy and Communication, Office of Surveillance and Biometrics, CDRH, FDA

PHILIP DESJARDINS, J.D., Associate Director for Policy, CDRH, FDA

MARY BETH RITCHEY, Ph.D., Associate Director, Postmarket Surveillance Studies, Division of Epidemiology, Office of Surveillance and Biometrics, CDRH, FDA

DANICA MARINAC-DABIC, M.D., Ph.D., Director, Division of Epidemiology, Office of Surveillance and Biometrics, CDRH, FDA

CHALLENGES AND OPPORTUNITIES FOR COLLABORATIVE EFFORTS

DIANE MITCHELL, M.D., Assistant Director for Science, CDRH, FDA

SERGIO M. DE DEL CASTILLO, B.S., Office of Device Evaluation, CDRH, FDA

NILSA LOYO-BERRIOS, Ph.D., M.Sc., Associate Director, Post-Approval Studies, Division of Epidemiology, Office of Surveillance and Biometrics, CDRH, FDA

MICHAEL STEINBUCH, Ph.D., Executive Director, Epidemiology, Safety and Surveillance Center of Excellence, Johnson & Johnson MD&D

LAURA MAURI, M.D., M.Sc., Brigham and Women's Hospital; Harvard Clinical Research Institute; Associate Professor of Medicine, Harvard Medical School

E. ANTHONY RANKIN, M.D., American Joint Replacement Registry

ROLE OF NETWORKS, REGISTRIES, AND OBSERVATIONAL STUDIES

MEGAN GATSKI, Ph.D., Division of Epidemiology, CDRH, FDA

JAMES FRICTON, M.D., Professor, University of Minnesota; Senior Research Associate, HealthPartners Research Foundation

DAVID R. HOLMES, M.D., Mayo Clinic, Rochester, MN

KEITH TUCKER, M.B.B.S., FRCS, National Joint Registry

TED LYSTIG, Ph.D., Lead Corporate Biostatistician, Medtronic Clinical Research Institute

CARA KRULEWITCH, Ph.D., CNM, FACNM, Branch Chief, Division of Epidemiology, CDRH, FDA

METHODOLOGIES AND SCIENTIFIC INFRASTRUCTURE TO PROMOTE INNOVATION

TED LYSTIG, Ph.D., Lead Corporate Biostatistician, Medtronic Clinical Research Institute

ART SEDRAKYAN, M.D., Ph.D., Weill Cornell Medical College, Cornell University

NATASHA CHIH-YING CHEN, Ph.D., Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School

SOKO SETOGUCHI, M.D., Dr.P.H., FISPE, Duke Clinical Research Institute, Duke University School of Medicine

SHARON-LISE T. NORMAND, Ph.D., Harvard Medical School and Harvard School of Public Health

MARY BETH RITCHEY, Ph.D., Associate Director, Postmarket Surveillance Studies, Division of Epidemiology, Office of Surveillance and Biometrics, CDRH, FDA

ALSO PARTICIPATING:

GREG MAISLIN, M.S., M.A., Biomedical Statistical Consulting

JANELL COLLEY, Coloplast

AMY PETERSON, American Medical Systems

UCHENNA ONYEACHOM, American Venous Forum

SCOTT BROWN, Covidien Peripheral Vascular

KATHLEEN BLAKE, M.D., M.P.H., Center for Medical Technology Policy

JOSH RISING, Pew Charitable Trusts

JEFF SECUNDA, M.S.B.M.E., M.B.A., AdvaMed

JOE KENNEDY, Remedy Informatics

HESHA DUGGIRALA, Ph.D., CDRH, FDA

JENNIFER CONTE, American Gastroenterological Association

MICHELLE WELLS, Gore & Associates

MICHELE BONHOMME, Ph.D., Division of Epidemiology, CDRH, FDA

DAN DILLON, MED Institute

BENJAMIN C. ELOFF, Ph.D., CDRH, FDA


INDEX

CALL TO ORDER AND INTRODUCTIONS - Mary Beth Ritchey, Ph.D.

WELCOME REMARKS - William Maisel, M.D., M.P.H.

GOALS FOR THE DAY - Mary Beth Ritchey, Ph.D.

TOTAL PRODUCT LIFE CYCLE APPROACH AND ITS APPLICATION TO POSTMARKET SURVEILLANCE STUDIES - Moderator: Anita M. Rayner, M.P.H.

Postmarket Surveillance - Section 522 of the FD&C Act - Philip Desjardins, J.D.

Process for Implementing 522 Regulations - Mary Beth Ritchey, Ph.D.

Evaluating the Need for Additional Data Using the TPLC Approach - Danica Marinac-Dabic, M.D., Ph.D.

Q & A for Regulations and Processes

CHALLENGES AND OPPORTUNITIES FOR COLLABORATIVE EFFORTS - Moderator: Diane Mitchell, M.D.

Premarket FDA Perspective - Potential 522 Questions that Arise - Challenges and Opportunities to Collaborate when Additional Data are Needed - Sergio M. de del Castillo, B.S.

Postmarket FDA Perspective - Potential 522 Questions that Arise - Challenges and Opportunities to Collaborate when Additional Data are Needed - Nilsa Loyo-Berrios, Ph.D., M.Sc.

Industry Perspective - Potential 522 Questions that Arise - Challenges and Opportunities for Collaborative Efforts - Michael Steinbuch, Ph.D.

Academic/Researcher Perspective - Challenges and Opportunities to Collaborate when Additional Data are Needed - Laura Mauri, M.D.

Society Perspective - Challenges and Opportunities to Collaborate when Additional Data are Needed - E. Anthony Rankin, M.D.

Panel Discussion

ROLE OF NETWORKS, REGISTRIES, AND OBSERVATIONAL STUDIES - Moderator: Megan Gatski, Ph.D.

Patient-Centered Postmarket Surveillance: Experiences with NIDCR's TMJ Implant Registry and Repository - James Fricton, M.D.

ACC/STS Registries - David R. Holmes, M.D.

NJR Registry Collaboration with Regulatory Agencies - Keith Tucker, M.B.B.S., FRCS

Medtronic PAN - Ted Lystig, Ph.D.

Recommendations from FDA - Leveraging Data Sources - Cara Krulewitch, Ph.D., CNM, FACNM

Panel Discussion

METHODOLOGIES AND SCIENTIFIC INFRASTRUCTURE TO PROMOTE INNOVATION - Moderator: Ted Lystig, Ph.D.

Overview of ICOR and IDEAL - Art Sedrakyan, M.D., Ph.D.

Linking Data Sources for Postmarket Surveillance Studies - Natasha Chih-Ying Chen, Ph.D.

Leveraging Various Data Sources for Postmarket Medical Device Studies - Soko Setoguchi, M.D., Dr.P.H., FISPE

Evidence Synthesis - Sharon-Lise T. Normand, Ph.D.

Recommendations from FDA - Design and Methodology of 522 Studies - Mary Beth Ritchey, Ph.D.

Panel Discussion

MOVING FORWARD WITH 522, NEXT STEPS AND VISION

Next Steps and Vision - Danica Marinac-Dabic, M.D., Ph.D.

ADJOURNMENT

M E E T I N G

(8:00 a.m.)

DR. RITCHEY: I hear there are a couple of wrecks on 29 and 495, and so we're waiting on a few more people to get here. But I think, due to the packed agenda, we should go ahead and get started.

Many of you have been here before. For those who have not, the bathrooms are out in the lounge, they're right here. The food for breaks, coffee, that type of thing, there's a little kiosk here.

And then the package that you received, I wanted to walk through that just a bit as well. It's got a lot of information in it, including all of the slides. Behind the agenda are the slides for the morning, behind the bios are the slides for the afternoon, and then the draft guidance for 522 is in the back, on the left as well. The two slide inserts are for afternoon sessions, so we can slide those in on the left.

We are very glad you're here today. We are excited about the program that we have, and we're really willing to have an open discussion about the 522 program.

To begin our day, I'd like to introduce the Deputy Center Director for Science here at CDRH, Dr. Bill Maisel.

DR. MAISEL: Good morning. And I'd like to add my welcome to Mary Beth's.

We're here to talk about the 522 program. It's really hard to separate out the issues of the 522 program from the issues we face in postmarket surveillance in general. And so I think a lot of the dialogue and conversations that happen today will really help us and be informative for how we look at postmarket surveillance overall.

We view postmarket surveillance as extremely important, for obvious reasons, but it's also important from a policy perspective for the Center. And we've identified providing a comprehensive view of postmarket surveillance as one of our 2012 strategic priorities and have committed to producing that vision by the end of April of this year, and we'll be publicly putting that vision out and welcoming input and comments on it over the ensuing months.

We certainly don't view the development of a comprehensive postmarket plan as our job alone, and that's part of the reason you're here today and will be engaging with other stakeholders. We recognize the important impact on patients and providers and industry and academic institutions and certainly look forward to continuing that dialogue.

As we think about developing a stronger postmarket infrastructure or leveraging the infrastructure we already have, there are a few key goals, I would say, of the postmarket program, and one certainly is to identify underperforming products quickly, and that's always been one of the targets and goals of our postmarket program.

Another would be to, in selective cases, be able to evaluate the "real-world" performance of selected devices. Certainly a lot of our premarket decisions are based on clinical trials. Sometimes we have mandated post-approval studies, and those are often targeted towards trying to figure out how those devices perform in the real world after we move them from the most expert clinicians in the world out into the community.

And then a third pillar of our postmarket surveillance system, in our view, is that it should be able to produce data that can be leveraged to help support new indications and feed back onto the premarket side so that devices can be iterated and developed. The data that's produced in the postmarket world can be used on the premarket side, either to help bring a new product to market or to help bring a new indication for an existing product to market. And so we should keep those purposes of postmarket surveillance in mind as we're thinking about some of these programs today.

The other factor is that it's impossible to think about postmarket surveillance without thinking about the premarket side. And while you'll often hear FDA say, you know, the data on the premarket side has to stand alone and we need to be able to show reasonable assurance of safety and effectiveness before the product can get to market, it's really impossible not to think about what happens after the device is on the market. And a robust postmarket surveillance program can certainly give comfort to the premarket decisions and help us identify what sorts of data we need on the premarket side and what things can be collected on the postmarket side.

And we've undertaken a number of actions over the last 18 months or so, many of which do relate primarily to the premarket side, but a number of them do have implications for the postmarket side. And let me just highlight a couple of them.

Last summer we released a clinical trials guidance, which primarily speaks to premarket evaluation of products and primarily for higher-risk products. But many of the issues and the lessons and ideas in that guidance document certainly apply to clinical trials that are conducted at any time in a product's life cycle, including on the postmarket side.

And so while the guidance doesn't specifically speak to 522 studies or postmarket studies, the concepts do, such as the least burdensome principle that not every trial needs to be a randomized, blinded, placebo-controlled trial. There are many existing databases. There are many observational studies which can provide very important and good information which we can use to answer the fundamental questions we might have about a product's safety or effectiveness.

The other guidance document we released last summer was a benefit/risk determination guidance document. And, again, while that speaks primarily to premarket decisions, the concepts really apply more broadly, and that is, we are trying to articulate the issues that we think a reviewer should be looking at and what we should be looking at when we're trying to make decisions about what products are appropriate for marketing.

And it's not just based on a clinical trial and the data sitting in front of us. It also is important for us to consider what alternative therapies are available for that patient. How do patients feel about accepting certain risks that might be associated with certain therapies or certain devices?

So we try to look more broadly, from a patient's perspective, at what therapies they might be willing to accept and what alternatives are available to them.

We have a number of ongoing efforts in the postmarket world, and I don't have time to detail them all, but I thought I'd give you a little bit of a taste. So many of you, I'm sure, or most of you know about the unique device identifier program that's mandated by Congress, under which we develop a unique device identifier for medical devices. And, quite frankly, not having unique device identifiers in existing databases, electronic health records, and registries really limits our ability. It makes it very hard to hone in on the individual devices that might be underperforming. It makes it hard to assess claims and administrative databases.

So we do believe that that will really advance postmarket surveillance. Right now, that's under administrative review and we're hopeful that we will be able to share that proposal publicly this year.

But there are a number of other things we're doing. For example, we get inundated with adverse event reports, passive adverse event reports, where patients or providers report to manufacturers or manufacturers become aware of adverse event reports. We get hundreds of thousands of these reports each year and, quite frankly, it gets very difficult for us to review every single one of those reports (a) because there are so many and (b) because a lot of the reports don't contain "useful information." Sometimes we don't even know what device was involved or it might be a very cryptic report, and we can't really tell whether the device caused the adverse event. We might not know enough information about the patient or the procedure.

So we are developing data mining techniques where we'd have automated review of adverse event reports to try to screen out some of the adverse event reports that maybe aren't as useful, so we can focus in on the ones that have more data and better data. Data mining can also be used to try to identify trends or changes in adverse event reporting and numbers.
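
For a sense of how such automated screening might work, here is a minimal sketch in Python of one widely used disproportionality measure, the proportional reporting ratio (PRR). The counts, threshold, and flagging rule below are all hypothetical, and nothing here should be read as FDA's actual method.

    # Minimal sketch of a proportional reporting ratio (PRR) screen, a common
    # disproportionality method for adverse event report mining. All counts
    # and thresholds below are illustrative, not FDA's actual criteria.

    def proportional_reporting_ratio(a, b, c, d):
        # a: reports of the event of interest for the device of interest
        # b: reports of all other events for the device of interest
        # c: reports of the event of interest for all other devices
        # d: reports of all other events for all other devices
        return (a / (a + b)) / (c / (c + d))

    event_reports = 30        # hypothetical: event of interest, this device
    other_reports = 270       # hypothetical: other events, this device
    comparator_event = 500    # hypothetical: event of interest, all other devices
    comparator_other = 49500  # hypothetical: other events, all other devices

    prr = proportional_reporting_ratio(event_reports, other_reports,
                                       comparator_event, comparator_other)

    # A common (illustrative) flagging rule: PRR above 2 with a minimum
    # number of reports, so very sparse signals are not over-interpreted.
    if prr > 2.0 and event_reports >= 3:
        print(f"PRR = {prr:.1f}: flag this device/event pair for review")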

We're looking at ways to increase and strengthen adverse event reporting. For example, we're developing a mobile app so that patients will be able to directly report adverse events that would go into our standard database. We're looking at ways for automated adverse event reporting out of existing electronic health records so that certain things might trigger an automatic report to make adverse event reporting easier for the clinical and patient communities.

So, you know, the bottom line is there's a lot of work to do in postmarket surveillance. Nevertheless, we believe we have the strongest postmarket surveillance system in the world.

That being said, we're not an island, and it really has become increasingly a global community, and so we've been reaching out and trying to collaborate with other regulators and other data sources throughout the world. A great example is ICOR, the International Consortium of Orthopedic Registries. We've helped coordinate and collaborate with registries around the world and, quite frankly, I think that is the future. There's no reason that data on a product that's used in the United States, when it is implanted in someone in another country, isn't relevant to our patients and to our community. And so that also potentially can lessen the burden on data collection or additional data collection.

So we're interested in hearing about your ideas and how we can leverage existing data, how we can link data from one source to another, whether it's from a registry to administrative or claims databases. I think there's a lot of opportunities for us to develop additional methodologies.

So, again, I'd just like to add my welcome. I think it'll be a great workshop, a lot of important things on the agenda, but focusing from our standpoint a lot on the collaboration and the opportunities to work together developing new methodologies and leveraging existing data in ways that maybe we haven't done before.

So thank you.

(Applause.)

DR. RITCHEY: Thank you, Dr. Maisel.

As we move through the agenda today, we're going to spend the first session talking about the total product life cycle and the 522 process. Then we'll move to talk about challenges and opportunities from different collaborator perspectives. And then in the afternoon we'll talk about the role of networks, registries, and observational studies, sort of the infrastructure that's available to us. And then, finally, methodologies and the scientific infrastructure to promote innovation.

As we move through the day, I'd like to really encourage everyone to come to ask questions, to be a part of the conversation. And then after today, the docket for this meeting will be open for 30 days. And so any additional information that you have that you'd like to share, we welcome that as well.

In addition to the people in the room, I'd like to let you know that there are about 150 people who are watching us via webcast today. And so for those people in particular, we welcome information into the docket.

And our goals today are really to say where we are, to talk about the regs, to get some information and some discussion around that, but then to really say where we're going, to say what it is that FDA would like to see this program become and to get feedback on that, to really find collaborators and not just talk about it, but to really start moving forward on that path.

So with that in mind, let's begin the first session. I'd like to introduce Ms. Anita Rayner, who is the Associate Director for Policy and Communication in the Office of Surveillance and Biometrics, and she'll be our moderator for this session.

Would all the people who are speaking in this session please come to the front?

MS. RAYNER: Good morning, everybody. Can you hear me in the back? Is it okay? We're okay here?

Good morning. I'm delighted to be here at this 522 workshop. I think it's extremely exciting, not least because I was actually one of the progenitors of the 522 program back in -- well, I'll tell you, it's been over 20 years. So it's very exciting personally for me to see how far this program has come. And I'm also honored to be asked to moderate this session, so I'm going to jump right into it with introductions.

As Mary Beth said, this session will lay the foundation for the discussions for the rest of the day. We're going to be talking about the total product life cycle, TPLC, and the 522 process. And we'll go through all of our speakers, and then we'll be opening our mikes to your questions and comments, as well as hearing from our panelists, to elaborate on some of the issues that they bring up.

So, first, I'm going to introduce Phil Desjardins, to my left. He's our first speaker. He's going to give an overview of the 522 regulations. Phil is the Associate Director for Policy in CDRH. He works with all of the CDRH offices in setting regulatory policy and is involved in all aspects of medical device regulatory issues. Phil joined us in 2005, and we're delighted to have him advise us on these issues.

Next, I actually have a very able substitute for Nicole Jones. Mary Beth Ritchey is subbing at the last minute because Nicole is ill today, and she is going to give us an overview of the process of implementing 522 regulations. And Mary Beth, I believe you already introduced yourself, and your bio is in the packet.

Rounding out our panel we have Dr. Danica Marinac-Dabic. Danica is going to talk about evaluating the need for additional data using the TPLC approach. Danica is the Director of our Division of Epidemiology in the Office of Surveillance and Biometrics in CDRH. As a physician and epidemiologist by training, Dr. Marinac-Dabic leads CDRH's 522 and post-approval studies programs, and she also oversees CDRH's epidemiologic research program, which is focused on the methods and infrastructure for evidence development and appraisal to apply to medical device regulatory science.

So with that, I'm going to turn the podium over to Phil.

MR. DESJARDINS: So we've got the introductions out of the way. I just want to give a brief overview of the 522 statutory authority and regulatory requirements. I believe most of us in the room are generally familiar, but I think laying out the statutory and legal foundation, before we get into the four discussions of policy and strategy, makes a little bit of sense.

So this is designed to be sort of a high-level overview. If there are specific questions, I think we can get into them during the moderated discussion. I'm going to walk through some of that right now.

Because slides have been provided, I thought it would be helpful to lay out sort of the statutory criteria in its entirety in a single slide. I'll be going through each of the elements in a little more detail later on in the presentation, but referring back, I think it's always nice to have the framework laid out up front.

When thinking about 522 studies, I think it's important to think more broadly about two things, the purpose of the statutory authority in its entirety and, second, the purpose of the individual 522 study itself. The purpose of the 522 authorities is to allow FDA to require the collection of useful information after devices reach the market. This is usually done for two reasons.

The first is to reveal or uncover unknown or unseen adverse events that we may not have been aware of at the time of premarket approval. This can be due to a number of factors. As Dr. Maisel indicated, we don't have access to unlimited information at the time we're making our premarket decisions, and often questions can arise about unknown elements that don't rise to the level of keeping a device off the market but are worth exploring further once the device hits the market.

The second is that, once adverse event issues or adverse reactions are known, these studies can help give us a little bit more context in terms of the real-world experience that patients and providers are having when they're using these products.

So we may know that an event is going to be occurring once a device hits the market. The 522 studies allow us to dive a little bit deeper and identify exactly how often the events are occurring and the severity with which they're occurring, things of that nature. So that speaks to the 522 authority in general.

Looking at the studies themselves, we do identify unanswered questions that the studies are designed to answer. And I think it's important to look back to the order requiring the 522 studies to identify what are the questions that FDA is trying to answer with the study and, again, sort of when designing the study, those issues are the ones that should be in the forefront of our minds. What are we trying to answer? What's the purpose of this individual study? And is the design going to allow us to answer those questions?

So I'm going to get into the statutory authority for 522 studies, and sort of one foundational question is the device itself must be a Class II or Class III device. This excludes all Class I devices, the lowest-risk devices. So in addition to being a Class II or Class III device, it must also meet one of the four statutory criteria that were on that first slide that I provided.

The first criterion is that the failure of the device would be reasonably likely to have a serious adverse health consequence. And as you can see below in my slides, we've actually done a pretty good job, and Mike has done a pretty good job, of implementing the statutory authority through the device-specific regulations. And 21 C.F.R. 822 contains the implementation regulations that we're following in implementing 522 studies. And for most of the statutory criteria, we've done a pretty good job of identifying or defining how we interpret the terms appropriate for the statutory criteria.

Device failure, I think, is pretty straightforward. The definition is up there, but I think it's worth spending a little bit of time on how FDA interprets the term "serious adverse health consequences," and there are two real key terms within the definition that we've provided. We define serious adverse health consequences as device-related events that are life threatening or that involve permanent or long-term injuries or illness. So while this may sound pretty broad, it excludes a lot of adverse events that don't meet this definition.

So when FDA is thinking about whether or not there are unanswered questions, the next step we need to go through is sort of identifying, well, do we have a hook, do we have a regulatory hook to impose a 522 order? And this is probably the first place that we look. Would the device failure result in life-threatening or permanent or long-term injuries or illness? And this is, I believe, the most common hook we use in ordering 522 studies.

The second statutory criterion is relatively new. In 2007 we were given an additional prong upon which to issue a 522 order, and this is when the device is expected to have significant use in pediatric populations. And what you may notice in this statutory criterion is that there is a little bit of flexibility intentionally written into the standard itself. The first is the term "expected." The definition of "expected" allows us to look a little bit more broadly than some other terms that could've been used there.

So I think we look at what do we actually expect to be happening once the device hits the market? And it does not necessarily require us to identify sort of a specific number or any other more specific criteria.

The second flexible term in there is significant use in a pediatric population. Rather than telling you what this does include, I think it's a little bit easier to think about what it doesn't require of us, and what it doesn't require is that the device be specifically labeled only for a pediatric population or even specifically labeled to include the pediatric population. What it allows us to do is that when a device hits the market and it is labeled for use that could include use in a pediatric population, we've met the second hook identified here. And, again, we apply the significant pediatric use standard on a case-by-case basis. And over the last four years, since we've had this authority, I don't think it has been heavily relied upon. I think we have other mechanisms in place where we can collect this information in pediatric populations, but it is a tool within our tool bag.

The third statutory criterion covers devices intended to be implanted in the body for a year or more. I think this is pretty straightforward on its face, and I believe this is actually the second most commonly utilized hook in issuing 522 studies.

And the final statutory criterion is that the device is intended to be life supporting and used outside of a user facility. Again, our implementation regulations at 21 C.F.R. 822 do a pretty good job of defining how we interpret the statutory criteria, and I think it's pretty clear, both on its face and in our implementation regs.

I think the part that we need to focus in on a little bit more is "used outside of a user facility." And we do give a pretty thorough explanation of what we consider a user facility and outside of a user facility. The way I think about these is through examples. The automated external defibrillators you'll see around this building and in a lot of the offices you work in would fall within that category, as would home-use dialysis machines. There's a distinction based on where the device is actually going to be used. A dialysis machine used in a user facility would not meet this criterion, whereas ones that are going home and being used in the home environment do fall within the statutory limit.

Another important thing to recognize, in terms of our 522 authority, is that the statute gives us the discretion to issue 522 orders. It's not an automatic hook that every time you trigger the statutory limits, you're automatically going to be in a situation where a 522 study is required. I'll go into it in a little bit more detail, and I think Mary Beth will as well. But there is a decision-making process within FDA as to when a 522 study would be appropriate. And to sort of boil it down is, are there unanswered questions that we think we need to know to adequately protect the public health? And that's how we identify when a 522 study that does meet the statutory criteria is going to be issued.

The second is that, for the most part, these studies are ordered after the device is on the market. With the exception of pediatric studies, we need to do it once it's on the market. Pediatric studies, we do have the ability to issue them at the time of approval or clearance. But so far, I believe our experience with this authority is that we've done it, we've issued the clearance or approval letter and then afterwards identified the need for the 522 study.

With regards to the duration of these 522 studies, in most instances, we're limited to collecting data over the course of three years. There are two exceptions to this. The first is, again, in pediatric studies. We have the ability to order the studies to take place longer than 36 months. The second is with the agreement of the firm itself. Often there are questions that FDA and the firm would like to answer, and we recognize that the limitations of collecting data over three years may not allow us to answer those questions within that time frame, and the firms can agree to conduct the studies longer than that and would write that into their study plans.
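
To pull together the statutory screen Mr. Desjardins just walked through, here is a minimal sketch in Python. The field and function names are invented for illustration and are not FDA's; as he notes, meeting the criteria only makes an order legally possible, not automatic.

    # Sketch of the 522 statutory screen: Class II or III plus at least one
    # of the four criteria. Field names are illustrative, not FDA's.
    from dataclasses import dataclass

    @dataclass
    class DeviceProfile:
        device_class: int                       # 1, 2, or 3
        failure_risks_serious_harm: bool        # criterion 1
        significant_pediatric_use: bool         # criterion 2 (since 2007)
        implanted_over_one_year: bool           # criterion 3
        life_supporting_outside_facility: bool  # criterion 4

    def meets_522_statutory_criteria(d: DeviceProfile) -> bool:
        """True if a 522 order is legally possible; issuance is still
        discretionary and depends on unanswered public health questions."""
        if d.device_class not in (2, 3):
            return False  # Class I devices are excluded entirely
        return any([
            d.failure_risks_serious_harm,
            d.significant_pediatric_use,
            d.implanted_over_one_year,
            d.life_supporting_outside_facility,
        ])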

I think, again, Mary Beth will get into this a little bit more, but from a process standpoint of issuing a 522 order, all orders are issued under the signing authority of the Director of OSB within FDA.

And there are four major elements that need to be included in the order itself. The first is linking it to the specific premarket application, the 510(k) or PMA, that put the device on the market. The second, and in my opinion the most important, is the unanswered public health questions. What is the study intending to answer?

Again, the third element is sort of what is the legal justification we have or the legal hook for requiring a study? And then, finally, FDA will make some recommendations as to what the study plan should look like. But, again, it is the manufacturer's responsibility for developing that study plan usually in consultation with epidemiologists. But our order should give some basic ideas of our initial thinking as to how the study should be designed to answer these questions. Again, it's the manufacturer's responsibility to design the study, but referring to the recommendations in the order are going to be most helpful.

Again, Section 822 of the Code of Federal Regulations does identify the -- I think there's about 15 required elements of a study plan. Rather than going through them all right now, I've provided them in slides and they are in our regulations. But I think it's a pretty exhaustive list.

Again, the review process. There is some information here. I think Mary Beth will get into a little bit more. But our epidemiologists do review all information that comes into the Center with regards to 522 studies. The major milestones are going to be the submission of the study plan itself. There's going to be a lot of interaction on that. And then the periodic reports that are occurring usually biannually, every six months for the first couple years and then annually thereafter.

In addition to looking at the way the study is designed, FDA is also looking at the way the study is conducted in terms of meeting the proposed design. We're looking to make sure that the study is being carried out the way it was expected to be carried out, at least from our standpoint, and it can be modified over the course of the study; those modifications would go through FDA as well.

And the final element that I wanted to talk about a little bit is what our hook is for failure to comply with a 522 order. An order is a legal requirement. Noncompliance could lead to regulatory action. I think, luckily, we haven't had to go down these paths too often, but we do have tools in our tool bag. The first step in almost any regulatory action is going to be a warning letter to the firm, identifying the regulatory violation and how to come back into compliance. And if that doesn't get the job done, the device could be deemed misbranded by FDA. And then we have all of our tools again to take action against a misbranded device, such as seizure of the device, civil money penalties, injunctions from producing and distributing a device, as well as criminal prosecution.

So, in a nutshell, that's sort of the 522 statutory authorities. I'll turn it over to Mary Beth for a little bit more of the process-oriented stuff.

MS. RAYNER: Thank you, Phil.

(Applause.)

DR. RITCHEY: Good morning. I am not Nicole Jones; I am Mary Beth Ritchey. And with that, I'm going to talk a little bit about the process, how we do the 522s, and I wanted to start out referring back to what Dr. Maisel said: this is one of the postmarket tools that are in our tool belt.

Postmarket requirements typically include reporting adverse events. There's also the potential for inspections, recalls, and then, for Class III devices, there's also the potential for a post-approval study and then also the 522.

522 is also one of the many programs in the Division of Epidemiology. The 522 program, the post-approval studies program, the epidemiology research program, and then also systematic evidence appraisal are all in the Division of Epidemiology. And right now there are 24 epidemiologists in that division who all work on those four programs. And so this is one of the many things going on.

I'm going to talk about a lot of different things. I wanted to give you the references to refer back to from this talk because it's sort of a nuts-and-bolts type of thing. So let's get started.

The 522s are meant to answer postmarket public health questions about Class II or Class III devices, as Mr. Desjardins said. And we can order a 522 study with a prospective study duration for up to 36 months. Data that are collected via these studies can reveal unforeseen adverse events, the actual rate of anticipated adverse events, or other information which is necessary to protect the public health.

The 36 months is for prospective surveillance. There could be a study ordered to look at retrospective surveillance or to do some sort of cross-sectional surveillance as well. Those are typically the studies that would look at a longer duration of time, say, eight years in the past, but still collecting the data prospectively so that the study itself does not last longer than three years.

So with this, for pediatric studies, we can extend the study duration, and the act also specifies the regulatory actions that we may take if there is noncompliance on the part of a sponsor.

Phil went through the four criteria, so I won't do that now. But I wanted to reiterate that while 522 is applied to Class II or Class III devices when one of the four criteria are met, just because a criterion is met does not mean that a 522 will be issued. A 522 order is issued to a company and the company is notified that they need to begin one of these studies.

When that happens, that means that the FDA has gone through a full pre-522 process. In this pre-522 process, CDRH staff may identify issues that are appropriate for studying in postmarket surveillance at any point in the life cycle of the device. Such issues may be identified through a variety of sources, including analysis of adverse event reports, a recall or a corrective action, post-approval study data, review of premarket data, reports from other government authorities, or the scientific literature.

And examples of situations that may raise postmarket questions are listed on this slide. We may want to confirm the nature, severity, or frequency of a suspected problem reported in adverse event reports or in the published literature. We may want to obtain more experience with a change from hospital use to use in the home or another environment. We may want to address long-term or infrequent safety or effectiveness issues in implantable or other devices when the premarket testing provided limited information. Or we may want to better define the association between problems and devices when an unexpected or an unexplained serious adverse event occurs or if there's a change in the nature of the serious adverse events postmarket.

The pre-522 process follows a series of steps. First, there's identification of the issue, then a cross-Center team is convened and they do a full evaluation, there's determination that yes, the 522 is the best option, and then the order is issued.

The convened teams discuss numerous elements, with the ultimate goal of each member of the team making their own recommendation as to whether or not a 522 order should be issued. And the team discusses things like:

  • Does this particular device meet the statutory criteria?
  • What is the public health question?
  • What's that question based on?
  • Is it specific to a single device or to a device area?
  • Is there another source of data or another action that would be more appropriate, or is some combination of another option and a 522 most appropriate?
  • Are there other ongoing studies that we know about that address this public health question?
  • And if we are going to issue a 522, what design do we recommend?
  • What study design are we suggesting?
  • Or what type of combination should be considered in addressing the public health question?

So an order is issued if the Director of the Office of Surveillance and Biometrics decides that that's what we should do. And then the order itself identifies the premarket submission that's involved, public health questions, the rationale for the order, and the study design recommendations.

And this looks great in theory, but I'd like to walk you through an actual order so we can see what this looks like.

So first, the 522 number, the PS number, is linked to the 510(k) or PMA from premarket. And next, the statutory criteria are outlined so that you can see how a Section 522 study came to be. The information for this particular device and how it meets the criteria and the rationale for this particular 522 order are delineated. And then the questions are written out in a particular section and the recommendations are written out in a particular section. The last thing is the timing of the response is stated in the order. We expect a response within 30 days of receipt, and these responses, unlike other documents, do not go to the document mail center. They come directly to me, just for fun.

(Laughter.)

DR. RITCHEY: So after that, when an order is issued, there's 30 days, and we typically do a logistics call to talk about what's going on and what's expected: the elements of the study plan, the fact that we have this 522 website and, because of that, will provide a little bit of information to the public, what that information is, and the definitions for the study statuses that we use on the website. We also offer at that time a content phone call. Many companies want to discuss the specifics for their device, for the study design, for the methodology, so we offer that as well.

These are part of what we go through in those calls, the elements of the 522 study plan, and they're all listed here.

So after all of that, something comes in at 30 days and, upon receipt, FDA evaluates the proposed study plan for administrative completeness and then to see whether it would result in the collection of useful data that will answer the surveillance questions.

So after a study plan is approved, there's interim and final reporting. A typical schedule for this is interim reporting every six months for the first two years and then annually thereafter.
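
As a quick illustration of that cadence, the sketch below writes out report due dates as month offsets from study plan approval. Any real schedule would of course follow the approved study plan; the dates and month arithmetic here are approximate and hypothetical.

    # Sketch of the typical 522 reporting cadence described above:
    # interim reports every 6 months for the first 2 years, then annually.
    # Dates are approximated (a month is treated as 30.44 days).
    from datetime import date, timedelta

    def report_months(study_years=3):
        months, out = 6, []
        while months <= study_years * 12:
            out.append(months)
            months += 6 if months < 24 else 12
        return out  # for 3 years: [6, 12, 18, 24, 36]

    plan_approved = date(2012, 3, 7)  # hypothetical approval date
    for m in report_months():
        due = plan_approved + timedelta(days=round(m * 30.44))
        print(f"report due ~{due} (month {m})")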

And then the contents of the study plan, the contents of the interim reports, and the contents of the final report can be requested via FOIA so that anyone can see the redacted versions of those. FDA does post some information about the postmarket surveillance studies on the 522 webpage, and we report a little bit of information about each particular study: the 510(k) or PMA number, what device area this is in, such as orthopedics, how the study's going, and the reporting status.

For each report that's due in, we say whether that report was received on time, or that it's overdue and we haven't yet received it, or that it's overdue and received.

The study progress is also listed, and the progress is compared back to the original timeline that was established as part of the study plan.

And progress could be:

  • Plan pending - an order has been issued but no plan has been approved.
  • Plan overdue - an order has been issued, no plan has been approved, and it's more than six months since the issuance of the order.
  • Study pending - the plan itself is approved but no patient has been enrolled yet.
  • Progress adequate - a patient has been enrolled and the study plan is continuing according to the timeline that was set and agreed upon.
  • Progress inadequate - the study is moving but not in accordance with the agreed-upon timeline.
  • Completed - the study is finished.
  • Terminated - the study stopped early or something happened and it wasn't completed; another action was taken.

And then "other" is used for a couple of reasons. The most common is that the device is cleared or approved but not currently marketed. "Other" is also used if the device is being acquired by another company.

So we do recognize that there may be disagreements, and the appellate process for the 522 differs slightly between the order and a report decision. Because the order is issued and signed by the Office of Surveillance and Biometrics Director, the first meeting in the appellate process for an order would be with the CDRH Deputy Science Director. Because reports are signed at the division level, the first meeting there would be with the Director of the Office of Surveillance and Biometrics.

This is the number of orders that have been issued since the Division of Epidemiology took over the program in 2008, by year. It looks like a lot, especially last year, with 149 orders that were issued. It seems like a big jump. But that's my workload, and I've learned that knowing about my workload isn't really that important for everybody. So I wanted to talk about the device areas that orders have been issued for.

So in 2009, there was one device area for which we issued orders. In 2010 there were two areas, 2011 there were three areas, and thus far this year we've issued orders for one device area. So that huge number of orders wasn't a huge number of 522s. It was three different device areas.

So the current status of all of those studies. We've issued 283 orders, and there are 274 studies that are ongoing. This is because some 522 orders will result in more than one study. So we may issue an order that asks about explants and clinical data, and the company may choose to look at that via two different studies. So the number of studies is very high because multiple studies may come from one order. However, most of these studies are in compliance, and there are a few that are either overdue or progress inadequate.

So that's sort of the nuts and bolts of things, and I think Danica is going to give us our vision.

(Applause.)

DR. MARINAC-DABIC: Good morning. I would like to welcome you all on behalf of my division. This is really a growing program within our group, and we're excited to be part of this discussion, because the steps forward and the vision for the future of the 522 program involve real collaboration between all stakeholders, including industry, academia, payers, and patients. We would like to create a program that will have an impact on all of these stakeholders, but also a program whose processes are tailored to capture the input of all of these stakeholders.

So let us pause after these excellent talks that laid out the foundation for the discussion, with a look to the future. And I'd like to introduce these three concepts to you and spend some time discussing how a 522 program fits into the vision that our Center has for the national surveillance infrastructure for medical devices.

It is very important to view this program not in isolation, not as one of the silo programs, so that you understand the decision to issue a 522 order is really based on the current state of the data that is available to us for a particular device. So the stronger the postmarket infrastructure we have, the less need we foresee for future 522s. This is why we need your help to build that strong national infrastructure for medical devices.

And, again, you have heard that we issued hundreds of orders last year, and there is a reason for that. For the devices that received the orders, we really didn't have a proper national infrastructure to capture the knowledge about how these devices perform.

So you heard Dr. Maisel this morning introducing a program that we are working on and the document that is going to be rolled out sometime in April on how the Center envisions the postmarket surveillance infrastructure for medical devices.

So as we move towards planning how this is going to happen and what the major elements are, it will be of utmost importance to keep engaging the stakeholders so that we can introduce the concept of shared responsibility. The safety of medical devices is not only industry's responsibility. We have to have a strong national program, a strong national infrastructure for surveillance, so that whatever the question is, whether it arises at FDA, at industry, or at the hospital level, there is going to be a place to reach into the surveillance system to address it.

Concept number two is that, in the way we look at postmarket surveillance, the actual total product life cycle begins with the on-market knowledge, with the postmarket knowledge. On the postmarket side there are many, many activities that we perform, ranging from working with the MDR reports; advancing new methodologies for active surveillance; advancing the methodologies for enhanced surveillance; looking into all of these required postmarket studies that we issue at the time of the approval order for PMA devices; looking into 522 studies; developing the methodologies for audit and synthesis; and figuring out how one can take advantage of not-so-perfect data in the postmarket setting but still be able to provide, at any time in the total product life cycle, the best estimate of the risk/benefit profile. This will benefit the FDA, this will benefit the clinical community, and this will benefit the patients, who are our most important customers.

And, finally, the knowledge management throughout the product life cycle is very important. What do we do with that data? Historically, FDA has been criticized for making decisions based on silos of information. For example, you would be required as a manufacturer to complete certain preclinical testing. Then you would move to the next stage. Then you would complete that.

Certain decisions are made, and FDA forgets about the data that are collected in the premarket setting. We move to the postmarket arena. We issue the order. You complete the study. We issue the report to you and say, you know, you completed your requirement. And it seemed that this data had been forgotten; it sits somewhere else in the silos, and then we do not really incorporate it in the decision making. What we are trying to do today at the FDA is really to connect the dots, to take advantage of all of these data that are available in different data sources, not only at the FDA but also outside -- very capable data sources collected in payer systems, in hospitals, in researchers' published data -- and put together a model that will combine the data so that we can, at any time in the total product life cycle, have the best available knowledge.

So if we have all of these things in place, then the question that I would like to pose: Are we going to still have a need for additional data? In this particular case, are we still going to need the 522 studies? This will depend on the quality of the data that are going to be part of the system.

And as you've heard from Dr. Ritchey, there are different design methodologies that you can use to address the 522 question. So, basically, even if the data sources are in silos and not at the FDA, and the manufacturer is able to access those and use them to respond to the question, we might still impose the 522 order. But addressing the order is going to be much easier if you have that very, very important postmarket infrastructure.

So that was a simple slide. Now let's move to the little bit more complicated slide here. I really apologize; I should've used animation and actually been able to build this up, but I'll show it and spend the next five minutes talking about what I wanted to convey on this slide.

So the slide is intended to illustrate how we see this national infrastructure moving from 2009 to 2029, meaning over the next 20 years -- what is going to happen on the national landscape that will impact the way we at the FDA are going to be looking into additional -- am I running out of time?

So this is what's represented in this upper box, what you see on the top of the slide. What you see in the colors are actually different data sources that we use. They're meant to be represented by their size and direction -- really, how we see these particular data sources being utilized within the healthcare system in the years to come.

So for example, on top, what you can or cannot see is the adverse event reporting, or MDR. We expect some increase there because we are boosting, you know, electronic medical records and all the apps, and we expect that we are going to receive more reports at the FDA. However, we also know that we are developing more methodological approaches toward enhanced surveillance and active surveillance -- what you see in the purple, under the MDR, is enhanced surveillance -- and, again, this is going to be increasingly utilized in these next two decades.

What you see in the kind of yellowish, brownish color on the left side starts as de novo data collection for post-approval studies. As you can see, we anticipate that to be really decreasing, and what we are counting on -- and 522 would actually be in that category in some instances -- is that we will be able, as a society, to pull together all of these resources and develop the postmarket surveillance infrastructure that will diminish the need for new data collection imposed on manufacturers, whether as a PMA approval order or as a 522 order for de novo data collection.

Now, we have what is in red here: administrative claims data. As you know, the Sentinel Initiative was launched a couple of years ago, and the FDA currently has access to data on over 120 million patients in the United States. The primary focus of Sentinel is really to refine the signals that FDA currently receives. Certainly it's focusing on the claims data, but in the future, we know that Sentinel is going to move to electronic health records, to encounter records and medical record data. So we know that the utilization of this data source is going to increase.

However, at the same time we also have electronic health records, which are in green, and, you know, they are going to be a useful source of information, especially once the UDI is implemented. For medical devices, at that point it's going to become a very essential tool that we are going to use.

Now, you see that these areas are intersecting somewhat because there is going to be some overlap, of course, between what's collected in the electronic health records and what's collected in the claims data, and this is why being an epidemiologist in this decade is really a great thing, because we are trying to make sense of all of this data that comes from different data sources and trying to figure out how we can develop better models.

Now, we also have device-based registries. And Dr. Maisel mentioned the International Consortium of Orthopedic Registries, one of the things that we like to brag about these days, where we are able to pool 30 registries from 14 nations to work with the FDA, essentially giving us extended access to over 3.5 million patients with orthopedic implants. So these are the types of efforts that we are going to see growing because of the particular needs that we have in postmarket surveillance.

However, again, because electronic health records and claims data will increasingly be utilized once we have UDI, there is going to be, again, somewhat of an overlap between what's collected in the registries and what's collected in EHRs, and it's going to become very important that we actually integrate them and make sure that the device-specific information is collected in the registries for selected products. We're not going to impose registries for every single product. But if there is a particular need, we are going to continue facilitating development of the registries, and we are going to be continuing our methodological work to tie them to other data sources.
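
As a toy example of the kind of integration being described, once a unique device identifier appears in both sources, registry implant records and EHR event records can be joined on it. Everything below -- the field names, the records, and the join key -- is hypothetical.

    # Toy sketch: linking registry implant records to EHR event records on a
    # shared UDI key. All field names and records are hypothetical.
    registry = [
        {"udi": "UDI-001", "implant_date": "2011-05-02", "device": "hip stem A"},
        {"udi": "UDI-002", "implant_date": "2011-09-14", "device": "hip stem B"},
    ]
    ehr = [
        {"udi": "UDI-001", "event": "revision surgery", "date": "2012-01-20"},
    ]

    # Index EHR events by UDI for a simple left join.
    events_by_udi = {}
    for row in ehr:
        events_by_udi.setdefault(row["udi"], []).append(row)

    # Keep every registry implant; attach any matching EHR events.
    linked = [
        {**implant, "events": events_by_udi.get(implant["udi"], [])}
        for implant in registry
    ]
    for record in linked:
        print(record)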

And then, what we have in the burgundy color on the bottom is our methodological work of synthesizing the evidence, making sure that we continue to develop methods. You know, there are established methods for meta-analyses and cross-design synthesis. We're going to try to figure out how we can apply them best in order to capture the data. And in some instances, when there is still a need for more data and there are unresolved questions, we may actually establish some disease registries.

So essentially what we would like to see is that this postmarket infrastructure, at some point around 2020, maybe even before that, is going to be fully automated. So there are going to be a lot of questions that can be addressed through direct access to these data sources. And then, for a subset of the questions, we may still ask industry to help us with addressing them for their particular device. But I also see us working together through partnerships, figuring out how we can leverage the expertise that we have at the FDA to help guide these efforts.

And so we would still like to think about 522 studies as being critical for CDRH decision making. What I mean by that is that we are asking only important questions. This is not the time to perform academic exercises about what FDA staff would like to know about these devices. There has to be a real, clear signal, and you're going to hear throughout the day how we actually come up with those important questions.

We would like these studies to be collaborative and expertise driven, meaning that we would like to leverage the efforts of the best minds in the country to help us, through our Medical Device Epidemiology Network and similar efforts, to design these studies.

We would like to think about these studies as using innovative research infrastructure and methods. So be creative when you respond to us. We are not always looking for old-fashioned, traditional studies. Try to figure out, you know, the best ways, the creative ways, that the question could be addressed. And we are always available to provide advice on that.

We also like to think about these studies as involving dynamic integration with other postmarket data. And I talked about that already. You know, this is a part of the entire landscape of how we evaluate the device. And we simply would like them to be efficiently conducted and completed, because without that we won't be able to use the study results. And throughout the process we would like to be very transparent.

And I probably ran out of time, but I'm just going to say two more things. Moving forward, it's very important that we think about this through a strategic engagement of all stakeholders: industry, patients, payers, clinicians, academic colleagues, people who have high stakes in working with us on development of this infrastructure. Leveraging existing data sources is not going to be an option; it's going to be a must for us. This is all a collective effort. There is no reason why we need to reinvent the wheel here; we would like to be able to utilize what already exists. And infrastructure building, methods development, partnerships, these are the focus areas. You're going to hear that throughout the whole day.

And this is my last slide. These are some very promising initiatives that we launched recently. I know that, during the afternoon, my academic colleagues are going to talk about this and spend a lot of time on it. But these are part of the efforts that we would like to be able to offer as tools, methodological and infrastructure-wise, to help you nest your postmarket questions should you receive an order.

And thank you very much.

MS. RAYNER: Thank you, Danica.

(Applause.)

MS. RAYNER: We're going to move ahead now to our question and answer period for this session. I hope that it will be an interactive and very engaged process.

And in the spirit of that, I wanted to start off by conducting a sort of mini-poll of all of you who have come to this session today. And, unfortunately, I don't think we have voting buttons for those who are on the webinar, but I really would like to get a sense of your experience with 522, the 522 requirements and 522 orders.

So just to open up, how many of you -- I'm going to talk about it as having been touched by 522 to date, specifically? Oh, lots of hands. Okay, let's break this down. If you're with FDA, how many of you have been involved in the review of a 522 order or plans that come in? Okay. A fair number.

How about if you're from industry, if you're a manufacturer, how many of you have actually received a 522 order or have been part of a 522? If you're from industry, how many of you have not yet been touched by 522 but are trying to find out more about it? Okay.

And also, if you're a researcher or in academia, maybe some of you -- how many of you have actually worked with companies on 522 issues?

Okay. So it seems as though, for most of our audience, you have at least some direct experience with 522 and some of you are still waiting to have that experience.

(Laughter.)

MS. RAYNER: So I'm going to move forward with our questions, and I wanted to open up with -- I wanted to go back to the data, Mary Beth, that you presented on the number of 522 orders that FDA has issued since 2009. And clearly we saw that dramatic spike in 2010 and 2011.

And do you anticipate numbers like that continuing and numbers going up? Do you anticipate the Agency using the 522 authority more in the future?

DR. RITCHEY: We anticipate using the 522 authority when it's needed, when it's the best way to address the question. So for 2011, we did issue 149 orders; 145 of those were for one device area. It was a question that came about that's been of global concern, and we felt that we really didn't have the data and needed data.

As we move forward, we anticipate that the need for additional data will change, based on the changing landscape for healthcare. And so when the 522 is needed, that's when we'll use it. We don't anticipate striving to use it more.

MS. RAYNER: Okay. And just to put you on the spot even more, because you were game to substitute for Nicole, and if my tallies are correct, since 2009 you've issued orders for, let me see, seven device types so far?

DR. RITCHEY: It's seven device types, yes, 283 orders.

MS. RAYNER: Right. And can you rattle off the seven?

DR. RITCHEY: Sure. In 2009 we issued orders for dynamic stabilization systems. In 2010 we issued orders for positive displacement needleless connectors and for a coil that's used to pack aneurysms. In 2011 we issued orders for an in vitro diagnostic device looking at ovarian cysts and masses, for temporomandibular joint devices, and for metal-on-metal total hip replacements. And then thus far this year we have issued orders for urogynecological use of surgical mesh.

MS. RAYNER: There's no stumping you, is there?

(Laughter.)

MS. RAYNER: With that I'm going to ask if anyone from the audience has a question for Mary Beth or Danica or Phil. Don't be shy. Come up to the microphone and we'll be happy to either, you know, hear your comments and react to them or answer your questions.

MR. MAISLIN: What makes the distinction between a PAS and a 522, when one might be ordered and the other might be ordered?

DR. MARINAC-DABIC: So I'm going to talk about the post-approval studies, also known as condition-of-approval studies, which we order at the time of the device approval. They're part of a conditional approval, and they're clearly spelled out in the approval order, including, you know, the study size, the design, and everything that the company agreed to throughout the premarket review process.

It's important also to mention that during our premarket review, epidemiologists work very closely with manufacturers. Our goal is that, by the time of the device approval, we will have already the protocol approved for the post-approval study.

For 522, as you heard this morning, there are four specific criteria that we use, and an order can be issued at any time during the total product life cycle as long as the device meets one of those four criteria. It doesn't have to be all of them, but it can be. Sometimes it's more than one.

MS. RAYNER: Mary Beth, I'm sorry, a protocol question. Do we need people to identify themselves when they come to the mike?

DR. RITCHEY: Yes, that would be great. If people can say both their name and where they're from, that'd be great.

MS. RAYNER: This session is being recorded, correct? So yes.

MS. COLLEY: Now can you hear me? Okay, hi. I was wondering if you could characterize -- well, let me back up. Mary Beth, you had talked about the process for the 522 and the identification of the problem and, I guess, the pre-meeting activity of the team and so forth.

And could you characterize for the different devices that have actually had 522s, how the identification and then the pre-order activity took place?

For one of them, you know that I know how it took place, but for the others -- so for the one that I know about, it's the one that was just this year, where there was a panel meeting last fall and the issue was identified and made more public. But for the other orders that were issued, how did that information come to light, to the public?

MS. RAYNER: Oh yes, I'm sorry. Ma'am, ma'am, before you leave the microphone, could you identify yourself, please?

MS. COLLEY: My name is Janell Colley, and I'm with Coloplast.

DR. RITCHEY: So the 522s that have been issued, the identification of the signal has come about in many different ways. We've had some that were identified via the medical device reports, the adverse event reporting system we have here at FDA. We've had some that were identified via guideline documents that were put out by professional societies. We had some that were identified from global concerns via MHRA and other governmental agencies. We've had some that were identified from the literature. We've had some that were identified by companies coming in and saying, hey, look at this. What do you think about this? Those are the big ones that I can think of.

I do want to say that not all pre-522s end up being 522 orders, and so there are a lot of different concerns that come up that never become a 522 order as well. So the signals come from all of those places.

MS. RAYNER: Great, thank you.

MS. PETERSON: Amy Peterson, American Medical Systems.

The last speaker talked about only asking important questions. Could you explain further, then? In the guidance document, in Section (b), where it talks about the team review, it uses the words theoretical concerns. How do we align those two statements?

DR. MARINAC-DABIC: So I'll start and maybe Phil can talk about a regulatory spin on it.

But the important question may arise from the review of other data sources, and, you know, the signals can be identified, as Mary Beth had mentioned, in different ways. We might actually be able to pinpoint what the real risks are at that point, or, with all of these data sources that we looked at, we might only be able to point toward there possibly being a risk, and this is why we're asking the question. And, again, not all pre-screeners end up as 522 studies. Not all signals that FDA identifies end up being placed on the manufacturer's plate.

Sometimes when there are more overarching questions that cut across different types of devices, we take it upon ourselves, as we have done recently with some of the systematic analyses of the different bearing surfaces, you know, in the orthopedics world, where we actually completed a systematic literature review, and now FDA is funding the study through the International Consortium of Orthopedic Registries to look into the performance of hips with different bearing surfaces and the performance of different head sizes. We're not asking the manufacturer to address these questions; FDA's research dollars are spent to address them.

So this is the context in which I had made the statement that this is a shared responsibility, and we also very carefully tailor the question based on the data that we have at our disposal at the time when the concern was raised.

MR. DESJARDINS: I don't have much to add, only that, on the idea of a theoretical question, again, the 522s are used in a subset of instances when we have identified questions with unidentified answers. And sometimes those questions are more tangible, and sometimes we've got precursors or signals that have escalated and raised the need for the 522 study. Sometimes there are just questions that have been raised, whether at FDA, whether by the manufacturer, or whether by the user community, and sometimes those questions could be theoretical. I don't believe we're prevented -- the statutory criteria do not prevent us from requiring studies to answer those questions.

But, again, our intent is not to go out there and try to answer every single question through the 522 mechanism. So that's why you may see some different terminology between the guidance document and sort of the --

MS. RAYNER: Practice.

MR. DESJARDINS: -- more frank discussion here in terms of how it's being used in practice.

MS. RAYNER: Sir, would you identify yourself, please?

MR. ONYEACHOM: Uchenna Onyeachom, American Venous Forum. The question is for Dr. Dabic.

In one of your slides you did mention leveraging existing data sources. I'm aware that in 2010 the FDA issued an adverse event warning on IVC filters placed in the U.S., and that there is a national database on IVC filters.

How will the FDA work to collect those, leveraging that database to answer some of the questions for public health?

MS. RAYNER: Sir, just for clarification, you mentioned IVC filters, which have not been subject to 522 orders. You're talking about the safety communication that FDA issued with --

MR. ONYEACHOM: Yes.

MS. RAYNER: -- respect to possibly the risk/benefit profile of removing IVC filters, removable IVC filters; is that correct?

MR. ONYEACHOM: Yes.

MS. RAYNER: Okay.

DR. MARINAC-DABIC: So we have a number of registries that we utilize in our public health practice. FDA facilitates the development of new registries. This might be a good example of how we are in a continuing dialogue with the relevant professional societies to bring about collaboration that will be fruitful for the FDA, for our postmarket surveillance, and also for the collection of data in other data sources.

We also help fund some registries. Certainly we are very careful how we spend the research dollars, but we do put some seed funding into some of these registry efforts. We have also contributed a lot of our expertise in the development of data collection tools, either to expand the registries or to actually add modules to the existing registries.

I'm sure Dr. Holmes is going to talk today about a great example of the work that we've done with the Society of Thoracic Surgeons and the American College of Cardiology and CMS, to ensure that the approval order that we issued to the sponsor to actually do the post-approval study would in fact have the national infrastructure as a venue for nesting that post-approval study. And I'm talking about a transcatheter aortic valve replacement by Edwards Life Sciences.

So, again, you're going to hear about these examples, and you'll see more and more FDA being at the forefront of a very forward-thinking development of the infrastructure that we'll actually be able to use for addressing these questions, and perhaps really making the need for some of the traditional 522s decrease somewhat in the future, because, if we succeed in convening the stakeholders, that will help put together this national infrastructure. In this particular case, that can also be the case.

MS. RAYNER: Sir.

DR. LYSTIG: Ted Lystig from Medtronic.

So, Danica, I found the slide that you showed very intriguing, the one which had, over time, a decreasing reliance on the necessity of collecting de novo information. That's an interesting concept in terms of how one can generate the answers by analyzing existing data versus needing to collect new information.

So, Mary Beth, I'm wondering if you can give some examples where an order has been issued that said that the appropriate response is an analysis of existing data as opposed to a requirement for collecting new data, or whether that's still a future state, or if there's something along those lines.

DR. RITCHEY: So there are a couple of instances for a 522 where we have recommended either using data that already exists or data that would be best collected via a multi-sponsor registry.

For the temporomandibular joint devices, those are devices that are tracked, and because of that, much of the information that we were asking for, we expected the companies to already have. And so we were hoping to see some retrospective data there.

Also, for the mesh orders that went out recently, we highly recommended within the order a multi-sponsor registry, because two societies, the American College of Obstetricians and Gynecologists and the American Urogynecologic Society, AUGS, were already working on a registry, and because of where they were in that process, leveraging the work that had been done was really what we recommended moving forward for those.

DR. MARINAC-DABIC: And we also have some examples from the post-approval studies world that, although they're not really applicable to this particular discussion of 522, still speak to FDA's commitment toward encouraging sponsors to use existing registries.

We have some studies nested in the INTERMACS registry. We also have some of the orthopedic studies collecting supplemental information from registries outside of the U.S., including the registry in Australia, and from the Kaiser Permanente registry in the United States, in addition to the data that they collect in their own sponsored studies.

DR. STEINBUCH: Good morning. My name is Michael Steinbuch. I'm with Johnson & Johnson. And first I want to thank the panel for your presentations this morning. I appreciate that.

And I wanted to ask a question relating to -- in response to the public health question that's arisen. You mentioned in the presentations that it could be sponsor specific, device specific, or device type specific, and understanding that there can be some logistical issues involved.

But, you know, in the spirit of transparency, is there an opportunity for the pre-522 team, for example, before issuing the order, to get additional context by engaging with the sponsor specifically? Is there opportunity to do that? Because there could be some real useful information that could be gleaned from that, you know, collaboration and discussion about it. I mean, I don't think that there's anything that should be secretive.

And the other benefit that I see is that when we talk about, you know, in the guidance, a 30-day time frame for responding, that it would give the industry sponsor representatives opportunity to start working so that they could actually meet that deadline and have the most robust type of design and well-thought-through study that they can possibly do. So I wanted to get your reaction.

MS. RAYNER: Does someone want to address the issues of interaction with the companies before issuing the order and also the question about if you get an order without any pre-warning, the 30-day time period for submitting a fairly rigorous plan?

DR. MARINAC-DABIC: So thank you, Michael; that's a really good point.

And internally here, we are also, again, looking at ways to make the process more efficient, meaning how we can have these studies, once the order is issued, be up and running sooner. Because if we really are serious that this is a really important public health question, then we should be ensuring that the process is in place, that we can have the protocol designed as soon as possible and the first patient enrolled as soon as possible, or, if we use other data sources, that their utilization starts as soon as possible.

An analogy that I would like to make goes back to before 2005, before the post-approval study program was formed. Prior to that, we thought of post-approval studies as a really typical postmarket program, meaning that postmarket staff would get engaged only once the approval order was issued.

But, you know, we then looked back and figured out it would probably be much better and more efficient if we got involved throughout the entire premarket review. Then we can get familiar with the process and be able to help industry design a good study.

I think, from my perspective, we should be looking at suggestions like this to make sure that we engage the right stakeholders at the right time. I don't think there is anything in the regulation that prevents us from doing that. I think it's a matter of interpreting what a good interactive role between us and industry would be. And certainly I'm open to hearing more about this and discussing it with the FDA team.

DR. RITCHEY: So I want to make sure we're not leaving with the impression that we're not discussing with companies because for, I think, all of the orders that have been issued, there has been some discussion previously. For the most recent orders there was a panel meeting, a two-day meeting, where a group of experts got together and chatted through the concerns with that device, and that panel recommended 522 orders.

Whenever there are questions about MDR reports, requests for additional information are sent back to companies, and companies are asked to provide additional information in response. And then, when we see other signals, we're engaging with companies.

And Michael, I actually have a question for you, then. Would it be helpful for us to say, hey, we're thinking about doing a 522 order, give us this information now? Or is the more, you know, collegial approach -- this has come up; can you tell us more about that? -- which is the way we're doing it now, not as helpful as it could be?

DR. STEINBUCH: I would propose the collegial approach. I don't think that the first one is really going to be all that meaningful. But just, you know, giving the opportunity -- and I recognize and appreciate hearing that, because I certainly wouldn't be aware of what other companies have experienced. I may not even know about all of my own companies because there are so many in Johnson & Johnson. But I just think that it would enhance the process overall if we can engage at varying stages, early and often, I guess is what the message would be.

DR. MARINAC-DABIC: But not to issue orders often, right?

DR. STEINBUCH: I'm sorry?

(Laughter.)

DR. MARINAC-DABIC: Not to issue the orders often, just engage often.

DR. STEINBUCH: Exactly.

(Laughter.)

MR. BROWN: Scott Brown, Covidien Peripheral Vascular, formerly ev3.

I was reading the draft guidance on the 522s, and there's a line in there -- and I'm going to apologize in advance. I'm a statistician by training and by professional bent, and I don't think any of you are statisticians, so forgive me for being technical. There's a piece of language in the draft guidance for the 522s that I think mirrors language that's in the PMA conditions-of-approval language, which talks about sample size being justified, statistically justified and based upon study hypotheses.

Now when I read that as a statistician, that means a very specific thing. It first of all states that every post-approval study under both of these mechanisms is going to have a formal testable statistical hypothesis, and then you run with that and you have your sample size and so forth.

So it's not the sample size itself I'm talking about so much as it is the idea that am I correct in interpreting that language to say that every post-approval study under both of these mechanisms is going to start with a testable statistical hypothesis? And that's part one of the question, and part two will depend on the answer to part one.

(Laughter.)

DR. RITCHEY: So we do expect to see an adequate sample size to address the questions. That does not mean that we're expecting to see a statistical hypothesis for every observational study that's conducted. It means that an adequate sample size is needed to address the particular question. For many of the questions, a hypothesis and a statistically robust sample size, with power around that, is needed.
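As a concrete illustration of "a statistically robust sample size with power around that," here is a minimal sketch of a standard two-proportion sample-size calculation; the event rates, alpha, and power below are hypothetical placeholders, not values from any 522 order.

```python
# A minimal sketch of sample size for a two-sided two-proportion z-test.
# All rates below are hypothetical placeholders.
from math import sqrt
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
    z_b = norm.ppf(power)           # quantile for the desired power
    p_bar = (p1 + p2) / 2           # pooled proportion under the null
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# e.g., detecting a rise in a complication rate from 4% to 6%
print(round(n_per_group(0.04, 0.06)))   # -> 1863 patients per group
```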

MS. RAYNER: Did you have a follow-up question?

MR. BROWN: Given that the answer to part one was what it was, the answer to part two is thank you.

(Laughter.)

MS. RAYNER: But let me follow up. Do you ever anticipate that the approach to a 522 study may be something that doesn't involve, say, human data?

DR. RITCHEY: Oh, absolutely. The definition of postmarket surveillance in the regs does not preclude other types of data. So postmarket surveillance studies could call for additional bench testing, for animal testing, and for other lab tests. It's not just clinical studies.

MS. RAYNER: Okay, thank you.

DR. BLAKE: Hi, I'm Dr. Kathy Blake. I'm with the Center for Medical Technology Policy, but also a cardiologist, and I'd like to go to some of the clinical questions because the clinical voice has yet to be heard in this room.

But one of the important things for clinicians is to get information about a medical device as quickly as possible because, absent that information, devices continue to be placed. And for some of the devices that I've implanted over the years as a cardiologist, the risk of a removal of the device, an extraction of the device, does rise as time passes. And so I think that, particularly in the cardiovascular world, going to some of the existing registries and using that information is going to be increasingly important.

Those registries have variable funding, and, for example, in the implanted defibrillator registry -- and I should say, as a disclosure, that I'm on that steering committee -- there's funding within the registry for about five to six studies per year and that's it. And so it is a rich resource, with now over 800,000 patients entered into that registry. Since version two was implemented, we've had pages and pages of lead-related data, but really an absence of a regular schedule of review of that information and perhaps then the ability to detect early signals.

So I'm interested in people's comments about how that kind of a resource can be used by FDA and by industry.

DR. MARINAC-DABIC: Yeah, those are very good points. And, again, we have a long history of working with ACC and STS and utilization of the registries for mandated postmarket studies.

And as to the comment about the clinical voice not being heard this morning yet, just to remind you that more than half of the division that actually reviews these 522 devices are physicians or have a clinical background in addition to an epidemiology degree, all our 522 teams actually have one medical officer or more, and the management from the premarket offices also includes clinicians throughout the process.

So the clinical perspective is essential to us throughout the process, and the MDEpiNet network was created not only to include epidemiology expertise, but also to include the clinical and the relevant statistical expertise to help us leverage these registries.

So the funding issue is very important. These existing registries are there, but most of the registries have limitations. Existing registries in the United States, even though they are national, are not mandatory. So they do not have the 100% coverage of registries outside of the U.S. In the Scandinavian countries, for example, orthopedic registries have between 90% and 95%, sometimes 98%, coverage of all patients and all procedures, which clearly gives us a little bit more certainty that the registry is going to give us, you know, meaningful results.

Another limitation of the registries in the United States is short follow-up. We are not typically satisfied with 30-day follow-up, which is what most of the cardiovascular registries in the United States have. So there is the longitudinal module that we are working on with the societies -- and I know the societies themselves are also working on making sure that there is an established way of linking the data between the registries and administrative claims data. Again, Duke, the DCRI, is really taking a lead on many of these issues, and we'll have Soko Setoguchi here, one of the speakers today, to talk about some of these efforts.
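For readers unfamiliar with the registry-to-claims linkage being described, here is a minimal sketch using invented field names and a deterministic join on a hashed patient key; real linkage projects, such as the DCRI work mentioned, are far more involved and often probabilistic.

```python
# A minimal sketch of deterministic registry-to-claims linkage.
# Field names and records are invented for illustration.
import hashlib
import pandas as pd

def link_key(patient_id: str, dob: str) -> str:
    """One-way hash so neither dataset exchanges raw identifiers."""
    return hashlib.sha256(f"{patient_id}|{dob}".encode()).hexdigest()

registry = pd.DataFrame({
    "key": [link_key("123-45-6789", "1950-02-01")],
    "device_model": ["HIP-MOM-36"],
    "implant_date": pd.to_datetime(["2011-03-15"]),
})
claims = pd.DataFrame({
    "key": [link_key("123-45-6789", "1950-02-01")],
    "claim_date": pd.to_datetime(["2013-07-02"]),
    "diagnosis": ["revision arthroplasty"],
})

# The join extends follow-up well beyond the registry's 30-day window.
linked = registry.merge(claims, on="key", how="inner")
linked["years_to_event"] = (
    (linked["claim_date"] - linked["implant_date"]).dt.days / 365.25
)
print(linked[["device_model", "diagnosis", "years_to_event"]])  # ~2.3 years
```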

From my perspective, we are on the front line, leading and advocating for the use of registries, and we already have studies that utilize them.

So here is where the manufacturers may also be very helpful: the cost of funding these advances to the registries must be shared among multiple stakeholders. Hospitals cannot do it by themselves, nor can the societies fund it by themselves. So if there is an order, for example, and a company wants to do the 522 utilizing some of the registries, I think additional funding and sponsorship from the manufacturer can be utilized to develop these longitudinal modules to ensure that we have long-term follow-up data. It would really be using the national infrastructure, but adding to it the longitudinal profile for the patient. So that may be a way to go.

Another way would be to figure out if there is a way to utilize some of the FDA-sponsored and funded innovative collaborations, such as the Medical Device Epidemiology Network or the International Consortium of Orthopedic Registries, where we have already convened the leading experts -- clinical, statistical, epidemiologic -- in the U.S. and outside of the U.S. And those entities at some point may become CROs for some of the 522 orders and leverage some of these resources in helping the company to address the question in the most efficient way.

Again, MDEpiNet is still not a fully functioning public-private partnership, but this is one of the goals for this year, and in mid-May we're going to have our third annual meeting where we're going to talk about this. So once that happens, then funding can be accepted from all stakeholders, including industry.

MR. BROWN: Scott Brown, Covidien Peripheral Vascular.

I don't believe our division of Peripheral Vascular at Covidien currently has any 522 orders. I know the company as a whole does, and I was actually just scanning the 522 website, and if you could tell us, it would be interesting. I know 522 orders can be identified and issued at any point during the product life cycle.

Just roughly speaking, out of the several hundred 522s that have been issued, when during those life cycles, broadly speaking, were those issues identified?

DR. RITCHEY: We've had issues that were identified premarket that were actually postmarket concerns. And so those 522s were issued within a month of the device being on the market. And then we have 522s that were issued for devices that were cleared in the '70s. So the full range.

MR. BROWN: Thank you.

MR. RISING: Hi there, my name's Josh Rising. I'm with Pew Charitable Trusts. I had a question about transparency in the 522 process.

You know, certainly FDA as an organization is increasingly committed to transparency and increasing the amount of information that's available on the website. I guess I was curious to hear your take a little bit on transparency of some of the study designs that are finally agreed upon between the manufacturers and FDA.

Certainly, you know, from some of my experience looking at the website, there's not a ton of information available about the specific study designs. Now, clearly there are important proprietary reasons why not all of the information can be released, but I was curious to hear you talk a little bit about how you determine the right balance for transparency within the 522 process.

DR. RITCHEY: Right now, the 522 website includes information about the premarket number, whether the reporting is on time and the progress of the study. But we are committed to transparency, and we are moving forward to have more information about the studies online. Our goal is to have the same information on the 522 webpage as is on the PAS webpage, and there we have information about the study design and then, once the study is completed, information about the results of the study as well as strengths and limitations.

MR. RISING: Sounds great. Is there any time estimate for when that process will be complete?

DR. MARINAC-DABIC: So, as you know, for post-approval studies, we already have information on the web. We are now in the process of making our own FDA-funded research studies available on the web. So that's the next bucket. And we have completed our internal process, and now this is moving to actually making it part of the webpage. And then the next one is going to be 522.

So I would say by the end of the year, or maybe early next year, we will be able to do it, because it has to go through certain processes. The decision is not made at a division level. We're going to put together everything that we want, and then it moves up for approval.

MR. RISING: Thanks.

MS. RAYNER: And I'd like to pick up on that thought and drill down a little bit further. Can one of you address the FOIA-ability, the releasability, of study plans and study reports at various stages?

DR. RITCHEY: So plans that are not approved and reports that are not approved would not be available under FOIA. However, the plans that are approved, the reports that are approved, and the final reports that are approved are available. And they're going to be redacted because they come from the companies, and so company-specific information would not be available. However, the results themselves are releasable.

MR. DESJARDINS: Just to add on that, I think that's one of the important reasons why developing the website and getting this information up there proactively is so important. We devote a lot of resources to responding to these FOIA requests. If we can proactively identify the information that's going to be useful and go through that in the beginning stages -- here's the useful information, here's what we're allowed to say about it -- and get that up on the web, I think that's a much better utilization of our time than going through 200 of the -- or 50 of the 200 522 studies that are out there and trying to redact the company-specific information. That's one of the motivations for us to get that information up there as quickly as possible.

MS. RAYNER: Yes, please.

MR. SECUNDA: Good morning. Jeff Secunda from AdvaMed.

I wanted to follow up on the comment about postmarket issues being identified during the premarket review and that being the basis for a 522 that is issued immediately after clearance. Did I understand that correctly?

DR. RITCHEY: So it was identified in the premarket. It was not a premarket concern. It was something that was of concern and there were multiple devices in that area. And because there were multiple devices in that area and other devices were being considered, that device was considered as well.

MR. SECUNDA: So it wasn't really initiated because of the device under review, but rather devices that were already on the market?

DR. RITCHEY: So, as with post-approval studies, there are issues that may end up in a 522 as postmarket issues -- particular subgroup concerns, long-term concerns, that type of thing. And so there were devices being considered and devices already on the market, and the ones being considered were what initiated the new concern, and then we found that we didn't have that information across the board.

DR. MARINAC-DABIC: So basically what was driving this was the review of the premarket data identifying a potential concern, and one approach would have been to address just this particular device and issue an order through a particular authority, a post-approval study order or, you know, the other authorities that we have.

But, again, looking at the broader picture of how this will impact, you know, the types of devices and how the multiple-stakeholder community can benefit from this, this is how the decision was made to use this authority versus the other, if that makes sense. Because the other option we would've had was to issue one order under the post-approval study authority and then others as 522s.

MR. SECUNDA: Okay. So in that example, was this a cleared device?

DR. RITCHEY: That one is a cleared device, yes.

MR. SECUNDA: I'm sorry?

DR. RITCHEY: The one I'm speaking about is a cleared device. I think that there are those that are approved devices as well.

MR. SECUNDA: Thank you.

MR. KENNEDY: Good morning. My name is Joe Kennedy from Remedy Informatics. I have a question about registries, specifically ICOR, but I think it relates to all of them.

So I'm interested, when you're getting data from many sources -- it can be international, but this is even a problem within the United States -- in how you handle the different terminologies: for example, semantic harmonization, ontologies, data dictionaries. How are you handling that currently, and, as you glimpse into the future, how do you see things changing?

DR. MARINAC-DABIC: Maybe I can call Dr. Sedrakyan, but I'll start.

(Laughter.)

DR. MARINAC-DABIC: So we have actually awarded a contract to Cornell to establish this infrastructure center, and this is the very beginning stage of our work with ICOR.

So currently we have four projects underway. They have to do with the classification of orthopedic devices, harmonizing the way they're being classified throughout these data sources. We are looking, as I mentioned in my talk, at different bearing surfaces and how they fare against each other in different data sources. And among the projects that we are planning and that have just started are mobile- versus fixed-bearing knee joints.

Some of the work that you're referring to -- I know that Remedy might have some solutions for it, but, again, I will let Dr. Sedrakyan talk about where we currently are in working with the partners within ICOR to address some of these issues.

MS. RAYNER: Is that mike working?

MR. DESJARDINS: I don't think that one is on. You can use this one if you want.

DR. SEDRAKYAN: Right. So I met with your president last week, and we discussed what RemedyMD can offer in standardization of a lot of these terminologies. So we're in a process of learning what RemedyMD can offer to us.

But it's not that complicated to harmonize on a study-by-study basis within ICOR. What we plan is, really, within a particular project, we will be able to go through the number of variables that we need to standardize, and they're pretty well understood and collected in a similar way across the registries.

What's challenging, of course, is the outcome definitions, and we're in a process of learning how to make sure the reporting and collection of information is the same throughout the registries. And very soon we'll start another initiative to standardize that reporting, the annual reporting of the registries. At least we're hopeful to achieve that. So we'll see how far we can go.

And if I can ask a question, too, while I'm here -- but go ahead. Sure.

MS. RAYNER: Do you have any more questions?

MR. KENNEDY: No, just thank you.

DR. SEDRAKYAN: A quick question. I assume that when you discuss 522 orders with manufacturers, then by default it's all harmonized -- what you're requesting in terms of design and the variables and elements they need to look at. Or maybe you can share your thoughts about that. If it's harmonized from the outset, then it would be easier for you, later on, to analyze the data, and if you consider combining information, say, across post-approval studies, it would be an easier task.

Of course it's still a burden for you, an enormous burden. If it's going to grow like this exponentially the way you have shown, it would be a lot of work for epidemiologists, of course, to do.

DR. RITCHEY: So when companies get an order, we order the questions. We don't mandate the study design. We provide our recommendations, and our recommendations would lead to harmonized results. However, many companies will choose to modify the recommendations slightly. So we do get similar data, but not data that is sort of in a common-data-model type of scenario.
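To illustrate the "common-data-model type of scenario" being contrasted here, a minimal sketch with invented registry fields and codes: each source maps its local variables and vocabularies onto one shared schema before results are pooled.

```python
# A minimal sketch of mapping two registries onto a shared schema.
# All field names, codes, and records are invented for illustration.
from dataclasses import dataclass

@dataclass
class HarmonizedRecord:
    patient_id: str
    device_class: str    # shared vocabulary, e.g. "metal-on-metal THR"
    outcome: str         # shared outcome definition, e.g. "revision"
    followup_days: int

# Per-source mappings from local codes to the shared outcome vocabulary.
REGISTRY_A = {"REV": "revision", "DIS": "dislocation"}
REGISTRY_B = {"re-operation - revision": "revision"}

def from_a(row: dict) -> HarmonizedRecord:
    return HarmonizedRecord(row["pid"], row["implant_type"],
                            REGISTRY_A[row["outcome_code"]], row["fu_days"])

def from_b(row: dict) -> HarmonizedRecord:
    return HarmonizedRecord(row["subject"], row["device"],
                            REGISTRY_B[row["event"]],
                            round(row["fu_months"] * 30.44))

pooled = [
    from_a({"pid": "A-001", "implant_type": "metal-on-metal THR",
            "outcome_code": "REV", "fu_days": 740}),
    from_b({"subject": "B-017", "device": "metal-on-metal THR",
            "event": "re-operation - revision", "fu_months": 36}),
]
print(pooled)
```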

DR. SEDRAKYAN: They might come without the definitions as well.

DR. MARINAC-DABIC: Well, this is why I think it's important that we hold meetings like this, to raise awareness about what the opportunities and the real gaps are, because I know that each manufacturer looking into an order sees something really important to address, but it may not realize that there are actually questions FDA is facing that go across different products, and what we can do together by utilizing more efficient ways, methodological ways.

So, again, you've seen in our orders that we are recommending a particular design. We are not mandating it, I think, but a lot of thought and internal deliberation went into actually agreeing, as a Center, that this would be the most suitable design for this particular question.

So I think if we take it from there and if we can bring the manufacturers to work with us -- and I know there are many, many more statisticians and epidemiologists now, and the societies can work with us -- I think this is a really important task ahead of us.

It's not only executing the study that really costs a lot of money; at the end, it's how we can best use the data, how we can publish the data, how this can become really, you know, a paradigm-changing exercise -- that we were able to bring together the multiple stakeholders to address important questions. That is what should come out of this.

You know, this is actionable. And by action, I don't mean removing the device from the market, but there are actions that need to be taken by each different stakeholder in order for the patients and clinicians and regulators to make the best use of this data.

MS. RAYNER: And we're about to wind up for this session. We've got about five more minutes, but we have time for one or two more quick questions.

DR. FRICTON: Thank you. This is James Fricton from the University of Minnesota.

A question about whether or not 522s or postmarket surveillance can be anticipated and discussed at the premarket level, given the increase in efficiency in data collection, when there's a possibility that a 522 may be needed down the line. We have found in our studies, right now, that it's very difficult to retool and collect data after the fact, whereas it's much easier to do that in an ongoing process, from the premarket point on, that all clinicians are aware of. Any comments?

DR. RITCHEY: I'll start. So the nuance of the 522 program is that these are safety or effectiveness questions that arise. In the case of premarket studies, whether it be a bench study, an animal study, or a clinical study, everyone has months or years to plan for that study before it occurs. But when a new question arises that is important enough that we would issue a 522, that's a public health concern that needs to be addressed at that point. There isn't that planning stage to it. So that's sort of the difficulty in being able to plan for a 522.

DR. MARINAC-DABIC: So my take on this is, whenever we have the opportunity, we should think hard of how we can actually take advantage of it. But one thing to keep in mind is that a lot of devices are cleared without clinical data, so that planning and -- you know, it's not always applicable. But, again, we are flexible. We would like to take advantage of whatever works for the sponsor, for the clinical community. We would be giving it serious thought. It's not applicable always.

DR. FRICTON: I mean, one of the concerns, of course, is the negative impact a 522 has on that particular device if it comes up. There are a lot of concerns: oh, should I use that device or not? And for that reason, it is much better, I think, to make it a smooth flow -- yes, there's a possibility that we may want to do some postmarket surveillance in this particular situation -- raised at the premarket level, so that clinicians consider that possibility and it's not something that has a negative market impact for the industry.

DR. MARINAC-DABIC: Yeah, that's an important point. And, again, maybe it's time to go back to the vision of how we envision the postmarket program to work. A good take-home message from this session would be: let us all think about the ways we can identify the gaps where we all can contribute to building the national surveillance infrastructure. At that point there are not going to be many of these surprises. You know, we're going to be capturing that in the national surveillance system, and that will be equally useful to us and you and the clinical community and the rest of the stakeholders.

Of course, between now and then, we have to deal with individual orders, and you have our commitment that, as we move along, we are going to look at every opportunity to make the process more efficient and more meaningful. And we're going to bring, on our end, again, the best experts through our extramural programs. And we expect the same from industry: to really think about, you know, bringing resources that are methodologically strong so that we can forge really strong ways to achieve the goal and the vision that we presented.

MS. RAYNER: Thank you. Thank you, Danica, that's a wonderful segue into our next session, where we're going to talk about the challenges and opportunities for collaborative efforts as we deal with these important public health questions.

Thank you very much again to all three of you, and to you for your very engaging questions.

We're going to break now and reconvene at 10:15.

(Applause.)

(Off the record.)

(On the record.)

DR. MITCHELL: Okay, according to my clock it's time to get started.

Hello, my name is Diane Mitchell. I'm the Assistant Director for Science in the Center for Devices and Radiological Health.

Can you hear me? No. Hello. How about now? Okay.

So my name is Diane Mitchell, and I'm the Assistant Director for Science in the Center for Devices and Radiological Health. I'm very pleased to be here today to moderate this second session.

For those of you who were here for the first session, we had a very engaging session with lots of really excellent questions, and we hope to have more of the same for the second session.

For this session, which will end at lunchtime, we're going to be talking about challenges and opportunities for collaborative efforts. We have five speakers, and if I can just take a moment to introduce them to you, I would like to do that.

Our first speaker is Sergio de del Castillo. He works at CDRH, and he comes from Johns Hopkins University, where he received a bachelor of science in biomedical engineering. He has been with CDRH for over 10 years, mostly doing review work, but more recently he has become a regulatory advisor to the Director of ODE. Sergio will be talking about how potential 522 questions arise.

Our next speaker is Nilsa Loyo-Berrios. Nilsa has a doctoral degree in Epidemiology from the Johns Hopkins Bloomberg School of Public Health, and she has worked in ODE and OIVD -- that's the Office of Device Evaluation and the Office of In Vitro Diagnostics. She is now the Associate Director for Post-Approval Studies in the Division of Epidemiology. While she was in ODE she served as a reviewer, a team leader, and a branch chief. Nilsa will be talking to us today about challenges and opportunities for collaboration.

Next, we have Michael Steinbuch, and he is from Johnson & Johnson. He'll be giving us the industry perspective. And Mike, I must admit, your title is a mouthful, so I'm going to try and say the whole thing. He is associated with the R&D Scientific and Clinical Affairs group, within the newly formed Safety and Surveillance Center of Excellence. He is an Executive Director, Epidemiology, supporting the medical devices and diagnostics segment in the design and conduct of observational research and postmarketing surveillance. He is also an Adjunct Professor of Clinical Pharmacy at the University of Cincinnati's James L. Winkle College of Pharmacy, and we're delighted to have Mike here.

In addition to that, we are also very fortunate to have Laura Mauri. She will be giving the academic researcher perspective on this particular topic. Laura is the Chief Scientific Officer of the Harvard Clinical Research Institute. She is also an interventional cardiologist and Director of Clinical Biometrics in the Cardiovascular Division of Brigham and Women's Hospital, as well as an Associate Professor of Medicine at Harvard Medical School. She has extensive experience in conducting and participating in observational and randomized studies.

And, finally, certainly last but not least, our speaker is Anthony Rankin -- Tony. He will be providing us with the society perspective. He is the senior partner in the Rankin Orthopedic and Sports Medicine Center of Providence Hospital, and he's the Chief of Orthopedic Surgery at Providence. In addition to that, he is a Clinical Professor of Orthopedic Surgery at Howard University and an Associate Professor in Community Health and Family Medicine at Georgetown University. Of note, he was elected the 76th president of the American Academy of Orthopaedic Surgeons in 2008, becoming the first African American to serve as president of that organization.

And we are also fortunate today: in the audience is Jeff Secunda. If you could just stand for a minute, Jeff. He will join us after the presentations. Jeff is the Vice President of Technology and Regulatory Affairs for the Advanced Medical Technology Association, also known as AdvaMed. And he is responsible for UDI and all the postmarket policy issues.

So what I would like to do is come up and change the slides in between everybody's presentation, and I think what may be most helpful, because we have so many presenters for this session, is that, if you have any clarifying questions in between the presentations, please feel free to go up and ask them, and then we'll get into the meat of the discussion at the end of all of the presentations.

Does that work for everyone? Okay, I'm seeing some nods. All right, great.

And with no further ado, let's get started with Sergio.

Thank you.

MR. DE DEL CASTILLO: Good morning. My name is Sergio de del Castillo. I'm a regulatory advisor in the Office of Device Evaluation and formerly a scientific reviewer in the Orthopedic Spine Devices Branch in ODE.

I was invited to present the premarket FDA perspective regarding 522 postmarket surveillance studies, and I'll be approaching that topic generally from these four areas listed here.

As was discussed earlier this morning, Section 522 of the Act outlines the criteria by which the FDA may order postmarket surveillance on devices, and I simply just ask that you keep these criteria in mind as we go through the remainder of the presentation. Although devices may fall into one of these categories, this doesn't necessarily mean that they will be subject to a 522 order, but it's really the criteria by which we start.

So the questions included in the 522 order can arise and develop from a number of different sources of information, and I would say that these questions typically do not present themselves suddenly. Rather, there are initial questions or concerns that arise at a variety of different points in the lifetime of the device, both pre- and postmarket, and as more information is obtained by the Agency, these safety and effectiveness questions can evolve over time and in some cases become more important or weighted such that a decision is made to order postmarket surveillance to answer those questions.

Generally speaking, in my view, the 522 questions can arise from four broad categories, which I've listed here, and I'll discuss these categories in more detail momentarily. But before I do, I think it's really important to understand and appreciate why ultimately a 522 order is issued for a device.

On the whole, the questions outlined in a 522 order are intended to generate data that will help the FDA to answer these overarching questions that are listed here, from a premarket perspective; that is, are we asking the right safety and effectiveness questions when we review marketing applications for these devices? Are we using and requesting the appropriate performance data to mitigate the potential risks to health, as well as to demonstrate safety and effectiveness? Does the device labeling adequately mitigate the potential risks to health? And ultimately, are we reasonably ensuring that devices are safe and effective prior to marketing?

So as I said, the 522 questions can arise generally from four broad categories, which I've listed here again, and these categories are somewhat nebulous. So to help illustrate the kinds of questions that arise, I'd like to use a real-world example where the FDA issued 522 orders for postmarket surveillance, and that is dynamic stabilization systems.

And although I've chosen this device for today's presentation, please keep in mind that there are many other devices for which 522 orders have been issued that could also be used as examples for the same purpose.

So as a little bit of background for the example, there is a class of devices called pedicle screw spinal systems, and these devices are intended to mechanically stabilize the levels of the spine that require a spinal fusion treatment for a number of different indications. And in general, these are devices that are composed of metallic rods and screws, which are constructed as shown in the figures that are here.

For the purposes of today's presentation, I want you to remember that these devices are intended to inhibit motion of the spine, and they're typically manufactured from metallic materials, which are very stiff.

So as often happens with medical devices over time, we see an evolution of the designs, and we did see that with pedicle screw systems. Some of these design changes or evolutions fall under the term we call dynamic stabilization. Dynamic stabilization systems are essentially just like pedicle screw systems, as they are still intended to provide mechanical stabilization of the spine as an adjunct to fusion surgery.

However, the key technological difference here is that these devices allow some degree of motion. Theoretically, the addition of some motion will potentially aid in the development of a solid fusion, based on the properties of bone growth and healing.

And the motion of these devices can be achieved through a variety of different mechanisms and designs. To help illustrate that, I have some photographs here of devices that utilize some of the types of dynamic stabilization features that exist, which will give you an idea of how they can differ from traditional pedicle screw systems. I've chosen these photographs simply to illustrate the variety of designs that could be used to allow motion of the device, although there are certainly many more that exist.

For example, motion could be achieved through a hinge or a spring-like mechanism. Others could use flexible or less stiff materials such as polymers. And some devices may be designed with a combination of these elements.

So as you can see, there are some clear technological differences between traditional pedicle screw systems and dynamic stabilization systems.

As part of the premarket review process for pedicle screw systems, including dynamic stabilization systems, we would ask the questions that are listed here on this slide. And I want you to note that these questions are not specific to dynamic stabilization. We would ask these questions for any pedicle screw system, including dynamic stabilization. And so these are not new questions. However, the questions take on a slightly different context as a result of these new technological features that allow motion.

So for pedicle screw systems, we generally evaluate safety and effectiveness, including potential failure modes, using preclinical methods and that's mostly mechanical testing. And shown here is a typical setup for a mechanical test that would be used for pedicle screw systems, based on an FDA-recognized test standard. Based on this kind of mechanical testing, we know generally how pedicle screw systems can fail.

However, based on information, including publicly available medical device reports, we've seen reports of some failures of dynamic stabilization systems that we would not typically see during mechanical testing of pedicle screw systems, and it is here that we begin to see some deviations from the questions that we would generally ask of traditional pedicle screw systems.

When we see potentially different types of device failures, we have to begin asking ourselves if we are identifying all the potential failure modes for the device and if our current methods for evaluating a device adequately predict and/or mitigate these failures. Even if there are new types of failures, these do not necessarily have clinical significance. Nonetheless, we still have to ask if these new types of failures correlate with clinical outcomes.

Based on information, again including publicly available MDRs, we also saw reports of adverse events that made us question whether the adverse event profile for dynamic stabilization systems might be different than traditional pedicle screw systems. And in combination with the questions that I just described previously, these questions begin to take on a greater significance and may rise to the level of issuing a 522 order.

So there are some obvious challenges for both industry and FDA with respect to the 522 process and the issuance of 522 orders. So we recognize that 522 studies can be challenging and require a significant expenditure of time and resources. And sometimes the 522 questions themselves can be challenging, but certainly possible to answer. Also, as new information is fed back to the FDA, we may have to change our premarket review process and practice over time to accommodate this new information that we gather. We also recognize the potential impacts on device innovation and regulatory pathways to market.

Nonetheless, I think that there are some opportunities for benefits and collaborations. For example, there are various study designs and statistical methods that can be employed to minimize time and resources, some of which will be discussed later today. Also, the FDA is always open to discussion and collaboration through the pre-IDE process, particularly to identify and discuss current premarket review practices, including for those devices that are subject to a 522 order.

In particular, as data are generated from the 522 studies, there's a great opportunity to collaborate on the improvement of existing preclinical test methods and in some cases even the development of new test methods for evaluating these devices, especially bench testing. This collaboration could include the FDA, industry, and standards organizations such as ASTM and ISO.

I think more importantly, there are numerous benefits to the data that are going to be generated from a 522 study. With a larger set of valid scientific evidence, industry and the FDA can draw more meaningful conclusions regarding the safety and effectiveness of the devices. We can also have greater assurance that the potential failures and risks to health are adequately identified, and hopefully any correlation to clinical outcomes can be determined. Further, there's great opportunity to inform and improve preclinical test methods for evaluating the safety and effectiveness of devices.

We can also identify the technological characteristics that may not result in safe or effective devices, which is highly valuable information both for the FDA and for industry, particularly for consideration during the design development of new devices. And this, in turn, could foster innovation of designs for a particular device type or family.

The data will also present an opportunity to improve or create device-specific guidance, which fosters greater transparency of current premarket review practices and performance data requirements. And ultimately, again, the primary benefit of these data is to ensure the continued safety and effectiveness of medical devices to protect the public health.

Thank you for your time.

(Applause.)

MR. DE DEL CASTILLO: And I can take a few clarifying questions if there are any at this time. Okay, thank you.

MS. PETERSON: I have one.

MR. DE DEL CASTILLO: Oh, okay.

MS. PETERSON: Amy Peterson, American Medical Systems.

We don't make orthopedic products. But as an industry, we use tools like design FMECAs to look for risks and identify those, and then those cascade out to what kind of tests we do to provide the objective evidence that we've addressed those risks.

So when you talk about a larger set of valid scientific evidence, is it that the Agency doesn't agree with what we provide in the design FMECAs? Because oftentimes, on PMA products in particular, you have access to that information.

So where is the disconnect from the industry analysis versus how the Agency is looking at it? Is there something you can point to that we need to do better?

MR. DE DEL CASTILLO: So I'm primarily talking about 510(k) devices. And while we recognize that industry, of course, goes through the risk determination process as part of their design controls, there are certain limitations. Many of the risks that are identified are theoretical in nature, and at the time that we're evaluating those risks, we may be assigning a weight to them based on the current knowledge that we have at that time. And we may say, oh, you know, these risks are relatively low or nonexistent.

However, as we gather new information, some of it being public and some of it being privy only to the FDA or to industry, we have to start reevaluating some of those risks, because now we realize that some of them are actually of a higher level or may require additional data or methods to mitigate or evaluate.

So it's not that there's necessarily a disconnect or a disagreement in how it's being done. It's just that sometimes the risks that are being evaluated are being done with a limited set of data at that particular moment in time. That's just the nature of the device process, I believe.

DR. LOYO-BERRIOS: Good morning. My name is Nilsa Loyo-Berrios, and I'm Associate Director for Post-Approval Studies in the Division of Epidemiology.

I just want to clarify that I have never worked for ODE. I've always, since 2005, worked with OSB as a reviewer. I work with submissions from the premarket colleagues, but I've always been at OSB.

So today I've been asked to talk about the FDA perspective and challenges and opportunities to collaborate when we need additional data from a postmarket perspective, and I wanted to start by defining what postmarket surveillance is.

Surveillance is the ongoing systematic collection, analysis, interpretation, and dissemination of health-related data to improve health and reduce morbidity and mortality.

And for medical devices, postmarket surveillance is the continued monitoring of the safety and effectiveness of a marketed device. And we do this through different mechanisms that you have heard throughout the morning. The first one I have listed here is the spontaneous reporting of adverse events. This is also known as the MDR system. We have the mandated post-approval studies, postmarket studies. That includes both post-approval studies that can be ordered at the time of approval for a PMA device or a Class III device and the topic of today, the 522 studies. And we also have the epidemiologic research that can be conducted for device type areas.

And my presentation, of course, is going to be focused on the challenges and opportunities related to the 522 studies.

You have heard throughout the morning about the FDA authority to impose 522 studies, and you have heard that these can be ordered for Class II and Class III devices that meet the statutory criteria, but not all the devices that meet these criteria get an order.

So how do we determine there's a need for additional postmarket data? We go through different sources. For example, data from our spontaneous adverse event reporting system can provide a qualitative snapshot of device performance, through which different types of malfunctions or clinical events can be described in terms of severity of the event, clinical sequelae, and treatment needed to address the issue. The data can also be used for signal detection: unexpected events can be identified, and changes in severity of expected events can also be identified through these types of systems. A passive surveillance system is limited by underreporting and the lack of denominator data, but these systems are useful for identifying unexpected adverse events.
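
To make the idea of signal detection concrete, here is a minimal sketch of one widely used disproportionality statistic for spontaneous-report databases, the proportional reporting ratio (PRR). The device/event scenario, the counts, and the screening threshold are all hypothetical illustrations, not CDRH's actual screening method.

```python
# Minimal sketch: proportional reporting ratio (PRR) for a device/event
# pair in a spontaneous-report database. All counts are hypothetical.

def proportional_reporting_ratio(a, b, c, d):
    """PRR from a 2x2 table of reports.

    a: reports of the event of interest for the device of interest
    b: reports of all other events for the device of interest
    c: reports of the event of interest for all other devices
    d: reports of all other events for all other devices
    """
    rate_device = a / (a + b)   # event share among the device's reports
    rate_others = c / (c + d)   # event share among all other reports
    return rate_device / rate_others

# Hypothetical: 30 of 400 reports for one system describe the event,
# versus 150 of 12,000 reports for all comparator systems.
prr = proportional_reporting_ratio(a=30, b=370, c=150, d=11850)
print(f"PRR = {prr:.1f}")  # 6.0 here; values well above 2 are often flagged
```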

Another potential source is the post-approval studies. While these post-approval studies are designed to answer a specific question, additional issues can be identified throughout the implementation of the study and the review of the interim reports or the final reports. And concerns can also arise from the research or throughout the systematic review of the published literature. And other sources can also include concerns from clinicians and from the public. It could be professional societies or patient advocate groups. And this is just to name a few.

This morning you have already heard from Mary Beth about the process that we go through, the pre-522 process. And when the decision is made that a 522 order is needed and the order is issued, then we start, you know, interacting with the companies and we face challenges.

First, the studies can be delayed for several reasons. Manufacturers, as you heard this morning, have 30 days to provide their plan to address the order. Once the plan is received by the FDA, it is reviewed for formal approval, and it can take time to reach agreement on the study plan. There may be questions on what is the appropriate study design to use, what population has to be included, what endpoints have to be included, how they are going to be collected and how frequently, what hypothesis will be tested, what data collection tools to use, what data sources can be used, et cetera, and also what statistical analyses are needed to address the question. Agreement on all of these issues is needed before the plan can be approved, and as I said, this can take time.

Then, once we agree on the study plan and we get the approval of the plan, there may be challenges also in implementing the study. The first one I have listed here is that it can take time to recruit study sites and obtain IRB approvals. There may also be problems with enrollment of study subjects. For example, if a patient can get access to the device outside the study, they may prefer to do that rather than join the postmarket study. Studies can also have problems with data quality and completeness, and with keeping good retention of study subjects.

And all of these can limit the internal validity and the generalizability of the study results. In the end, it limits the ability of the manufacturer to address the 522 order.

Then, despite all of these challenges, we see opportunities to make 522 studies successful, and I will first talk about the phase when we're designing the study. Most importantly, you should focus on the question at hand, but, you know, also consider methodologies that can leverage existing infrastructure and databases. Later today you're going to hear examples of data sources that can potentially be used to address 522 questions and also of methodologies.

Then when it comes to implementation, we would like you to consider the option of using a centralized IRB whenever possible. And we recognize that this may not be feasible in all instances, but we really think that having a centralized IRB can speed up the IRB approval process and have an early start of the study.

In terms of implementation of the study, we'd like you to plan ahead. When you have a study plan, make sure that you have measures to minimize potential biases, for example, in how study subjects will be selected to participate; have measures to maximize study retention and minimize problems with data quality and data completeness. And this is not meant to be an all-inclusive list, but these are the most common situations that we have encountered.

So next, I'm going to give you an example of a 522 study.

In 2006, the FDA became aware of several cases of corneal ulcers in pediatric patients treated with overnight orthokeratology lenses. The safety concern was identified through the MDRs and the published literature. Two orders were issued to address the public concern, and in this case the two manufacturers got together and worked with a third party, an academic center with the relevant expertise, in this case, epidemiology experience with ophthalmic devices. The microbial keratitis study was completed, and the 522 order was considered addressed in 2011.

So it was not an easy road, there were bumps in the road, but I want to highlight the collaboration between the two companies and the success that they had.

And then I want to close the presentation with some final remarks.

In summary, a 522 study can be successful when you have the relevant expertise at the table. You should focus on the question at hand. Be realistic when designing the study, and whenever possible leverage existing data sources or infrastructure. It is important to collaborate with interested stakeholders. This could be professional organizations, academic centers, patient advocacy groups. And we also recommend that manufacturers work interactively with the FDA, as this can streamline the formal review process of the submissions.

And in the end, we all have the same objective of having studies that are scientifically sound and that can provide meaningful research leading to the protection of the public health, either by providing reassurance about the device performance or by supporting regulatory action as needed.

And this completes my presentation, and I can take a couple of questions.

(Applause.)

DR. STEINBUCH: Good morning, everyone. My name is Michael Steinbuch. I'm not going to provide my title. It's not that relevant actually. But I do want to first begin by thanking the organizers of the workshop for this opportunity to speak today on this very interesting topic.

And I do need to begin by a disclaimer, and the disclaimer is that the comments that I will share today are really my personal views. They do not necessarily represent those of Johnson & Johnson or of the broader medical device and diagnostics industry.

So the other thing I should point out is that I usually like a very lighthearted talk, but this is a very serious topic, so we'll see how it goes.

(Laughter.)

DR. STEINBUCH: Okay. So on this slide, the postmarket surveillance studies program has a lot of -- you know, the guidance is very well laid out. I applaud those who worked so diligently to get to where we are today, and as was pointed out at the outset, one of our goals today is to try to see where we're going to go next and how that vision is going to be realized.

And so I have here just a few quotes from the FDA webpage with regard to the 522 postmarket surveillance studies, and I just want to highlight them because some of the things that are interpreted by industry may be interpreted very differently by other people. And so while I don't have any way of knowing for sure, these are my thoughts about how some things get perceived.

And so the first here is that, you know, we know that in 2008, when the Office of Surveillance and Biometrics and the Division of Epidemiology took over oversight, they took full responsibility for, you know, mandating 522s. And what I think is a perception is that if somebody receives a 522, it's often perceived as something terrible, very unfortunate -- and for some reason I see head nodding, so maybe some of you agree -- that it's perceived sometimes as a negative and that somehow I didn't do something right, you know. And I don't think that is the intent at all of 522s, and I want to try to dispel some of the, you know, concerns about that. It's sort of like, you know, the bad piece of mail that comes. We need to not think of it that way.

There is a purpose. You know, there is this automatic tracking system. There has been discussion today about how that could be enhanced to provide more information for increased transparency. And I want to point out, most importantly, that it is imperative and it's in the guidance that we conduct studies that are, you know, most efficient and conducted in the least burdensome manner.

So this process has already been shared with the group, so I'm not going to review this. I just want to point out that, you know, a 522 does not necessarily equate to a safety issue. Right in the guidance, in fact, it indicates that if there's a perceived need for more experience with some device in a different setting or a different population, that could trigger a 522. That isn't necessarily a safety issue, right? It's that we now have a different use and a different model, and so people shouldn't think -- and I know, you know, for people from the FDA, this is sort of like a no-brainer, but for people who are perceiving it from the other side, I think that there's often this thing saying, well, what do you mean, you know, a different population? Well, it is realistic. I mean, from an epidemiologic perspective, things can be different in a real-world setting, and we need to better understand that. So, you know, I'm all for that, but I want to make sure everybody's on that same page.

With regard to the team review of the issue, I know there's a lot of people out there who would love to be a fly on the wall at those meetings, but it's not going to happen. But I do want to say that I think it's important that -- and Mary Beth discussed it earlier, that there is some collegial sort of opportunity to engage from the industry side so that there really could be some, you know, back and forth, maybe some clarification, maybe some additional perspective from a clinical perspective about the particular medical device of interest. So that's something I think can be discussed later.

So what are key challenges throughout this process? Well, I think the first one that I note here is that -- and it sort of builds -- we've first got to make sure that we're scoping the study to answer the public health question, and we want to do this in the least burdensome manner. And so the first thing that comes to my mind is that we might look for potential existing data sources to be able to answer those questions. And, you know, do we have preclinical data? Is there clinical data? Is there administrative medical claims data? Is there scientific literature, or a combination thereof, that could help generate that information? If there is, that is the first place to go. Or there could be some other options, such as revising the labeling, that might also shed some additional light.

If those sources, for whatever reason, are not available or are insufficient to answer the question, then clearly the next step would be to generate new data. And if we do consider generating new data as an option, we have to make sure that we avoid doing full-blown clinical studies that go beyond what we think is warranted, in alignment, you know, between the FDA and the manufacturer about what is really needed to answer that question.

But, you know, for those scientists out there, both at the FDA and other stakeholders, you know, there's this sort of inside thing, you know. God, wouldn't it be nice to X, Y, Z? And so there's, I think, a tendency to want to have more. More information is better. But we have to balance that against what potential consequences there may be for overburdening the system.

And one that I point out here is that, you know, if we go beyond what is probably warranted, we run the risk of all these things that could happen, one of which, I point out, is a voluntary withdrawal of a product. The product may well be perfectly fine at the end of the day, but we really won't know that, because there's really no ability, from a practical standpoint or a feasibility standpoint, to conduct the study that is being requested, perhaps. And so that would be kind of unfortunate because that would be withdrawing, you know, patient access to the product. So we want to avoid that, and I think that we want to make sure that we're not in any way impeding innovation.

So with regard to methodology, there's been discussion about, you know, what are the methodological issues? And those all need to be taken into account. And, as discussed earlier, the advent of Unique Device Identification will be a very big advantage in having more information available, along with electronic health records, et cetera.

So we want to be on that plan, and we do need to get various cross-functional input and bring our experts in, on the FDA side as well as the manufacturer side, to make sure we're getting where we need to go.

So opportunities for collaboration. First and foremost, I think that we need to focus on the common goal, which is ascertaining valid data. We want to do this through open communication throughout the process of the total product life cycle. I think we need to do this by aligning on what the approach is. I think I list here some possible approaches. There'll be more discussion today.

We often don't think about a large simple trial, but it may be a perfectly good way to do it. Electronic health records. Pooling disparate data sources. Dr. Mauri is going to talk about the DAPT study and the Medical Device Epidemiology Network, which I want to bring up here because this is an opportunity. Many of you may be very familiar with the MDEpiNet. But the goal is really to develop innovative methods in the device world so that we can improve and enhance the understanding of device performance. And I don't have time to go through this here, but this, I think, is a particularly good avenue to end up with studies that could be done more efficiently and faster and be more generalizable to a larger population.

So proposed actions. Just to wind up here, I think that we need to facilitate better communication between industry and regulators and we need to build, I think -- and the way to do this is build rapport and trust and fully engage in the process. We need to, I think, challenge ourselves to make sure that we're answering the public health questions and scope the studies appropriately.

And the studies, by the way, I think need to be practical, and we should keep them as simple as possible to make sure that we're maximizing the likelihood of success. Also, I think, using the Medical Device Epidemiology Network as a framework, we will have timely updates for product benefit/risk profiles in the context of the scientific evidence.

So I firmly believe that these action steps would be very helpful in moving the needle and will lead, I think, to more successful and satisfying experiences for all stakeholders in the 522 process.

And I ran out of time, so if you have any questions, I'll be happy to take them.

(Applause.)

DR. STEINBUCH: No questions? Are you stunned?

(Laughter.)

DR. STEINBUCH: All right then, we're ready to move on.

DR. MAURI: Well, first I want to thank the organizers for the opportunity to speak. I think that the academic perspective is an important one in being able to bring collaborations to fruition.

And I'll talk mostly about a very specific example that Michael alluded to, which is the DAPT study. It's unique in that it is a project that brought together many diverse interests and is a very large study, it's randomized, it's simple, but it is large and it's for a device that was -- you know, devices that were approved under the PMA process.

So this may not be generalizable to everything, but I'll try to pick out the points that I think are generalizable to good working collaborations with academia and where academia may provide some value. Specifically, I think there is a role for academia in facilitating transparency and in providing a link when there are multiple diverse stakeholders involved.

So these are my disclosures. Most of these are related to the specific study that I'm talking about.

So the DAPT study really originated after the 2006 coronary stent panel meeting looking at the safety of drug-eluting stents. But in 2008 the FDA issued a statement that there was a need at that point for a large, pragmatic public health trial to look at the interaction of medications, thienopyridines that prevent clots within stents, and that this should look at the specific topic of the late risks of the rare adverse event of stent thrombosis and where and how that risk could be modified by treatment. And the anticipation was that this study should be able to change clinical practice and provide important information going forward that would influence the labeling of drug-eluting stents.

The background for this was data that were presented, as I said, in 2006 at the advisory panel meeting, and the background data were really quite limited, so it was clear that looking at existing data was not going to be sufficient to answer this question.

First, the observational data were not prospectively designed to answer the questions and didn't have sufficient information on the specific endpoints of stent thrombosis. But there were indications that longer treatment might be associated with benefit.

So here are some important results from a single center, Duke, showing that within the drug-eluting stents there was a benefit to longer treatment with thienopyridines.

That being said, the existing randomized trial data, which were reviewed at that time, didn't anticipate the importance of a late but small signal of events, or the impact that signal could have on the medical community and patients. And the specific trials were limited, as most approval trials are, to a limited patient population, so the results couldn't necessarily be extended to the broad patient population undergoing coronary stenting.

So these data showed perhaps a small signal of events after one year. No significant difference overall, but with the caveat that these were studies that weren't powered to look at small signals of events after one year and were conducted in a fairly refined patient population.
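
As a rough illustration of why trials of a few thousand patients cannot resolve such late, rare signals, here is a minimal sketch of the standard two-proportion sample-size approximation. The event rates, alpha, and power below are hypothetical stand-ins, not the actual DAPT design assumptions.

```python
# Minimal sketch: approximate subjects per arm needed to detect a
# difference between two rare event rates (normal approximation).
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.90):
    """Two-sided test of p1 vs p2; returns approximate n per arm."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Hypothetical: detecting a halving of a 1.0% late event rate to 0.5%
# requires on the order of 6,000+ subjects per arm.
print(round(n_per_arm(0.005, 0.010)))
```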

So coming out of those meetings and over the ensuing couple of years, the manufacturers of drug-eluting stents, as a group, recognized that a definitive trial would necessarily be large and challenging for any one sponsor to conduct. And so the FDA request resulted in a unique public-private collaboration among four manufacturers of drug-eluting stents and four manufacturers of thienopyridine therapy.

The process was facilitated by AdvaMed. I think it's helpful to see a little bit of a timeline and the practical features of this process. And that was important to be able to convene the industry groups in selecting with a transparent process the academic CRO that would conduct the study under the basic parameters that were outlined by the FDA and by industry.

And so this is when I became involved in the project, submitting the proposal from Harvard Clinical Research Institute, and we eventually submitted the IDE to the FDA as the central leadership for the project. The IDE was approved under a fairly expedited timeline, the trial began enrollment, and we completed enrollment of this large, worldwide study this summer. So the results will be expected in 2014.

And I'm not going to dive into the design per se because it's a randomized trial and it's fairly straightforward. What's simple about the study is really the data collection and really trying to scale down the parameters and any secondary analyses that are being done, so that we can answer the primary questions at hand without a lot of distraction, I think, from other secondary issues. As much as scientists would like to take advantage of such a large dataset to really get granular data, there's a tradeoff, I think, between how expeditiously the study can be conducted and the quality of the primary data that can be collected in such a large study.

So this is the largest randomized trial regarding coronary stents. It's the first randomized trial in cardiology that's supported by multiple different companies. There are other examples of observational studies in cardiology that have been facilitated by multiple stakeholder involvement.

And the challenge really is, you know, what were the practical challenges that we encountered? One of the things that I think really facilitated the approach was to have an academic group at the center of the conduct of the study, both as a sponsor and also taking full responsibility for the scientific conduct and analysis of the study. Without that, I think every decision would have required discussion and agreement, whereas the process is really facilitated by an active working group. It does help to have that one central somewhat disinterested party, I think, to conduct the study.

The study is also a global study in terms of the involvement of investigators worldwide, which I think has really facilitated, number one, the rapid inclusion of subjects, and, number two, some of the external validity of the study, enabling it to have an impact much more broadly. And the site involvement has been something that's really been remarkable.

Because this was a question of significant clinical interest, even though the study was conducted really as a pragmatic, simple study without the same types of incentives that would normally be given for an industry-sponsored study, there was significant sort of grassroots excitement about enrollment into the trial, to be able to get as rapid an answer to this question as possible.

And so the study concluded enrollment in July of this year. This was also facilitated by a unique design, not just in terms of the funding of the study coming from all of these different manufacturers, but also data contributions.

So the manufacturers of the different drug-eluting stents were able to leverage their post-approval studies, some of which were still underway and others that were begun during the time frame of the DAPT study, to be able to unify their protocols and then contribute data for the final analysis, so identical randomization, identical data collection, and a central data analysis being conducted based on different studies that were initiated by different manufacturers.

So the lessons learned really were that there was a great role for the FDA to play in collaborating across both the premarket and the postmarket side and facilitating a working group among these diverse participants. The academic role, I think, is quite central. And I'll speak a little bit more to that in summary for this specific talk. And then the industry has continued to play an important role in terms of sharing operational expertise as well as contribution of actual patients to the study.

There has been some participation from other federal agencies for unique scientific questions, but for the most part, that's been limited in order to ensure the success of the primary study question.

And, operationally, this is executed in terms of really very frequent updating of the stakeholder group on both scientific and operational issues, while preserving independence of the analysis.

So I think everybody has to give up a little bit of what they think an ideal study would constitute. For the FDA, I think there was great streamlining of the processes. For academia that meant giving up some of the interest in secondary questions. For industry this meant really a willingness to support something that would advance clinical practice without a competitive value, and then, for patients and physicians, I think their willingness to collaborate in a very barebones, simple but large trial.

So I talked about some of the process that we've seen. This is not universal, that every postmarket study would follow such a complex rubric. But I think it can be important when we need to answer really broad treatment strategy questions that can't be answered by a single party.

What is the academic role within this framework? I think in this case the academic leadership can take in various different inputs and then conduct independent design and operations based on the feedback from these diverse and important groups. In this case, the convener of the industry groups was AdvaMed, which was important to facilitate the process. And then the academic leadership also takes responsibility for disseminating the results so that these are available to the professional societies, to the FDA for the labeling, and to patients for their own information and choices of medical care.

So the desired result from an academic collaboration is really an unbiased answer to an important public health question, while coordinating the input of various different stakeholders. The pragmatic solution involves combining resources, combining sources of data, balancing the different interests of the various stakeholders, and really unifying these under a single protocol with simple requirements, while also ensuring rapid dissemination of results.

Thank you.

(Applause.)

DR. MAURI: I'm happy to take any questions now.

DR. DUGGIRALA: Hi, I'm Hesha Duggirala from FDA. I have two comments and one question.

First of all, you guys have done a tremendous job on this at HCRI. She makes it sound too easy, all of the contributions that they've made, but it's been great working with them.

I think it's important to point out that there was no 522 order for this trial -- it didn't have to come to that -- and I think it's a testament to the sponsors and to AdvaMed to be able to come up with this study in the absence of a 522 order. And so I think that's a really good message for people here: it doesn't have to go that far. If you've been discussing an issue with FDA, if you know that there's going to be a public health question that needs to be answered, you don't have to wait for a 522 order to be issued to be able to do a study like this.

Laura, could you just touch a little bit more on how the reporting works from HCRI to the sponsors versus to the FDA?

DR. MAURI: So in the current phase, HCRI receives data and really is just receiving data, and it's the DSMB that's monitoring the ongoing progress of the trial, because the randomization phase is still ongoing and the follow-up phase is still ongoing. And so the feedback that the sponsors receive is the basic metrics of enrollment, safety, and compliance from the DSMB.

But going forward, the HCRI provides reports to the FDA. HCRI will provide a final study report to the manufacturers. But in addition to that, there is an additional value provided to the sponsors, which is that they will receive all of their product-specific information in a dataset that's open for their use going forward. And so there's more than just a report. They own the data that is specific to their product.

And I think that's a good point that you brought up, Hesha, regarding the 522. This was an example where there was really a streamlined and very open discussion among the sponsors and the FDA and academia.

DR. MARINAC-DABIC: Danica Marinac-Dabic.

Yes, there was no order issued. However, there was a discussion at that time, and actual orders were written. They were never delivered, but --

(Laughter.)

DR. MARINAC-DABIC: -- it speaks to the point raised earlier about what we can do as a community of stakeholders to actually raise this to the level that everybody will work together without particular orders.

So FDA was deliberating about the issuance of the order, and it was very close to issuing the order. However, we are pleased to see the progress, certainly, and that has really been a great example of how many, many groups can work together, even across the device and drug communities.

MS. CONTE: Hi, I'm Jennifer Conte from the American Gastroenterological Association.

And we, for gastroenterology, are looking to sort of spearhead these types of collaborations with different groups, and I'm very interested to learn more about the role that AdvaMed played in coordinating with the academic centers to try to spearhead and pull together this type of collaborative study.

DR. MAURI: I think I'll answer that briefly. I know there will be time for more questions later when Jeff Secunda will be on the panel representing AdvaMed.

But in this specific project, AdvaMed played an important role in being able to bring the industry groups together for discussion for the selection of an academic group to go forward. Once that selection process was completed, then the academic group -- our research group actually formed the focus for the continued collaboration going forward.

DR. MITCHELL: I think we have one more question.

MS. WELLS: Michelle Wells with Gore & Associates. This question is actually coming from my colleagues in Flagstaff.

The question is, what's the total cost of these studies? Can that be shared -- an average or median cost of the studies? And is FDA funding any of these studies?

DR. MAURI: This study is entirely funded -- almost entirely funded by industry. It's a shared cost, and the approximate cost has been publicly disclosed to be approximately $100 million total for this specific study. And that's a large sum, obviously. That's why the study couldn't be conducted by a single manufacturer in any realistic way.

And that being said, I don't think that when you say these studies -- this is a very unique example. It's a very rare event. It was clear that there's not a good way to use existing data or even prospective observational data to answer this question. And so it was a situation where this really was the simplest way to go forward. But I don't think that that is the universal solution to questions that arise in the postmarket arena.

MS. WELLS: Thank you.

DR. MAURI: Um-hum.

DR. RANKIN: That wasn't timed. Good morning. I'm Tony Rankin, and I also thank the organizers of the workshop and also thank you for the opportunity to present on behalf of the American Joint Replacement Registry.

The American Joint Replacement Registry is a collaborative, multi-stakeholder, not-for-profit 501(c)(3) organization whose goal is to enhance patient safety, improve the quality of care, and reduce the cost of care related to hip and knee arthroplasty procedures. This is modeled on similar successful programs in other countries.

The AJRR was established as a collaborative multi-stakeholder effort supported by the American Academy of Orthopaedic Surgeons, the American Association of Hip and Knee Surgeons, the Hip Society, the Knee Society, hospitals, health insurers, consumers, and medical device manufacturers.

The board is a multi-stakeholder board. The American Academy of Orthopaedic Surgeons has four representatives; the specialist societies have one each; industry, through AdvaMed, has two representatives; there are two payer representatives; and we have a public advisory board that has currently one representative on the board and possibly a slot for a second.

Although this is a new effort, the academy really has been involved with trying to get a joint registry started for well over a decade, but we're delighted that we've finally gotten it off the ground. A business plan was finalized in May of '11. We have secured startup funding through the next four years. We did a pilot study with 8 institutions and 11 hospitals that was concluded in July of this past year. We had a multidisciplinary workgroup in August that selected final registry production software, and we engaged Remedy Informatics for that. This product is enabled for Levels I, II, III, and IV data collection and storage. We've hired a research director.

This past fall, we established affiliations with international, state, and other registries to facilitate national data collection. All 11 pilot hospitals continue now to submit data. We have contacted 200 new sites since August. We have 80 hospitals and health systems that are actively engaged in the enrollment process. We have 11 new business associate agreements that have been signed, and we've hired a research associate, a software engineer, and an administrative assistant. We currently have five employees. We're currently collecting Level I data, with plans to move into Levels II, III, and IV in the near future.

What's the value of registry data? This was an article in the Journal of Bone and Joint Surgery last fall, and I quote from here that "The few clinical trials of orthopedic implants and of orthopedic surgical procedures that take place are conducted in a relatively unique environment defined by highly skilled surgeons operating at high-volume centers. Randomized studies are rare and not easy to implement in many surgical and device trials. Thus, large U.S. and international registries are best suited to fill the resulting gap in the available evidence and meet the needs of the FDA."

Registries versus clinical studies. Clinical studies can control for multiple variables, including patient-specific factors; can measure endpoints, other than survivorship, at multiple time points; can compare relative effectiveness; they're sensitive in differentiating modes of failure and causes of failure; but there's a perception of bias, and they're typically smaller datasets, with a narrow range of surgeons.

Registries have large and broad datasets, including a wide range of surgeons and centers; there's no bias; there's a potential for earlier safety signals; they're useful for guidelines and informing surgeon decision making; they represent real-world experience with large and broad data access across patients, surgeons, hospitals, and devices; however, they cannot compare effectiveness of different devices; there are no endpoints other than survivorship; there's no data on non-device related reasons for revision; and there's limited demographic information and patient factors.

The potential opportunities, however, are that, with enhanced surveillance, registries can provide a trigger for further study needed on certain devices, and registries can provide manufacturers access to large amounts of data and rapid access to a cross-section of patients. Also, there's diverse surgeon representation as well as a diverse range of hospital types: community, research, and academic centers. And there is a large amount of aggregated and de-identified longitudinal data.
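
Since implant survivorship is the registry's core endpoint, here is a minimal sketch of the kind of revision-free survivorship estimate a registry can produce, using the open-source lifelines library. The follow-up times and revision indicators are hypothetical, and this is not the AJRR's actual analysis pipeline.

```python
# Minimal sketch: Kaplan-Meier revision-free survivorship from
# hypothetical registry records (years of follow-up, revision yes/no).
from lifelines import KaplanMeierFitter

years_followed = [1.2, 3.4, 5.0, 2.1, 4.8, 0.9, 5.0, 3.3]  # time observed
revised =        [0,   1,   0,   0,   1,   0,   0,   1]    # 1 = revised

kmf = KaplanMeierFitter()
kmf.fit(years_followed, event_observed=revised, label="hypothetical implant")
print(kmf.survival_function_)  # estimated probability of surviving unrevised
```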

The American Joint Replacement Registry data system and dataset can be used as a tool set to assist with surveillance studies. We may be able to create custom data queries, develop data forms, and provide a user portal to assist with data collection for postmarket surveillance.

Points for further discussion are that, on the premarket side, the AJRR can provide general data on predicate devices using historical data; can provide central data collection for investigators engaged in premarket clinical studies; and, through commissioned studies, can provide a source of data on results of equivalent or predicate devices by serving as a data collection tool.

In the area of postmarket surveillance, overall surveillance of implant survivorship on all implants may be able to be provided, and commissioned studies as a source for data on specific devices may also be something that we hope to do.

Thank you.

(Applause.)

MR. MAISLIN: Thank you for that talk. It's really exciting, Dr. Rankin. Could you clarify what you meant by --

DR. MITCHELL: I'm sorry, I should've asked this earlier, but would you mind identifying your name and affiliation?

MR. MAISLIN: Yes, I'm sorry. This is Greg Maislin from Biomedical Statistical Consulting.

Could you clarify what you meant by saying that the registries have no bias, while the randomized clinical trials do have bias? I think I understand what you mean, but typically that's the opposite of what current thought is.

DR. RANKIN: You're quite correct; the bias is with the randomized trial and not with the registry.

MR. MAISLIN: That's my question. Typically, randomized clinical trials are thought to have no bias in terms of the estimate of the treatment effect because all variables are controlled, at least in expectation; whereas with a registry -- I think you made a caveat that you can't compare among devices, which is difficult. If you have the right data, there are statistical methods that may allow you to do that, potentially, with some caveats. But I thought what you meant by saying that there's bias in randomized clinical trials was that it's a specialized set of patients and a specialized set of practitioners.

DR. RANKIN: Yes.

MR. MAISLIN: Whereas a registry is in the hands of the general practitioner.

DR. RANKIN: That is correct.

MR. MAISLIN: Okay, thank you.

DR. RANKIN: Thank you.
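
For readers curious about the statistical methods alluded to in this exchange, here is a minimal sketch of one such approach: inverse-probability weighting based on a propensity score, which can balance measured covariates between two device groups in registry data. The column names and values are hypothetical, and, as the caveat above implies, the method cannot remove bias from unmeasured confounders.

```python
# Minimal sketch: propensity-score weighting to compare two devices in
# hypothetical registry data. Balances measured covariates only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

registry = pd.DataFrame({
    "age":      [64, 71, 58, 66, 75, 60, 69, 72],
    "bmi":      [31, 27, 29, 35, 26, 33, 28, 30],
    "device_a": [1, 0, 1, 1, 0, 0, 1, 0],   # 1 = received device A
    "revised":  [0, 0, 1, 0, 1, 0, 0, 1],   # revision during follow-up
})

# Propensity score: modeled probability of receiving device A.
ps_model = LogisticRegression().fit(registry[["age", "bmi"]], registry["device_a"])
ps = ps_model.predict_proba(registry[["age", "bmi"]])[:, 1]

# Inverse-probability-of-treatment weights.
weights = np.where(registry["device_a"] == 1, 1 / ps, 1 / (1 - ps))

# Weighted revision risk in each device group.
is_a = registry["device_a"] == 1
risk_a = np.average(registry.loc[is_a, "revised"], weights=weights[is_a])
risk_b = np.average(registry.loc[~is_a, "revised"], weights=weights[~is_a])
print(f"weighted revision risk: device A {risk_a:.2f} vs device B {risk_b:.2f}")
```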

DR. MITCHELL: Okay. So maybe we can have Jeff join us now. And Jeff, if there's anything specific you'd like to say or note before we begin, please feel free.

MR. SECUNDA: No.

DR. MITCHELL: Okay, great. Otherwise it looks like we have one, and I think there was a second question over there that I asked to be delayed. Did you still want to ask it? Go ahead, Michele.

DR. BONHOMME: My name is Michele Bonhomme, and I'm in the Division of Epidemiology, and I'd like to thank the panelists, first of all, for very good presentations.

I have a question for Dr. Mauri. Your study is often talked about in the division as being one of the success stories. So any lessons we can learn from your experience are very valuable.

My question is -- I have two. One is, can you tell us what existing infrastructure the manufacturers, AdvaMed, or Harvard had that facilitated the more timely design of the study and facilitated its implementation?

DR. MAURI: I think the study came at a time point where we really had a lot of experience, having seen multiple premarket studies successfully completed. Even before the study was designed, there had already been an academic collaboration, called the Academic Research Consortium, that got together to standardize endpoints for clinical studies of drug-eluting stents. So we had that behind us, and it was accepted by the FDA, by industry, and by academia.

So a lot of the building blocks that I think we've heard about as challenges, in terms of standardizing endpoints and some of the methods for designing clinical trials, were already water under the bridge, you know; those bridges had already been crossed. And so I think that greatly facilitated the design. I would say there was a common rubric.

DR. BONHOMME: Can I ask my second question?

DR. MITCHELL: Okay.

DR. BONHOMME: My second question relates to an earlier comment that was made about the adverse impact that 522 orders have on industry. And your study was unique in that, because AdvaMed and the manufacturers took the initiative to address the issue, FDA ended up not having to issue a 522 order.

So I wanted to ask you whether you could comment on how the way that this study was carried out and the initiative taken impacted the manufacturers in the long run, you know, and their product, the utilization of their product on the market.

DR. MAURI: Um-hum. You know, it was a long process. I think if one actually looks at the timeline, there was, you know, quite a lot of work over the period between 2006 and when the study actually began. The IDE was approved in 2008. So there was quite a lot of building of agreement in terms of how to conduct the study operationally, more so than the design. So I think the study did have a major impact on the manufacturers. It is a very large burden in terms of the cost of the overall study.

That being said, I think it was a really positive study in that it built, I think, a sense of collaboration, I guess, among FDA and the manufacturers and across the manufacturers that perhaps had not been present before. And I think some of the practical ways that we've been able to execute the study are a testament to that. I mean, there's been a lot of sharing of expertise, not just directly with HCRI but among the manufacturers, that I think has greatly aided the project going forward.

MR. SECUNDA: Jeff Secunda, AdvaMed.

I just want to dispel the thought that if industry is willing to collaborate and get together, there won't have to be a 522. And I think that this particular study was unique in that, as was said, there was a lot of premarket work that had been done and a number of studies were already completed, plus there was a limited number of manufacturers involved, which really allowed AdvaMed to facilitate the coordination of this.

But that being said, I would put in my push for increased collaboration with FDA prior to the issuance of the 522. And I think, as was mentioned earlier, a panel meeting or other forums are very important, but they don't get down to the design of the study, the selection of parameters and objectives, and I think that's something that would be extremely beneficial for both FDA as well as industry, to be able to have that interaction before the issuance of the order. Then you have a 30-day window to submit your preliminary study plan, and after that there's even -- I don't know if it's statutory or just known -- that at six months you've got to be running, and that's still a very short period of time when your first introduction is a letter in the mail.

So please.

DR. MARINAC-DABIC: This was more in response to the previous question. I think what was also unique about the study is that the cost was shared between medical device industry and pharmaceutical companies as well. So not to say that that was easier, because the cost is the cost, but I think it was unique in that sense as well.

And to the point that you're making, Jeff, absolutely, we heard the message. I think it is important for us to really look into our processes and to ensure that the 522 order processes are more inclusive. So we will look into this and examine what we can do to, in fact, have some of these interactions which will help us all not to start from scratch when the order is issued.

MR. BROWN: Scott Brown, Covidien Peripheral Vascular. A comment and a question about DAPT.

So, again, you know, my division is peripheral, and we do peripheral stenting, and of course the space isn't nearly as large or as well developed as coronary stenting. But the idea of a multi-stakeholder trial like DAPT is fascinating.

And in terms of getting everyone on the same page, Dr. Mauri, you had mentioned access to a sponsor's own data. I was curious what access they have to one another's data in some properly de-identified way. And you probably see where I'm going. There's probably a much greater motivation to work together on this if you eventually get the full 20,000 patients' worth of data as opposed to the three that you contributed yourself. So I'm not sure if that's something that you were able to provide for, or if that was a consideration in the design.

DR. MAURI: Yeah, there is access to the overall data. I think it's probably most useful to them to have their own product-specific data, but there is overall access.

MR. BROWN: Thank you.

MR. DILLON: Dan Dillon, MED Institute, and my question has to do --

DR. MITCHELL: Can you just identify yourself?

MR. DILLON: Dan Dillon, MED Institute.

My question is along the same lines. Just some more detail for both the registry and for the DAPT study about data rights. I mean, who owns the data? Who has rights -- and ownership would imply caretaking of it, maintaining its integrity. Who has rights to it? To what extent do they have rights to it? To what extent are they allowed to extend those rights to others? You know, it's one thing to say you can look at this data, but only to look at it. You can't publish it, you can't give it to anybody else.

Those are important questions when you're talking about collaborating on studies. I'd like to certainly hear a lot more about data ownership and access.

DR. MAURI: Yeah. And I can speak to one of the specific concerns, I think, which arose very early in the phase of sort of building this group to be able to conduct the study, which was that each individual party had some concerns that the data would be used to compare different products, even though that wasn't the design of the study. And so that was something that we specifically avoided doing, except in the case where it was important for clinical reasons.

So essentially the real challenge in putting together a consortium like this for us was the legal phase of agreeing to all the things that you're alluding to, including data. But I think in this case we were able to overcome that by putting the appropriate controls in place.

The Research Institute does own the data, but there is joint ownership with the manufacturers, to the degree that the data are not identified by stent type, so that those comparisons can't be made.

DR. MITCHELL: So, Mike, I have a question for you. At the end of your slides you have a comment: "Throughout device TPLC, provide periodic review of benefit/risk profile in the context of new scientific evidence." I wonder if you can expand on that a little bit.

DR. STEINBUCH: I'd be happy to. I had run out of time.

I think that when you consider, you know, what is all of this about, at the end of the day, at the time of the approval, you have demonstrated safety and efficacy at that point, that your benefit outweighs the risk of the product or else you wouldn't have gotten approval. But because of the, you know, complexity of medical devices, because of how they're used in real-world settings, these issues arise, et cetera.

And, you know, if there is some question, as we've discussed, there is an opportunity to conduct some type of study, hopefully a simple study, to answer the question. And then, at these various intervals throughout the total product life cycle, there's an opportunity to reassess that benefit/risk balance. And if the benefit/risk is still in favor of benefit, then we're great and you go forward. If for some reason the risk elevates to a point where the benefit no longer outweighs the risk, well, that tells a different story.

But my point is that that has to be in the back of people's minds. I think it is, but it doesn't always get communicated. But I mean I have to believe that that's what we're thinking about. We're looking at benefit/risk for the product over time.

DR. MITCHELL: Tony, do you -- now, I know the registry hasn't been around for very long, but do you have any examples of how it's being used or certain ideas you have for the registry moving forward?

DR. RANKIN: Well, you're right, we've just gotten beyond the pilot, so we don't have a lot of data at this point. But, basically, as to the things that I outlined, we want to improve patient safety by being able to identify early signals, things that you'd probably get earlier than you would from other studies.

DR. MITCHELL: Jeff, I have a question for you. There was a comment that Mike made early on, and that is that receiving a 522 study is perceived as a negative. And some of the discussion here has been about how we don't necessarily have to look at a 522 study that way.

I wonder how AdvaMed is speaking about 522 studies when they're talking about them, when they're talking to industry and to the FDA.

MR. SECUNDA: I think the potential for the 522 is to clear the air. There are questions that have been raised, they're not secret, it's probably well known to the public, and the 522 has the opportunity to answer those questions one way or the other. So that's a very, very positive aspect of them. It's coming from a central source, from the FDA, so it's not, you know, everybody running out and doing their own studies to try and answer the questions. So in that regard, it is very positive, and that's how we view the 522.

I think the not-so-positive aspect is that, as I've already said, I mean, you don't know what's in that order until you open that envelope and read what FDA is looking for, and then you have a very short period of time to put together a study design and to meet with FDA, where there is a very vigorous back and forth as to how that study protocol is going to be approved. So I mean, part of that is necessary. It's part of the scientific process of coming up with a valid scientific plan to answer the questions.

But, you know, there's a lot of anxiety, I would put it that way, about how the 522s are created, what the agenda is, and how extensive the questions are. And I don't want to ramble too far, but I think, as Dr. Mauri said, part of the success of that study was that it was very limited in scope, in terms of what the primary objectives were and minimizing the secondary objectives, which allowed for a clear-cut answer and also one that was achievable. $100 million is nothing to sneeze at even if there are drug companies involved with it. But it became an achievable study. So I think that's my view of 522 and what the industry perception is of it.

DR. LYSTIG: Ted Lystig from Medtronic.

I thought it was interesting. A lot of the collaborative issues that were brought up right now were more along the lines about how you do collaboration at a fixed point in time, saying, right now, if you want to combine the data sources or you want to work together to get something done, what do you do?

And there's obviously a concern about how much data you collect, how you minimize the secondary endpoints and, as Tony Rankin mentioned, how much you collect at baseline to characterize the key population.

And I'm wondering if the panelists could comment on the appropriate collection of data that will allow you to do future integrations and collaborations, because it's one thing to say, I know how I want to combine my data now, but it's another thing saying, you know, if I want this data to continue to use in the future, what sort of things should I be looking at to allow for that future use?

DR. LOYO-BERRIOS: This is Nilsa Loyo-Berrios from DEPI. I think it's a very good point, but it's hard to foresee every single question that we will have in the future. I'm referring then to his presentation. This is one of the reasons why it is so important for us to establish the infrastructure and the collaborations now so when an unexpected question arises, then we're ready, we have the infrastructure in place.

DR. MAURI: I think I'm more familiar with the cardiovascular space, but certainly in cardiovascular, and even with up-and-coming devices, there is a tremendous effort to get that collaboration among the clinical societies, the FDA, and industry in order for there to be sort of a universal path forward: What are the important endpoints? How do we define them? What are the important patient characteristics that need to be defined up front?

Even though you can't anticipate what the rare problems may be, starting out with any sort of shared framework across industry with buy-in from academia and FDA, I think that part can be anticipated early on in a product life cycle.

DR. STEINBUCH: Well, I can't obviously anticipate rare events. What I can do is do a very thorough, robust job of being very proactive in surveillance of products internally and position myself by, you know, integrating the data access that I do have, doing a thorough literature review, not the four-minute MEDLINE search but a thorough literature review, and integrating the sources that I may have available to me, registry data.

You know, a lot of industry is global, so think globally. We shouldn't think just in our own geography, because a lot of what, in fact, I think the FDA has identified originates in other geographies and gets bubbled up to us.

So we need to, I think, think very broadly in that regard, and I think if we are more proactive in our surveillance, I think we'll be well poised to identify potential safety signals early with the right medical input and the right clinical expertise engaged in the process cross-functionally with industry as well as the FDA.

DR. LOYO-BERRIOS: I just wanted to add, for PMA devices now, when we are involved in the premarket review, we actually encourage the manufacturers to start thinking about their plans for the postmarket surveillance, and what you just described will be like the perfect case.

DR. MITCHELL: So I've heard a real, genuine interest for FDA to commit to engaging with regard to the 522 studies before the 30-day time clock starts. And I've also heard my colleagues at the FDA commit to really looking at that and seeing if that's possible. But I'm wondering if the panel can speak to any other ways that FDA and industry and academia can collaborate because it sounds like, you know, these questions we ask are often very clinically relevant and need an answer. The clinicians need some help figuring it out. Industry wants to be able to answer the question in the most limited fashion that it can.

What can we do to make this happen in a quicker way, outside of that one suggestion that we've received so far?

DR. STEINBUCH: Well, one has been discussed, and I think Danica Marinac-Dabic will talk about this. And we have had two, I guess, workshops.

The MDEpiNet framework is one way that you can bring together, you know, regulators, industry, academia, and other stakeholders to try to make that happen. There are probably others, but that's one that is already kind of evolving.

DR. MAURI: I think I might add that, you know, I think FDA has probably heard a lot here from industry about how to facilitate conversations earlier on. But I think the converse is also true. And I'm saying this completely as an observer and an outsider.

But, you know, I think one of the things that facilitated the process for the DAPT study and the questions regarding drug-eluting stent safety was really that the initial observations came from industry to the FDA, rather than coming later on from the clinical community. So intrinsic to that is having the kind of surveillance systems that you refer to internally to be able to identify problems early so that they can be addressed preemptively.

DR. MITCHELL: Jeff, would you like to speak to the idea of intrinsic surveillance systems on the part of industry to look at these things?

MR. SECUNDA: Well, a robust quality system is going to have active surveillance as part of the total product life cycle. You know, so certainly that's something that we should see. How a company will execute that depends on the nature of the product and available sources of data.

But, you know, companies do very often have their own clinical trials and, you know, some of them are more robust than others and may or may not be applicable to a shared view of what's going on with the general product. But as a rule, I mean, that's the way it should be done. The companies should be looking after their products after they've hit the market.

DR. RANKIN: We believe the joint registry, once it achieves steady state, will provide a huge database for the manufacturers to be able to utilize and to look at the issues we've discussed.

DR. MARINAC-DABIC: This is more of a comment, but to build on Mike's comment with regard to MDEpiNet as a potential venue for working together to address some of the specific issues, I think another aspect of MDEpiNet, from my perspective, that's very important has to do with getting together as communities of stakeholders, without really having a particular question, in a practical way to evaluate what current methodologies we have in place to address potential questions. And we can go through different buckets of devices, whether they be implantables or diagnostic devices or aesthetic devices, and then make some standardization of those, agree on a certain set of approaches for certain questions, and then identify where the gaps are where we need a little bit better methodologies to actually pool from different data sources and really continue working together. I think that can achieve many things. It can achieve building trust, you know, as groups of professionals centering around a topic.

And then this is also setting the foundation for when we have these ad hoc questions coming up: you already have the people, the resources, the infrastructure in place, and then it's going to be much easier to get the study off the ground than receiving the order and responding in 30 days. That's one comment.

The other comment that I wanted to make is, you know, Epi has a lot of established venues to work with the manufacturers, in terms of, you know, pre-IDE. Different things can be brought to the FDA in a pre-IDE setting, and a lot of interactions can help, and maybe we need to think about a pre-522 venue to actually be able to refine some of those questions.

And clearly in this, you know, atmosphere, as we're trying to be more inclusive and more transparent, this is a scientific question that affects not just the manufacturer or the FDA, and I think there ought to be a more transparent way of getting everybody's input. So just a thought as I was listening to some of these useful comments from the panelists.

DR. MITCHELL: Our time is almost up. Are there any other questions?

MR. DILLON: Dan Dillon again, from MED Institute.

Actually it's inspired by some of the comments that have just been made. We talked about better interaction before the 522 is issued. What about after the 522 is issued, as you're sorting out what that protocol is, and once the protocol is in place, as you're sorting out how recruitment is coming and what the recruitment strategies are?

I think on the typical submission side, 510(k), PMA, there's a strong emphasis on interactive review. When it's necessary, get on the phone, get back and forth, get this thing done the right way because we learn things quickly when we talk to each other.

And I can see some valuable lessons. If you have 40 protocols in front of you and half of them have a sample size of 100 and a couple of them have a sample size of 1,000 and a few others have a sample size of 500, maybe there's good reasons.

Now, maybe those are inherent in the design of the product, but maybe that's because a few of those people have some really good arguments about why that sample size should be that way, and rather than having to go back and forth and back and forth in a slow process, try to make that a fast process so everybody learns quickly, so interactive review during the 522 process.

DR. RITCHEY: Part of me feels like I should get some of those people in the back of the room to answer this question. We use interactive review quite a bit for the 522s. At the study planning stage, we try to go back and forth as much as possible within the time frame, in order to look at reviews, to ask the questions that we have that can be addressed via interactive review.

In addition, whenever a study report comes in, and we get a study report every six months for the first two years and annually thereafter, whenever we have questions with reporting, we'll also address as many of those interactively as we can. And then we work through that process.

We look to see what's going on with the study, how it's progressing. When a study is not progressing well, we do also collaborate with the sponsor to try to improve the progression of the study so that it can be adequate and can meet the study time frame. And then, with the final report, we also will do interactive review, go back and forth and discuss what's going on as well.

DR. MITCHELL: Well, our time is up. I want to thank everybody for attending. I want to thank the panelists for some excellent presentations and discussion. I don't know about all of you, but I have a laundry list of challenges and opportunities. Most of them overlap, though, which is good news.

Lunch is now, for an hour. And where can they get lunch? Lunch is going to be right outside.

So thank you very much, and we will see you again at 12:45.

(Applause.)

(Whereupon, at 11:45 a.m. a lunch recess was taken.)


A F T E R N O O N S E S S I O N

(12:45 p.m.)

DR. GATSKI: So it's 12:45, and I think we'll go ahead and get started. I want to welcome everybody back from lunch and thank everyone for participating in the afternoon session.

We're ready to start the session titled Role of Networks, Registries, and Observational Studies. Our first presenter is going to be Dr. Fricton, and I'm just going to go ahead and let him get started.

Thank you.

DR. FRICTON: Thank you very much for organizing this very interesting and relevant symposium, and it's a pleasure to speak to you today also.

My role at the University of Minnesota is the Director of the TMJ Implant Registry and Repository, and I'd like to talk a little bit about how we involve patients in the process of our registry.

I have a number of disclosures. I also work with HealthPartners. I have a private practice, a clinical practice in pain management, and also work with a company, Biomedical Metrics.

The TMJ Implant Registry and Repository was funded originally by NIDCR about eight, nine years ago now, and it's really a collection of clinicians, patients, and researchers who are all interested in promoting and encouraging the success of TMJ implants.

We do allow and encourage direct communication with patients for follow-up. Most of our outcome measures are based on pain and function, which are patient-centered outcomes. And so we focus a lot of our attention on that. It's also integrated with the electronic health record because much of our data also comes from claims data, primarily ICD codes and CPT codes.

But we do feel that to enroll and involve clinicians in the process, as well as patients, they need to have their own portal. They need to be able to see their data, enter data, and follow up on how their patients are doing compared to the aggregate. So we developed an information system that allows that.

Now, TMJ implants are typically used in patients with severe pain and dysfunction. Often these patients have had multiple surgeries on the TMJ before considering the implant. There's an early history of failed implants and continued pain after placement in many situations. And so they've got kind of a bad reputation, I guess you could say.

There is limited early clinical and basic research on TMJ implants, and they typically followed more of the orthopedic model in their development. But really they're very different than other joints of the body. But more recent research and follow-up studies show that we have really excellent outcomes associated with TMJ implants. But there hasn't been any effort towards postmarket surveillance, which is something that, as director of a neutral academic-based research registry, I thought was needed.

And so as far as the goals of our registry, it was to maintain a registry of clinical information on patients that have TMJ implants, and we also served as a research repository, a resource of well-characterized and analyzed biological TMJ specimens, but also of data, data on the outcomes of patients over time, both short term and long term.

We made this data available to researchers, and we have collaborated with researchers all over the world in terms of providing both data and tissue specimens for understanding TMJ implants better. And we also felt that it was important to educate patients and clinicians about TMJ disorders and implants in general, and to improve the understanding and role in treatment success.

So because of the problematic history, we really had to do a big PR piece with regard to making sure that people felt that implants, TMJ implants particularly, were safe and effective.

So we collected patients over about an eight-year period of time. We have 2,258 patients that are involved in it, and we also collected some patients for a comparative analysis, TMJ subjects without implants, and we also had a general population that we compared. So we had a variety of studies in which we integrated the data together.

We also collected a fair number of TMJ specimens, which included blood, saliva, TMJ tissue, and explants. And we have a considerable number of specimens, and we wanted to do some genetic analysis, which we are doing some work on right now, looking at both the possible influences of pharmacogenetics and the genetic risk factors that would predict chronicity.

So we also did a number of other studies with this data so far. It's still a relatively young registry, but we've done a lot of systematic reviews on different treatment strategies. We've also done outcome and risk factor studies. We've looked at biomarkers in the tissues. We've done some pharmacogenetics and genomics studies and proteomics. We're involved in the whole new innovation associated with bioengineering, and we have developed a more generic integrated research information system for patient registries, clinician networks, and postmarket surveillance.

So we've produced considerable publications, funded grants, and other research projects that we've been involved in. We've had 31 researchers use the tissues and data across 21 requests, specifically. So we've been pretty busy in trying to utilize this very rich source of data and specimens.

I'll now go through a couple of the studies. One of the long-term outcome studies with TMJ implants was done recently by Mercuri and his group, who followed these TMJ implants for 14 years and found that the success was relatively good: a significant reduction in pain scores, which is a patient-centered outcome; improvement in mandibular function and diet consistency; improvement in range of motion, an objective measure; improved quality of life; and the outcomes were related to the number of previous surgeries. And so this is a predictor of negative outcomes.

We also looked at some outcomes, or some risk factors that we collected at baseline, to determine how these factors played a role in the success or failure of TMJ implants. And so we followed up on 487 subjects, which was 68% of the patients that we had at the time. And it was interesting that the system was mainly connecting them electronically. So we did send out some paper questionnaires for those people who did not respond by electronic communication. The vast majority of the contacted patients were very open and willing to be contacted electronically, and the data is transferred encrypted, with good security.

So with this study we found the baseline prognostic factors. We were looking at whether history of implants, pain characteristics, comorbid conditions, and psychosocial factors predicted negative or positive outcomes. And we did find that these are some of the factors that played a significant role. And I included anxiety just kind of as a control. But we found that being depressed at baseline, having the comorbid conditions of migraine and fibromyalgia, or having had a previous implant were big risk factors for treatment failure.
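
What's described here is, in broad strokes, a regression of treatment failure on baseline prognostic factors. As a rough illustration only, here is a minimal sketch in Python with simulated data; the variable names, prevalences, and effect sizes are assumptions for illustration, not the registry's actual model.

```python
# Minimal sketch of a baseline risk-factor analysis of the kind described
# above: logistic regression of implant failure on baseline factors.
# The data are simulated and the effect sizes are made up for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 487  # size of the follow-up cohort mentioned in the talk

# Simulated baseline factors (1 = present, 0 = absent)
depression = rng.binomial(1, 0.30, n)
migraine_fibro = rng.binomial(1, 0.25, n)
prior_implant = rng.binomial(1, 0.20, n)

# Simulated outcome: failure odds rise with each factor (assumed effects)
lin = -2.0 + 0.9 * depression + 0.8 * migraine_fibro + 1.1 * prior_implant
failure = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

X = sm.add_constant(np.column_stack([depression, migraine_fibro, prior_implant]))
fit = sm.Logit(failure, X).fit(disp=False)

# Odds ratios for depression, migraine/fibromyalgia, and prior implant
print(np.exp(fit.params[1:]))
```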

Now, this is very rich information for the clinicians who are putting these implants in: if patients have these conditions, how do you treat them differently? How do you enhance their outcome?

So we found that when you look at the progression of chronic pain, which is what you want to prevent, although pain onset occurs at one point, what happens frequently is the pain starts out as acute, but it's these risk factors that play a significant role in causing the pain to continue over time. And we identified these risk factors, and we look at them longitudinally to determine, really, how they play a role in this progression.

And when we have this knowledge, it's very relevant for the evaluation of all patients that are prospectively going to get TMJ implants. And so it also points to the importance of this longitudinal understanding, the prospective longitudinal changes that happen with a patient.

There is a variety of other information that consumers really express to us as part of their participation in the TMJ registry. I mean, they really want to know how successful the device or drug is. They want to know how the outcomes compare to other treatments. They also want to know what kind of adverse events could occur and, very importantly, what risk factors would lower the prognosis and what they can really do to improve the success. And what are the early signs? I mean, patients really wanted to know, well, if this is not going to work, what are those factors that they need to look for? They're the first ones, they're the early warning system that something's not working well. We need to educate them on that. And so, of course, if it is failing, if there is a problem, what should they do about it? And obviously they're going to call their clinician, but are there other things that they can be involved in?

So there are a lot of reasons why we believe that it was important to include patients with regard to our registry, patient input, patient outcomes. One is they have the most to gain in reporting the data. We found that they were very compliant completing forms; they actually did a better job in filling out forms than the clinicians did. They are the first to suspect something's wrong. They can activate their healthcare providers relatively quickly. And they want to learn more about the device, new devices or changes in the device and the drugs. And they want to really focus on managing their own health.

So they're a wonderful group of people to engage within these types of studies. And, generally, they can be easily accessed with e-mails, smartphones, or mail. And like I said, we had a very good response rate from patients typically involved in it.

So we believe that there's an opportunity to do a patient-centered postmarket surveillance system that has a variety of different characteristics, one that encourages collaboration between FDA and industry, clinical practitioners, patients, and academics. And we've had some wonderful presentations about that so far. But it's consistent with the 522 studies and their requirements, and it requires a high percentage of consented patient participation, especially with patient-centered outcome assessments.

And it's a system or a process that is not a financial burden. With TMJ implants we have very small companies. There's not a lot of implants done out there, and any type of 522 study is going to be a financial burden on them. So how do we manage this in a very cost-effective, efficient manner?

And, of course, how can we collect scientific information, including not only outcomes and adverse events but also risk factor data? Do not forget about risk factor data.

And it also provides some evidence-based education, not only for the clinicians that are involved, so disseminating results back to clinicians, but also don't forget about the patient.

So we developed an information system that we used as part of the TMJ registry, a web-based system that integrated research information from investigators, staff, patients, providers, and electronic health records to conduct practice-based research. So this was not designed as a postmarket surveillance system as such, more of a practice-based research system. And we've been working with a variety of different device companies, pharmaceutical companies, and care providers to better understand patient outcomes and adverse events and extend care into the lives of the patients. And that's a very critical factor that we found: the clinicians are very engaged. If they can collaborate with their patients, they can see their own outcomes and compare them to aggregate outcomes.

Now, the goals of the system were to support research networks of investigators, healthcare providers, and patients; to increase efficiency and reduce cost; to use a very user-friendly web-based system to support all aspects of a research cycle; very critically -- and this is probably the most important factor with regard to patients -- to meet privacy and security standards and rules of engagement in research; and to allow diverse data entry, data integration, and on-demand reporting and clinical communications. We want to develop reports that clinicians can automatically get any time they want, on demand, if they want to see how their patients are doing, either individually or in aggregate.

So we have four different portals, basically, that are integrated together into one database. You have a public login portal that, in general, lets the general public know what is being done with regard to this particular network, and then a practitioner portal and a subject portal as well as an investigator or sponsor portal. The investigators see the aggregate data, usually de-identified. Practitioners see identified data, and subjects see their own data.

Now, here are the different portal descriptions, what they do. The public one provides secure login, study information, educational information, and study forms.

Health provider intranet portal: online registration, secure communication, online data entry, automated case reports, and results dissemination. Very important. The practitioners who want to participate do definitely want to know what's going on with a study as it's going.

The investigator portal can also register patients, register clinicians and staff members, and handle forms development. Investigators can manage their sites, and they have dashboards for enrollment and outcomes, and data export for analysis.

And finally there's the subject intranet portal, which includes informed consent. We do this online. We got approval for that from our IRB at the University of Minnesota to do online consents. We do have this as a secure communication, so a clinician can send out e-communication to their group of enrolled patients, if they need to, and a variety of other activities.
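
To make the visibility rules concrete -- investigators see de-identified, aggregate-ready data, practitioners see identified data for their own patients, and subjects see only their own records -- here is a minimal sketch; the roles, record fields, and names are illustrative assumptions, not the actual IRiS implementation.

```python
# Minimal sketch of the portal visibility rules described above. Roles,
# record fields, and identifiers are illustrative assumptions only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    subject_id: str
    clinician: str
    name: str        # identifying field
    pain_score: int  # patient-centered outcome

def visible(rec: Record, role: str, viewer: str) -> Optional[dict]:
    """Return the fields a viewer may see, or None for no access."""
    if role == "investigator":
        return {"pain_score": rec.pain_score}  # de-identified view
    if role == "practitioner" and rec.clinician == viewer:
        return {"name": rec.name, "pain_score": rec.pain_score}
    if role == "subject" and rec.subject_id == viewer:
        return {"name": rec.name, "pain_score": rec.pain_score}
    return None

rec = Record("S001", "dr_smith", "Jane Doe", 4)
print(visible(rec, "investigator", "sponsor"))  # {'pain_score': 4}
print(visible(rec, "subject", "S002"))          # None (not their record)
```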

So how does it work in terms of the flow? What we do basically is that we have a network of clinicians that participate, usually as part of an organization. We have the American Society of TMJ Surgeons as the organization. And they identify patients to enroll in the study or a particular registry. So we set up a registry of patients: you have a group of clinicians, and underneath that everyone has 10 to 20 to 100 patients. So it can quickly grow. We do the eligibility checking, consent, and registration in the system itself. Patients complete online forms at baseline.

We also integrate data from the electronic health record or from clinician input if they don't have electronic health records. And then this IRiS follows the patient with secure e-communication. So we do a longitudinal study so we can track and follow the patients to see how they're doing. All of this goes into a central database.

Then there's a network of investigators that can leverage the data, and typically the sponsor or the industry partner is considered to be an investigator. They can look at their own data primarily, through research requests and the provider portals. And so it just kind of goes through this cycle over time.

So I think I'm out of time right now, but I appreciate your attention, and I'm happy to answer any questions now, or should we wait for that with regard to --

DR. GATSKI: We'll wait.

DR. FRICTON: Okay. All right, thank you.

(Applause.)

DR. HOLMES: It's a great pleasure to be here. It's been a wonderful session this morning. We're going to shift gears and talk a little bit about a different approach that relates to societal registries and the interaction with the FDA and CMS and industry as well as the other groups involved.

I don't have any relationships related to it, other than the fact that I'm the president of the ACC, so I'm involved in these registries.

As we think about registries, we've heard a lot about them. We know that science tells us what we can do, guidelines tell us what we should do, and then registries tell us exactly what we're actually doing so we can then potentially do something about it.

This is a wonderful slide; I love this slide. If you do not know what you are doing, then you shouldn't be doing it. That is incredibly important as it relates to the specific subject that we're going to be talking about.

There's global interest in registries, as we've heard. There are international registries. That is a growth area, for sure, a specific growth area in terms of the international registries. There's some benefits that we've talked about, and then there is the whole issue of quality improvement deliverables, which are incredibly important as we think about development of specific registries going forward.

One of the things that registries can do, they can capture high-quality clinical data efficiently. They can use it to track patients' longitudinal care. We just heard about that with TMJ. We can track drugs and devices. We could link this with biological and imaging data. We could complement and support randomized clinical trials. Both have an incredibly important role to play in the field of new devices as well as in new drug therapy.

There are some crucial issues that I will talk about in the example of TAVR. The first is that all the relevant data has to be captured. The second is the scope has to be as broad as possible. And so the TAVR registry has focused on enrolling all patients who get the device. The concern has been, if you have a study that is after a technology comes out, that deals with 200 patients and yet the denominator is 10,000 patients, all you can tell is about the 200 patients, nothing more. And so this registry is designed to include every single patient in the United States that's going to get this technology.

You have to harmonize definitions. You have to be sure that there is accurate, complete data entry. The issue of garbage in/garbage out is very real, and that's terribly important as we think about these registries. Data has to be audited. We need to avoid shoestrings. Sometimes registries have been brought along that are sort of marginal from the funding standpoint, from the personnel standpoint, to begin work. That doesn't work. It's a nice idea. It just doesn't work very well. There have to be data analytics. It has to be a responsive organization. And there has to be, then, oversight from, in this particular case, the professional societies.

What could they offer? What could societal registries offer? Well, they could offer sophisticated analytics without conflicts of interest. That has always been a concern. If you have an industry registry, there's always a little bit of concern. It may or may not be true, but there's a little bit of concern about conflict of interest.

We talked about capturing all the procedures. There is a hope with this registry to capture medically treated patients. Not just patients who have surgery or patients who have a new valve placed percutaneously, but all patients treated medically. That would be a really big deal, to have a disease-based registry. It'll allow you to track device iterations and allow you to then evaluate changes in patient selection criteria, sort of a selection criteria creep. And finally to serve FDA- and CMS-mandated studies. Danica told me to tell you for sure that it's not a 522.

So ACC has been heavily involved with this, beginning in 1998. These are the different registries. We now have atrial fibrillation. We have peripheral arterial disease. We have structural heart disease. And we now have in many of these registries millions of records. Millions of records.

As we think about two different societies working together, STS, the surgical society, and ACC, there have been good precedents. We have the INTERMACS registry with NIH. There is the registry dealing with expanded indication for drug-eluting stents. We have the ASCERT trial that will be published and presented in a couple weeks' time. And we have other precedents for different societies working together.

This is the technology that we are focusing on, and we have worked incredibly closely with Danica about this and she has been incredibly good, as has the staff here. This is aortic stenosis on the left and then a percutaneous aortic valve replacement on the right. That's a new technology. It's never been seen in this country before. So it had the issues of how do you bring along new technology that we don't have any experience with? Nothing. We don't have anything comparable to it, and how do you study that? It's indeed transformational. It was brought along in the highest-risk patients, patients who had no other treatment options.

The second piece of information, as you can see on this slide, is that it has been used in 50,000 patients worldwide. It's of extraordinary interest that in the United States this technology was approved and we were the 44th country in the world to be approved, right after Albania.

(Laughter.)

DR. HOLMES: It's true. Right before Taiwan.

So even though we have 50,000 patients worldwide, we only had one randomized clinical trial, and that formed the basis here. So we're now just getting started, and the technology is complex and it's changing rapidly, and the patients are sick, and frailty is a substantial issue as we think about registries.

The two societies have gotten together and identified the fact that optimal application requires healthcare teams from multiple different specialties. We have to have echo people, we have to have surgeons, we have to have interventional cardiologists, we have to have hospital intensivists, we have to have geriatricians.

The risk/benefit ratio is going to vary depending upon the specific iteration of device, the specific approach that that iteration of the device is used in, and the specific patient in whom that device is used. Our goal from the societies has been rational dispersion, to optimize patient safety and outcome with this new technology, with which we didn't have any experience. Zero.

This has been the vision for that registry. It's a national registry. It links STS and ACC-NCDR databases. The STS database has more than four million records. It's used in 95% of all the heart surgical centers in the United States. The NCDR registry has more than seven million records in it. So these are very, very robust registries that we can turn to.

We developed within a very short period of time, with expertise from multiple different groups, a TVT registry over the course of three months, and it's now been initiated and the first patients have been enrolled. It is linked with administrative databases for long-term outcome. It's linked with an outpatient database, which has never really been linked before, so that we can look at, hopefully in the future, management of all patients with aortic stenosis.

This is what it looks like. As you develop a registry, you have to have some sort of forum upon which to build that registry. There's a steering committee, there's a science and module creation committee, there's a research and publications committee, and then there's an advisory committee that includes industry, a terribly important player, a terribly important player. So we need to bring them along in this initiative.

These are the partners that our two societies work with: industry; the regulatory people, FDA, CMS; patients are involved with this because we talk with patients about some of their issues; and finally the physicians and healthcare teams.

This is what the registry looks like. It has included people from STS and includes people from ACC. Danica is on it. Jyme Schafer is on it. The advisory group includes, as you can see, industry. There's a data module construction warehouse with NCDR, and then there's a data analytics with DCRI. It's a full-service bank in terms of the development of a registry.

What are the elements? A terribly important thing is to include both clinical and administrative data. You have to be clear when you develop this, as you begin to think about inclusiveness versus practicality. If you have data fields that are several books long, it's not terribly practical. If you have data fields that do not include the essential components, it may be worthless. So you have to do that balancing act.

It's going to allow us to track drugs and devices with Unique Device Identifiers. The data analytics questions and issues that you have to address are: who has access to it? At the present time there's just one single company that makes this approved device. If you then publish the results of this registry, people say, well, it's Edwards data. Now, in the future we would have Medtronic and Abbott and other players in that, and so then that will not be an issue, but it's an important thing to consider.

The funding is going to be by site and patient and government and industry. In contrast to the previous speaker, about sort of low-cost things, this is not a low-cost thing. It's a high-cost procedure. It's a high-cost group of patients.

And finally there is the issue that industry raises about double jeopardy. If, for example, the industry has to pay for a PAS study and they also have to pay for the registry, some people would call that double jeopardy. At least industry calls that double jeopardy, and I think it's real.

We would hope that this registry is going to form a platform for Phase III and Phase IV research studies. You're not going to hear about premarket because that's already been done. We're going to talk about postmarket. We'll be able to use this for different post-approval studies and different post-approval registries. And it will be part of that process mandated by FDA and CMS, by virtue of all patients being included and all procedures being included. It will allow us then to generate approaches for these different registry developments.

A final piece of information is to say there is a post-approval study that has been approved. We're working on that. It describes five-year durability. It looks at quality of life outcomes. It looks at quality of life measurements. And so with some of these registries we're going to have to bring to bear the science on quality of life. It's not just enough to do a body count. Not just enough to do a myocardial infarction count. How does it impact on the patient? What role does this have to play in improving the outcome of a patient?

This specific trial and this specific technology have major implications in that regard because the average age of the patients in this trial, in this group, may be 85. You don't have 15-year data. You don't have 30-year data, at least presently. And so that's going to be an issue going forward.

A final piece of information is to say that goals are to include the numerator and the denominator. The primary goal will be to assess the efficacy and safety when applied in clinical practice for all of the patients who receive this device, not just a subset. We've been working on the technology. We've been working on the registry forms. As I mentioned, this occurred in about a three-month period of time just last fall. It's now launched. It's here. It's funded. It's working and gathering data as we speak.

These have been our goals: high-quality patient care; efficient and appropriate access to new technology; ensuring appropriate patient selection for and the safe application of this technology; and empowering collaboration and cooperation among all the groups. All the groups: the regulatory people, patients, physicians, and there are always paramedical people. With these goals in mind, we want to make sure that we use this technology at the right time, in the right patient, by the right people.

Thank you.

(Applause.)

DR. GATSKI: I'm hoping that we have Keith Tucker on the phone.

DR. TUCKER: Hello.

DR. GATSKI: Hello.

DR. TUCKER: Hello?

(Laughter.)

DR. TUCKER: Can you hear me?

DR. GATSKI: We can hear you. Whenever you're ready to start, we have your slides up and I'll be advancing your slides for you.

DR. TUCKER: Thank you very much indeed. Is the voice okay or is that too low or too soft?

DR. GATSKI: It's perfect.

DR. TUCKER: Okay, thank you very much indeed.

Well, good afternoon and thank you for allowing me to take part in your conference. It's a pleasure and a privilege, except I'd probably prefer to be with you. But I'm actually completing operating, and actually that was one of the reasons for staying over here.

So you've got the first slide, International Perspective of the National Joint Registry of England and Wales.

The next slide is my declaration of interests. This became a little bit more important last week when we've been having problems with metal-on-metal hip replacements, and one of the newspapers was chasing me for this last week, but anyhow, that's another story.

(Laughter.)

DR. TUCKER: Now, Danica and Art -- the next slide -- gave me a brief for this discussion, which is below, which is on your slide now. They wanted me to give an overview of our registry and discuss the linkability with other data sources, discuss the value of the registry for postmarket surveillance, and how we link with other regulatory bodies like the MHRA, and to point out some of our strengths and limitations and discuss the potential for international collaboration.

So the next slide.

The National Joint Registry England and Wales is now at 1.3 million uploads, which is a pretty big number, and our compliance and consent rates are now very high. So that means we have linkability. That means we can link primary and revision procedures.

Let's go back, then. Why a registry? Well, in this country, most surgeons, or many, many surgeons, had wanted a registry for a very long time. It goes back to the days of Charnley, McKee and all the greats of hip surgery in the UK, and also the knee surgeons. But we would have gotten nowhere without governmental department help.

Until, you know, sort of the mid to late '90s, we had a thing called the Capital hip. It was based on the Charnley. Charnley was sort of the big hip replacement man in this country. But these ones, made by 3M, were just slightly different and they were cheaper, and it put a lot of pressure on a lot of surgeons to use them, and they were bad. And reports started coming through that they were failing in a big way. And so we wanted to contact the patients who had gotten these things to bring them back for review, but we couldn't. We had no idea who they were or where they lived.

So there was a lot of embarrassment and all of that, and that really kicked off the National Joint Registry. And I suppose this business we've been having with metal-on-metal recently has produced a second impetus to actually improve the registry.

So we can encapsulate the purpose of the registry in several ways. As I mentioned, we've got recall. Desperately important. And it's also a huge vehicle for communication between the surgeons, the implant manufacturers, and everybody else. And with that, we have developed an outlier implant performance group, so we can actually look at implants which are not doing well, and we can look at surgeons who are not doing well. And over the last few years we have produced quite a few in-depth studies, which I think will contribute very significantly to orthopedic knowledge, particularly knee surgery, in the U.K. And it's not just hips and knees now. We're actually into ankles, elbows, and shoulders.

So some examples of recall. In 2007 we had a problem with inappropriate mixing of heads and cups due to poor labeling, and we just needed to pull everything back quickly. We did that. In 2007, again, there was a mismatch between some alloys on tibial and femoral knee replacements, and that was dealt with quite promptly. And then I suppose the big recall was when the ASR hip replacement system was -- when the letter went out, it was withdrawn from the market. It took only about two hours for the National Joint Registry to inform all the hospitals, the units where these things were put in, so they could pull all of the identifiers of the patients and actually get hold of people.

The next slide, please. Your slide is called Communication at the top of your slide.

Turning again to communication, and our biggest communication is within our annual report, which you can see underlined. It's a pretty big document, and we start working on it right about now and it comes out in September with the British Orthopaedic Association meeting. And it's got a lot of Level I and Level II data and we're getting into Level III as well.

Our other forms of communication are really the feedbacks. We have performance feedbacks. The first one on that list is trust feedback. That is to say -- this is new this year -- we send back to each trust, which is the group that controls the local hospitals, details about their own success rates, the numbers they've been doing, and their mortality rates. It's a pretty brief Level I document. What I should note is we had some revision work introduced at Level II as well.

Then there's clinician feedback. Surgeons are able to go online, and they can see how they're faring in terms of their revision rates and their general performance. And in this country, and I'm sure it's the same with you, we all have to have appraisals, annual appraisals, and our surgeons use that feedback from NJR on clinician feedback. It's part of their appraisal.

And trainees have their own feedback. So when trainees get to the point of becoming specialists, they can actually show what they've been up to and how good they are.

And then we have supplier feedback, which I'll come to in a moment.

This is just sort of one of the things you see on clinician feedback. I'm sorry, that's a funnel plot there, which actually shows where that particular surgeon is in terms of knee replacement. And it gives a lot of other data. There are quite a few filters, and you can see how you're getting on.
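
For context, a funnel plot of this kind usually plots each surgeon's revision rate against caseload, with control limits around the group rate that narrow as volume grows. A minimal sketch, assuming simple binomial limits and made-up numbers:

```python
# Minimal sketch of funnel-plot control limits of the kind used in clinician
# feedback (binomial approximation). The group rate, caseloads, and the
# example surgeon are made up for illustration.
import numpy as np

group_rate = 0.04             # assumed overall revision rate
volumes = np.arange(20, 501)  # caseloads over which to draw the limits
z95, z998 = 1.96, 3.09        # ~95% and ~99.8% two-sided limits

se = np.sqrt(group_rate * (1 - group_rate) / volumes)
upper95 = group_rate + z95 * se
upper998 = group_rate + z998 * se

# Example: a surgeon with 12 revisions in 150 cases
n, events = 150, 12
rate = events / n
i = np.searchsorted(volumes, n)
print(f"rate={rate:.3f}, 95% limit={upper95[i]:.3f}, 99.8% limit={upper998[i]:.3f}")
# A rate above the 95% limit but below the 99.8% limit is usually treated
# as an alert rather than a clear outlier.
```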

The next slide says Supplier Feedback. And it's our aim that we don't have any implant outliers, and we have to achieve this with supplier feedback.

The next slide, please.

So this means that the companies can get real-time information about the performance of their device, and they have a unique access code which they can plug into our mainframe. They can only see their own data, unless it's a mix and match. But if they have several components mixed with a component from another company, they would pick that up. But, basically, they get their own data, and they have to sign up to a code of conduct, and breaches will lead to exclusion. We've got to be pretty tough on that, and it's extremely popular, and I think all the companies are now using it enormously.

And this is what they see when they come on the National Joint Registry, and if you could do that, just go to Supplier Feedback, obviously you have the code. But you're going to see stuff like this. And when the companies get into it, they get this one.

This is the next slide, which shows a report extracted to an Excel spreadsheet. This gives details of the patients that have had their implant. They can see how many patients with P1 or P2 -- some say four. There's a lot of data they can get, which is Level I data. And then they can drill down into the performance of each of their own products. This would be DePuy.

The implant performance committee -- that's the next slide -- is the committee which I chair, which actually sort of oversees most of the implants on the registry. It's one of the latest committees that was formed, because it needed the maturity of the registry to be successful. And we use PTIR as a measure for judging performance, and we look through the data about twice a year.

This slide says PTIR-Hips. A potential outlier is one where there's an implant which has got twice the group PTIR. So we have the PTIR for cemented stems. You'll see at the top there, that's the average PTIR for cemented stems, and if there was an implant which came out at .7, that would be an outlier. And you can see that in each of these groups. We do the same for knees. And you can see at the bottom there, stem resurfacing has got a hugely higher PTIR.
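
To make the outlier rule concrete: an implant is flagged when its PTIR is at least twice the PTIR of its implant group. A minimal sketch, assuming PTIR is computed as revisions per 100 observed component-years; the implant names and counts are made up for illustration:

```python
# Minimal sketch of the PTIR outlier rule described above. PTIR is taken
# here as revisions per 100 observed component-years (an assumption), and
# the implants and counts are made up for illustration.

def ptir(revisions: int, component_years: float) -> float:
    """Revisions per 100 observed component-years."""
    return 100.0 * revisions / component_years

# Hypothetical cemented stems: (name, revisions, component-years observed)
stems = [
    ("Stem A", 35, 10000.0),
    ("Stem B", 30, 10000.0),
    ("Stem C", 7, 1000.0),
]

group = ptir(sum(r for _, r, _ in stems), sum(y for _, _, y in stems))
print(f"group PTIR = {group:.2f} per 100 component-years")

for name, revs, years in stems:
    p = ptir(revs, years)
    flag = "POTENTIAL OUTLIER" if p >= 2 * group else "ok"
    print(f"{name}: PTIR = {p:.2f} -> {flag}")
```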

The next slide.

You asked me to talk about linkage with other databases. Well, in the U.K. we have HES, the Hospital Episode Statistics, and that's a database into which every patient who comes into an NHS hospital is entered. So we can actually cross-reference. We can look for infection, length of stay, and return to theater. But the accuracy within it is not as high as it is with NJR.

We link with ODEP, the Orthopaedic Data Evaluation Panel, and with the Office of National Statistics, so we can see how these patients are doing.

We also interface with blood transfusion. That ended up with bringing down the trigger for a transfusion -- we brought it down to about 8.5 through work with the NJR. And we're looking at cancer registries as well.

Now, postmarket surveillance -- it's the next slide -- is terribly important, and it's something we all worry about. And I think that I probably covered most of that. You've seen how the implant performance committee goes about its business, and we've got the annual report, and we've got supplier feedback. But it's still not mandatory, and we're actually at the present time trying to improve it. We're trying to introduce a step-by-step introduction to market. We're trying to introduce risk assessment of implants going into the market. You know, I think we all agree it's essential, but we're certainly not there yet.

When it comes to the next slide, linkage with the regulatory bodies, well, we link with the notified bodies. They're the people who give the CE marks. We link with ODEP, as I mentioned. We link with MHRA, the Medicines and Healthcare products Regulatory Agency, which, I suspect, is the nearest thing we have to your organization. And a member of the MHRA sits on most of our committees and serves on the implant committees. So we work pretty closely with them, but we are completely independent.

The next slide says Reports to MHRA. That's one of my jobs. We've had 16 Level I reports to date, that's 16 implants which are outlying, and 15 Level II reports. And recently we've been having some issues with mix and match, and that's just under review, but we're going to have the Bayesian report shortly.

So strengths, that's the next slide. We think that we've got some strengths. We've got large numbers. We feel that we can produce high-quality reports. We think we can pick up outliers. And we're very popular with clinicians, researchers, and even with our government. A few years ago colleagues would say to me, Keith, you know, why have you joined this thing? It'll be the eye in the sky if you actually find that. Now, everybody says to me, Keith, when are we going to do this? When are we going to do that? When are we going to look at this? And we're into Level II and some Level III activity. We do see some good stuff and everybody wants more. And we feel this registry does one of the most important things, it supports the good.

Weaknesses. Well, it's the next slide. Compliance has to be high; sadly, at the moment, it's still not mandatory in the U.K. And it uses death and revision endpoints, which are a little bit blunt. And it is important to integrate some of the PROMs data. And, as I said, it's mainly reactive.

The next slide says International Collaboration. Well, that's why we talk to you. But seriously, I think countries can communicate outlier activity and pool numbers so we can trigger early alerts. I was only just on the phone yesterday to New Zealand and Australia about implant awareness, and we can get a lot of information from other places and therefore get alerts issued earlier through their programs.

And I think the other thing is small numbers can become bigger numbers with collaboration. When I was looking at the results of hip replacement -- a few countries doing it -- we can get better figures quicker.

But I think the most important thing -- and I've had the pleasure of enjoying the company of ICOR the last few months. And we can all learn from others' mistakes and that means we don't have to -- we don't all need to repeat the mistakes.

I'd like to record my thanks to a couple of my colleagues who helped me put this together, and I would like to thank you for allowing me to take part in your conference today.

(Applause.)

DR. GATSKI: So at this time, if there are any questions about this specific presentation -- Dr. Tucker will not be able to stay on the phone until the panel discussion later.

Okay, there doesn't appear to be any questions. Thank you very much.

DR. TUCKER: Thank you. Best wishes to you all.

DR. GATSKI: Bye.

DR. TUCKER: Bye-bye.

DR. RITCHEY: Dr. Tucker?

DR. TUCKER: Hello.

DR. RITCHEY: Hi. Can you either mute your line or hang up the phone, please?

(Laughter.)

DR. TUCKER: Sorry, yeah.

DR. RITCHEY: Thank you.

DR. TUCKER: Are we okay?

(Laughter.)

DR. RITCHEY: Can you put your phone on mute, please?

DR. TUCKER: Okay, sorry. Thank you.

DR. RITCHEY: Thanks.

AUTOMATED VOICE: You have been disconnected from the meeting.

(Laughter.)

DR. LYSTIG: Well, I hope I'm not disconnected.

Thank you very much to the organizers for inviting me to speak today and to the audience here for attending. Today I'm going to talk about Medtronic's post-approval network and methodology. More particularly, I'm going to speak about methods as they relate to observational studies.

One of the themes that came up earlier today was this concept that the studies we consider in the post-approval space should very much be driven by the relevant scientific question. And I think if you have consideration of the appropriate question, that will very much guide the design for the individual studies as well as the infrastructure needed to collect the data to support those studies. So I'll be talking here more broadly than just the 522, about post-approval studies in general.

So there are a number of potential goals one might have for post-approval studies, and the premise in these scenarios is that reasonable assurance of safety and effectiveness has already been established.

Now, after that's already been given, some potential goals could include evaluating device performance and potential device-related problems over an extended period of time in a broader population. And a broader population should apply both in terms of patients and physicians. You can look at evaluating a learning curve for implanting physicians; evaluating the device in particular subgroups, in particular subgroups that did not have large numbers in earlier data; monitoring adverse events, particularly rare adverse events; or addressing issues and concerns raised by panel members or by other bodies that have given important input.

So I'm going to say that active surveillance is a means of addressing a lot of the issues that arise. And if you compare, in general, how surveillance meets and addresses certain characteristics as opposed to a traditional post-approval study, it has advantages in terms of providing a real-world view of patients and providers and having generalizable results, and hopefully in obtaining results sooner by virtue of not having restrictive entry criteria slowing down enrollment.

There's a data collection tradeoff in terms of how you would get at data from a surveillance study as opposed to more traditional post-approval studies, in that, for surveillance, you're best served by taking advantage of standard of care data collection.

The statistical inference, again, should address the relevant scientific question, and I think in many cases the default question should be one related to estimation and characterization and not necessarily one that is hypothesis driven for a particular comparator. Again, if you've already established your reasonable assurance of safety and efficacy, usually at this point, then, the next question is to get more information about device A as opposed to device A versus device B.

The future data use is something I mentioned in an earlier question to a different panel. You would like to have a sufficiently broad population that you, in the future, can make use of your data. There is the slide that Danica showed, that in the future hopefully we'll not need so much de novo data collection. We can make use of data that's already been collected. So you should be thinking about that in terms of having a sufficiently broad population that allows you to do subsetting or to match against other data sources to answer questions that you don't know at the moment.

So Medtronic has a vision related to the post-approval network, and here it is: to capture reliable real-world product performance data and clinical outcomes by establishing a world-class post-approval network to enhance safety, drive quality, and promote transparency for the benefit of patients worldwide. And two things to really emphasize here are the fact that real world is going to be key, and the point of why we're doing this is to benefit the patients.

So if you think about some of the goals and requirements for post-approval studies, if you want a broad patient population, a requirement for that should be to have minimal enrollment criteria. You don't want to have screening ahead of time to restrict the population you're working with.

The broad physician population would require a large network with multiple provider types so you can get at your different types of physician.

The real-world usage will also mean, for example, that you'll have exposure to off-label use. So you need a mechanism to say that we're going to capture everything that's going on out there, not just within a restricted area.

The long-term performance: in order to have long term, it needs to be longitudinal, and you need to have high ascertainment over time. I think ascertainment is probably one of the biggest challenges; if we don't know what happens over time to a broad set, our inference can be really compromised.

The subgroup performance, if you want that, we need appropriate baseline variables to describe our subgroups.

And for the adverse event monitoring, you'll need a signal detection process for adverse events. Again, you can address these by a carefully constructed active surveillance approach. The components include a large network, observational surveillance data collection, and appropriate statistical methods.

So the methodology involved here, again, you want to drive towards this real-world sample of patients and providers. You would like to target sequential enrollment of patients at sites, with minimal inclusion/exclusion criteria, but how minimal? Essentially you want patients who consent and are available for follow-up. Anything beyond that is going to hurt you.

You want a protocol that does not create barriers to participation. It's been mentioned by some previous speakers that if you have forms that are multiple pages long, sites aren't going to be interested. It's going to be too onerous. They will not give you the data. So you have to find that appropriate balance between getting enough data to make future use of and not so much that you're turning people away.

You'll need a range of providers. You'll need sample sizes driven by desired precision. So if you don't have a hypothesis test, you still need a rigorous means to determine the sample size. One means of doing that is through precision. You'll need systematic reporting of adverse events, which would be device- and procedure-related events, hospitalizations, and key SAEs per therapy, and sufficient numbers to detect rare events.
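As a minimal sketch of what precision-driven sizing can look like (illustrative numbers only, not taken from the talk): choose the sample size so a two-sided 95% confidence interval for an event rate has a desired half-width.

```python
import math
from scipy import stats

def n_for_precision(p_expected: float, half_width: float, conf: float = 0.95) -> int:
    """Smallest n so the normal-approximation CI for a proportion
    has the requested half-width at the expected event rate."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    return math.ceil(z ** 2 * p_expected * (1 - p_expected) / half_width ** 2)

# e.g., estimate a 5% adverse event rate to within +/- 1.5 percentage points
print(n_for_precision(0.05, 0.015))  # -> 811 patients
```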

So you should have the appropriate study design to answer your scientific questions of interest. Your base protocol should be well structured, including demographic and clinical background to describe your population. You should have an active, prospective data collection where the default is characterization as opposed to comparison -- I'll talk about that a little bit further -- as well as the estimation framework instead of testing. You need that longer duration follow-up.

Within your study design you should be collecting both the safety and efficacy data and capturing data on the implant procedure and immediate postoperative period. You want to be able to describe this population going forward and match it with others.

So in terms of characterization versus comparison. So here, what I'm trying to get at is this concept of how much emphasis should be placed on getting information on multiple devices or multiple therapies versus getting more extensive information broadly about one particular therapy.

So for the characterization, the pros for this should be that it addresses the most relevant scientific question of how the therapy works in a real-world setting. That's the extension you'd like to do now. You had information in a fairly restrictive setting, and now you're going to the real world. And it should be better positioned to answer future additional queries.

The cons to this would be that the characterization approach is not necessarily the best approach when a burning comparison question exists. There can be situations where there is a particular binary type of question that would be best addressed by testing a particular statistical hypothesis. So in that case characterization would not be optimal. If you want to do characterization, and there was talk about this before, you need to anticipate your data needs. You can't anticipate everything, but perhaps you can prepare yourselves for a reasonable variety of things that you want to protect against.

With comparisons, you tend to get a specific answer to a narrow question, a very precise answer to a very precise question. You tend in this scenario also to have replication of your reasonable evidence of safety and efficacy.

Now, the cons with this is that if you have a very precise answer to a very specific question, you don't have material to answer other questions that you might have wanted to look at later. You have a great answer to one question but nothing else. And if you have done a replication that is relatively scientifically redundant, there's not a lot of additional scientific knowledge that you'd gain by virtue of finding something you'd seen previously.

In terms of estimation versus hypothesis testing, the pros for estimation would be that it can be a more relevant scientific approach, given that your safety and efficacy have been established. And the formal statistical justification does exist for sample size requirements via precision.

The cons for this are that most people aren't as familiar with the estimation focus, and sample size calculations based on precision are unfamiliar. Something to consider, though, is that Lehmann had a pair of books, Testing Statistical Hypotheses and Theory of Point Estimation, and it seems that there's a lot of emphasis on the testing of statistical hypotheses while ignoring the whole body of work out there on the theory of point estimation, which is very relevant.

With hypothesis testing, it can be a pro when you have a natural tie-in to actions made as a result of testing, and if there is a decision point you want to reach, there is the familiar mechanism for determining sample size requirements.

The con is that often there's not a natural decision point requiring action after the therapy has been approved. It's approved and it's on the market, and you want to know more information, but it's not necessarily clear that there's a natural binary decision point coming up after that. And, again, while the hypothesis testing is familiar, it's not necessarily optimal. So one of the cons for the estimation has to do with that certain aspects of it are unfamiliar.

So currently I'm working on a paper with Brad Carlin at the University of Minnesota and one of his students on a Bayesian adaptive design for device surveillance, and my point here simply is that there do exist methods currently for coming up with rigorous statistical justification for sample sizes. But there should be additional work, and this is an example of some we're doing and plan to submit soon, to take advantage of additional developments and have even better looks at how to do sample size.

So, for example, this is trying to set up, saying, if you want to have precision at a certain time point, then you need to think about at what point you know you will have such information longer term.
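To give a rough flavor of the Bayesian machinery involved (a minimal Beta-Binomial sketch with made-up numbers; it is not the design from that paper): the posterior for an adverse event rate is updated as surveillance data accrue, and one can monitor the posterior probability that the rate exceeds a performance goal.

```python
from scipy import stats

# Hypothetical prior: Beta(1, 19) centers the event rate near 5%
prior_a, prior_b = 1.0, 19.0
events, patients = 12, 300      # accrued surveillance data (made up)

post = stats.beta(prior_a + events, prior_b + patients - events)

# Posterior probability the true event rate exceeds an 8% performance goal
print(post.sf(0.08))
# 95% credible interval for the event rate
print(post.interval(0.95))
```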

There are a couple of issues that come up with these types of studies. The data collection issues I mentioned before. You want baseline data to characterize the population and align with future studies or other data sources. You should be employing common data standards like the Unique Device Identifier, CDISC, or MedDRA. And this goes back again not only to the concept of making integrations at a fixed point in time but to permitting you to do integrations forward in time as well.

Meaningful endpoint selection. You would like an unambiguous endpoint such as death, or standard definitions for SAEs. But they should not be onerous to assess if you want to collect this broadly. And one thing important here, I think, is that meaningful endpoint selection is a great opportunity for collaboration with the professional societies. They should be providing us information not only for a very strictly studied population premarket, but also on the sorts of things we should be looking at longer term.

The signal detection issues. Really what you're trying to do here is create a safety net. You need an ability to capture your unanticipated findings, and you should prioritize that capture by severity or known risks. You need an ability for periodic monitoring or reporting.

One of the trickier parts will come with the setting of thresholds. Some thresholds are going to be relevant for differentiating temporal trends, where there's been a shift over time either in how the device is being used or perhaps from small manufacturing changes, from mis-specified risk, where a risk thought to be nonexistent or low is actually higher than you thought it was. You use different types of thresholds to detect those problems differently.
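As a minimal sketch of one threshold style (an illustration with hypothetical rates and counts, not a method named in the talk): flag a signal when the observed event count is improbably high under a Poisson model at the previously established rate.

```python
from scipy import stats

def signal_flag(observed: int, person_years: float,
                expected_rate: float, alpha: float = 0.01) -> bool:
    """Flag when the observed count sits in the extreme upper tail
    of a Poisson model at the expected (previously established) rate."""
    expected = expected_rate * person_years
    tail = stats.poisson.sf(observed - 1, expected)  # P(X >= observed)
    return tail < alpha

# Hypothetical: 11 events in 400 person-years against 1 per 100 person-years
print(signal_flag(11, 400.0, 0.01))  # -> True (tail probability ~0.003)
```

A temporal-trend threshold would instead compare recent windows against earlier ones rather than against a fixed expected rate.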

Two open problems are the proper use of covariates, particularly clinical covariates, in the signal detection. I mean, it's one thing if you just say, overall, what happens with this device? But if there are important clinical covariates related to that use, you need to incorporate them in a proper way. The other is determining that signals from distinct adverse event categories are related. There are a variety of events related to heart failure, and if they're categorized in a very specific way in your data collection and not nested within a hierarchical scheme, how do you get the important information out? And you want your safety knowledge to be cumulative, ideally. You shouldn't base your safety knowledge solely on what happens in any one study. You need to think both longitudinally over time and cross-sectionally across multiple studies.

The key statistical issues include a statistically justified sample size. I advocate that in many cases a precision-of-the-estimate approach is a good way to go. You have a requirement for high ascertainment. Missing data can have a really dangerous impact in observational data, and we need to try to ameliorate that impact as much as possible. You should be using analyses that account for irregular spacing of your data, particularly if you're looking at more of an observational surveillance format, where patients don't come in within a very limited time window.

So looking at time-to-event endpoints: Kaplan-Meier, or Cox proportional hazards, or mixed model repeated measurements with an appropriate covariance structure that does not require a high number of parameters but still allows you to address both the dependency and the spacing.
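A minimal sketch of the time-to-event piece (using the lifelines package; the data frame and column names here are made up):

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical surveillance data: months of follow-up, event indicator,
# and two baseline covariates
df = pd.DataFrame({
    "months": [12.0, 30.5, 7.2, 24.0, 36.0, 18.3, 9.8, 27.1],
    "event":  [1, 0, 1, 0, 0, 1, 1, 0],
    "age":    [71, 58, 80, 65, 83, 60, 77, 62],
    "female": [0, 1, 1, 0, 1, 0, 1, 0],
})

kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["event"])
print(kmf.survival_function_)    # estimated survival curve over time

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()              # hazard ratios for age and sex
```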

And something that has been mentioned previously is drawing inference to the broader population. One way to do this is to enroll the entire population, which may not be the most efficient thing to do. Another is to say, it's true, we want to know about everyone with a device, not just everyone studied with a device; but how can we make that inferential transition between the persons we studied and the entire population? So you need to account for differential enrollment.
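One standard way to account for differential enrollment (a rough sketch under assumed inputs; the function name, data frames, and covariate list are hypothetical) is to model how enrolled patients differ from the broader device population on baseline covariates, then reweight the enrolled sample by inverse odds so weighted analyses target the full population:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def enrollment_weights(enrolled: pd.DataFrame, population: pd.DataFrame,
                       covariates: list) -> np.ndarray:
    """Inverse-odds weights that align the enrolled sample with the
    broader device population on the listed baseline covariates."""
    X = pd.concat([enrolled[covariates], population[covariates]])
    label = np.r_[np.ones(len(enrolled)), np.zeros(len(population))]
    model = LogisticRegression(max_iter=1000).fit(X, label)
    p = model.predict_proba(enrolled[covariates])[:, 1]
    return (1.0 - p) / p   # weighted estimates then target the population
```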

So looking forward, I think an interesting concept is, you know, post-approval, how much of it has to be studies? That's not the only way to generate new information. You could prospectively plan on doing different types of analyses. You could have the possibility of contributing to or taking data from other data sources such as large registries. I think there's a lot of room for innovative thinking for how we can make the most use of resources there.

Structured data collection. I think companies are well poised to track device modifications and iterations. I think that companies should be creating libraries, with broad populations for each device; they're responsible for getting that information, and that would allow a broad library where much information is captured by each company but can then be assimilated. And if you're going to do that future assimilation, you need to construct your data such that future sharing of standardized data is possible.

Thank you very much.

(Applause.)

DR. GATSKI: So for the last presentation, Dr. Cara Krulewitch will actually be presenting in place of Daniel Caños.

DR. KRULEWITCH: Good afternoon. I'm Daniel's alter ego as the other Branch Chief in the division. So I believe I'm talking about recommendations from FDA and a little bit about leveraging data sources.

When 522 orders are issued, oftentimes you'll see in the order that study designs are recommended. Sometimes you may see in the order the objectives and target populations. They may be a little more broadly stated to give a little bit of creativity for those who are responding to the orders. And sometimes we will give some of this information also through that interactive review that we were talking about earlier.

We do want to see some type of primary outcome measures, often both safety and effectiveness, with study hypotheses. But we have also talked about precision hypotheses and also a follow-up duration that's given in the order. Secondary outcome measures. Specific groups that we're interested in. Sometimes you'll see some of the inclusion/exclusion criteria. And there may be other important information that is very germane to the particular device that the order is applying to.

And so a little more about the other information. Some of that other important information can be obtained in a variety of ways. We've just seen four very nice presentations that have given ideas about different ways of collecting data related to information that's already out there. There are external data sources. Experts in the area should be consulted, and sometimes collaborations, as we've seen in some of the presentations earlier today, with societies and professionals who are using the device, along with industry, can help define the specific elements that need to be captured.

Some of the possible data sources. I think this may be getting a little repetitive, but I think it's important to talk about the fact that there's a variety of places, and innovation can sometimes be the key. Enhanced surveillance may be appropriate; sponsor-initiated registries; some of the professional society registries that you've seen examples of here; or disease registries; sometimes the complaint handling systems may have some of the data; electronic medical records; administrative claims data sometimes can be of great value, especially for linking; and the healthcare systems and even medical device representatives.

After you receive your order, it's important to evaluate what data you have for the device, what's out there now. And if FDA recommends using external data, weigh the benefits and think about it. If you know of a data source that we haven't thought of, we want to hear about it. We're not closed to anything beyond the things that we've identified in an order. We're trying to be as broad as possible, and we want to be the most efficient and effective in answering the order.

When examining the data across sources, we consider the quality of the data source as well as its applicability with respect to the postmarket surveillance study questions that we feel are critical to answer. The assessment of data source quality is going to be in a future workshop, so stay tuned and we'll be talking about that a little bit more.

And just to go over a little example of a 522, some of the issues and questions that may be asked, they're all here, and I'm going to go through them one by one.

First of all, in surveilling over the next 36 months, in how many patients is this device used? We often ask that question, and possible data sources, including registries from professional societies, disease registries, sponsor-initiated registries, enhanced surveillance, medical device representatives, administrative claims data, and the healthcare system, can likely answer that question.

In relation to the question, among patients undergoing treatment for this disease, what proportion is exposed to the device? Well, again, I think we feel that registries are probably a likely source to answer this, or a registry created to gather this data. Administrative claims data and healthcare systems are also likely candidates to answer these questions.

And what is the periprocedural rate of the primary safety endpoints? And, again, many of these same systems will be able to answer those questions. Of course, sometimes we do know that a new study may be the right way to answer the question as well, but we're into efficiency and innovation in what we do and how the questions are answered.

So number 4, the rate of effectiveness with the use of the device: is it different from the rates seen in patients with a comparator at 36 months post-procedure? Again, some sponsor-initiated registries and enhanced surveillance, and in some cases there may be some other information such as that from professional society registries or disease registries linked to administrative claims data or the healthcare system data and EMR.

And, finally, in those who receive the device, what is their quality of life at 36 months post-procedure? And the possible data sources are disease registries or sponsor-initiated registries as well as some of the other information.

I think one of the key points that I know Daniel wanted to point out, and I completely agree with him, is that registries which engage patients are more likely to get very good data. And you've seen some examples of that here, where there's been discussion about engaging patients and having patients possibly enter and be a part of that, and I think that that's helpful because they're the ones that we're studying and looking at, and when they're engaged, they're more likely to give you good information.

And, finally, FDA is open to working with sponsors very early and frequently when we're finalizing postmarket surveillance plans. And always consider the quality of the data source and its applicability with respect to postmarket surveillance and the questions that are being asked. Weigh the benefits of what FDA has recommended, and we're very open to discussions and hearing what's going on.

And that concludes my presentation. Thank you.

(Applause.)

DR. GATSKI: Thank you again to all of the presenters on the panel.

We'd like to open it up to questions from the audience. But before, I was just going to ask one question of the audience, and this can just be a show of hands. How many of you have experience developing or using registries as a data source? So I'm going to say about maybe 40%, 50%. Okay.

If you have questions, go to the microphone, and just remember to identify who you are and who you represent.

MR. MAISLIN: Hi, I'm Greg Maislin, Biomedical Statistical Consulting.

I guess my question is mostly directed to Dr. Lystig, but for all the panel. First of all, I appreciate very much FDA's focus on the registry here as a potential for reducing burden for 522 and other post-approval studies.

Dr. Lystig started his talk by listing potential goals for post-approval studies, but before he gave the goals, he said, given that a reasonable assurance of safety and effectiveness was previously established. And that's probably from, of course, the approval of the device. He then went on to talk about some work he's doing with Dr. Carlin, which together makes me think that the assurance of safety and efficacy data that came from the pivotal trial seems to me a perfect source for an informative Bayesian prior, especially if it was pre-specified during the IDE design. If you can say that we're going to take this data and form the Bayesian informative prior, then the post-approval study, or potentially even a 522 study if it falls in with that outcome, could greatly reduce the burden. And that would be another strategy besides focusing on registries, especially if the registries aren't available yet.

I guess my question -- that was a comment.

(Laughter.)

MR. MAISLIN: My question is, have you thought about using informative priors, or has anybody thought about using informative priors, with data coming from the pivotal study, in formulating the analysis for the post-approval studies?

Thank you.

DR. LYSTIG: So I'll start by commenting on that. I think actually one of the really interesting open opportunities is the use of the prior data in terms of safety analyses. In particular, if you're thinking of the very rare events for which you might have seen null counts, you're really trying to go forward and start to limit the upper bound. Or, actually, you go forward and nothing's happened, so you have a very wide bound, anything could happen, and to what extent are you allowed to shrink that interval down?

In essence, it's opposite of what you do in an efficacy trial. In an efficacy trial you're thinking that there is a null point that you're trying to shift away from or you're trying to demonstrate you're not at the null. But in this case you're saying, I'm trying to bound how bad my alternatives might be.

So I definitely think that there is a lot of opportunity there, but I think it's always going to be an interesting call each time, a discussion with the Agency, the extent to which you allow the data to serve as an informative prior, and there's definitely going to be a collaborative opportunity there.
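To make the rare-event bounding concrete (a minimal sketch with hypothetical numbers, not drawn from the exchange above): with zero events observed, an exact one-sided upper bound behaves like the rule of three, and an informative Beta prior carried over from earlier data can tighten it.

```python
from scipy import stats

n = 500                           # patients observed, zero events

# Frequentist: exact one-sided 95% upper bound on the event rate
upper_exact = 1 - 0.05 ** (1 / n)
print(upper_exact, 3 / n)         # ~0.0060 either way ("rule of three")

# Bayesian: 95% posterior upper bound with a hypothetical informative
# prior, Beta(0.5, 200), reflecting earlier (e.g., pivotal) experience
post = stats.beta(0.5 + 0, 200 + n)
print(post.ppf(0.95))             # ~0.0027; the prior shrinks the bound
```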

DR. NORMAND: I'm Sharon-Lise Normand. I'm from Harvard University, and I really enjoyed the presentations. I have two questions. The first is directed to both Dr. Holmes and Dr. Lystig, and it relates to statements both of you made.

So, Dr. Lystig, I think you said, in the postmarket surveillance study, there really shouldn't be a comparator. And so I'm going to jump on you because I think there absolutely has to be a comparator. And the reason why I say that is it's a decision and patients are making decisions and if you're going to look at safety or effectiveness, you need to compare to something, and comparing to something that happened in a trial three years ago is not relevant. So that's the first. If you talk about comparative effectiveness, you, by definition, must have a comparison group. So I'm stating that because that was my interpretation of what you said.

And Dr. Holmes, the reason why I think it was relevant to what you described earlier, you had characterized the TAVR registry, as you were saying, patients had no other treatment options. And so I'm thinking, well, you still need a comparison group, and I'm sure they didn't lead them out the door before this device was invented. So what comparison group will you be using in that setting? Again, my premise, which you may disagree with, is you absolutely need a comparison group because, otherwise, you're not measuring effectiveness. So that's the first question.

And maybe I'll just save my second question and sit down because I think my second question's a little quicker, and that is a question that's related to Dr. Holmes' presentation, and the question to the whole panel, and I would like to get your opinion on -- we all have our biases, and I actually hate that word "biases" because, you know, what does that mean?

But my question is as follows. What do you think about professional societies who would be monitoring their own study of devices for their own societies? And, again, we talked about industry being biased, but I submit to you as well, there would be an incentive, perhaps, from a society for use of these things. I realize maybe that's a bit off tangent, but I ask that especially with regard to 522 studies.

So thank you. And I'll sit down.

DR. HOLMES: If I could have a first crack. In terms of the second one, I think that's terribly important. Professional societies do have a conflict of interest because they work in a field that they like to work in, and they also are reimbursed for working in that field. And that's the important point of these, that you have to have an independent data analytics center. And so, at least for the TVT trial, Duke is that independent contractor. It has nothing to do with running it. They, then, are engaged at the table in deciding what analyses might be appropriate for the data, but are totally separate. None of the physicians involved in the DCRI group, or whatever group we might choose, have anything to do with the data other than as a data analytics center. So they do all of that completely.

In terms of the comparative things, it depends upon what you have to compare it to. For the TAVR study, the Class I indication that everybody, all the professional societies and all society in general, agrees is the treatment of choice for aortic stenosis is surgical aortic valve replacement because it's a mechanical problem, and medications for mechanical problems don't help. That's the data that's been available for 25 years now.

The concern when the pivotal randomized trial was then developed in this country was that there are 30% of patients across the world, not just in this country but across the world, who are not deemed to be suitable for that Class I indication. There is no other indication for the treatment of patients with severe symptomatic aortic stenosis.

So then your comparator is something that we have absolutely no data on. It's not a patient's choice. It's the patient has severe aortic stenosis, they come to see a surgeon and the surgeon then looks at them and says, you know, I'd rather not either because they're 93 or oxygen dependent, on steroids, and can't walk because they've had a stroke.

So the comparator study in the pivotal randomized trial was using a design where patients were either randomized to TAVR or to what we call, euphemistically, medical therapy, realizing that there isn't any medical therapy. But what are you going to call it? Are you going to say randomized to nothing, or randomized to something worse than nothing? I mean, the standard of care for severe aortic stenosis is surgical aortic valve replacement. Everything else is nothing.

DR. NORMAND: I guess I do want to push you on that because the patient's there, the patient's either -- there's a choice of the patient. It's not like I must have it.

DR. HOLMES: Sure.

DR. NORMAND: And so there's always a comparison group. But I just want to push that. So maybe it's usual care as the comparison group.

DR. HOLMES: That's okay. They call it usual care. Usual care for severe aortic stenosis is nothing, other than --

DR. NORMAND: That's okay, but that's either I do it or I don't do it.

DR. HOLMES: I mean, you can call it whatever you --

DR. NORMAND: Well, I don't personally care about the label, but there needs --

DR. HOLMES: Right.

DR. NORMAND: I guess I'm trying to push you because I claim that there needs to be a comparison group.

DR. HOLMES: No, there was a comparator.

DR. NORMAND: Yeah.

DR. HOLMES: It happened to be usual care, which doesn't work.

DR. NORMAND: Yeah. Well, that's okay. But I'm just talking about in the postmarket setting.

DR. HOLMES: That's different.

DR. NORMAND: Well, that's what we're talking about right now, right? In the postmarket setting you're going to look at the TAVR registry, and I'm just saying that, you know, what's the comparison group in that registry?

DR. HOLMES: There will still be patients in the postmarket surveillance registry that were never studied in the context of a randomized trial. Well, I don't know, the comparator in that situation would be a group of patients who were very high risk, who were treated or were not treated with surgery, and that's then the comparator group. And that will then be the group of patients with severe aortic stenosis who are not treated by standard therapy, and the rationale in the TVT trial is to move towards those patients that are medically treated.

DR. NORMAND: I see. Okay.

DR. HOLMES: That's the comparator group.

DR. NORMAND: Okay, thank you. That's what I was really asking, so thank you.

DR. LYSTIG: Yeah. So at the risk of causing strife from my table here --

(Laughter.)

DR. LYSTIG: So first off, in terms of biases, we always try to identify and eliminate biases, but I think sometimes just the identification is sufficient. Well, not sufficient, but we need to not ignore the fact that it's going on.

So even if your analysis is being conducted by someone external to you, it is quite possible for any given organization to say that the persons within that unified organization are perceived to have collected data in such a way that might be different from some other organization. So, you know, even with DCRI analyzing the data, it is conceivable that the physicians might over-report how well persons are doing under their care. That would be a bias.

But sure, all kinds of biases exist. It could be that physicians that work in an industry-sponsored trial will report that devices, in general, work better than surgery. These things can exist. And I think probably what's important is to get a sense of the magnitude, to determine what impact such things might have. But, you know, obviously we just don't want to pretend that such things don't exist. In essence, it's a little bit like the Hawthorne effect, right? Just by virtue of being in a trial, you might see a differential effect.

Now, in terms of the comparator, yes, I did state that --

(Laughter.)

DR. LYSTIG: -- in a given post-approval study, that one default might be not to have a comparator. Now, I think it would be -- I don't think anyone here would claim that your entire evidence about that product would be contained in the additional information contained solely within the post-approval study.

So I think, absolutely, there is a proper place for continued integration of that information in such a way that it could allow, for instance, patients to do a risk/benefit assessment in terms of how this therapy compares to their other options.

I do not necessarily think that the best design, going forward, every time is to say that any given manufacturer should enroll a study that has both their own product as well as one, two, three other options at the same time. I posit to you that it might be more efficient for the individual manufacturers to gain more information about their own products to allow someone such as the FDA to combine that information and allow the comparison to be made.

So that's what I was getting to with the concept of a library. I think that the manufacturers have a good opportunity to get extensive information about their own products, which then would lead to the capability of doing a comparison. But I don't think it should be appropriately a default that in any given study sponsored by the manufacturer, that that should be how it's run. So I just want to clarify that.

DR. FRICTON: We're all from Minnesota, so we're all friends.

(Laughter.)

DR. FRICTON: But I think comparators are very important. Within the TMJ registry, there have been several randomized clinical studies comparing a variety of different interventions for TMJ, and interestingly enough, they all seem to come out the same, whether you do arthroscopic surgery, open surgery, or nonsurgical care. So a comparator is critically important.

And within the development of a registry, I think it's important not just to include people who get a joint or a device, but to include those people who choose not to have the device, not to go with that care, because there are sometimes adverse events associated with the care or the surgery and not necessarily the device.

And I really think it's important to include anybody who potentially meets specific inclusion/exclusion criteria and then follow them over time to see how well everybody does in comparison to the device. Now, that still can be done within an industry, within a particular company, but it's still enrolling everybody, and that's where the clinicians are involved, because there are patients who opt out. They do not want to get the surgery. They want to just see what happens over time. So I think it's a very, very important point.

And also with regard to the bias question among clinicians, clearly we found patients are biased also. They're always biased toward wanting to please the clinician.

So there's bias involved in every type of data collection effort, and that's why we always compare it and bring in some type of objective measure, too, like range of motion of the jaw, which we use to some extent.

But I do think that when you have a registry that is relatively neutral, that has a board that's broad based, industry, patients, clinicians, you have the potential of minimizing some of the bias, at least in the systems that you set up for that. So I think they're both very good points.

Thank you.

DR. LYSTIG: Because this comparison thing comes up a lot, I think it's a very important topic. So in the context of making your comparisons, you know, ostensibly you'd like to say, for persons that had multiple options, comparing two different persons that could've had either option, how did those therapies compare? But it can be the case that you are comparing someone that had the opportunity for a therapy against someone that did not have an opportunity for that therapy.

So even when you have the comparator group, especially in the context of observational data, it does not necessarily give you the kind of comparison you would like to make. You would like to say, what would be the impact of making a certain choice? What would be the impact of having a certain assignment? Just think about blood pressure medication. If you compare everyone with blood pressure medications against everyone without, that's not a valid analysis, because what you'd like to say is, how would everyone that needs blood pressure medication --

DR. NORMAND: Yes.

DR. LYSTIG: -- have it and don't? So you need to -- one way to get around this is to say that you want a metric to allow you to determine who could have received a certain therapy, and use things like propensity score matching to get at that. It doesn't necessarily have to be done inside one study. You can go towards that, but there are lots of mechanisms to allow you to make those kinds of inferences.
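A minimal sketch of the propensity-score matching he mentions (entirely synthetic data and a deliberately simplistic setup, for illustration only):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1000
# Synthetic observational data: older patients receive the device more often
age = rng.normal(70, 8, n)
treated = (rng.random(n) < 1 / (1 + np.exp(-(age - 70) / 5))).astype(int)
outcome = rng.normal(0.1 * treated + 0.02 * age, 1.0)
df = pd.DataFrame({"age": age, "treated": treated, "outcome": outcome})

# Model the probability of receiving the device given baseline covariates
ps = LogisticRegression().fit(df[["age"]], df["treated"])
df["ps"] = ps.predict_proba(df[["age"]])[:, 1]

treated_df = df[df["treated"] == 1]
control_df = df[df["treated"] == 0]

# 1:1 nearest-neighbor match on the propensity score
nn = NearestNeighbors(n_neighbors=1).fit(control_df[["ps"]])
_, idx = nn.kneighbors(treated_df[["ps"]])
matched = control_df.iloc[idx.ravel()]

# Outcome difference among comparable (matched) patients
print(treated_df["outcome"].mean() - matched["outcome"].mean())
```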

DR. BLAKE: Kathy Blake speaking as an electrophysiologist, and wanting to go to this issue of where do you get some of your priors for your Bayesian adaptive designs? And, also, how do you essentially broaden and continue a natural history study?

And I think you'll have some opportunities with the TAVR registry because that will be a procedure that is localized to particular centers that muster the resources so that patients can truly be offered both. What would be optimal is for that registry to be concurrently conducted at non-TAVR centers because those are then going to be places where the natural history of aortic stenosis can continue to be observed. And hopefully, then, you'll have a mechanism for capturing the patients who either are not referred to a TAVR center or patients who say, Doc, that's a great idea, but I'm not going to the Mayo Clinic or to a center. I'm old, I'm frail, et cetera.

And so it goes back to saying that we need ways to put these multiple data sources together so that the folks that are never even seen by the investigators, or those entering people into the registry, have a chance to be measured and counted up.

DR. HOLMES: I think that's very important, and that is one of the goals that was pointed out by Dr. Berwick when we talked early on about this whole thing, is to say we need to have a disease-based registry that includes all of the different treatments, and so you can have a comparator. If that's the term that you like to use, that's fine, or whatever term you want to use so that you know what happens when patients opt out or when they opt in to one or the other therapies. That is still a ways off in terms of a specific disease-oriented registry, but that's clearly the goal. It's a great one.

DR. MARINAC-DABIC: One point of clarification: our post-approval study protocol specifies clearly that the sites that are going to be part of the post-approval study must not have participated in the clinical trials. So we're actually clearly trying to gather more information from the sites that had not been part of the investigational trial.

MR. BROWN: Scott Brown, Covidien Peripheral Vascular.

I was thinking about Ted's presentation on active surveillance. So I just wanted to clarify that, in my mind, there's not a bright line between, say, a traditional but single-arm post-approval study, you know, which has registry aspects, and active surveillance; they sort of mushily merge into one another after a while.

If you make the exclusion criteria simpler and less restrictive, if you make the data collection less ambitious and easier to manage, at some point you've probably crossed an invisible barrier into an active surveillance scenario, as long as there is a consent aspect and an active requirement by the patient to enroll.

So I just wanted to check with Ted that I hadn't misunderstood that part.

DR. LYSTIG: Yeah, there's definitely a continuum of ways in which these things can occur, yeah. So I'm not going to argue with that.

MR. BROWN: Because, then, the real question is this: If we do that, which, of course, I love the idea, you get just what I think post-approval studies are often really about, which is, to your point -- I don't want to join the estimation versus comparison fray -- gathering real-world data on a heterogeneous population of subjects.

Here's the very specific question that popped into my head when I was hearing that. We will almost certainly obtain patients who are being treated outside of the actual indications for use of the device, if we take that kind of approach. How does FDA feel about the sponsor inadvertently, perhaps, participating in off-label usage in a way that you normally wouldn't do, right? You'd normally set your inclusion/exclusion in such a way as to be very specific about this, and this just strikes me as a weird risk that we encounter if we go really real world.

DR. KRULEWITCH: And I'll respond a little bit to that. We don't promote off-label use; however, we understand it does occur. It's the practice of medicine, and we don't regulate the practice of medicine. So if indeed it's occurring and it's being gathered in the same data collection system where on-label use is being gathered, that's a bonus in some ways. We would never go out and say we want you to collect this data with an off-label use approach. However, it does help us sometimes, and sometimes we ask for all-comer studies because we're concerned that that may occur, and it does give us information if it is occurring as well.

DR. HOLMES: That's an important issue for the TAVR registry in that there will be patients in whom the device will be used who were never studied in a randomized trial. Whether you call that off-label or off-indication use -- that's another term that has just now started to be used -- the approach that at least we have understood from CMS is that the NCD is going to come out and say those patients can be enrolled in the TAVR registry. They will either have to be part of some sort of new IDE process, when you take a look at a totally different indication in terms of indication creep, or be part of a randomized trial, or you're going to have to follow them so that at the end of a couple of years' time you can see how things are going with that group, so that someone can then make the determination of whether that's reasonable or whether more data is needed, whether a randomized trial is needed. That's a huge issue for this new technology.

MR. BROWN: And I think, if I could, to Dr. Holmes' point, that registry has the advantage of being sponsored by -- again, we have the societal interest, and you have a third party managing the data, so you have that distance. What I was hoping to make sure of, and I think I've gotten my answer, is that an individual sponsor taking the same steps would be afforded the same benefit of the doubt, that, you know, hey, you happen to have off-label usage in here. It's not like you went and asked for it, but you've got it. Great. That's okay with us. And so if that's what I heard, then --

DR. MARINAC-DABIC: You heard it correctly.

MR. BROWN: Okay.

DR. MARINAC-DABIC: But one observation that I still would like to make is, again, going back to some morning presentations that we heard and the vision of how we envision the national postmarket surveillance infrastructure: many of these issues bear on what we're trying to do now to shape how the future required postmarket studies are going to look. And if the premise is that we're going to have these readily available, a linked structure that will allow us actually to draw results from multiple data sources and use them as priors and actually allow for comparison, then the sky is the limit.

But if we stick to -- you know, if we're going to be entrenched in FDA's regulatory role, wearing our hat, then you're going to have industry saying, well, whatever FDA is doing is very burdensome for us, and if academia stays within their own boundaries, then we're not going to get there.

And I think this is the value. All of these approaches, I think, are very feasible, and FDA is very open to using them. But I think there is also some investment that needs to happen before we get there, meaning building this structure so that it exists not just in theory but for real in practice.

MR. BROWN: Thank you all. I'll sit down.

(Laughter.)

DR. SEDRAKYAN: Does it work?

DR. GATSKI: Yeah.

DR. SEDRAKYAN: It does. Just a futuristic view. Let's say five years from now, Dr. Holmes, you've got a registry. How would you sustain it as new products come to the market that will have 200 different percutaneous modes?

DR. GATSKI: Identify yourself. We know who you are, but not everybody does. Say your name.

DR. SEDRAKYAN: Oh, Art Sedrakyan from Cornell.

I'm just thinking about what's the right infrastructure of the future? So you'll get products coming on and you continue monitoring them. Are you ready, in a voluntary -- is it going to be a voluntary system, the way you will set it up?

And it might be good also for you, Dr. Fricton, to address, too.

This is the important issue. Sharon-Lise brought up the issue of bias, and what's the right infrastructure to invest in so that we have mandatory participation and validation of the outcomes and potentially give direct access to FDA? Not expensive kinds of studies that FDA has to fund, but an infrastructure that will give direct access to it.

DR. HOLMES: That's a great question that we have struggled with because, as you mentioned, there are going to be different iterations of this technology, and it's going to be an ongoing thing.

The societies are very interested in continuing to do comparative effectiveness research over the course of time, as the technology changes, to decide what is going to be better or what is going to be not as good compared to other strategies. And so that will be totally important, the way it is currently set up from the funding standpoint going forward. Whether that will change over the course of time, it's too early to tell.

At the present time, there are institutions that want to participate in this technology and will pay an up-front fee. And that fee then will support a whole family of registries, going forward, that will involve new technology, mitral valve disease, other things, and that will allow there to be a constantly changing mix of technology that is going to be assessed. That's one part of the funding.

The second part of the funding is industry funds. It is hoped that this platform will allow postmarketing, even post-approval, studies to be done by what we hope will be an independent group without some of the conflict-of-interest issues, realizing the concerns about conflict of interest and bias and things like that; that this thing will take the place of that by virtue of the fact that CMS and FDA are part and parcel and are embedded in this, so that this then becomes that process. Funds that would have been spent by virtue of federal statutes for the companies to do that -- that function will instead be performed by the societies together, working with data analytic centers.

Other countries have done it somewhat differently. In the UK, for their TAVR registry, the centers pay money per patient, rather than an up-front fee and then a continued access fee. It's a great point.

DR. SEDRAKYAN: It's a tissue valve. There's another issue here, the old debate on tissue versus mechanical valves. At some point it's going to come back because these are going to fail faster than mechanical valves are going to be failing, and there's going to be a lot of products on the market.

So how do you anticipate that issue to be addressed in this context? Any safety checks we have in place or thought about? But also this -- yeah, maybe you can answer.

DR. HOLMES: No, no, no.

DR. SEDRAKYAN: If I'm asking too many questions, I apologize.

DR. HOLMES: No, no, that's a great, important point related to this. The approvable indications for this technology at the present time are patients who are high risk or deemed to be high risk for surgical aortic valve replacement, or who are prohibitively high-risk surgical cases. So these are patients that do not have the Class I indications.

Now, already there has been a creep in Europe to perform the procedure in patients that are less risky. That is true. Now, it's very different to look at a device in somebody in whom you've placed it when they're 85 than if you place it when they're 50, and that's a huge area of concern for the regulatory agencies as well as the professional societies. How do you make sure that the data is good and can be relied upon and then, in 30 years' time, it'll still be good? And so that will be, then, the subject of randomized trials going forward.

There's another part of that. You mentioned the whole issue about mechanical versus bioprosthetic valves. At the present time that's already a changed practice in the United States. There are now some patients who, in the past, would've received mechanical heart valves, because we've had those for 30 years and they're really good when they work. They don't always work, but they're really good in general.

There are some physicians and some institutions which are now switching their aortic valve replacement to bioprosthetic valves, with the intent that in 10 years' time, when those bioprosthetic valves fail, then you'll do a valve-in-valve with a percutaneous valve. That's another issue that societies haven't dealt with very well, and physicians haven't dealt with very well either. It's a big deal.

DR. FRICTON: Maybe I'd like to comment on that. I think it's a very good question about sustainability of registries and whether organizations sponsor them or industry sponsors them, and where you get the money to do that.

In our industry it's the very small companies that are involved in this. They do not have very much money to sustain a registry over time. NIH-NIDCR supported the initial development of it. But we've had a struggle to sustain it over time because industry just has little money.

So what we've had to do is develop creative strategies to engage the clinicians, not just to track their outcomes. I mean, that's kind of why FDA is interested, in adverse events and outcomes. But how can you improve care? Key factors in the care are risk factors, comorbid conditions, comorbid treatments, age, demographics, lifestyle factors, and how do those factors, when you assess them at baseline, play a role in the outcomes? And when you focus on risk factors and you give that information to clinicians, it's a whole new ballgame with regard to how they provide that care. It's not just about the device. It's about the whole person.

And so the clinicians seeing this as an advantage are very willing to ante up some funds on a regular basis to participate in registries, as long as they can enhance their care and not just document outcome.

DR. HOLMES: The logical extension of that is the move towards payment for quality of care. So there isn't any question about the fact that the future is going to involve payment for quality. We hope that that'll be the case. And one of the metrics for quality of care, at the present time and in the future, is going to be involvement in projects like this, where you track how patients are doing and then change it to make it better. And then the payment models will be improved. But that will be part, then, of the cost of doing care. Because you're providing quality care, you will be reimbursed for quality care.

MR. RISING: Yeah, I think this is really building upon this discussion. I'm curious to get Dr. Lystig's take on it as well: kind of the vision for an active surveillance network that really looks at these devices. It seems like there's going to be a range of capacity in industry for companies to be able to do that, with some of the larger companies having both the dollars and the in-house skills to be able to do it, and some of the smaller companies having more difficulty doing some of the things you were talking about.

Any sense for how to kind of implement that vision kind of across the industry?

DR. GATSKI: And just state your name.

MR. RISING: Oh, apologies. This is Josh Rising, and I'm with Pew Charitable Trusts.

DR. LYSTIG: Sure. So one of the things I had indicated in my talk is that if you want to get broad representation, both patient and physician, inside these studies, you're going to need a large network to draw from. And as to how you find out about the entire population, you know, there are different approaches. You can either get the whole population or you can try to get some representation across the population.

But even if you get representation, I think that that's difficult and that requires work and, you know, certain players in the industry are better poised to do that than others. It is an area that, you know, lends itself to different types of collaboration in the future, you know, because we are trying to do this, you know, not just as an academic exercise, but we're trying to find out how we can improve the quality for our patients. So, you know, what's the best way to do that?

So apart from saying that, you know, I think one of the things we need to think about carefully is how we understand the entire population and whether that is in terms of capturing every single person in the population or getting some estimate of what goes on in the population. It might be that different approaches have, you know, different benefits to going after it.

MR. RISING: Thanks.

DR. BLAKE: Kathy Blake, cardiologist and electrophysiologist.

So another thought that I have had repeatedly, as I've seen many, many professional societies trying to start up registries, is they're not all as large, for example, as orthopedic surgery or cardiovascular medicine.

But one idea to consider is the whole area of allowing smaller organizations to rent space from bigger registries, so that all of the infrastructure is already set up. You've got the analytics. You've got many, many decisions made that otherwise can result in a couple of years' delay if a small society is trying to emulate, let's say, what the national cardiovascular data registry has done, what orthopedic surgery is doing.

I was delighted to see the temporomandibular joint example because I've been asked by people who put in these little tiny joints in fingers if I can help them set up a registry, but they don't do very many and there's minimal sponsorship to be able to do it. But if they could rent space and expertise, then they'd be able to get those questions answered.

DR. FRICTON: That's a very good point, and we are -- we've kind of broadened our use of this integrated research information system to do exactly that, so that it allows any organization to participate or develop a registry and use the information system that was designed. There's usually some customization that needs to be done, but not much. And it's web based. It's secure. It meets federal standards. All of that infrastructure is designed to make it easy to set up registries.

Like working with the International Childhood Glaucoma Association, very few patients out there, but it meets their needs or hopefully it will meet their needs as it gets developed. So that's a very good point.

DR. LYSTIG: Yeah. And, you know, again, I think that's an intriguing idea about how you could sort of nest something small within a larger one. But while I'm in favor of that and I think it also makes sense to be efficient and not necessarily capture the entire population, one of the things we're looking at here is we're trying to get real-world usage, right?

So why are we concerned about that? Well, we're concerned about it because the environment in which the approval study was done is considered artificial in some manner. People had to become specialized. They really know how to run these clinical trials. They can get high enrollment and so on. And it's been the case -- you know, maybe it was the case, I don't know, that 50 years ago, when a trial was done, the persons involved weren't so selected. Maybe it was just people that were active. And over time I think that, you know, trial centers have become specialized, and so that data is considered possibly qualitatively different from the general practitioner's.

And so even if we have a large registry where we have lots of sites involved, you know, we would also have to guard against this concept that, okay, now instead of two levels we have three. You know, we have the pre-approval people, we have the post-approval people, and then we have everyone else. And so either we need to avoid that or at least we've got to quantify it, and I think it's an interesting challenge.

DR. HOLMES: It's been seen repeatedly in interventional cardiology with drug-eluting stents. The early drug-eluting stents were carried out in incredibly circumscribed populations and showed that they resulted in essentially immortality.

(Laughter.)

DR. HOLMES: That may be too much to expect.

The next generation of drug-eluting stent studies, with larger groups of patients a little bit less selected than the first group of studies, showed a specific finding: there wasn't any difference in death and myocardial infarction between drug-eluting stents and bare metal stents.

And then you went to the huge registry studies, like the NCDR registry study of 500,000 real-world patients, which showed that in that group there's a mortality benefit. So then you're faced with discordant data, and that has occurred multiple times, at least in the field of cardiovascular disease. I don't know whether it has in other fields, but that's a huge deal. They are different by virtue of being in a study.

DR. LYSTIG: Sure.

DR. MARINAC-DABIC: I just want to address the question that was posed before, about FDA access to the registries. And I've been on the record on numerous occasions saying that, from my perspective, FDA should not be paying for access to any data. We are the public health agency that's responsible for monitoring the safety and effectiveness of these devices, and we should be having free access to all the data we need to perform our job.

And, you know, in one of our steering committee meetings for TVT, we discussed this point somewhat, and I was very pleased that many of the stakeholders, actually all members of the committee, agreed on, you know, FDA having access. We didn't define -- really, we're still debating what the access actually would mean. But for performing surveillance, meaning, you know, refining the signal, identifying the signal, helping us with signal management and risk management, that's our public health practice. And we all agree that this is something that we should be having access to.

Then again, there is going to be another platform, research activities that FDA staff is engaged with. And, again, as any other entity, we understand this is costly. We will have to, you know, put together the proposal and be treated as any other entity. That's a different layer. But from the perspective of having access, I think it's crucial that we advocate more and more.

Well, nonetheless, even having said this, you know, we have helped fund registries in the past, and we will continue to support them, whether that be with seed money or with effort from FDA staff, and we will continue doing this. But, again, on the issue of paying for access to the data to do your job, the protection of the American public, I think this is an issue where we are more and more being vocal, requesting that we get that access from all data sources, not just the registries.

DR. GATSKI: Are there any other questions from the audience?

(No response.)

DR. GATSKI: I'll ask one very quick question to the panel. We talked about the concerns that may arise if professional societies are monitoring studies that basically are using their own devices. And I was just wondering if the panel has any comments in terms of networks, registries, and observational studies, if there are other concerns that we should consider when using these types of data sources for postmarket surveillance studies.

DR. HOLMES: I think there are lots of potential biases. It's been talked about in terms of patient biases. There are biases from statisticians, about different approaches to statistical --

(Laughter.)

DR. HOLMES: No, no. I mean, there are physician biases. There are all sorts of biases. I think that, to the extent possible, we need to make sure that we're as transparent about those things as we can be, because bias is a very real thing, a very real thing. If somebody makes their living as a contractor doing research, then they certainly have a bias with that.

So I think we just have to make sure that we are as far removed as we can be and then talk about the other issues related to bias. I think bias is a very real one. That's one.

Number two, I think that access to the data is a very real one. If, say, for example, industry is supporting the trial, do they have access to their data? Do they have access to somebody else's data? They shouldn't. How do you interface with the fiduciary responsibility that they have as a company to report to the SEC? Does that put them at the head of the line for a data request or at the back of the line for a data request? Because the data analytic center -- I mean, I think there are a lot of things.

I think the most important thing that at least we have found is to try to get a manual of processes. How are you going to handle the data? Who has access to it? How do you put in a proposal? How do you account for the time? How do you prioritize it relative to the other data requests? And you hope there are going to be a lot of data requests.

So I think the processes of interacting with data analytic centers are really important to settle before you go online with the data registry and the data analytics.

DR. KRULEWITCH: I have just one comment on that, because I'm sitting here listening, and I think it's a consideration more than anything else: if a certain specialty society is handling the registry, and there are others using the device who are not that kind of specialist, it may actually create a closed system whereby those non-specialists may not be participating because they're kind of outside of the specialty. And that can be a challenge.

It's just something that -- and I can think of some specific examples that I won't go into. But I think it can create a discomfort over turf, or whatever term we might want to use. We need to make sure that it's an open registry as far as how it's bringing in information, because that's another potential source of bias.

DR. FRICTON: Yeah, that's a very important point, particularly in orthopedics and in the TMJ area, where typically it's the surgeons who are sponsoring or supporting the TMJ registry, but it's the non-surgeons that provide probably 90% of the care out there. And in our randomized controlled trial of surgical versus nonsurgical treatment for the joint problem, there was no difference between the two groups.

And so our response to that was to really make it an open registry, and in fact we encouraged and solicited non-surgeon clinician specialists in the area to participate in the registry, including some control groups that were not receiving treatment at all, for whatever reason, financial or otherwise. And we always tried to include as many consecutive patients as possible. And then, of course, you collect as much data as you can on the patients who opt out or do not consent, if they're willing to give some of that information.

The other thing is that we really do try to cross-validate subjective data with objective data, both patient data as well as clinician data, you know, so that we see if there's a general direction, a bias one way or the other. So there's a variety of ways, and I know statistically, there's a lot of ways to analyze that for a bias, too.

DR. LYSTIG: We talked earlier about sort of the difference between persons in a study and out of the study wanting to make an inference about the entire population. But there's obviously also the issue about how the data is heterogeneous among persons that participate, and this could be by virtue of them being from a specialist center versus not being quite so experienced. And, again, you can always come up with these additional ways and try to find the appropriate balance of assessing when that is enough of an issue that it's a noise level to account for.

But tied in with that, I think there's a lot of interesting opportunity to look at data reliability, which is to say, not only how answers differ from one group to another, but -- particularly when you're interleaving multiple data sources and registries -- what does the heart failure endpoint mean in one group of persons versus another? Or even in terms of how much auditing has been done or how much monitoring has been done. You know, what's your error rate like?

Again, what you're trying to do is get a bound on how close to the truth your answer is. Sometimes you can't get very close, but you can say, well, it's no greater than a certain percentage. Then you're that much further along in terms of saying, you know, how extensible this is.

DR. BLAKE: I would just add something -- Kathy Blake -- something that's not been discussed heretofore. It has to do with the challenges related to who owns the data and who has access to it, because the way things are currently set up, the data usage agreement is often with a hospital, and therefore the data is reported back only to that hospital, and that hospital receives the specifics about the individual clinician's procedures.

But a system like that, which works very well if a clinician only practices or puts in implants at one hospital, breaks down or might lose power if that clinician is working at multiple facilities, which is fairly common, because there is no way to combine their outcomes from all of those facilities.

And wearing another hat, through the Heart Rhythm Society, we are developing performance measures, and we want to use registry data to allow physicians to know where they line up on the curve. One of our biggest challenges is that we can't bring the data from multiple facilities together to power up for a given clinician.

DR. FRICTON: A very good point. We have found that physicians, or clinicians in general, do not want anybody else collecting data about them without their consent. Like hospital data. You know, there's a lot of websites out there that are collecting sort of patient subjective data on clinicians. It really is upsetting to clinicians.

And so we make a very strong point that we anchor all of our data analysis on the one clinician who registers and signs up, and then all of their patients are connected to that particular person. Now, patients may see other clinicians, and they can be connected to a group or a clinic also, if that group wants to do it that way.

But in that way we make sure that the clinicians are very much involved in the process and benefit from it, and they can see their data or their group's data, and they can compare their data, then, to the aggregate of all clinicians that are involved in it. But we try to start with that network of clinicians first.

DR. GATSKI: So thank you, everyone. Thank you to the panel.

We're going to take a break, and the next session will start at three o'clock.

(Applause.)

(Off the record.)

(On the record.)

DR. LYSTIG: Okay, it's a little bit after three o'clock, so if we could please have people by the doors coming in, we'll get in our last session. So thanks to everyone here for continuing to stay in for the end of this very exciting and interesting day.

The final session is Methodologies and Scientific Infrastructure to Promote Innovation. We have a variety of excellent talks today as well as excellent speakers.

So our first speaker will be Art Sedrakyan. He is an Associate Professor at Weill Cornell Medical College and acts as director of the patient-centered comparative outcomes research program. Before joining Cornell, he was a senior advisor at FDA and was appointed senior service officer/senior advisor at the Agency for Healthcare Research and Quality from 2005 to 2009. He was a lead advisor on interventions, including surgery and implantable devices, and on cardiovascular and orthopedic content areas. He's one of the initiators of the Effective Healthcare Cardiovascular Consortium and supervised two Centers for Education and Research in Therapeutics, the cardiovascular CERT and the orthopedic device CERT. He has been serving on the Medicare Evidence Development & Coverage Advisory Committee since June 1, 2010.

Our second speaker will be Natasha Chen, and Dr. Chen is a research fellow in the Division of Pharmacoepidemiology and Pharmacoeconomics at Brigham and Women's Hospital. She received her bachelor's degree in pharmacy from National Taiwan University, her master's degree in health administration from the University of Pittsburgh, and her doctoral degree in pharmacoepidemiology from the University of Florida. She has been involved in several projects utilizing a Medicaid cohort to examine psychotropic utilization patterns in attention deficit hyperactivity disorder. She has experience with large, longitudinal population-based databases, including public/private insurance claims data, clinical registry data, national survey data, and hospital administrative and clinical data. She is currently working on an AHRQ/CMS-funded project which involves linking multiple clinical registries with the CMS Medicare administrative dataset to evaluate the real-world effectiveness of cardioverter defibrillators among elderly heart failure patients.

Our third speaker will be Soko Setoguchi. And Dr. Setoguchi is a pharmacoepidemiologist and cardiologist by training, an Associate Professor of Medicine in the Duke University School of Medicine, and works primarily in the Duke Clinical Research Institute, DCRI. Combining her clinical background and advanced training in epidemiology and biostatistics, she has published widely in her research areas, including assessments of health service utilization outcomes and comparative safety and effectiveness of medications and devices in patients with cardiovascular diseases, cancer, and rheumatic diseases using large claims databases and registries.

And I'm getting thick in the face, so I'm going to condense.

Then we have Sharon-Lise Normand speaking, who has given me a very long paragraph. I'm going to cut you off partway through. She is Professor of Healthcare Policy (Biostatistics) in the Department of Healthcare Policy at Harvard Medical School and professor in the Department of Biostatistics at the Harvard School of Public Health. Her research focuses on the development of statistical methods for health services and outcomes research, primarily using Bayesian approaches, including causal inference, provider profiling, item response theory, latent variables analyses, multiple informants analyses, and evaluation of medical devices in randomized and non-randomized settings.

And our final speaker is Mary Beth Ritchey, who has been up here several times previously.

So with that, I'd like to have Art Sedrakyan come to the front. He will give us his overview on ICOR and IDEAL.

DR. SEDRAKYAN: Thank you very much, Ted.

So the goal of my talk is really to share with you two important initiatives that we started in the past two years, and hopefully to hear from you if you have any feedback about the kind of infrastructure systems that we're setting up.

So I thought I'd have the advantage of talking first in the session, but then I realized that by the time we're done, you'll probably forget what I talked about.

(Laughter.)

DR. SEDRAKYAN: So it's somewhat generic, but I hope I will inject some thoughts that would be of interest to you.

So what are the current issues with the FDA device evaluation paradigm? To cover that: CDRH particularly, compared to the Center for Drugs, recognized the unique aspects of device evaluation from the beginning, so it has different paths. Devices have different paths of approval and potentially different needs in terms of scientific research.

So the important issue is that device technology is really changing over time, and what's important is to keep the entire life cycle of innovation in view and have that total product life cycle perspective, the way Danica has defined what FDA is interested in currently.

So we know that the current paradigms are a little static: the PMA, the 510(k), and then post-approval studies and 522 studies. They are all kind of separate from each other, even though they're interconnected in the regulatory environment. And some of these are recognized to have some problems. And what's interesting is that this regulatory debate about the thresholds we need to set for approval often boils down to clinical data versus no data, trial versus observational study.

So we initially thought about this, and the concept we came up with was the well-designed, fit-for-purpose study -- not randomized clinical trial versus observational study, but really what is fit for a particular purpose, a particular study question -- so as to recognize the differences between devices and drugs, the interventional context of device use, and the challenges and opportunities for a specific device, but also to take advantage of advances we have in many methodological areas: EBM, health services research, epidemiology, and statistics.

So the second important part that we embraced is this issue: when you have well-designed randomized clinical trials, they are potentially of high quality and can be much more informative for your purposes.

But if the quality is low, then you can downgrade them based on the kind of quality criteria that you have in mind for a randomized clinical trial, while for observational studies, if you see a strong and consistent association with little confounding, you can potentially upgrade them to high-quality evidence. And particularly badly designed observational studies are very low quality.

So these are GRADE criteria that are recognized today for evidence appraisal, both for individual studies and for the body of evidence as well.

Then we moved to this framework that we published on how you evaluate the existing evidence and then think about real-world evaluation. And there are a couple of important factors to pay attention to when you're designing a real-world study or a post-approval study: the device issues, the patient characteristics, the intervention characteristics, and the interventionist and hospital characteristics, along with factors like access issues. So these were also, in our heads, a little static.

So then we started collaborating with a group that is called IDEAL, and IDEAL stands for Idea, Development, Exploration, Assessment and Long-term study. This is a European framework for the evaluation of surgery and the different stages of surgery.

So devices were not part of this original thinking, so we collaborated with the IDEAL workgroup to think about how this model can be applicable to device evaluation, because it's a more dynamic process as opposed to the static concepts we're working with.

So in December of last year we held our first meeting with IDEAL, and we had participants, methodologists and surgeons from around the world, and have since published some of the summaries and papers that came out of these discussions.

But a key issue here within IDEAL is that surgical and device innovation and evaluation should evolve together in an ordered manner from concept through validation by robust studies and post-approval studies or real-world studies.

And it's very compatible with the FDA vision of total product life cycle approach and can really help advance the ideas behind MDEpiNet because MDEpiNet is potentially envisioned to be that entity that will have the dynamic view over total product life cycle.

So the stages that are outlined within IDEAL are 1 to 4, and if you think about them -- I don't know if you can read this here, but hopefully you can also read it on the handout -- it comes naturally. If you're just at the concept/invention stage and you just came up with a device, the device will have to go through a variety of changes, and those are good to capture and possibly videotape.

The type of study you will have at that stage is case studies. You'll have criteria for reporting case studies, comprehensive case studies, and then as people learn -- as physicians and interventionists and surgeons learn how to use this device -- you can study the learning curves. That would be, say, Stage 2a or Stage 2b, when there is an early majority of people who will embrace this technology. So at that point you can think about feasibility, a randomized clinical trial, or capture within a research database.

At Stage 3, when you really have a well-established technique that is sustainable and is likely to survive without major changes, you can think about a randomized clinical trial or a comparative well-designed observational study, or you can even think about a registry context for that. And Stage 4 would be long-term surveillance of the device.

But what's also important is that the concept of existing infrastructure in a registry can be useful throughout the life cycle as well. Because if you have an existing data system, as Danica talked about in the beginning and the previous panel addressed -- if you have that registry in place, starting from the invention, the new product that enters the market, you can start collecting information, starting from case series; move to the next stage; do potential comparative evaluation -- how is it performing compared to usual care? And then as you move forward, you can design nested trials within the registry to do comparative assessment at the assessment stage. Then you can also continue as the diffusion of this technology is happening, and as people are using it to replace the existing technology, you can see if there are new safety concerns.

So that was the essence of the IDEAL workshop that we had, and there are some recommendations from IDEAL that, again, might be tweaked and advanced to make them applicable to the device setting. But you see that data collection and registries are a critical concept from the beginning to the end, and all the stages involve some kind of infrastructure, from trial design to surveillance -- early-stage studies, first-in-man studies, through to surveillance.

Now, how do we implement this kind of data system that has a total product life cycle approach in it? A couple of important issues in this context are that, again, advancing device innovation and evaluation requires efficient infrastructure creation, such as a national registry. And it can help monitor patients from first-in-man through the entirety of the IDEAL cycle. I'm repeating myself.

So who can be potential partners as MDEpiNet is evolving and is interested in implementing the total product life cycle? We thought that integrated delivery organizations, hospital systems, and surgical researchers that are part of professional societies are probably the right partners to engage to advance the MDEpiNet infrastructure.

Why put emphasis on integrated delivery organizations and hospital systems? Again, here it's important to recognize that there's registry creep. An example from Sweden that I can share with you: there are 70-plus registries in orthopedics alone. All of them are basically requiring data from the same hospitals, so the same hospital needs to be part of the 70-plus registries. That's an enormous burden for a participating hospital.

So some efficiencies -- how hospital systems can be more efficient in leveraging their electronic data systems to participate in and maintain so many registries -- should potentially be our focus. Because if we start from organizations that are not directly working with the hospital systems but are requesting data from them, in the end we may contribute to this registry creep rather than help develop the right infrastructure nationally.

The second issue of how MDEpiNet can help in this context of evaluation is really to make sure that it can be a neutral party that can analyze 522 results or help advise on conducting these 522 studies, summarizing the evidence from 522 studies.

And, finally, the important issue is that MDEpiNet, and the data infrastructure centers and methodology centers within MDEpiNet, should facilitate collaboration, such as international collaborations like ICOR. And that's the segue to how ICOR was established.

We have, again, 30-plus registries from 15-plus nations getting together in a research network to think about how orthopedic devices can be validated and evaluated faster as they enter the market, but also to address the big questions that we're facing in orthopedics, such as bearing surface problems, metal-on-metal issues, and fixed versus mobile-bearing knees, including questions around the safety of mobile-bearing knees.

And also to think about implementation of the Unique Device Identifier, a program that FDA is going to announce very soon, and about the implant database that can be built based on unique identifiers, because within orthopedics, with product codes, we can get to unique identification today, and this can be a good pilot and experiment that can help shape how UDI is implemented in our settings as well.

So we have FDA involvement, an administrative and coordinating center at Cornell and Kaiser, and also an executive committee with participation of worldwide registries, and three ongoing studies within ICOR.

So registries worldwide have, as you can see, information about diagnosis, laterality, and implanting surgeon -- about 83% of the registries have that information -- plus age and gender. And we also have implant information within all of these data systems and registries. So it's a rich infrastructure in which to embed and nest 522-type studies within ICOR.

So it can address questions throughout the life cycle of device innovation, from Stage 1 to Stage 4 of IDEAL. We have some of the best experts in orthopedics to develop consensus statements when evidence is not adequate; again, a neutral party to evaluate and summarize 522 studies; and it can potentially also help facilitate rapid adoption of evidence. If there are any products that are performing particularly well, the registry system can be the right way to disseminate this information and facilitate adoption of the evidence.

Thank you very much and happy to address questions you have.

(Applause.)

DR. LYSTIG: So we'll hold most of the questions until after the speakers have finished.

So our next speaker is Natasha Chen. If I can escape out of here.

DR. CHEN: Thank you, Ted.

Good afternoon. My talk today will be a little bit dry because it's more technically oriented, but I hope that some of our experience with how we link data sources for a postmarketing surveillance study in our ongoing project could be helpful.

Record linkage describes the process of joining information about an individual across data sources. Why do we link records? It is simple: because no dataset is perfect. Take our project, which looks at the comparative effectiveness and safety of implantable cardioverter defibrillators.

The following slide presents the data sources we used for the entire project. One of the major sources we have is the CMS-ICD Registry, which contains information on all Medicare beneficiaries who have gotten ICDs since 2005.

One of our projects is to study the long-term safety of ICDs. Although device complications are documented in the registry, they are limited to those that happened during the hospitalization for ICD implantation. There's no follow-up after discharge. Therefore, we link the registry to the Medicare claims, where we can obtain information after discharge. By doing so, we are able to track information for an average of about two years after discharge. So we enhance our ability to study the long-term safety of ICDs by linking the two data sources.

And now let me shift gears to talk about how to link. One method is called deterministic record linkage. This method uses a single common variable or multiple common variables between datasets to identify links. In the example, the registry data and Medicare data have three variables in common: date of birth, ID, and gender. So two records can be identified as matching by different combinations of those three variables.

People often ask whether we can link records validly without using unique personal identifiers, or UPIs, such as SSN or insurance ID, because this information is typically not collected or not released to researchers in most datasets. The answer is yes: without UPIs, records can still be linked by non-UPIs, for example, demographic information. Or, in the context of linking medical records, service information, such as admission date, diagnosis, or provider information, is also helpful.
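
[To make the idea concrete, here is a minimal sketch of deterministic linkage in Python with pandas. The column names (dob, sex, admit_date, provider_id) are hypothetical stand-ins for the non-UPIs discussed in this talk, and the records are invented.]

```python
import pandas as pd

# Two tiny, invented files standing in for registry and claims data.
registry = pd.DataFrame({
    "dob": ["1940-02-01", "1935-07-12"],
    "sex": ["F", "M"],
    "admit_date": ["2006-03-10", "2007-11-02"],
    "provider_id": ["P01", "P02"],
})
claims = pd.DataFrame({
    "dob": ["1940-02-01", "1935-07-12"],
    "sex": ["F", "M"],
    "admit_date": ["2006-03-10", "2007-11-03"],  # one-day recording error
    "provider_id": ["P01", "P02"],
})

# Rule 1: exact match on all four non-UPIs.
rule1 = registry.merge(claims, on=["dob", "sex", "admit_date", "provider_id"])

# A looser rule tolerating imperfect admission dates: exact match on
# three fields, then allow the dates to differ by at most one day.
cand = registry.merge(claims, on=["dob", "sex", "provider_id"],
                      suffixes=("_reg", "_clm"))
gap = (pd.to_datetime(cand["admit_date_reg"])
       - pd.to_datetime(cand["admit_date_clm"])).abs()
rule2 = cand[gap <= pd.Timedelta(days=1)]

print(len(rule1), len(rule2))  # rule 1 misses the second pair; rule 2 links both
```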

I apologize. The order of my slides has been rearranged, so it might be a little bit confusing on the handout, but the content is the same. So back to the presentation.

So the key here is to have a good combination of non-UPIs which can make each record between datasets unique.

The performance of record linkage using non-UPIs has not been well studied, because most datasets don't have UPIs. In our project we were fortunate to get UPIs, so we conducted a validation study. We linked the CMS-ICD Registry and the MedPAR files, which contain Social Security numbers, to test linkage rules using four non-UPIs: date of birth, gender, admission date for ICD implantation, and provider ID.

We tested five rules. The first rule required a match on all non-UPIs. The second and third rules represent the situation where there's imperfect information on certain linkage variables. And the last two rules represent the situation where fewer linkage variables are used to link. And we calculated sensitivity, specificity, and PPV, compared to a gold standard using SSN, provider ID, and admission date to link records.

And we observed that validity was highest with our first rule, which required a match on all non-UPIs, with over 95% on all three measures. And we saw validity decrease when there was imperfect information on a linkage variable, or when we used fewer variables to link. But specificity was, in general, good across the linkage rules, while sensitivity and PPV were more dependent on which rule we used. So we conclude that linkage rules using multiple non-UPIs produce valid linkages.
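
[As a rough illustration of how such a validation might be scored, here is a small Python sketch computing sensitivity, specificity, and PPV for a candidate linkage rule against a gold standard of SSN-based links. The pair IDs are invented for the example.]

```python
# `gold` holds the gold-standard (registry_id, claims_id) links; `candidate`
# holds the links declared by the rule under test; `all_pairs` is the set
# of candidate pairs the rule evaluated. All data are invented.
def linkage_validity(candidate, gold, all_pairs):
    tp = len(candidate & gold)          # true links found
    fp = len(candidate - gold)          # false links declared
    fn = len(gold - candidate)          # true links missed
    tn = len(all_pairs) - tp - fp - fn  # non-links correctly left unlinked
    return (tp / (tp + fn),             # sensitivity
            tn / (tn + fp),             # specificity
            tp / (tp + fp))             # PPV

gold = {("r1", "m1"), ("r2", "m2"), ("r3", "m3")}
candidate = {("r1", "m1"), ("r2", "m2"), ("r4", "m4")}
all_pairs = {(r, m) for r in ("r1", "r2", "r3", "r4")
                    for m in ("m1", "m2", "m3", "m4")}
print(linkage_validity(candidate, gold, all_pairs))
```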

It is worth noting that we didn't reach our expected linkage rate even when we used our gold standard. We think that is because we have false negative linkages. The reason for false negative linkages could be missing values or errors in the variables used to link records. And deterministic record linkage is more sensitive to this type of issue because it often requires an exact match on one or more variables. So it can lead to an unnecessarily high false negative rate.

In this situation, it's probably more suitable to use another method called probabilistic linkage. This slide will be on the first page of the handout.

The first step in conducting probabilistic record linkage is to identify common variables between the datasets. We then calculate an agreement weight, and this agreement weight is based on two parameters: one called the u probability, the discriminating power of the variable, and another called the m probability, the reliability of the variable, which is usually approximately equal to one minus the error rate in the linkage variable. We then use the two weights to calculate a total weight, which reflects the probability that two records refer to the same person.
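
[The talk describes the m and u probabilities but not the weight formula itself. In the standard Fellegi-Sunter formulation, which this description appears to follow, the field weights and total weight are:]

```latex
% Standard Fellegi--Sunter field weights (an editorial assumption:
% the talk does not state the formula explicitly). For linkage field i:
w_i^{\mathrm{agree}} = \log_2 \frac{m_i}{u_i}, \qquad
w_i^{\mathrm{disagree}} = \log_2 \frac{1 - m_i}{1 - u_i}, \qquad
W = \sum_i w_i
```

[Here m_i is the probability that field i agrees in a true match (roughly one minus the field's error rate) and u_i is the probability that it agrees by chance in a non-match; a pair is declared a link when the total weight W exceeds the chosen cutoff.]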

So in our example, we calculate total weights for the 16 pairs between the registry data and the Medicare data. And our pair R1-M1 has a different DOB, the same ID, and the same gender. So the total weight for R1-M1 equals the disagreement weight for DOB, plus the agreement weight for ID, plus the agreement weight for gender. Lastly, we determine a cutoff for the total weight. Pairs that are above the cutoff are considered a match.
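
[A small numeric sketch of that calculation, with illustrative m and u values that are assumptions for the example, not the study's actual parameters:]

```python
from math import log2

# R1-M1 disagrees on DOB and agrees on ID and gender. The m and u
# probabilities below are invented for illustration.
m = {"dob": 0.98, "id": 0.99, "gender": 0.99}    # ~ 1 - error rate
u = {"dob": 0.001, "id": 0.0001, "gender": 0.5}  # chance agreement

def agree(f):
    return log2(m[f] / u[f])

def disagree(f):
    return log2((1 - m[f]) / (1 - u[f]))

total = disagree("dob") + agree("id") + agree("gender")
print(round(total, 2))  # ~8.62 here; compare against the chosen cutoff
```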

I will then use the following two slides to depict two challenges in conducting probabilistic linkage.

First, when we intend to calculate the total weight, we need to estimate the error rate in order to derive the m probability. However, the truth is we never really know the error rate, and it takes time to explore the data to get a good estimate.

A second challenge concerns choosing the cutoff for the total weight. If we knew which records matched in our dataset, we would be able to plot the distributions of the total weight for non-matched pairs and matched pairs, like in this graph. We could see exactly where to look for the cutoff point, which would be the overlap region here between the two distributions. However, we can never have a graph like this; otherwise we wouldn't need to link the records. We do linkage because we don't know which records are matched.

There is guidance, with different suggestions from the literature, on how to identify that region and explore a good cutoff point for the weight, but there's no concrete, standard way to do it, so this also takes time to explore.

So to sum up, although probabilistic linkage provides some advantage over deterministic linkage, in terms of how to do it practically, it's more time consuming.

So in the last part of my talk, I want to briefly share what we found when we compared these two methods using our dataset.

First, a little bit of specifics on how we used the two methods. For deterministic linkage we required an exact match on five variables, that is, Social Security number and the four non-UPIs. We used the same variables to do probabilistic linkage, and we used statistical software developed by CDC called LinkPlus. And we used a couple of ways to explore the cutoff points based on literature suggestions, and they came up with a fairly similar range of cutoff weights.

So we eventually settled on a method proposed by Blakely et al. Based on this method, we are able to calculate the PPV for the linkages produced at different cutoff points, and we chose the cutoff at the weight which produced over 90% PPV. In our dataset, that magic number is 20. So we see that, by using probabilistic linkage, we increased our linkage rate from 61% to 70%; that's about a 15% relative increase.
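
[A hedged sketch of that cutoff search, loosely in the spirit of the approach described rather than the exact Blakely et al. procedure: sweep candidate cutoffs in a validation subset where the truth is known, and keep the lowest cutoff whose PPV stays at or above 90%. The weights and truth flags are invented.]

```python
# Invented (total_weight, true_match) pairs from a validation subset.
scored = [(25.0, True), (22.0, True), (21.0, True), (19.0, False),
          (18.0, True), (15.0, False), (12.0, False)]

def ppv_at(cutoff):
    # PPV among pairs declared links at this cutoff.
    calls = [truth for w, truth in scored if w >= cutoff]
    return sum(calls) / len(calls) if calls else 0.0

chosen = None
for cutoff in sorted({w for w, _ in scored}, reverse=True):
    if ppv_at(cutoff) >= 0.90:
        chosen = cutoff  # lowest cutoff still meeting the PPV target
print(chosen)  # 21.0 for these invented data
```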

And we also found that, among the records which linked by probabilistic linkage but not deterministic linkage, 90% were due to a discrepancy on only one of the five linkage variables. So we confirmed that probabilistic linkage increases the linkage rate over deterministic linkage by reducing false negative links. But in our example, the increase was only moderate.

So to summarize my talk: we think record linkage is a powerful tool to enhance the capability of a postmarket device surveillance system, and it is doable even if we don't have a unique personal identifier to do the linkage. However, in choosing the best way to do record linkage, it is important to understand your data; and although probabilistic linkage can perform better when data quality is low, it is practically more challenging.

And that is the end of my talk. Thank you for your attention.

(Applause.)

DR. LYSTIG: Thank you very much.

DR. SETOGUCHI: Good afternoon. I'd like to thank the organizers, too, for giving me the opportunity to talk, and I'm going to start with my fifth or fourth slide.

So throughout the day today, you've heard many people talking about leveraging data sources or infrastructures, and that's the theme of my talk today. I'm going to give you some examples about how we can leverage existing data sources for postmarketing surveillance studies.

And, again, efficiency. I think it's human nature that you want to spend less effort to get more, and of course there's validity, getting the unbiased estimate in the epidemiologic sense. I heard many people talking about bias in different ways, meaning different things, but what I'm talking about is getting the effectiveness and safety right; that's an unbiased estimate. I think we need that. But to get there, you want to spend less effort in terms of cost and time.

So this is actually the list of study designs that are listed in the draft guidance, and what I'm focusing on today is the prospective cohort study and the retrospective cohort study. I'm not directly talking about case-control studies, but the examples could be done in a case-control design as well. These studies are called, in epidemiologic terms, analytic observational studies, and the goal of such studies is to really get an estimate of safety and effectiveness.

And if you remember the hot discussion in the previous session about needs of comparison groups, to do these types of studies you absolutely need a comparison group. Without that you cannot do these studies.

And in the drug world, or pharmacoepidemiology, we often use claims databases, large databases, and conduct retrospective analyses. And the challenges in using these databases are really threefold. One is that not all the important data components needed to control bias and understand the heterogeneity of the safety are available in existing data, so you may need to do additional data collection or data linkage. And, again, we don't need everything here; you need just the information to get the estimate right.

Another point is that you may not be able to detect early safety signals for new products, because there is always a time lag before you get access to these types of data. When it's important to get the safety signal as soon as possible, retrospective analysis might not be the way to go.

Another point, which I didn't put in the handout but is an important one, is that in drug safety or pharmacoepidemiology, we have NDCs in these databases, so you can get very detailed, longitudinal information on drug use. However, in claims data, we don't have equivalent identifiers for devices, so that's another huge limitation of using these types of data.

So I'm going to give you some examples of linking different databases or using infrastructure, drawing on currently ongoing work or some proposals that I wrote before. Example one is leveraging claims data and clinical device registries to assess long-term outcomes after device implantation. I'm not going to go into the details.

This is what Natasha talked about. This is a study that we got funded by CMS and AHRQ to look at the safety and effectiveness of ICDs. I'm still leading this study because I was at Harvard before and now I'm at Duke.

And another component of the same project is looking at the same thing for the comparative effectiveness of carotid stenting. And, again, we're taking the same approach, linking registries to the claims data. By doing that, we can get long-term outcomes with minimum loss to follow-up from the claims, and we can get hard endpoints like death and hospitalizations.

Another advantage of doing this is that we can get detailed information on the devices and the disease captured in the registries, which we lack in the claims data.

Another point that people often don't realize is that registries tend to have detailed information only on the target disease or target devices, whereas claims don't give that detailed information, but they do give general information about the patients. So for things like comorbidities that are not related to the target disease, we can get some information from the claims. So by using both datasets you can get better adjustment for confounding.

Another thing I want to mention is that data linkage came up in the previous session, that at Duke we've linked many cardiovascular registries to Medicare data already.

Another example I want to mention is from the same project. When we're looking at carotid stenting compared to CEA or medical management, we can't get what we need from the claims alone, because we want to identify similar patients. Again, the comparison group cannot be just any comparison group; it has to be similar to those who got the devices. To get that, we need more information, and we want to pick out the patients who have a similar severity of disease, and to do that we need imaging data.

So this is actually a pilot study just limited to two institutions, Brigham and Women's Hospital and Massachusetts General Hospital, to get the vascular lab data and electronic health record data and link it to the claims data for long-term outcomes.

So just to give you a little bit more detail about what we're doing: we had about 30,000 carotid ultrasound results between 2002 and 2009, and after going through the records and looking at eligibility, we currently have about 5,000 records eligible for the study. We've pulled electronic health records for the same patients, and then we link them to the Medicare data. Again, this is limited to the two institutions.

So the real question for the future development of this type of study is how we can expand it to other institutions. To understand that, we're conducting surveys -- not of every vascular lab, because we cannot survey everything, but of a representative sample -- to learn how vascular labs are collecting and storing data, so that we can apply what we've done in the study.

A third example is using institutional device registries and personal health records to assess patient-reported outcomes. This already came up in a previous session: we want to know not only the hard outcomes that we can identify in the claims data; we also care about patient-reported outcomes, softer outcomes, because they're part of the equation for benefit and risk. And to do that, we can leverage existing data sources.

So Duke has done a pilot study to expand the existing patient portal to include clinical system data that the patient can have access to. Pulling from about 400 unique clinical systems and linking them to the existing patient portal, we've created an extension of this personal health record system for cardiovascular patients.

The existing portal only had functions for registration, scheduling, and administrative tasks; with this extension, patients are able to access clinical records, including medication information and device information, as well as patient-entered data, and they are planning to expand this to all Duke patients.

And this is just what it looks like for the patients, looking at the device information. As you see, they can see device type, implant date, device description, manufacturer, model number, and serial number. But for the future, we want to have UDI here.

And in terms of the current status, we have about 300,000 patient accounts, and we've delivered six million diagnostic results. It's been getting better and better over time in terms of looks and functionality. We've done a survey using this PHR system assessing patient preferences, using a web-based survey application administered through the PHR system, and we got a very good response. In a short period of time, about 3,000 patients responded and completed the survey.

So what we're planning at this point is to use this tool to assess patient-reported outcomes after device implantation. It's in the planning phase. We want to use a validated instrument of quality of life or functional status and pain level, and we want to look at cardiovascular device patients as well as orthopedic devices.

The last example is something I recently proposed to a company that's going to launch a new product on the market, which is a hybrid design with prospective data collection plus claims data for a newly marketed product.

So, again, the claims data have lags in data accumulation. CMS data typically have around a one-year time lag, and commercial data have a shorter time lag, between three and six months, depending on who you are and how you can get access.

Anyway, if you need to identify short-term safety concerns immediately after marketing, waiting out this time lag doesn't work very well. So to handle that, combining prospective data collection for short-term follow-up with linkage to a larger database, such as claims data, for the long term might be the way to go.

What we proposed is also to look at the potential for validating the outcomes, because among the outcomes in the claims data, some acute events are typically validated but others are not, so you could actually go back and validate the outcomes of interest.

So in terms of how it works: if you only have retrospective data after the launch on the market, the first time you can get and analyze the data would be after a year, because of the time lag. However, if you combine these two designs, you can immediately see some signals from the prospective data collection, and then you can always go back and analyze the data later for short- and long-term outcomes using the claims data.

So, in conclusion, using existing resources and infrastructure to efficiently produce postmarket safety and effectiveness evidence is really important, and effective and valid use of existing data sources requires knowledge of how the data are generated and appropriate methods to handle the various methodological issues in observational studies.

Again, as I said, the timeliness, if the timeliness of data availability is important, then you might have to combine prospective design and retrospective design.

That's all I have.

(Applause.)

DR. NORMAND: So thank you very much for the invitation to speak today. And I was asked to talk about evidence synthesis.

And so I don't think I need to tell this crowd what evidence synthesis is, but I will give you at least my perspective on why we need it and how to do it. And I know you just showed me how to do this, and I pressed it. Okay.

So here are three types of devices. And the reason why I'm showing you these is just to talk about the data features or the information features that are different across these three different devices. So one is an aesthetic device, it's a breast implant, one is a pediatric therapeutic device, and one is an adult therapeutic device.

And I've listed the indication, the device, and what the comparisons or alternatives are, and the outcomes. But, really, what I should have done is just only given you the very last line. And that last line really talks about what are the challenges in the postmarket setting if you were to look at the performance of that particular device and you were going to assess the safety and effectiveness. And so, although I listed some issues that are associated with specific examples, I'm sure you can list many more.

But if we think of aesthetic devices, you may really have problems collecting information, because perhaps patients or recipients don't want to be identified as receiving one -- I'm talking about those for cosmetic reasons, let's say. And insurance is not necessarily going to pay for it, and so you have a really different set of challenges in tracking these types of patients. There are obviously other issues associated with that.

If you think about pediatric devices -- and this may be true of other, adult devices -- you could argue that the data are sparse. And, moreover, maybe sometimes the data don't exist; they're just based on an adult indication, and you're making some inferences or extrapolating to say, yes, this would work. And so, again, the type of information that you have before you go out to learn about how effective and safe these devices are is a challenge in terms of availability, reliability, validity, coverage, et cetera.

And then for the last type of device, again, there might be other challenges, but one that I've written down -- and this was discussed in the earlier panel -- is that perhaps they're used for different indications. And so there's that issue of making the relevant comparisons in terms of what the comparator or comparators are, and whether you are really assessing the safety of the device as it was intended, et cetera. And sometimes it may be more difficult to identify the correct patient population, depending on the type of data that you have available.

So what is evidence synthesis? It involves the development of techniques to combine multiple sources of quantitative evidence. I'm saying quantitative evidence purposely here. You could have qualitative evidence. I'm specifically talking about quantitative evidence.

Evidence synthesis goes beyond meta-analysis. I'm really thinking in terms of every type of data that you have available to you. And there was a question earlier today about Bayesian analysis in terms of prior information. A real Bayesian -- it's almost like saying real men or real women -- but a Bayesian, you think about conditioning on everything you knew up until the moment. And so it's not just the premarket data that you want to condition on or have available; you want the whole history of what's available at time t, if you're going to think about time t plus one.

And so we could have individual data that are available to the investigator, to the regulator, to the sponsor, to the manufacturer. We might have cohort data in the form of aggregated summaries. And we may have data from the published literature or unpublished literature. And so there are many types of data that I'm thinking of as evidence in terms of data summaries.

So when we say evidence, we're saying it's going to be based on quantitative information or evidence, but at the same time, I think it's also imperative, from a quantitative standpoint, to define what we mean by that: what do we mean by "evidence in support of" or "evidence against"? And I think if I were to quiz you -- and I won't do that at four o'clock in the afternoon; I'm a statistician, I'm sure you couldn't care less about this -- but if I were to poll you on that, you know, what is evidence? I don't know if people would say, well, it's the effect size, or it's the statistical significance, if I asked for a quantitative summary, but there's a big debate about what that is.

And I think, by default, we -- the royal "we" -- often think of the numerical estimate that we get and the statistical significance of that. And just as an FYI, statistical p-values don't provide evidence. I might say more at the end of the day about that, but we'll see if time permits. Very strictly speaking, a p-value really doesn't provide evidence for or against anything.

So one of the questions in the postmarket setting that you may be interested in -- and we always talk about this in terms of what you want to be able to answer. And, again, the first question that was raised, I think, earlier -- and I was at another meeting earlier today, and I'm really disappointed I couldn't be here earlier to hear the other talks -- is, do you use the PMA data alone, the premarket data alone? And my answer would be no, you use all the information. But, again, we need to think about that in terms of, do you go back to the beginning of time, or how far back do you go?

But, again, it would seem to me that if you're looking at a question about effectiveness and safety in the postmarket setting, the specific question would dictate what type of information you should condition on or use.

You want to identify the target population in the postmarket setting. These are questions that were raised earlier, and you raised them: Are we talking about the specialized patients that participated in the clinical trial? Are we talking about the providers and the surgeons who are highly expert in these things? And so, again, what is the target population in which you want to talk about safety and effectiveness? There could be multiple populations. And, again, you need to make that crystal clear.

And I think it's also very important to quantify the relative prevalence of the covariates, or confounders, or whatever you want to call them, in the premarket approval process, if you're a regulator, and say, I want to see how it performed based on the premarket data in the target population. And I would submit that it would behoove everybody, if you could, to calculate the expected effectiveness. And that's a pretty easy calculation. I'm saying easy; I'm not saying it's right, but I bet you it's better than what's being done right now. And I don't mean that in a bad way, although it sounds like it's bad.

But you could be much more systematic in terms of actually saying, here are the characteristics in the target population. You can download those -- you know, I expect X percent to have heart disease, Y percent this -- and just come up with some estimates, rather than only saying, this is what we saw in our premarket trial, which you should do, but guess what? This is what we expect to see in the real world, in terms of at least the covariates, to predict what you would see. And forgive me, this may be done by the FDA and I may be speaking out of turn, but it seems to me we should be using more simulation-based methods, or micro-simulation techniques, to think about predicting what we should see, and then notice when something runs amok -- because you need something to compare it to.

And because the outcomes will be completely missing for your target population, that's a prediction problem. And that we know; statisticians and epidemiologists know how to deal with a prediction problem. You can bound that with your prediction error, but at least it's quantitative, and it's not a seat-of-the-pants type of thing.
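
[As a sketch of what such a prediction might look like, with an invented logistic model and invented covariate prevalences, purely for illustration:]

```python
import numpy as np

# All numbers here are assumptions: a hypothetical premarket logistic
# model applied to a hypothetical target-population covariate mix, to
# predict the event rate surveillance should expect to see.
rng = np.random.default_rng(0)
n = 100_000

heart_disease = rng.random(n) < 0.30  # assumed 30% prevalence
diabetes = rng.random(n) < 0.25       # assumed 25% prevalence

# Hypothetical premarket model for a one-year adverse event.
logit = -3.0 + 0.8 * heart_disease + 0.5 * diabetes
p_event = 1.0 / (1.0 + np.exp(-logit))

print(f"expected one-year event rate: {p_event.mean():.3f}")
# An observed rate well outside the prediction error around this value
# is the "something runs amok" signal the speaker describes.
```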

So what information should be used? I'm saying all the information should be used. But this comes back to the earlier conversation that we had. First of all, there are several different outcomes in any study, even for a similar type of device. We know that in the premarket approval package there are safety endpoints, there are effectiveness endpoints, there are device endpoints, and so we know there's more than one endpoint, one outcome, that we should be collecting.

If you're thinking about a particular device where several manufacturers make that device, there are going to be several different device manufacturers. There are going to be several different patient groups that we think of in terms of who gets a particular device, and what I mean by that is you would expect the effectiveness of the device, let's say, to be pretty homogeneous within each group of people.

Now, clinical trials are generally small, and they look at small, homogeneous groups of people. And so depending on how much information you had in the premarket approval package -- you know, when you go out to the real world -- people hate that term, real world; I don't hate the real world, I live in the real world -- but when you think about it, will the device behave slightly differently in patients that are slightly more comorbid? And so thinking a priori about how to group that is, I think, a very important topic.

And then we heard several talks prior to mine, much nicer talks, in terms of data sources. So we have claims data. We have electronic health record data. I told you we have statistical summaries from the literature, lots of different types of data sources. We have data sources, as you've heard before, inside the U.S. and outside the U.S. There's a lot of information out there, and how do we use it?

I think I have, like, three equations. So the idea here is -- and this is what we're doing anyhow -- we're taking all the information and we want to make a summary. And when we make a summary, what do we do? You may not do it, but we write down an equation, because we want to be very transparent.

And so I'm thinking of this in the following way. Suppose we have, for the kth device group -- and here I said device group; it could be a manufacturer, we could have capital K manufacturers of a particular coronary stent. Or it could be a characteristic of a device, as we've talked about before -- metal-on-metal hips and ceramic-on-ceramic hips -- so it could be a characteristic. For a bunch of different device groups, k -- and I'm using, as I said, device group very generally. And then you have the outcome for a patient group implanted with that particular device.

And so basically what we're saying is that there is some expectation of that average outcome. And I've written everything assuming it's continuous; I know we often have binary outcomes or time-to-event outcomes, but simply think of it as, for each type of outcome for the kth device, we've got some overall rate we expect, and there's some variance associated with that rate.

But we know -- pretend we're looking at drug-eluting stents -- that somehow all of those average rates of target vessel revascularization, or for hips, the Harris Hip Score, are all going to be related somehow. One device company might have a lower risk than others, but in general, they're going to be related. And so we're thinking, well, we don't know this for sure; it certainly differs across the various device groups.

Now, this is a very important equation. I know statisticians say they're all important, but this is important. And the reason why it's important is that, fundamentally, we're assuming -- and this is why I asked about comparison groups earlier -- this says that I can combine the Medtronic device -- it's good that you're sitting here -- with an Abbott device, and basically they come from the same pool. Now, we're not saying they're exactly the same. There's a distribution, a variance, in terms of the performance of device 1 versus device 2. It may be centered on a single line, but there's going to be some variability.

This says you can pool the data across devices. This says you can pool the data across device characteristics. And I submit to you, we do that all the time. And so the question really is, all this is doing is formalizing it.

What the second equation says is that the estimator for the average outcome for the kth device is really a linear combination of the average for all the devices -- so it could be all the coronary stent devices -- plus the part that's basically due to that specific device.

And that's a simple linear combination; it has nice properties. But the reason why it's nice is that you will have a smaller variance for this estimator than if you looked at the device's data alone. And a smaller variance in the estimator means you don't need as much sample size, you can be more precise about your inferences, et cetera.
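
[The slides with the equations are not reproduced in the transcript; the following is a plausible reconstruction, in standard two-level hierarchical-model notation, of the structure described here. Treat the specific parameterization as an editorial assumption, not the speaker's exact slides.]

```latex
% Reconstruction of the three equations described in the talk.
Y_k \mid \theta_k \sim N(\theta_k, \sigma_k^2)
  \quad \text{(observed average outcome for device group } k\text{)}
\theta_k \sim N(\mu, \tau^2)
  \quad \text{(exchangeability: device-group means share a common pool)}
\hat{\theta}_k = \lambda_k \mu + (1 - \lambda_k) Y_k,
  \qquad \lambda_k = \frac{\sigma_k^2}{\sigma_k^2 + \tau^2}
```

[The shrinkage weight lambda_k pulls each device-group estimate toward the pooled mean, and the resulting estimator has variance no greater than sigma_k^2 alone, which is the smaller-variance property mentioned above.]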

So let's talk about an example where we actually have looked at this, where rather than the kth device, we've looked at it in terms of the kth characteristic. And we've done this with hip implants -- people have done this for lots of different things, but we've done it for total hip implants, where the bearing surface is the major characteristic over which we want to combine.

So this schematic is saying that perhaps there are two devices -- this is obviously not correct, but two devices on the market that have ceramic-on-ceramic material, and for each one of those devices, there are a number of different outcomes that you can measure. This happens to be the Harris Hip Score, the revision rate, and an adverse event. And then the second device, in terms of, let's say, the premarket approval, also had a Harris Hip Score, a revision rate, et cetera. And then we have another device group that happens to be characterized by metal-on-metal.

So you've got that, and it could be that for this particular -- and it happens to be the fact that metal-on-metal, there's no PMA data available, right, for metal-on-metal because of, you know, when they were approved and all that kind of stuff.

Now, the literature is going to have a summary about these devices, of characteristics of these devices. But it might not necessarily say it's the company's one device; it might just say it's a ceramic-on-ceramic device. And it may have just one outcome and not the other outcomes.

A registry, depending on the registry, might let us check all of these off -- typically not all the registries would -- but if I had finished my slides and clicked through, I could have checked some of these boxes and not checked others.

Claims data, there is no unique device identifier, we're not going to get a patient-reported Harris Hip Score, but we could get revision rates and some adverse event rates. And while I didn't do all this, I could check some of these boxes, but the point is this is a big missing data problem, and we know how to deal with missing data problems. We may not like it. We don't like it, we wish they were not missing, but we can learn information -- you know, the earlier slide talked about borrowing information to learn, to get a better estimate where maybe there wasn't enough sample size, right?

Well, what you can do is you can actually use the information from where it's complete, to learn about where it's incomplete or where it is, because incomplete means maybe no information or maybe we only have two observations.

I won't go through this. I'll just have this slide and then some conclusions.

So what are the assumptions? There are a lot of assumptions that we are making, and as a statistician, it's my job to say, here are the assumptions, and some of them I can assess statistically and some of them it's based on the manufacturers, it's based on the clinicians, it's based on the patients, to try and say that makes sense or that doesn't make sense.

And so one of these ideas is exchangeability. What I mean by that is this assumption that the average outcome from the kth device depends only on the mean and the variance. I could say that the average outcome also depended on characteristics of the device, but what that means is that, barring anything else, if you told me I had device 1 and device 2 and they both, you know, were implanted in similar patients, I can't tell you which one would have a higher rate of revision versus the other. That's what exchangeability means.

Now, it may mean -- and so that's all it says. It says I already know the patient characteristics, I've taken that effect out, so now I have two patients who look exactly the same, and the only difference is they were implanted with device 1 versus device 2. There may be some situations where that doesn't hold.

And so while the effects themselves may differ, we can consider them drawn from a distribution. They're not identical. Maybe sometimes one will be larger than the other, or the other way around. I have no a priori ordering of it. You may have an opinion about it, but your knowledge says I have no reason to believe that.
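In symbols, exchangeability of the device-group effects (a sketch in the notation above) says the joint distribution is unchanged by relabeling:

\[
p(\theta_1, \ldots, \theta_K) = p(\theta_{\pi(1)}, \ldots, \theta_{\pi(K)}) \quad \text{for any permutation } \pi,
\]

that is, before seeing the data there is no a priori ordering of which device's effect will be larger.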

There are some assumptions about coherence, and what I mean by that is there may exist no direct comparisons of some of these. So it could be that, in whatever data we have, there's been no direct comparison of a ceramic-on-ceramic versus a metal-on-metal, or there may be no -- which is probably more likely -- device 1 versus device 2 that are both ceramic-on-ceramic. But I want to learn it because as a patient -- this is what comparative effectiveness is about -- as a patient, I want to know, do I get device 1 or device 2? They both are ceramic. Tell me which one to pick. And so that is unlikely to be conducted by -- well, let me stop there.

So how do you learn that? There will be no direct evidence about that, but there may be some indirect evidence, and to get that indirect evidence, there need to be some assumptions made. And that's basically what this stuff says. It sort of says we've got to be coherent: if A is better than B and A is better than C, then B can only be better than C in certain situations. And then there are some assumptions we have to make about the variance components.
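A compact way to write the coherence (consistency) assumption behind indirect comparisons -- a sketch, since the slide's notation is not in the transcript -- is: if $d_{XY}$ denotes the relative effect of X versus Y, then

\[
d_{BC} = d_{AC} - d_{AB},
\]

so a comparison of device B versus device C can be inferred from each one's comparison against a common comparator A, provided the direct and indirect evidence agree.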

This is all related to evidence synthesis in terms of combining information -- that's what evidence synthesis is -- in order to make a statement or an inference about the effectiveness of devices or safety of devices.

So I'm going to conclude by asking, how does what I just talked about -- evidence synthesis -- relate to 522 studies? And hopefully that's obvious. But I would argue that, from my perspective, there's probably more need for formal prediction and formal methods for extrapolation on both safety and effectiveness, using the data from the premarket setting to predict or extrapolate what's going to happen in the postmarket setting. You have to determine what you expect to see in order to say, well, there's a problem, but also to help bolster inferences when the premarket data are sparse.

I think there's an increasing need for the incorporation of uncertainty -- probability distributions; you need to be able to account for uncertainty -- and it's critically important in the analysis of surveillance studies, if only because often there's no randomization. And although uncertainty always needs to be included in any inferential exercise, it certainly needs to be done in that setting, because in an observational study there's a lot more noise, and you want to make sure, at the end of the day, when you say something, you're not overstating anything.

And my final comment is quantifying evidence. And so this is that little thing about the p-values. You need to measure the evidence for a hypothesis, and, you know, Stats 101: a p-value does not measure evidence for a hypothesis. If you remember from Stats 101, you say that you cannot reject the null hypothesis; you never accept the null hypothesis. You either fail to reject it or you reject it. It's not that you accept the alternative; you reject the null. So p-values don't provide a quantitative assessment of evidence for a hypothesis. And this sounds philosophical -- and who cares -- but it actually has important implications; it really matters in many ways.

And so once you recognize that, other metrics would actually give you quantitative evidence -- measures of the support for one hypothesis versus another, or the support for hypothesis 1 and the support for hypothesis 2. That's much more direct: an evidence-based metric for measuring evidence.
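One standard example of such a metric is the Bayes factor, which quantifies the relative support the data give to two hypotheses:

\[
BF_{12} = \frac{p(\text{data} \mid H_1)}{p(\text{data} \mid H_2)};
\]

a value of 10, say, means the data are ten times more probable under $H_1$ than under $H_2$ -- a direct, quantitative statement of evidence of the kind a p-value does not provide.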

So with that I believe that's my last slide, and I thank you for your attention.

(Applause.)

DR. RITCHEY: Good afternoon. Around this time of day I usually, you know, start fading off, and I'm guessing that many of you are doing the same thing. And so when I start fading off, I start thinking about, you know, my favorite movie, and I'd rather be there. No, my favorite movie is Memento, and I love the fact that it starts at the end and then goes backwards.

And then I start thinking about, you know, I'd like to be reading, that'd be fun. One of my favorite books starts on chapter five and then goes back to the beginning and goes through chapter one, and when you get to the place where five should've been, you realize that five really was the best place to start because it was more exciting. So I'm going to start with my conclusions.

(Laughter.)

DR. RITCHEY: And so the big conclusions for my talk are that innovation is needed. If we can leverage infrastructure, if we can improve methodology, then we'll be in a better place to look at the postmarket questions that arise, and we'll be able to address them, whether it's through 522 or whether it's through another tool that we have available.

And my other big conclusion is that communication is key in this realm, communication about everything, logistics, the study design, the assumptions that you have for any decisions that are made, and then communication about the questions themselves and the questions that arise.

So in this talk, I'll talk a bit about postmarket surveillance and review that. I'll walk through the types of postmarket surveillance again and then talk on the methodologies that are applicable to 522 questions. I have a few examples, but I'll breeze through those because I think it's more important for us to go to a panel and have a good discussion there.

I'm going to highlight again that postmarket surveillance is active, systematic, scientifically valid collection, analysis, and interpretation of data or other information about a marketed device. And this can be in many different realms. Postmarket surveillance could include bench or lab studies, it could include animal studies, or other nonclinical studies.

And I know we've put a lot of emphasis today on clinical studies, but I do think that it's important to recognize that there may be postmarket questions that arise that are best addressed via bench or animal tests.

Postmarket surveillance can also be a traditional study type of thing: an RCT, a cohort study, a case-control study, those things.

And then postmarket surveillance may be best addressed via surveillance techniques, via enhanced surveillance or active surveillance. And I'm adding in cross-sectional studies here because, if we're not already looking at something and we need to know the answer to a question very quickly, then using a surveillance type of technique to do a cross-sectional study may be appropriate.

We may go through and do more aggregated types of things: a prospective and retrospective study, a meta-analysis, or another clinical study. And then these novel designs -- evidence synthesis, hybrid study design, leveraging of claims data or EHRs -- those can all be included in postmarket surveillance as well.

So the most important thing in designing a 522 study is to design the postmarket surveillance to address the question that's identified in the order.

Dr. Normand talked about there are certain questions that are specific to different device areas. Aesthetic devices look different from pediatric devices.

Also, when we're looking at a study plan for a 522, we may approve the study plan, we may say it's approvable and send some questions, or it may be not approved. It may be not approved for one of two reasons. The study design may be absolutely beautiful, but the study doesn't address the postmarket surveillance question. And that's a design issue. Or it could be that the study is not approved because, looking into that particular study, it's unlikely that collecting data in that mechanism would lead to answering the question. That's also a design issue. So designing the 522 study to address the postmarket surveillance question is very important.

And we talked this morning a bit about the questions that may be there to address. Obtaining more experience with change in the use environment or the patient population is something like moving from inpatient use to home use, from looking at a general population to the device being used in older or younger patients, or being used in patients with increased comorbidities. And here our questions may be about utilization, they may be about safety, or they may be about effectiveness.

Dr. Krulewitch went through several different questions that might show up in a 522 order, and I wanted to walk through those questions again, to talk about how they might be addressed via a study design.

Typically, if you see the word surveilling, we're looking for a surveillance study. So if it's, in surveilling over the next 36 months, how is the device used, that's a utilization question addressed via enhanced surveillance or active surveillance.

We may want to confirm the nature, severity, or frequency of an adverse event that's seen in a new safety signal that arises. And here it's typically safety, and we may be asking a question like, what is the periprocedural rate of the primary safety endpoint in patients treated with the device? And here a traditional cohort study may be appropriate, active surveillance may be appropriate, but it might be a device that's newly on the market, and there a hybrid study design might be the most appropriate thing, like Dr. Setoguchi talked about.

We may be interested in long-term performance or infrequent adverse events, especially when premarket data were limited. And here we're looking at safety or effectiveness, we're looking for rare events, and it may be clinical or nonclinical.

A potential question that may arise in this realm is, is the rate of effectiveness with use of the device different from the rate seen in patients with a comparator treatment at 36 months post-procedure? I'm so glad I have the words comparator treatment in there. And here this might be a good place to leverage evidence synthesis, like Dr. Normand mentioned.

Another question in this realm might be, in those who received the device, what's their quality of life at a long-term endpoint like 36 months? And here a cohort study may be used, a cross-sectional study of patients who have had the device for a while may be appropriate, or leveraging personal health records, like we talked about multiple times through the day, may be appropriate.

Another question that we may have is how to better define the association between devices and serious adverse events. And this is when the SAE is noticed after the device is marketed or if there's a change in the profile of the SAEs or an increase in the rate of serious adverse events.

We may ask, among all patients who are undergoing treatment for a disease, what proportion is exposed to this device? And then, of those who are exposed to this device, how many have this particular event? And here a surveillance or a more typical study design may be appropriate, but leveraging claims data or leveraging EHR data may be useful as well.

And then this one I debated putting it here, and I am not putting this question here to imply that I think quality of life is typically a serious adverse event, but more to say that patient-reported outcomes, something like pain, may be something that would be considered a serious adverse event. And here, if you're looking at patient-reported outcomes, talking to the patient and leveraging their personal health records may be appropriate.

Some of the nuts and bolts of all of this are that, if a 522 order is issued, we ask that companies provide us with details. We like to have details. We like to have details about the study design. We like to know about the population. We like to know about the data source. We like to know about the statistical assumptions that lead up to the decisions that were made.

I love that Dr. Normand always has an assumption slide when she talks because then I know exactly what she's thinking. And we ask a lot of questions about assumptions. So if you tell us your assumptions up front, it's a much better place to start.

We also like to know about the methodology that's chosen. As Danica mentioned earlier, we've thought a lot about our recommendations for a study design and have worked with a lot of different subject matter experts from a lot of different areas within the Center in order to come up with those recommendations. So if you're choosing a different methodology, we'd love to know how you got to that decision. It helps us in talking through the process. And then we like to know details about timelines as well.

So, again, communication is key: talking with the FDA about logistics -- there are content phone calls within 30 days of the order, and then further discussions in or after reviews and that type of thing. We also encourage communication across multiple stakeholders. We encourage industry to talk with users of registries, EHR, claims data, and those who have enhanced surveillance and other data sources. And we also encourage you to talk within your own group: information from complaints handling or medical reps may be really useful in starting to work toward answering a 522 question.

So, looking forward, we really do want to leverage the existing infrastructure and keep building it, so that as more questions arise, there is more infrastructure available for us to address those questions. We also encourage the use of these novel methodologies that have been mentioned here today.

Thank you very much.

(Applause.)

DR. LYSTIG: I'd ask our panelists who presented earlier to please come up front.

Okay. So we already have a person from the floor waiting to question, so please go ahead.

DR. ELOFF: Ben Eloff, FDA. My question is for Dr. Sedrakyan and maybe partially for --

DR. RITCHEY: Can you turn that microphone on?

DR. ELOFF: Is it not on? It's on. I'm just quiet. It's the end of the day. I've been talking too much elsewhere, unfortunately.

Ben Eloff, FDA.

Dr. Sedrakyan, you mentioned leveraging our partnerships, our public/private partnerships, to interpret studies when they're finished. I'd like to ask for you to clarify the context of interpreting the 522 study, which obviously is a regulated, mandated study, in a partnership context versus what we do here at the FDA in terms of our own regulatory mandate to not only issue the 522 order but potentially make decisions based upon those data and other data that we have at our disposal.

DR. SEDRAKYAN: Absolutely, Ben. I think what I said can be understood in a number of ways. I didn't mean, of course, interpretation from a regulatory perspective. What I meant was MDEpiNet being a forum for bringing those important clinical issues up and assisting FDA in the interpretation of any clinical and scientific aspects that might not be easily understood -- say, certain realities on the ground, from registries and all.

DR. ELOFF: Thanks. So to paraphrase, to use those data in the context of the multi-stakeholder forum, to understand and discuss scientific issues and the nuances as they arise, and let each individual stakeholder take that information back to inform their own decisions, be it industry, academic, payers, us at FDA, so on and so forth.

DR. SEDRAKYAN: It basically would be up to FDA how much they would like to -- what would be MDEpiNet's role in this context? Obviously it would be defined by you: when and how much input you need from researchers and the stakeholders.

DR. ELOFF: Okay, thank you very much.

DR. STEINBUCH: Hi, Michael Steinbuch, J&J.

This is for Dr. Chih-Ying Chen. A quick question. You know, you described the probabilistic record linkage system. How similar, if you care to comment, is this record linkage system to what's used by the National Death Index?

DR. CHEN: I believe, in the National Death Index, they do deterministic linkage. When you ask them to search for all the causes of death, you provide several variables -- for example, SSN, or you provide gender and date of birth, or you provide the first name and last name of the patient in your dataset -- and they will match that for you.

I couldn't say for sure, but it sounds to me like they use deterministic linkage. I don't really know, though, so I couldn't comment on exactly what they use.
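A minimal sketch contrasting the two approaches under discussion -- deterministic (zero-one) versus probabilistic (Fellegi-Sunter-style) matching. The field names, the m/u agreement probabilities, and the example records below are illustrative assumptions, not values from the talk:

```python
# Minimal deterministic-vs-probabilistic record linkage sketch.
# m = P(field agrees | records are a true match)
# u = P(field agrees | records are not a match)
import math

FIELDS = {
    # field: (m, u) -- illustrative values, not from the talk
    "last_name":     (0.95, 0.01),
    "date_of_birth": (0.97, 0.005),
    "sex":           (0.99, 0.50),
    "zip":           (0.90, 0.02),
}

def deterministic_match(a, b):
    # Zero-one rule: link only if every identifier agrees exactly.
    return all(a[f] == b[f] for f in FIELDS)

def match_weight(a, b):
    # Sum of log-likelihood ratios across fields; higher = more likely a match.
    w = 0.0
    for f, (m, u) in FIELDS.items():
        if a[f] == b[f]:
            w += math.log2(m / u)
        else:
            w += math.log2((1 - m) / (1 - u))
    return w

rec1 = {"last_name": "Smith", "date_of_birth": "1944-03-07", "sex": "F", "zip": "20993"}
rec2 = {"last_name": "Smith", "date_of_birth": "1944-03-07", "sex": "F", "zip": "20903"}

print(deterministic_match(rec1, rec2))  # False: one disagreeing field drops the pair
print(match_weight(rec1, rec2))         # Large positive weight: likely the same person
```

The example shows in miniature the point made later in this discussion: a single disagreeing field (here, a likely zip-code typo) discards the pair under the zero-one rule but barely reduces the probabilistic match weight.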

DR. LYSTIG: So as the moderator, I'll ask a follow-up question related to that. It was interesting in the presentation that you sort of said there are a variety of means to determine whether to include or exclude particular patients inside a large database. So you're trying to say, do I think there's a match that I'm going to then declare into a smaller database?

And I'm wondering if you've looked at the possibility of saying that you could consider patients to be contributing differentially, based on how unique you thought they were. So, right, if you had replications and you knew there were replications, then they would maybe contribute the same amount of information as a single unique subject.

So I'm wondering the extent to which you could sort of consider, you know, using all of the patients in your large set, but weighting them differentially according to how certain you were that they contributed new information.
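In symbols, the suggestion is to replace the zero-one inclusion rule with a per-record weight -- a sketch of the idea, assuming a posterior match probability can be computed:

\[
w_i = P(\text{record } i \text{ is a true match} \mid \text{linkage fields}),
\]

so that record $i$ contributes weight $w_i$ to the analysis rather than being fully included or fully excluded.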

DR. CHEN: I think that's a very fresh perspective. But I must say, as an epidemiologist, I think I would probably need to consult biostatistics on how to do it.

(Laughter.)

DR. CHEN: Yes, but I think that's interesting. Yeah. One concern I might have in that instance is, you know, being in epidemiology, we tend to like specificity, so we're more apt to want specific answers. We have a smaller sample size, but we're fairly sure of the actual match, and then we work within that. So I would say that it's interesting to explore, but we would probably need to do it a different way.

DR. SETOGUCHI: I think the problem with that approach is you never know what the right weight would be.

DR. LYSTIG: Sure, yeah.

DR. NORMAND: But you're using a zero-one rule right now.

DR. SETOGUCHI: Huh?

DR. NORMAND: You're using a zero-one rule right now.

DR. SETOGUCHI: For the deterministic, yes, but probabilistic is not zero-one.

DR. NORMAND: Oh, it's not. So the probabilistic, you're using everybody.

DR. SETOGUCHI: Yes, exactly. You're taking the best weight from the duplicates.

DR. NORMAND: Okay.

DR. LYSTIG: All right, I could ask a question later. We can get someone else first. Go ahead.

MR. BROWN: Scott Brown, Covidien Peripheral Vascular.

Actually, you just covered one of the questions I was going to ask, so I'll jump to the next one, which is, so Dr. Chen's presentation about the probabilistic linkage, I mean, it clearly makes sense from a statistical standpoint. I can see applications for it. Here's my general question to everyone.

There's something that makes people feel squeamish about fuzzy logic or soft matches of this type. And I see some nods going on. I have done work in this area. Dr. Normand mentioned at one point that we know how to deal with missing data, and we do, statistically. I've done it before in many different settings, and there's always a process of explaining to people that you're not making things up, that all you're doing is extending the laws of probability in an appropriate way --

DR. SETOGUCHI: We're not making it up.

MR. BROWN: -- to the values that were missing. But long story short, what I'm looking for is the perspective of everyone, not just the statisticians: when you hear the notion of probabilistic matching of data, does that immediately sound like a credible thing to you -- which it does to me, but I'm a statistician -- or would you expect to get pushback of, well, you just, you know, kind of made this up? It's an opinion question.

DR. NORMAND: Can I? So my reaction to that is, I think we've misled many people by not being very clear, in terms of using these zero-one rules, that we're only going to include smaller samples. So I think we've got ground to make up in terms of how we present this.

So my answer is people will be worried, and if I say they shouldn't be, that's not right -- I think they will be, and it's because I believe we've lulled people into a false sense of security with the other ways we've approached things. And so this -- and you're going to agree with me -- this is righter -- righter, listen to that -- this is better; it's more rigorous; it reflects the uncertainty. It really is an educational challenge to say, you know, you were falsely secure with the zero-one rule.

DR. SETOGUCHI: I wanted to say that probabilistic linkage is actually commonly used to link census data. The method was developed to link data with names and addresses, where you could have different ways of spelling names or different ways of expressing addresses -- like street spelled out versus st., that kind of thing.

So I think, again, these are commonly used methods. In our case, we don't have such variables; we don't have that richness. But what we looked at in the data is really comparing what we were able to link by the zero-one method versus probabilistic, and we found, as she said, that only one out of five variables was different, and sometimes it happened just at the level of the day of birth, not the month or year of birth, that kind of thing.

So that sort of gave us a sense that we're probably picking up the right patients by using the probabilistic method. And, again, there's no way to say this is the right method.

MR. BROWN: Yeah. Like Dr. Normand said, I knew I was going to agree with you before I asked the question. What I wanted to hear was the details. And thank you. In particular, I didn't know that about the census. It's nice to be able to tell people about real life, you know, things that everyone's familiar with where this has been done.

DR. SEDRAKYAN: The general policy context for this probabilistic matching is also important. I mean, we're really doing this because we can't always do direct matching, right? And that's an important policy challenge.

For example, the American Joint Replacement Registry is collecting Social Security numbers, and they're having a very difficult time convincing hospitals to participate because of that. I don't know if Dr. Rankin is still here. But it has been an important issue because they want to link directly. They would like to have Social Security numbers and directly link to the outcomes, and they would like to make the case that this process that we have in place, the policy that we have in place, is not allowing us to create efficient systems.

So that should be on the table, in a way. Particularly for the public health mission that FDA has, probabilistic versus direct matching, I think, should still be on the table.

DR. CHEN: And I want to add a point on this, which is that even when we get a Social Security number, there still would be errors in Social Security numbers. So we think we have the right answer by having that, but there's still error there. That's why probabilistic linkage is actually so fascinating to me, yeah.

MR. BROWN: Thank you. Thank you all.

DR. BLAKE: Kathy Blake, raising a question from the clinical standpoint, which is that we actually looked at this when we were developing performance measurements for the management of implanted defibrillators, for which there are relatively low-frequency complications, and what we bumped up against was that we tried probabilistic matching. We actually got some help from Sana Al-Khatib at Duke to look and see if there would be a way to pool all of the data from given clinicians, and the best that we could get was about an 80% match.

And so that then, coupled with the problem of the relatively low frequency of complications, meant that as we kind of tested these ideas in focus groups with our clinicians, we got tremendous pushback, because they said, I can't afford, as a clinician, to have one false attribution to my experience and my outcome.

And so we're now starting to talk about -- and no one has mentioned this today, which is interesting -- the whole idea that we are the last country among what we might call the developing nations and developed nations that does not have a unique medical personal identifier.

And so I would just set those words out there for ongoing discussion because many, many of us want to be able to have this data used, used well, everybody's paying for it. We'd like to be accurate, but that is one of the limitations we face, and clinicians will push back.

DR. SETOGUCHI: I just wanted to clarify a few points. I think I know the work that you're talking about; it actually involves my colleagues in a group, and I don't think they're using a probabilistic method. They're using deterministic matching based on multiple identifiers.

And another point: that 80% is actually a very good linkage rate in this case. Let me explain a little bit more. Matching ICD patients who are over 65 years of age to Medicare data, we don't expect that everybody would be in the Medicare data, because Medicare data only collect information on patients who are on fee-for-service Medicare. So we're missing roughly 20% of the patients there, who are actually on HMO Medicare. So if you get 80%, you're almost making it 100%.

Okay. So then, I think they did two things, which is actually very close to what we did. We found about 80% of the expected linkage, too. And that's similar to ours, and the reason for that, again, is in terms of using multiple variables. Relying on zero-one linkage, you lose people through errors in these variables.

DR. NORMAND: And I think the other thing to think about is the tradeoff: you do it or you don't do it. And so then the question is, you don't want a false attribution. I can understand the clinician's point of view -- you may think I don't, but I can understand that point of view. But then what about the other side, the patient population?

So there are tradeoffs in terms of saying it has to be absolutely right or we're not going to do it. And I understand. And so it's one of those things to think about: okay, if we do it or if we don't do it, what's the relative error? And I understand the profiling -- I do profiling for Massachusetts -- I understand. But it's that real issue of, if there's one false positive, we're done. That type of discussion, to me, is something we've got to get over. I know it's easy for me to say, but no, I understand.

No, I understood, but -- no, understood completely. But it's funny because -- well, in any event.

DR. SEDRAKYAN: Can I add something to what really started this, the national health identifier line of discussion? One way to go about this is to have a particular registry identifier. I don't know if it will help address the question that you raised, but if we can't get a national health identifier, suppose we do have a registry identifier -- a number that people will have, say, for the rest of their life -- and you have a system in place that captures patient information only at the in-hospital level, but you have 100% participation of all providers, of all surgeons. Then you automatically have an efficient system for following up, say, people who are going to come back to get care related to the original procedure.

See, I'm using an example in orthopedics; it might not be as applicable to cardiovascular. The most important outcome is revision surgery. So they're going to come back to the same hospital or to a different hospital to get their revision surgery. Now, if you have all the hospitals and all systems participating, and all the information is captured at the in-hospital level, you get enormous efficiency. You don't have to do any linkages, potentially, because you automatically have a system in place that has follow-up. But that requires national infrastructure. It's the European kind of model.

But, again, a registry identifier might be something that we can come up with. Maybe it's easier than a national health identifier. Just to put it out there.

DR. SETOGUCHI: For the future. Your future.

DR. LYSTIG: So I think actually this concept of the national identifier is one that I'd like to hear a bit more discussion around, you know, either in terms of what you would do if you had access to this or the possible barriers to getting it. You know, I think it's a creative idea, Art, but the first thing, I would think there'd be a lot of backlash against something that was sort of a back-door personal identifier, because I think a lot of people are just conceptually opposed to the idea that you could uniquely identify them in a system.

I mean, you'd have to find a way to get patients to agree to have this data made available, because I think we see that there would be lots of additional value -- you could use it to do different types of linkage beyond what they might have agreed to in some informed consent.

DR. RITCHEY: I think this may be another case where we're assuming the patient doesn't want to do something, when that may or may not be the case. I think that if we had something like a registry identifier, where a patient could get the information and go in and fill it out themselves, or the patient could automatically get their record and send it in or something like that, we're allowing them to tell us more about what's going on with them.

And as long as we can streamline that in a way that's useful for them, I think that they may want to contribute more because that's giving someone information, giving a researcher information so they can collaborate and they can all work together and figure out what's going on. And I think, especially in the 522 instance, when there's a question that arises for safety or effectiveness, patients want to know more and are willing to contribute.

DR. SEDRAKYAN: Absolutely. I think patients do want to know, particularly after these disasters in the past couple of years related to metal-on-metal hips, breast implants, or defibrillator leads; people want to know what they've got and how likely it is that they will face a long-term problem. So I think they might be more interested in participating in registries like this.

Potentially, if we also have a registry identifier, that will allow us to track people over time without having to do CMS linkage or claims linkage, or thinking about a billion other linkages. Efficiency. I think that's where this is coming from.

DR. LYSTIG: With that I'd like to thank our panel today for their presentations and discussion.

(Applause.)

DR. LYSTIG: And, Danica, you can come up right now.

DR. MARINAC-DABIC: I'd like to thank you all for coming and for making this day very productive. I think it was so obvious -- you could almost feel the commitment and the passion and the interest -- and we're going to try to make the best out of the input that we received.

I can tell you that our Center is working very hard on developing the strategy for the postmarket national infrastructure. We are participating in this. And also, just to illustrate our commitment, you know, in May of last year we held the public meeting that convened the International Consortium of Orthopedic Registries here. In December we hosted the IDEAL meeting, when we brought here the people from the IDEAL group from Europe who have been working with us on that framework, which Art was presenting. Today we're hosting this workshop to get your input.

Then in May we have the MDEpiNet third annual workshop, mid-May. Then we also have the post-approval studies workshop, again geared toward convening the community around the issue of how we are going to design better post-approval studies in the context of the vision for the future. And, finally, we have the registries study: again, what is the utilization of the different types of registries, and what is their utility in the postmarket setting? All of this in this year. So a lot of opportunities for you to comment, a lot of opportunities for you to work with us.

But the one theme that I heard throughout the whole day was that we need to be interacting more in all phases of the TPLC, and especially in 522 -- you know, when there is a signal, we should get input from industry sooner, from our external stakeholders sooner.

I also see a huge value for MDEpiNet, especially when it comes to the partnership: we can expand it from the current FDA-academia collaboration to a partnership that encompasses industry and payers and other stakeholders. So at that point, we can draw from the expertise of all these stakeholders at any stage. Whenever there is a particular issue, we can quickly gather the evidence and identify what needs to be done.

So we are very determined to change the paradigm. This is the opportunity for us to utilize the momentum. We are at the stage now when we have availability of these very important databases. We would like to be able to draw on all the expertise there is in our country and abroad to make this system work for the American people.

So, again, thank you all for your input and for spending the day with us here. I would also like to extend my thanks to my entire division for their work, and specifically to Mary Beth Ritchey and Naomi Herz and Samantha Jacobs for the heavy lifting in the preparation of this conference. Thank you.

(Applause.)

(Whereupon, at 4:48 p.m., the meeting was adjourned.)

C E R T I F I C A T E

This is to certify that the attached proceedings in the matter of:

DESIGN AND METHODOLOGY FOR POSTMARKET SURVEILLANCE STUDIES

UNDER SECTION 522 OF THE FEDERAL FOOD, DRUG AND COSMETIC ACT

March 7, 2012

Silver Spring, Maryland

were held as herein appears, and that this is the original transcription thereof for the files of the Food and Drug Administration, Center for Devices and Radiological Health.

___________________________

CATHY BELKA

Official Reporter